Achieving Full Stack Automation Through Kubernetes

The open source revolution is back in full swing with the rise of Kubernetes. Flexibility and agility are key to making the most of the cloud, multi-cloud, or hybrid cloud era. Kubernetes makes that easier by granting DevOps teams greater control across their infrastructure. But easier does not necessarily mean easy — there are still hurdles to overcome.

Connecting Kubernetes to underlying databases has proven difficult for organizations looking to take a holistic approach to their cloud operations. The dynamic agility of Kubernetes, while great across public cloud environments, tends to be troublesome for legacy databases that are still tied to traditional bare-metal or virtual machine instances. This mismatch between the instant scalability of containerized environments and clunkier database deployments can lead to significant problems as demand for and use of a cloud application fluctuate.

Key roadblocks often include:

  • High operational costs: Manually deploying and managing hundreds of database clusters across multiple geographies increases cost, effort, and complexity.
  • Vendor lock-in: A lack of standardization to ensure data can be moved freely and safely between cloud providers has made it difficult to switch providers quickly or work with multiple providers.
  • Delayed time to market: Customers with applications using microservice architectures have difficulties managing and scaling database clusters in siloed systems, extending development times and making it harder to support their applications.

Developers need to achieve agility and automated orchestration across multi-cloud environments, and with the right application of Kubernetes, they can overcome these challenges. There are a few key focus areas.

Head in the Clouds

It is easy to rush headfirst into the alluring buzz of the cloud trend, but doing it right takes a more grounded approach. Cloud native applications have many benefits over their more limited counterparts, including instant capacity scaling based on user demand and delivery from the edge for increased speed and reliability.

The problem is that the “cloud native” aspect of the application usually stops at the application, without concern for all of the other aspects that make it work. Many legacy databases are locked into a specific deployment strategy on bare-metal servers or virtual machines. They were built for an earlier era when databases only operated in on-premises environments and have been slow to adapt. This makes it difficult to adapt to the increased demand of busy seasons — such as the holidays for retailers or summer for travel and hospitality companies — as frequent changes and more entries strain the limits of the available computing power.

It is generally a “no-brainer” to run applications in the cloud. Many applications are stateless, allowing for rapid recovery should the application instance crash. Stateless workloads are generally easy to port from one cloud provider to the next. Databases, on the other hand, are stateful, requiring sophisticated management and orchestration to preserve availability and consistency should instances crash. This can make migrating databases from the on-premises environment of their inception to the cloud a daunting task, especially as the amount of data increases from terabytes to petabytes and beyond.

DevOps teams need to make some changes to their database deployments so they can be as agile as the applications they are supporting. This means shifting to thinking about the full stack, including both stateless and stateful workloads.

The Full Stack and Nothing but the Stack

Container orchestration systems like Kubernetes can help solve these problems, especially for applications that are already using containers for the application itself. A long-standing misconception that containers are not up to the challenge of supporting a full-fledged database has slowed adoption. In the past, this was a fair charge to levy against containers. However, with the advent of Kubernetes and the increased sophistication now available for managing stateful workloads (e.g., custom controllers and persistent volumes), deploying distributed databases in containers is a viable option. The ability of container environments to adapt to changing needs at a rapid pace, spinning up more container clusters at a moment’s notice, makes them well suited to tasks at cloud scale.
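As an illustration, the sketch below builds a minimal StatefulSet manifest of the kind typically used to run a database on Kubernetes, pairing stable pod identities with a persistent volume claim template so each replica keeps its data across restarts. It is written in Python with PyYAML; the names, image, and storage size are hypothetical placeholders rather than a recommendation for any particular database.

```python
# A minimal sketch of describing a database as a Kubernetes StatefulSet with
# per-replica persistent storage. Names, image, and sizes are hypothetical.
import yaml  # PyYAML, assumed installed

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "example-db"},
    "spec": {
        "serviceName": "example-db",  # headless Service giving pods stable network identities
        "replicas": 3,
        "selector": {"matchLabels": {"app": "example-db"}},
        "template": {
            "metadata": {"labels": {"app": "example-db"}},
            "spec": {
                "containers": [{
                    "name": "db",
                    "image": "example-db:latest",  # hypothetical database image
                    "ports": [{"containerPort": 5432}],
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/db"}],
                }],
            },
        },
        # Each replica gets its own PersistentVolumeClaim, so data survives pod restarts.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

# Emit the manifest as YAML so it could be applied with `kubectl apply -f -`.
print(yaml.safe_dump(statefulset, sort_keys=False))
```

Dumping the manifest to YAML keeps it usable with `kubectl apply -f -`; the same dictionary could equally be submitted through a Kubernetes client library.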

Kubernetes makes the cloud platform agnostic, meaning it does not matter whether you use on-prem infrastructure, AWS, Azure, Joe’s Home Cloud Platform, or a combination. The end result will be a smooth and seamless operation for all users. An application built on Kubernetes can run largely unchanged across providers, thanks to the CNCF’s Certified Kubernetes conformance program that platform vendors follow.

By automating common cloud scenarios, containers can take the burden of monitoring and management off operations teams. Kubernetes helps make it easy to manage the entire application deployment, including any networking or storage elements, through a single dashboard.

Customized responses to common situations, such as demand spikes, can be programmed into the stack through Kubernetes, and the orchestration system will execute them automatically when the criteria are met. This kind of near-instant reaction is only practical when both the application and the database are running on the same orchestration platform. It makes capacity scaling a nearly linear process that rises and falls with demand, instead of stepping up in chunks of computing power that leave unused capacity.
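Autoscaling is the most common form this takes. The sketch below, again in Python with PyYAML, builds a minimal HorizontalPodAutoscaler manifest that adds or removes replicas of a hypothetical deployment to hold average CPU utilization near a target; the deployment name, replica bounds, and threshold are placeholder assumptions.

```python
# A minimal sketch of programming a response to demand spikes: a
# HorizontalPodAutoscaler that scales a hypothetical deployment on CPU usage.
import yaml  # PyYAML, assumed installed

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "example-app-hpa"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "example-app",  # hypothetical application deployment
        },
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                # Add or remove replicas to keep average CPU near 70%.
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}

print(yaml.safe_dump(hpa, sort_keys=False))
```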

This process can also help ensure continuity and uptime for the application and the supporting database. If connectivity issues or system errors from one instance start to impact user experience, the orchestration system can detect that problem and respond instantly.
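One way this detection typically works is through health checks declared alongside the workload. The sketch below, in the same Python-plus-PyYAML style, shows a minimal Deployment whose liveness probe lets Kubernetes restart a hung container and whose readiness probe keeps traffic away from an instance until it recovers; the image and endpoint paths are hypothetical.

```python
# A minimal sketch of the health probes that let Kubernetes detect and react
# to a failing instance on its own. Image and endpoint paths are hypothetical.
import yaml  # PyYAML, assumed installed

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-app"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "example-app"}},
        "template": {
            "metadata": {"labels": {"app": "example-app"}},
            "spec": {
                "containers": [{
                    "name": "app",
                    "image": "example-app:latest",  # hypothetical image
                    "ports": [{"containerPort": 8080}],
                    # Restart the container if it stops answering health checks.
                    "livenessProbe": {
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "initialDelaySeconds": 10,
                        "periodSeconds": 10,
                    },
                    # Stop routing traffic to the pod until it reports ready.
                    "readinessProbe": {
                        "httpGet": {"path": "/ready", "port": 8080},
                        "periodSeconds": 5,
                    },
                }],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```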

Show Me the Data

The end result of all these benefits is that DevOps teams that adopt Kubernetes for their applications and their databases will be able to deliver much better user experiences and adapt much more quickly to changing needs. Agility is increasingly important as developer teams strive to meet market pressure. This approach frees them up to spend more time on the application itself instead of managing the day-to-day minutiae of the database deployment. They also retain full control over the database and the data in it: rather than relying on external experts or vendors, they can self-serve to extract meaning from the data.

The rise of Kubernetes has enabled a new era of rapid application deployments that savvy businesses can leverage to meet their goals. But it is important not to rush off into the stratosphere and leave the cloud behind for important application support systems like databases. Doing it right requires patience and care to consider all the factors.

This article originated from http://thenewstack.io/achieving-full-stack-automation-through-kubernetes/

Anil Kumar is a Tigera guest blogger. He is the Director of Product Management at Couchbase. Anil’s career spans more than 15 years building software products across various domains, including enterprise software, mobile services, and voice and video services.
