I am embracing managed Kubernetes services, and here’s my journey. I attended KubeCon 2018 ready to soak up all I could about Kubernetes and the cloud-native ecosystem, hoping to learn things that would help me run my clusters day to day. More importantly, though, I experienced a fundamental shift in what I see as the future of Kubernetes, and in what getting started with Kubernetes looks like for companies today. When I first encountered Kubernetes clusters in production around a year ago, I was thrown into deep water, waves crashing around me. I learned all I could as fast as I could, both about Kubernetes in general and about the environment in which we operated. While our clusters are cloud hosted, they do not use a managed service, because they were built before our cloud provider offered one. A previous engineer had built our clusters and spearheaded the move from VMs and containers to a Kubernetes production environment.
With this as my starting point, and because our Kubernetes clusters’ version was a little behind, a not insignificant part of my day went to the care and feeding of the environment. I loved diving into our environment and investigating what was wrong with nodes, deployments, networking, and so on, or doing my best to automate adding and removing nodes from the cluster. I was ensuring that the tool we used to increase our fault tolerance was itself highly available and fault tolerant. I became protective of my Kubernetes clusters and secure in my position as their caretaker. Through interactions with other engineers, as well as on my own, I developed the mentality that managed services were the easy way out. If I wasn’t spending my day making sure our clusters were resilient, properly scaled, and redundant, then what was my job? I was a DevOps engineer who specialized in Kubernetes; these were my clusters. To this day my LinkedIn headline says “Kubernetes Nerd” because that is how I think of myself, but post-KubeCon, I’ve had a shift in what I want to spend my day doing, and in how I want to interact with Kubernetes.
As Janet Kuo said during one of the KubeCon keynotes, “Kubernetes is now very, very boring.” I remember a bit of a murmur in the crowd as she said this, but after she explained, and the words sank in, I think I agree. Kubernetes is still wicked cool, don’t get me wrong, but it’s boring in the sense that people use it, and it works. We’re ready for prime time, and that’s where we want this project to go. Kubernetes is a means to an end; the end user of your service won’t care that your application is running in a Kubernetes cluster if their experience doesn’t match their expectations. Eric St. Martin said, “Kubernetes isn’t the thing…It’s the thing that gets us to the thing.” You can’t take to market the fact that you have a Kubernetes cluster, because day by day, more and more people have Kubernetes clusters. What I once thought of as my biggest badge of pride, running Kubernetes day to day, is still cool, and still something I pride myself on, but it’s not something your company is going to put on its website as a selling point. This is all behind the scenes and under the hood.
Looking around the Expo room at KubeCon, there was no shortage of vendors looking to help you get your app, service, or company onto Kubernetes, or to augment one part of your Kubernetes journey with their product. My biggest paradigm shift happened the moment I looked over the list of all the vendors in the room and realized: I don’t want to spend my day caring for and feeding these clusters. I’ve done it, I’ve checked that box; I want to build the cool stuff inside of and on top of my clusters. This is where I feel the future of Kubernetes lies. Service meshes like Linkerd and Istio, admission controllers, and Custom Resource Definitions are the cool things you can put in and on top of your cluster, shaping the product you deploy into it. These are the things that are making me excited these days.
By no means do I mean that it’s worthless to run your own cluster, or even to learn the ins and outs of a self-managed Kubernetes install. I think learning the core aspects of running a Kubernetes cluster can only help you in your day to day, and that knowledge is still required when using managed services. But I also think that unless you need to self-host for compliance, regulation, or some custom corner case, there are plenty of managed services from the major cloud providers that can help you get into Kubernetes without being an expert. Kubernetes is a complex beast, and even the best installation methods can make assumptions or leave settings undone that the team needs to figure out:
- Is my control plane highly available?
- Have I ensured that etcd is resilient?
- Did I meet my security requirements?
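To make the etcd question concrete: etcd needs a quorum of floor(n/2) + 1 members to stay writable, so a cluster of n members tolerates floor((n-1)/2) failures. This is exactly the kind of detail a self-managed install leaves to you. A quick shell sketch of the arithmetic (on a live cluster you would also verify health with a command like `etcdctl endpoint health`, assuming `etcdctl` is installed):

```shell
# Quorum math for an etcd cluster of n members:
#   quorum             = floor(n/2) + 1
#   tolerated failures = floor((n-1)/2)
for n in 1 3 5; do
  echo "members=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
# A single-member etcd tolerates zero failures; three members is the
# usual minimum for a resilient control plane.
```

Note that even numbers of members buy you nothing here: four members tolerate the same one failure as three, which is why odd-sized etcd clusters are the convention.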
It is important to note that managed Kubernetes offerings from cloud providers don’t do everything for you; there are still a lot of things you and your team need to investigate and figure out when making the Kubernetes move, or when adopting it at a larger scale. But if, at a minimum, these managed services can abstract away the management of your master nodes and/or etcd, then the average user who wasn’t thrown into the deep end of Kubernetes management has a much better chance of succeeding. There are amazing teams working on these managed services; let this collective knowledge and experience go to work for you.
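To illustrate how much gets abstracted away, provisioning a cluster with a managed control plane is roughly one command. A hedged sketch using GKE’s `gcloud` CLI (the cluster name and zone here are placeholders, and defaults will vary by provider and version, so treat this as a cloud-billed starting point rather than a recipe):

```shell
# Create a small GKE cluster; Google runs the control plane and etcd for you.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Fetch credentials and confirm that only worker nodes appear in the node
# list; there are no master nodes for you to patch, scale, or back up.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
kubectl get nodes
```

Contrast that with a self-managed install, where the highly available control plane and etcd quorum behind those few flags are yours to design, build, and babysit.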
The takeaway from this shouldn’t be to run out and spin up a 500-node GKE cluster if you’ve never used Kubernetes. Start with a Minikube install on your local machine to get a feel for what Kubernetes is and does. Go through Kelsey Hightower’s “Kubernetes the Hard Way” if you want to be thorough and learn what it takes; in fact, I recommend this approach. Once you’ve seen the work it takes to really make a Kubernetes cluster, you can be even more appreciative of the freedom managed Kubernetes services grant you from managing the management.
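That local first step is a short loop. A sketch assuming Minikube and kubectl are already installed (the deployment name and image are just illustrative):

```shell
# Start a single-node local cluster and poke around.
minikube start
kubectl get nodes          # one node, acting as both control plane and worker

# Deploy something small and watch Kubernetes schedule it.
kubectl create deployment hello --image=nginx
kubectl get pods

# Tear it all down when you're done experimenting.
minikube delete
```

Everything you learn at this scale, such as Deployments, Pods, and Services, carries straight over to a managed cluster; only the machinery underneath changes.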
Seth McCombs is a Tigera guest blogger. He is an SRE/DevOps/Infrastructure engineer and all-around container advocate, with a love of Open Source and Cloud Native.