No matter where you are in your Kubernetes journey, eventually you’ll have to connect your k8s cluster to external resources like databases, cloud services, and third-party APIs. A majority of existing workloads are non-Kubernetes, and at some point, your Kubernetes applications will need to communicate with them.
Before you can do that, security teams, as well as database and application owners, will require you to limit access to specific individuals or groups. Nearly every application has dependencies external to Kubernetes that require some level of access control, yet Kubernetes does not natively enable fine-grained egress access controls.
In this on-demand webinar, you will learn how to:
- Securely migrate k8s workloads/applications into production and control access to external resources
- Limit k8s egress to external endpoints on a granular, per-pod basis
- Simplify this process using Calico Enterprise
The webinar is ideal for Platform Engineers, Cloud Engineers, and anyone else who is responsible for deploying and maintaining a Kubernetes platform.
Welcome everyone to today’s webinar, Kubernetes Access Controls with Calico Enterprise. I’m your presenter, Andy Wright from Tigera. Today we’re going to cover three topics. We’re going to cover the Kubernetes journey and what we’ve discovered through hundreds of engagements with companies adopting Kubernetes. We’re going to talk about some of the common challenges, and we’re going to dive into the details of one of those challenges, pod-level access controls, and how Calico Enterprise helps with that particular use case.

We see the Kubernetes journey as typically five stages. And apologies in advance: the webinar platform seems to be changing the formatting on this slide, as well as a few of the other slides covering the journey. In stage one, there’s a lot of education going on: learning the basic concepts of Kubernetes and looking into some of the foundations of networking, storage, and network security. At this stage, you’re generally determining whether Kubernetes is a fit for you and whether it’s something you’d like to spend more time learning about.

If Kubernetes is a fit, you generally move into stage two. Here you’ll have more of a lab or sandbox environment; you might have a couple of cloud instances or some hardware on-prem that you’re working with. What you’re working on is deploying a working cluster and starting to design an operating model: how would this be managed, how would we deploy to it, how would we troubleshoot issues? At this point you’re also starting to think about the architecture you’d like to deploy to.

Stage three is generally where we first start running real applications in Kubernetes, and generally it’s going to be something that’s not mission critical.
It’s probably going to be a clone, or something running in parallel to the real application that end users are using. In this stage you’re trying to gain confidence that you’ll be able to run these applications internally, and you want to build up a little muscle memory: how does Kubernetes work, how do we resolve issues if they happen, and how do we engage with other teams to get them to deploy their applications? At the end of stage three, what we typically see is that whoever’s championing this project will go to their management team, ask for budget, and ask to create a real initiative to push Kubernetes out into more of a production-type application.

That’s when we enter stage four, where we take our first application and deploy it to production. Generally this isn’t going to be multiple applications serving multiple users, and it’s probably not going to be your mission-critical application, but it will be a production application, and what we’ll do is basically prove this out. Once we get into this stage, we’re probably going to be engaged with multiple teams. If you’re running a production application, you’ll be working with a network engineering team, a security team, and in some cases a compliance team, along with the DevOps engineers and platform engineering. So a lot of people get engaged at this stage, and once it’s proven out, other teams usually want in on the opportunity and deploy to the Kubernetes platform, and you’ll see more applications being rolled out.

Now, there are different challenges, or different needs, that typically occur at the different stages of this Kubernetes journey. Across stages one and two, what’s generally needed is just basic training and learning best practices. You might be going to meetups, talking to other peers, or engaging with different consultants.
You want to learn how Kubernetes might integrate into your enterprise environment: how would this integrate into the cloud environment we’re using, or into the on-prem environment? You’ll also need to start learning how to troubleshoot things. This is all about the learning experience.

As you move into running a pilot or pre-production stage, what we see are really three hurdles that folks have to get through. The first is what we call pod-level access to external resources. Kubernetes applications may need to connect to things that are not inside the cluster. This could be a database, a cloud service, or another application’s API. That’s really the first challenge, and we’ll talk more about it today. We also see visibility and troubleshooting as an issue: if a problem does occur within the cluster, how do you dig in, understand where the problem is, and quickly resolve it? And we also look at security controls. These can be fairly basic; in many cases it could be that we need isolation between a development and a production environment. They can be more advanced as well: for example, this is an application that needs to be GDPR compliant, or this platform needs to eventually have a SOC 2 certification. How do we do that?

So today I’m going to talk about pod-level access; I’ll cover some of the other challenges in future webinars. Let me walk through the challenge a bit. Kubernetes is going to be orchestrating containers, and in most cases, in the first steps of this journey, these aren’t necessarily microservices being deployed to Kubernetes. Oftentimes they are existing applications that are being containerized and deployed to the platform. In the case of new applications, they will oftentimes need storage.
So they might need to be working with an S3 bucket, for example. They’ll need to be interacting with data, so they’ll probably have a database, and oftentimes they’ll be working with third-party APIs. If you’re taking an existing application and migrating it to Kubernetes, those applications are often going to need to connect to a myriad of other resources that may be in your data center or in your cloud. The key point here is that these applications aren’t really an island within Kubernetes; they generally have to connect outside.

As an example, you may want one specific pod to connect out to an RDS database that holds customer data, and another pod that connects out to an S3 bucket, and oftentimes that data is going to be housed inside of a security group. You might also have pods that connect out to third-party APIs; for example, you might need some telco-type services and connect out to Twilio.

Now, the challenge here is allowing access, because within your operating model you probably don’t want all pods to be able to connect to that database, right? You want to be able to specify which ones should be allowed to. If you look at the case of AWS, for example, and working with security groups, your options for access controls are to put in a CIDR range, an IP address, or a security group. So which do you use? If you use a CIDR range, then you basically give the entire cluster access to that resource. For example, anything deployed to Kubernetes would be allowed to connect to that RDS database or that S3 bucket. That’s probably not the best option, because in most cases you want to be able to control access, and you’ll have security teams and data owners who will want to make sure that only the specific workloads that are allowed to connect can connect. So an IP address is an interesting one to use.
However, Kubernetes dynamically assigns those IP addresses to pods. So as you scale up and scale down, or if you experience an issue and Kubernetes redeploys a pod, the security group is going to start blocking access, and you’ll actually experience some outages. So this option is probably not a good one either. Using a security group, you can put your cluster in a security group, but again the entire cluster now has access, which isn’t secure and typically isn’t going to be allowed in a production setting.

If you’re in a different cloud, the concept of a security group goes by different names, such as firewall rules, or network security groups on Azure, but it’s all pretty much the same. If you’re on-prem, firewalls are generally used here, and again, those firewalls are really focused on IP addresses, and as we’ve already discussed, IP addresses really don’t work because Kubernetes pods are dynamic.

So this is where Calico Enterprise comes in to help solve this particular problem. Calico Enterprise is the enterprise version of the free and open source Calico. What you get with it is a web user interface, which makes it easier to work with and allows more people to get involved with the Kubernetes project. It provides visibility into the network and troubleshooting tools; a policy workflow, which enables you to roll out changes into your cluster without incurring any kind of downtime; and access controls outside the cluster, which is the focus of today’s discussion. We also offer more advanced capabilities as you move into stages four and five, such as compliance reporting and threat defense; we block a lot of the common hacks that you can find in the news, for example.
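To make the limitation concrete, here is a minimal sketch of what egress control looks like with a standard Kubernetes NetworkPolicy; the namespace, pod label, CIDR, and port are hypothetical, chosen only for illustration. Notice that the rule can only name the destination in IP terms:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: prod            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: billing           # hypothetical pod label
  policyTypes:
    - Egress
  egress:
    - to:
        # The only way to identify the database here is by IP.
        # A narrow /32 breaks when the endpoint's IP changes;
        # a wide CIDR opens access to everything in that range.
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 5432         # e.g. a PostgreSQL-compatible database
```

Either way you face the trade-off described above: the policy is tied to addresses that are dynamic on the Kubernetes side and potentially dynamic on the destination side as well.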
And we also do multi-cluster federation, for when you get to the point of having multiple teams running multiple clusters and you’d like to manage them all in a central way.

So the solution we’re offering with Calico Enterprise is a way to limit egress from one specific pod out to one specific resource, and to do this based on network policy. Calico Enterprise has extended the network policy model to include egress rules that can reference DNS endpoints; we’ll talk about that in a moment. Now, if you notice, this is an RDS instance, and the best practice here is to connect to the instance using a fully qualified domain name, because the IP addresses of the underlying instance could be changing within the cloud environment, while the fully qualified domain name (FQDN) stays static. So this is really the best practice for connecting to any resource, whether on-prem or in the cloud.

Within the Calico model and our network policies, we offer something in addition to what’s available in open source, called allowed egress domains. In this case, you can define a fully qualified domain name as well as a port, and you can allow one specific pod, or a set of pods based on a label selector, access to that external resource. Ultimately, this enables you to simplify a Kubernetes integration into your environment. When you deploy and need to connect to existing applications running either in the cloud or on-prem, you’ll need to be able to specify the specific workloads that can connect; generally, the database owners, as well as the security team, don’t want just any workload to be able to connect. So this keeps you compliant with your security requirements, and it also enables an incremental migration of existing applications.
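As a sketch of what a DNS-based egress rule can look like (the policy name, namespace, pod label, and RDS hostname below are hypothetical placeholders), a Calico Enterprise egress rule can name the destination by domain rather than by IP:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-rds-egress     # hypothetical policy name
  namespace: prod            # hypothetical namespace
spec:
  selector: app == 'billing' # only pods carrying this label
  types:
    - Egress
  egress:
    - action: Allow
      protocol: TCP
      destination:
        # Calico resolves the domain via DNS, so the rule
        # keeps working as the underlying IPs change.
        domains:
          - "mydb.abc123.us-east-1.rds.amazonaws.com"
        ports:
          - 5432             # e.g. PostgreSQL on RDS
```

The label selector scopes the rule to the specific pods that are allowed out, and the FQDN keeps the rule stable even though both the pod IPs and the database’s underlying IPs are dynamic.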
Oftentimes what we see is that there are initiatives to modernize and move to containers, but it’s very, very hard to just rewrite these applications. So when one of these applications moves into a containerized environment in Kubernetes, it’s generally going to need to be able to connect to some of those other applications outside of the cluster.