How To Extend Firewalls to Kubernetes Without Breaking Existing Security Architectures


Security teams use firewalls to secure their production environments, often using a zone-based architecture, and Kubernetes does not deploy well to that architecture. Application teams are launching new business-critical applications on Kubernetes and are aggressively moving to production. A clash is bound to happen.

In this webinar, we describe an approach to extend firewalls to Kubernetes that will accelerate deployment to production, save time & money, and preserve existing security processes and investments.

Michael: Hello, everyone, and welcome to today’s Tigera webinar, Extending Firewalls to Kubernetes Without Breaking Existing Security Architectures. I am pleased to introduce today’s speaker, Amit Gupta, vice president of product management at Tigera.

Michael: Amit is responsible for the strategy and vision of Tigera’s products, and leading the delivery of the product roadmap. He has expertise in building software products and services across cloud security, cloud native applications, public and private cloud infrastructure, and he holds an MBA from Haas School of Business and a Bachelor of Technology in Mechanical Engineering from IIT Kanpur.

Michael: So, before I hand the webinar over to Amit, I have a few housekeeping items that I’d like to run through. Today’s webinar will be available on-demand after this live session is over; it takes about five minutes to process, and then you can watch it again if you want to, and it’ll be accessible through the same link you used to get here.

Michael: We’d love to hear from you. We love interactivity, we like questions, and I encourage you to ask as many questions as you want, and please, ask them when you have them and we will try to answer them when they come, so that they’re relevant. That being said, we will hold a time at the end for Q and A. If you have questions, we can then finish off any questions we didn’t get to, or if you have new questions we’ll get to them there.

Michael: Without further ado, let me get it over to our presenter today, Amit. Amit?

Amit: Thank you, Michael, and good morning everybody. Very excited about this topic we have today. As Michael mentioned, feel free to ask questions during the presentation, but for today’s session, what I’ve done is I’ve split the discussion into three parts.

Amit: The first part will talk about how users and customers are managing network security using zone-based architectures, a topic that might be very familiar to some of you. So, we’ll start with a quick primer on that.

Amit: In the second part, we will focus primarily on how running container-based workloads, particularly on Kubernetes-based infrastructure, breaks or creates challenges for you as you implement zone-based security for these workloads and applications.

Amit: And then last, but not the least, we will walk through a specific approach for how you can actually extend your existing security architecture to Kubernetes-based workloads, at the same time keeping your investments in your existing tools and processes, while gaining a more fine-grained security and compliance posture for these workloads.

Amit: So, that’s how we have planned the agenda for today. There’s a lot of topics here, so I’ll jump right into this.

Amit: Let’s start with a very brief primer on security zones. In your specific organization, your security teams or your networking teams may have a slightly different naming convention, but in this illustration I have taken some basic examples of what we are seeing across the various Kubernetes teams, users, and enterprises deploying security zones.

Amit: So, in this example you can see that my entire infrastructure is split across six different zones. You could have only three different zones in your environment, but let’s run through this one by one.

Amit: Starting with the first one, this is your public Internet, this is what you call your untrusted zone. This is where you are, essentially, protecting traffic coming from the bad guys into your internal networks.

Amit: The second aspect here is your semi-trusted, your DMZ. Typically, this is where you are putting or hosting your user-facing workloads, where you’re accepting traffic from untrusted zones. This is also the zone where you would be hosting your partner-facing workloads, any business apps that you need to be able to give access to your partners who are outside your corporate network. You would be hosting those workloads in this specific zone.

Amit: The next one is the trusted zone, and this is where you are hosting applications, typically middleware components, typically business apps that essentially just need to be accessed inside your network by your internal users. You will also be hosting, in this trusted zone, applications that provide the middleware components for your user-facing workloads.

Amit: The third zone, which is marked as restricted zone in this illustration, is essentially where you’re hosting your crown jewels. This is where you put high-risk applications, highly regulated infrastructure, and any customer data, any confidential data that is relevant for your specific organization, that’s what you would put in a restricted zone.

Amit: In addition to these three zones that we talked about, your DMZ, trusted zone, and restricted zone, we often see examples like the management zone. This is where, in this specific security zone, you are hosting workloads that are related to your IT infrastructure. This could be virtualization infrastructure, this could be backup infrastructure, your monitoring components, your infrastructure services, and they’re all sitting in this management zone.

Amit: And then, last but not the least, is the audit zone. This is where you’re hosting, let’s say, some of your security ops infrastructure: your security incident and event management tool, your event correlation systems, your telemetry data, your centralized logging infrastructure.

Amit: Again, the specific zones and how they are grouped vary, quite frankly, across different enterprises, depending on how you designed your infrastructure.

Amit: Now, let’s go from there to how these zones actually get applied and used in an environment. The most common approach is, essentially, a profiling of your application based on the type of application, the risk profile, the type of access, and who needs to be able to access these resources. Once you profile the application, however and whatever the criteria are, then you essentially host those applications alongside others with similar risk profiles.

Amit: For example, if I have a certain component of my application that needs to be accessed through the public internet, I’m going to host that application in my DMZ. Or, if I have a crown jewel, an enterprise app that I absolutely do not want to be anywhere near the public Internet, I’m probably going to host that in the restricted zone. Or, an application that needs to be able to access a customer database that I have within my enterprise, I’ll probably host that in my restricted zone, and so on.

Amit: Most enterprises would essentially profile their applications, and then host these applications in the respective security zones. The one other aspect that is important to note is how you actually do access control and visibility across these different security zones. There are quite a few models out there; the most common one that we see among our customer base is that you would use a set of firewalls to prevent or allow access from one zone to another, do that [inaudible 00:08:59] access control, and then also use the same firewalls to monitor the traffic going between zones.

Amit: So, that’s the most common model. Now, if some of you are wondering, well, that’s a really [inaudible 00:09:13] kind of architecture, and we’re hosting our applications in the cloud and that’s not applicable for us. Well, let me show you what we have seen across the cloud, as well.

Amit: Irrespective of what tool or technology you are using to do access control and monitoring across these different zones, there are models in cloud, as well, where you can enforce and implement these zones in your cloud-based hosted applications.

Amit: I’m not going to go through a detailed architecture of how it is done in this example, in Amazon Web Services, but this is just an example to show you that people are implementing a zone-based network security architecture even in the cloud. They may or may not be using, let’s say virtual firewalls. They might just be doing that through security groups in the cloud-based infrastructure. Here are similar examples from Azure, and then I have one more illustration that shows how you will set up a zone-based architecture in Google Cloud Platform.

Amit: There are various different ways you can implement zone-based security, either on-prem or in the cloud, and we’ll double-click on specific areas where container-based workloads create some challenges, but first I want to quickly summarize why you’re putting this zone-based architecture in place, and the key aspects there.

Amit: To start with, primarily, you are establishing access control and boundaries for the various risk and security zones that you have defined for your applications. You are enforcing controls around how to manage inter-zone communication, and enforcing policies that require you to isolate workloads from one zone to another. You’re leveraging these tools and processes to monitor any network traffic across these zones, primarily because you’re trying to detect any violations there, or troubleshoot any network connectivity issues.

Amit: And then last, but not the least, is if there are, indeed, gaps and violations, then you want to be able to remediate and contain communication, both inbound and outbound from these zones. That’s the basic set of controls and visibilities that you have put in place.

Amit: Now, let’s go to the second part of the discussion, where we will spend time on understanding how these zones actually apply to a Kubernetes-based infrastructure, and what specific challenges users run into.

Amit: Before we go into the specifics of network security for Kubernetes workloads, I want to compare and contrast how containerized applications running in a Kubernetes-based infrastructure are different from your traditional applications running in a virtualized form factor, in VMs or even on bare-metal servers.

Amit: Typically, these traditional applications, they are long-lived. They are stateful, they are not changing as frequently as we are seeing in these modern, microservices-based applications. So, there’s a well-defined, rather static identity for these applications on the network.

Amit: When you compare that to microservices-based applications, or a microservices-based architecture running in a Kubernetes-based infrastructure, these applications are containerized in form factor, highly dynamic, stateless for the most part, ephemeral, and short-lived.

Amit: What this implies is that any approach, or any architecture that relies on the network identity of these applications to be IP address, those approaches are not going to work very well to enforce any kind of security architecture, let alone security zones.

Amit: That’s the key point I want to make here. Now, let’s jump into what this does to your security zones. In this illustration, I’m showing you a Kubernetes cluster. That cluster is hosting workloads from three different zones: you have some workloads that are categorized for the DMZ, some workloads categorized for the trusted zone, and some workloads that are categorized, or have been profiled, as restricted workloads.
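
One lightweight way to make such zone profiles visible to Kubernetes itself is to tag each workload with a zone label. The label key and values below are illustrative conventions for this sketch, not something prescribed by Kubernetes or Tigera:

```yaml
# Illustrative only: tag each workload with the zone it was profiled into,
# so that network policies can select on it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        zone: dmz        # hypothetical zone label
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0   # placeholder image
```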

Amit: You have your user traffic coming in from the outside world, and that traffic has to go through your firewall, and the security that you have put in those firewalls. If this is in the cloud, these firewalls could be the native security that you get from the cloud provider: security groups, or whatever your cloud provider is providing. Once the traffic goes through this security layer, it hits whatever ingress solution you are using inside your clusters, and then goes to your DMZ.

Amit: Now, from there on, if some of these workloads running in your DMZ zone need to talk to your trusted workloads, typically you would, again, send the traffic through a firewall and then filter it through the firewall back to your trusted zones. Again, I’m using the word firewall here, but this applies to your cloud-based infrastructure as well, so you might be managing this through security groups, or firewall rules in Google Cloud Platform. We’ve also seen customers do quite unnatural things to enforce this security architecture in a Kubernetes-based environment. Sometimes they would separate out the DMZ workloads and trusted workloads into separate clusters, sometimes they would use different IP pools for these different zones, and then manage security through these firewalls based on those IP pools.

Amit: All of those approaches are suboptimal, and create a strain and challenge in terms of how you automate deployment and security for these workloads.

Amit: Continuing on this illustration: if, let’s say, your trusted workloads actually do need to talk to some other cloud services that they may be consuming, or other parts of your software-as-a-service infrastructure, any traffic leaving the trusted zone for the public world is, again, enforced through firewalls. Because these firewalls, or security groups, don’t understand the identity of Kubernetes workloads, what you end up doing is creating a big set of IP ranges, opening those on your firewall, or allowing access from all your Kubernetes worker nodes to be able to consume these cloud services. In effect, you do not have a mechanism to enforce any fine-grained security there.

Amit: Last but not the least, if you need to send traffic from trusted workloads to the restricted zone, you follow the same approach: send the traffic through the firewall, adding some latency for your application, but that’s really the model most folks have used.

Amit: There are three main technical problems with this architecture. First, these firewalls or security groups do not understand the identity of Kubernetes workloads, so you have to rely on a non-cloud-native mechanism to segment the traffic: either IP addresses, or tainting your nodes so that only a certain zone’s workloads run on given Kubernetes worker nodes. So, you would have to deploy some non-cloud-native operational procedures.

Amit: The second aspect is that, because these tools and technologies don’t understand Kubernetes identity, your visibility into traffic inside these clusters is quite limited, especially for any traffic going across these zones. The third aspect is that this creates several performance and operational issues as you host these dynamic workloads in these clusters.

Amit: How this manifests for you from a security perspective is that the granular security architecture you have in place for your traditional applications, or non-Kubernetes-based infrastructure, cannot be enforced in a Kubernetes-based environment, because the security zones that you are replicating through firewalls or security groups are going to have to be very coarse-grained.

Amit: The second aspect is that any fine-grained access control you are implementing for applications consuming cloud services and so on cannot be enforced in a Kubernetes world. And it doesn’t have to be just cloud services; no Kubernetes application runs in an isolated environment. They may need to access workloads that are in your data centers but outside your Kubernetes-based infrastructure. How do you enforce granular access control for those workloads?

Amit: And then last but not the least is you do not get any workload context or metadata for your network traffic, so you are always missing the context of which pod, which namespace, which application is consuming or is tied to this particular network traffic.

Amit: Compliance actually cuts across all of these aspects: how do you enforce your security control set, and how do you manage your evidence data and your audit information to prove compliance? While you may have been doing that for your traditional applications through those firewalls or security groups, it’s hard to enforce that and manage compliance for Kubernetes workloads using that existing toolset.

Amit: So, that was a quick summary of the specific challenges you run into when you are enforcing a zone-based network security architecture in a Kubernetes environment. Now, we’ll walk you through a particular design pattern: how you can enforce these security zones in your Kubernetes-based environment, and continue to use your existing tools, processes, and models.

Amit: Let’s walk through this again. Here is where you are, with respect to your current security zone architecture. A couple of things to call out here. In this model, as you can see, any traffic that is going from a pod to another pod inside the same zone, you literally have no visibility or access control there.

Amit: The first step there is you put network policies in place for each of these workloads, and manage security and visibility for all your traffic within the zone using Tigera Secure, and I’ll walk you through how that’s done. So that’s step one: within the zone, you basically whitelist all your traffic, you get the visibility from Tigera Secure, and now you know exactly what’s going on inside your cluster, and specifically inside each zone.
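
A minimal sketch of that first step with plain Kubernetes NetworkPolicy, assuming pods carry a hypothetical `zone: dmz` label (the label convention is illustrative; Tigera Secure layers its own policy model on top of this foundation):

```yaml
# Default-deny all ingress in the namespace, then explicitly whitelist
# traffic between pods of the same zone.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: web
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-dmz
  namespace: web
spec:
  podSelector:
    matchLabels:
      zone: dmz
  ingress:
    - from:
        - podSelector:
            matchLabels:
              zone: dmz    # only pods in the same zone may connect
```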

Amit: For traffic that leaves the zone, so traffic from the DMZ to the trusted zone, or from trusted to restricted, you can, again, implement the same kind of policies: fine-grained access control, all enforced at the source, on the pod, using Tigera Secure. If, let’s say, you have an application where the front-end can talk to the back-end and the back-end can talk to the database, the policies associated with your coarse-grained zones, and also your fine-grained policies with respect to each application, can be defined on individual pods and applications.
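
Sketched with standard Kubernetes NetworkPolicy and the same illustrative `zone` labels, a DMZ-to-trusted rule enforced at the destination pods might look like this (the namespace names and port are placeholders):

```yaml
# Illustrative DMZ -> trusted rule: trusted pods accept traffic only
# from DMZ pods, enforced at the pod rather than at a firewall hop.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dmz-to-trusted
  namespace: app
spec:
  podSelector:
    matchLabels:
      zone: trusted
  ingress:
    - from:
        - namespaceSelector: {}   # any namespace in the cluster...
          podSelector:
            matchLabels:
              zone: dmz           # ...but only pods labeled as DMZ
      ports:
        - protocol: TCP
          port: 8080              # hypothetical service port
```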

Amit: Now, you can still continue to use your firewall or security groups if you want to, for coarse-grained security. If you’ve got IP pools associated with a specific zone, you can continue to allow coarse-grained traffic from one IP range to another, but fine-grained security can be enforced using Tigera Secure across those zones.

Amit: And then the third aspect is, essentially, any traffic that is outbound, either to cloud services or API endpoints outside your network, or to endpoints outside your Kubernetes cluster but still within your datacenters: you can enforce fine-grained security and visibility for that traffic using Tigera Secure.
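
As an illustrative sketch of such an egress whitelist with plain Kubernetes NetworkPolicy, the CIDR and port below are placeholders for a real external service:

```yaml
# Illustrative egress whitelist: trusted pods may reach only an assumed
# external API range; all other egress from them is denied.
# (A real deployment would also need to allow DNS egress.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-trusted-egress
  namespace: app
spec:
  podSelector:
    matchLabels:
      zone: trusted
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # placeholder CIDR for an external service
      ports:
        - protocol: TCP
          port: 443
```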

Amit: To summarize, the key distinction from the security and compliance posture you had prior to implementing Tigera Secure is that you now have fine-grained security, you have deep workload-context visibility for all the traffic, and you can enforce some of the compliance control set, all while keeping your existing zone-based architecture.

Amit: Essentially, you will be able to extend your full zone-based network security architecture, however you’ve defined it, in a fine-grained way onto your individual Kubernetes workloads. You will also be able to enable fine-grained network security policies for egress access to cloud services and API endpoints, or your infrastructure inside the datacenter, and you get full workload context on all of that.

Amit: So, that’s a very typical design approach that we deploy to enforce a security zone architecture in your environment. Now, let me walk you through a little bit about Tigera Secure, and how do we implement this design pattern in your environment.

Amit: Before I go into the details of Tigera Secure product, it’s important to call out that our security architecture and the solution we provide works across your entire infrastructure, irrespective of where you’re running your Kubernetes clusters. You or your dev teams may be hosting these clusters across the major cloud providers, or inside your datacenters, using one of the most common commercial distributions out there, we’ll provide you a federated security across all your clusters. So, if you want to enforce policies that red can talk to green, it doesn’t really matter where red and green pods are running, we’ll be able to provide you a granular security enforcement and visibility across all your clusters.

Amit: There are three key aspects of our solution that enable this zone-based architecture across your environment. The first is a true zero-trust network security architecture, where you whitelist all the rules and all communication for your Kubernetes-based workloads. We provide deep visibility and threat detection for your clusters, and, last but not the least, continuous compliance across all your Kubernetes environments. So, unlike your traditional approach, where you have a snapshot-based assessment of your compliance ruleset, we provide continuous enforcement and continuous visibility across that infrastructure.

Amit: So, let’s go into each of these solution areas in detail. When we talk about zero-trust network security, there are four major principles to our solution. The first is establishing trust in individual pods and services based on multiple sources of identity data. We look at the cryptographic identity of the workload, and also the Kubernetes identity of the workload, before we establish trust and say this is, indeed, an accounting service or an accounting pod.

Amit: The second aspect is implementing a model of least privilege, and the way we do that is we provide a unified policy model, where you can define fine-grained security rules all the way from layer three to layer seven for your application. So, you can go as granular as your security architecture demands, all the way down to specific URLs and web methods in defining the whitelist rules.

Amit: The third aspect is the defense in depth architecture, where we do enforcement of these whitelist rules at multiple points of the infrastructure. So, in the unfortunate situation where if your infrastructure gets compromised, your application stays secure, or if a particular component of an application gets compromised, your infrastructure stays secure, and the rest of the infrastructure and applications continue to stay isolated.

Amit: And then last but not the least is that all data in transit across these clusters will be encrypted. That gives you a very good security and compliance posture across all of your Kubernetes-based environments.

Amit: The second aspect is around visibility and threat detection. Oftentimes, you need access or visibility inside your clusters for various use cases: you’re looking for indicators of compromise or indicators of risk inside your clusters, you’re trying to troubleshoot a network connectivity issue and figure out whether you have a network fabric problem or got the policies wrong, or you just need some log and audit data for your compliance evidence or for your toolset.

Amit: So, we provide deep visibility, starting with the first thing, which is very rich flow logs, and the flow logs provided by Tigera Secure, they have the entire workload context, including your Kubernetes metadata. So, for each connection inside your cluster, you’ll be able to see which source/destination pod, namespace, labels, policies applied, policy action, connection bytes, connection count, a lot of rich context available for each of the connection inside your clusters.
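
To illustrate why that workload context matters, here is a small Python sketch that aggregates hypothetical flow-log records into per-namespace-pair byte counts. The field names are invented for this illustration and do not reflect Tigera Secure’s actual log schema:

```python
from collections import defaultdict

# Hypothetical flow-log records; real logs would carry similar
# Kubernetes metadata (pod, namespace, labels, policy action, bytes).
flows = [
    {"src_ns": "web", "dst_ns": "app", "action": "allow", "bytes": 1200},
    {"src_ns": "web", "dst_ns": "app", "action": "allow", "bytes": 800},
    {"src_ns": "app", "dst_ns": "db",  "action": "deny",  "bytes": 0},
]

def bytes_by_namespace_pair(records):
    """Sum allowed-connection bytes for each (source ns, destination ns) pair."""
    totals = defaultdict(int)
    for r in records:
        if r["action"] == "allow":
            totals[(r["src_ns"], r["dst_ns"])] += r["bytes"]
    return dict(totals)

print(bytes_by_namespace_pair(flows))  # {('web', 'app'): 2000}
```

With only IP-level logs, the same traffic would be an anonymous list of ephemeral addresses; namespace and pod metadata is what makes this kind of aggregation meaningful.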

Amit: We also monitor your infrastructure, your cluster traffic, and your audit logs to detect any kind of anomalies or malicious traffic. If we see any kind of port scans, any kind of IP sweeps, any kind of recon activity inside your cluster, we’ll set that up as an anomaly alert for you to further research. We will also integrate with your threat feeds, and we’ll also provide a threat feed built-in into the product, and we’ll block any kind of known malicious traffic, giving you a very comprehensive threat defense inside your clusters.

Amit: We also provide a visualization interface. So, for example, if your developers need to troubleshoot a particular service incident, and they’re trying to figure out if they have a misconfiguration or a misconfigured policy, they can navigate down to their individual workload in a [inaudible 00:31:56] namespace, and figure out which policies are applied. It’s very handy when you’re trying to troubleshoot any issues or service incidents.

Amit: And then last but not the least, all the telemetry, all these alerts, all this data can be sent to your security operations center, your log aggregation platform, or whatever your security orchestration workflows are; it can be integrated right into that.

Amit: The third aspect of our solution is around compliance. As you are preparing or planning your architecture for, let’s say, PCI, SOC 2, ISO 27001 or whatever the compliance regulation or control framework that you use, the first aspect is you take that control set, or the compliance requirements, and map that to specific security controls that you will enforce in your Kubernetes-based workloads. We provide a mapping from various different compliance frameworks to specific network security controls.

Amit: We provide you Kubernetes RBAC-driven controls, and access controls around managing these security controls. So, if your security teams are setting a set of compliance controls, how do you make sure that your developers, who [inaudible 00:33:30] workloads and policies, don’t tamper with the security controls around them? We provide granular RBAC controls for that.

Amit: Any non-compliance with these security controls that you have implemented will essentially be raised as a real-time alert on the rule violation. That makes it relatively easy for you to track any non-compliance events, rather than relying on a periodic assessment of your environment done on a weekly, daily, or monthly basis.

Amit: And then last but not the least is we provide detailed guidance and evidence reports, so as you are preparing for your audits for specific compliance programs, you have detailed visibility into your in-scope assets, the policies, and the rules enforced for those assets. So, that gives you a very strong set of capabilities to manage compliance for your infrastructure.

Amit: So, that’s, in a nutshell, a quick summary of all the capabilities and how you implement a zone-based security architecture using Tigera Secure.

Michael: Oh, we do have a question. So, Amit, can you take the question? The question is: is this based on Kubernetes network security policies as the foundation for controlling flows?

Amit: Yeah, absolutely. So, yes, that’s definitely true. In Tigera Secure, we implement the security architecture based on Kubernetes network security policies. In addition to that, we also have Calico network security policies, which actually provide you an even broader set of capabilities. If you want to be able to define your network security rules from layer three to layer seven, you can do that through Calico policies.

Amit: But, fundamentally, to your question: yes. This architecture and this solution are implemented with Kubernetes network policies, and that’s definitely the foundation for managing the flows, and also for providing the visibility.
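
For reference, a basic open-source Calico v3 policy of the kind Amit describes might look like the sketch below; the selectors and port are illustrative, and the layer-seven capabilities he mentions belong to Tigera’s commercial offering rather than this basic form:

```yaml
# Calico NetworkPolicy: selector-based, with explicit Allow/Deny actions
# and ordering, which plain Kubernetes NetworkPolicy does not offer.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-dmz-to-trusted
  namespace: app
spec:
  order: 100                     # lower order is evaluated first
  selector: zone == 'trusted'    # hypothetical zone label
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: zone == 'dmz'
      destination:
        ports: [8080]            # placeholder service port
```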

Michael: So, thank you for attending, and we will see you at our next webinar coming up soon. Thank you, and goodbye.