Internet-facing applications are among the workloads most frequently targeted by threat actors. Securing them is a must in order to protect your network, but the task is more complex in Kubernetes than in traditional environments, and it poses some challenges. Not only are threats magnified in a Kubernetes environment, but internet-facing applications in Kubernetes are also more vulnerable than their counterparts in traditional environments. Let’s take a look at the reasons behind these challenges, and the steps you should take to protect your Kubernetes workloads.
Threats are magnified in a Kubernetes environment
One of the fundamental challenges in a Kubernetes environment is that there is no fixed, enumerable set of ways workloads can be attacked. There are a multitude of ways an internet-facing application could be compromised, and a multitude of ways such an attack could propagate within the environment.
Kubernetes is designed so that, by default, anything inside a cluster can communicate with anything else inside the cluster, giving an attacker who manages to gain a foothold broad lateral reach and a large attack surface. Because of this design, any time you have an internet-facing application it is best practice to assume compromise and isolate individual microservices from everything else in the cluster. Otherwise, you’re effectively inviting an attacker in.
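As a minimal sketch of that isolation, a default-deny Kubernetes NetworkPolicy can close off all traffic to and from every pod in a namespace unless a more specific policy allows it. The namespace name below is illustrative:

```yaml
# Deny all ingress and egress for every pod in the namespace by default.
# Traffic is only permitted where another, more specific policy allows it.
# The namespace "shop" is a hypothetical example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: shop
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With this in place, each microservice's allowed connections must be declared explicitly, which is the starting point for assuming compromise.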
This issue of propagation and lateral movement does not surface as much in traditional environments for a couple of reasons:
- Ability to create security policies in advance – Most applications in traditional architectures are static, so security teams have the luxury of creating security policies and rules in advance, which affords a bit of a barrier.
- Security zoning – Security zoning is a best practice in traditional architectures to ensure there is a controlled zone (the DMZ) and a restricted zone with a high level of scrutiny.
While the static nature of traditional environments helps security teams protect against the threat of propagation and lateral movement, the dynamic nature of Kubernetes has really opened it up to these dangers. And while a modern shift-left security approach empowers engineers and allows for automation of the CI/CD pipeline, that same approach makes it difficult to predict exactly what will be deployed inside clusters, and when.
Internet-facing applications in Kubernetes are more vulnerable
There are a few reasons why internet-facing applications in Kubernetes are more vulnerable to cyber attacks and data breaches than the same type of application in traditional environments.
- Everything is orchestrated and dynamic – The dynamic nature of the Kubernetes ecosystem prevents you from designing static security policies and rules ahead of time.
- Microservices increase the potential attack surface – The workloads deployed in Kubernetes containers use a service-oriented architecture where the application is split up into different microservices, each making API calls.
- Any microservice deployed in Kubernetes could be internet facing – Whether it’s an application that anyone on the internet can access (e.g. a shopping cart or customer registration service) or an application that needs access to external resources like a web API or a database in AWS, as soon as a microservice accesses something on the internet, it is vulnerable to attack.
The same challenge does exist outside of Kubernetes when applications need to communicate with external resources. However, because application workloads and architecture in traditional environments are more static, security teams have the luxury of designing security policies ahead of time, and can even design a workflow process to make changes to policy rules. With Kubernetes, security teams cannot predict which applications will need to call out; and when they do call out, it’s impossible to predetermine which IP addresses those calls will come from or which external endpoints they will reach.
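Where an external dependency does happen to have a stable, known address range, a plain Kubernetes egress policy can pin outbound traffic down to it. The labels, namespace, and CIDR below are all hypothetical examples:

```yaml
# Allow "checkout" pods to call out only to a known external database
# range; combined with a default-deny policy, all other egress is blocked.
# The app label, namespace, port, and 203.0.113.0/24 range are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: checkout-egress-db
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # example address range for the external database
      ports:
        - protocol: TCP
          port: 5432               # example database port
```

The catch, as noted above, is that in a dynamic environment these addresses are often not knowable in advance, which is why vanilla IP-based egress rules only go so far.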
Protecting your Kubernetes environment from threats
Building a multi-layered defense for your Kubernetes clusters will help protect your environment from threats.
- Scan for known vulnerabilities – This is basic good security hygiene, and the good news is that most platform vendors provide this with their platform. While scanning for known vulnerabilities is necessary, it is not sufficient on its own.
- Integrate threat feeds – There are many well-known and comprehensive threat feeds available, both open source and commercial. Integrating reliable threat feeds lets you detect and block traffic to and from known malicious IP addresses and domains, both inside and outside the cluster.
- Establish granular workload access controls – At the north-south level, add controls to decide who from the outside world gets access to any services you’re publishing in the cluster. Within the cluster, define controls that dictate which services get to communicate externally and which external resources those services can access.
- Build microsegmentation with east-west controls – Any time you have an internet-facing application, build segmentation and protection around it to limit the blast radius of potential attacks coming from a compromised microservice.
- Monitor for anomalies – Assume that your cluster has already been compromised and monitor for anomalous behavior within it.
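As an east-west sketch of the microsegmentation step, a policy can restrict a service so that only its intended caller can reach it, limiting the blast radius if some other microservice is compromised. The labels, namespace, and port here are hypothetical:

```yaml
# Only pods labeled app=frontend may reach the cart service, and only on
# its service port; any other compromised pod in the cluster cannot.
# The "shop" namespace, labels, and port 8080 are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cart-allow-frontend-only
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: cart
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```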
You can take your security to the next level by encrypting all data in motion, so that even if a workload is compromised, intercepted traffic remains unreadable. And if you have the resources, you could also analyze patterns of data to identify anomalous behavior that warrants inspection.
How Calico can help
Calico Enterprise and Calico Cloud include several features that can be used to secure internet-facing applications in a Kubernetes environment.
- Workload access controls (north-south controls) – Calico enables fine-grained access controls between your microservices and databases, cloud services, APIs, and other applications that may be protected behind a firewall. It also offers integrations to extend next-generation firewall capabilities to your microservices running on Kubernetes.
- Microsegmentation (east-west controls) – Calico provides a common segmentation model that works across all of your environments, and scales to meet the expansion or contraction of your microservices environment.
- Zero trust – Calico enables a zero-trust environment built on three core capabilities: encryption, least privilege access controls, and a defense-in-depth security strategy (ensuring connections have been authorized at the host and pod levels, and then again at the container level).
- Encryption of data in transit – Calico uses open-source WireGuard to implement data-in-transit encryption, eliminating the operational complexity involved with standard approaches such as TLS and IPsec. No matter where a threat originates, data encrypted by Calico is unreadable to anyone except the legitimate keyholder, thus protecting sensitive data should a breach occur.
- Intrusion detection – Calico offers anomaly detection using machine learning, as well as threat feed integration to identify IP addresses for known bad actors.
- Automated quarantine – Calico offers the ability to do deep packet inspection on suspicious activity, leverage feeds like SNORT to inspect traffic, and then automatically quarantine any compromised pods.
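As a sketch of the encryption feature, Calico's WireGuard-based data-in-transit encryption is toggled through the cluster's default FelixConfiguration resource, assuming Calico is installed and the nodes have WireGuard support:

```yaml
# Enable WireGuard encryption for pod-to-pod traffic cluster-wide.
# Assumes a Calico installation and WireGuard-capable node kernels;
# apply with calicoctl or kubectl against the projectcalico.org API.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true
```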
As some of the most common workloads on the web today, internet-facing applications will likely continue to be a popular target for cyber attacks, perhaps increasingly so. Given Kubernetes’s default-open design and dynamic nature, it is extremely important to take steps to secure internet-facing applications in order to ensure your network is protected. While Kubernetes poses some challenges in this regard, implementing a multi-layered defense will give you the best chance at success.
To learn more about new cloud-native approaches for establishing security and observability with Kubernetes, check out the early release of this O’Reilly eBook, authored by Tigera.