Kubernetes is dynamic and hard to secure or monitor using existing tools. This has a significant impact on your security and compliance controls.
- Traditional solutions like perimeter security, zone-based security, and static firewalls are neither scalable nor flexible enough to meet security controls for Kubernetes.
- Monitoring tools provide little context on microservices traffic.
- Compliance tools designed to provide audit trails were built for static applications and environments, and won't work properly for Kubernetes.
- Since microservices use the network, several teams – development, platform, networking, security – now need access to security tools that let them work together in an agile manner.
Kubernetes changes the way that we implement security controls. Here are some best practices for securing those environments.
- Deploy what you know
Not knowing what is inside your containers is a problem: downloaded container images can contain vulnerable code, making the running containers exploitable by attackers. It is therefore crucial to practice good security hygiene and understand exactly what is being deployed.
Container scanning helps to an extent by screening out images with known major CVEs. Assuming your code does what you intend, you also want to make sure that no one can change it without your knowledge, so keep it in a source control system such as Git.
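The admission decision described above can be sketched in a few lines. This is an illustrative example, not a real scanner API: the `findings` data mimics the kind of CVE list an image scanner produces, and the severity threshold is an assumption you would tune to your own policy.

```python
# Hypothetical sketch: gate a deployment on image scan results.
# The findings data and severity threshold are illustrative only.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def admit_image(scan_findings, max_severity="MEDIUM"):
    """Reject an image if any finding exceeds the allowed severity."""
    limit = SEVERITY_RANK[max_severity]
    blocking = [f for f in scan_findings
                if SEVERITY_RANK[f["severity"]] > limit]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2023-0001", "severity": "LOW"},
    {"id": "CVE-2023-0002", "severity": "CRITICAL"},
]

ok, blocked = admit_image(findings)
print(ok)                # False: the CRITICAL finding blocks admission
print(blocked[0]["id"])  # CVE-2023-0002
```

In a real cluster this kind of check would run in CI or in an admission controller, so a vulnerable image is rejected before it ever becomes a running container.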
- Ensure deployments are in compliance
Since IP addresses are ephemeral in a Kubernetes environment, we need to use other attributes to identify and audit workloads. This might include metadata and labels that identify infrastructure that must align to your compliance controls. When you use labels or fingerprints as an identity, you can begin attaching the security controls you need for compliance.
The next step is to write policies that interact with your labels. If an element in your architecture is labeled for PCI, then it doesn’t matter what specific kind of element it is. Your PCI-related policies should apply automatically, no matter what.
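The label-driven policy idea above can be sketched as a simple selector match. The policy names and label keys here (such as `compliance: pci`) are hypothetical; real Kubernetes network policies select workloads the same way, by matching a selector against workload labels.

```python
# Illustrative sketch: policies attach to labels, not to specific workloads.
# Policy names and label keys are hypothetical.

def policies_for(workload_labels, policies):
    """Return every policy whose selector is a subset of the workload's labels."""
    return [
        p["name"] for p in policies
        if all(workload_labels.get(k) == v for k, v in p["selector"].items())
    ]

policies = [
    {"name": "pci-restrict-egress", "selector": {"compliance": "pci"}},
    {"name": "default-deny", "selector": {}},  # empty selector matches everything
]

workload = {"app": "payments", "compliance": "pci"}
print(policies_for(workload, policies))
# ['pci-restrict-egress', 'default-deny']
```

Because the match is on labels alone, the PCI policy follows any workload that carries the `compliance: pci` label, regardless of what kind of element it is.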
Lastly, you need to make your policies immutable in practice: developers and other personnel should not be able to change them; only the security and/or compliance team can. If someone tries to change those policies, the security and/or compliance team should receive an alert.
- Make sure logs are meaningful and durable
You may have thousands or even tens of thousands of servers, each with hundreds of workloads. You need a solution that can correlate all of these workloads, so you can detect what's really happening in your platform. As one example, your containers need to have a consistent sense of time. Without it, your logs won't be in sequential order, so you can't determine cause and effect.
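The point about time can be made concrete with a small sketch: merging per-container log streams into one timeline only yields a meaningful cause-and-effect order if the containers' clocks agree (for example via NTP). The timestamps and messages below are illustrative.

```python
# Sketch: merge per-container logs into one ordered timeline.
# Correct ordering assumes synchronized clocks across containers.
import heapq

def merge_logs(*streams):
    """Merge already-sorted (timestamp, message) streams into one ordered list."""
    return list(heapq.merge(*streams, key=lambda entry: entry[0]))

frontend = [(1.0, "request received"), (3.0, "response sent")]
backend = [(2.0, "query executed")]

for ts, msg in merge_logs(frontend, backend):
    print(ts, msg)
# 1.0 request received
# 2.0 query executed
# 3.0 response sent
```

If one container's clock drifted, the backend's "query executed" could sort before the request that caused it, and the causal story in the merged log would be wrong.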
- Embrace Zero Trust network security
As we’ve mentioned, you need to assume that you have one or more compromises in your network. Acting accordingly means implementing multiple enforcement points in and outside of your pods and hosts. If a single enforcement point is compromised, the attacker still cannot undermine your overall security posture. By implementing multiple, interlocking, and partially redundant security controls at every level of your stack, you greatly reduce the chance that a single compromise can own your entire system.
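The layered-enforcement idea can be sketched as a chain of independent checks that must all pass. The checks below are hypothetical placeholders for real enforcement points such as a host firewall, a network policy, and service-level authorization; the request fields and identity names are assumptions for illustration.

```python
# Sketch of defense in depth: a request is allowed only if every
# independent enforcement point allows it. Checks are hypothetical.

def enforce(request, checks):
    """Allow only if every enforcement point independently permits the request."""
    return all(check(request) for check in checks)

host_firewall = lambda r: r["port"] == 443
network_policy = lambda r: r["source_ns"] == r["dest_ns"]
service_authz = lambda r: "payments-client" in r["identities"]

request = {"port": 443, "source_ns": "shop", "dest_ns": "shop",
           "identities": ["payments-client"]}
print(enforce(request, [host_firewall, network_policy, service_authz]))  # True
```

Note the design property this models: if an attacker defeats one check, the others still stand, so a single compromised layer does not open the whole path.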
- Orchestrate security just like you orchestrate code
Your orchestrator is in charge of the rest of your environment, so it should be in charge of security. Doing anything else creates a gap between your policies and their enforcement. Where the workload goes, the policy should follow.
Creating new microservices security models means adding more complexity to what are already very complex environments. Do it right, however, and you’ll be able to create a security model that’s even more secure than the model it’s replacing. For more information, check out our free webinar today.