Container adoption in the IT industry is growing dramatically. That surge is driving organizations toward the most popular orchestration platform around: they are jumping on the Kubernetes bandwagon to orchestrate and scale their container workloads. Kubernetes enables continuous integration and delivery; handles networking, service discovery, and storage; and can do all of that across multi-cloud environments.
But as with any new, rapidly evolving technology, Kubernetes comes with its share of security concerns.
As IT has become more invested in deploying with Kubernetes, we have started to see a corresponding rise in attacker interest in compromising Kubernetes clusters. If you want to avoid becoming the next data-breach headline, you have to think like an attacker when safeguarding your systems. In this post, I'll discuss a few things to watch out for if you're considering a move to Kubernetes, along with some tips for keeping your infrastructure secure:
We know that configuration errors flourish in the cloud: one recent study found that 70–75% of companies have at least one serious AWS security misconfiguration. With containers being a relatively new technology in software deployment, the likelihood of misconfiguration due to inexperience is only amplified. Many organizations adopt container orchestration platforms like Kubernetes before they truly understand the technology, and that inexperience leaves them especially prone to configuration mistakes as they deploy their applications.
As with any other application platform, Kubernetes works with secrets that should never be hardcoded, including passwords and API keys. Third-party tools such as HashiCorp Vault can encrypt such sensitive information, but many Kubernetes users rely on Kubernetes' own Secrets mechanism instead. There's nothing wrong with that, except that any configuration error could leave your private information dangerously exposed. Kubernetes has comprehensive documentation on the subject that you'll need to review and follow carefully to avoid such mistakes.
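As a sketch of the built-in mechanism, a Secret can be defined once and injected into a pod as an environment variable instead of being hardcoded in the image. All names and the image here are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # illustrative name
type: Opaque
stringData:
  password: "s3cr3t"          # stored base64-encoded, NOT encrypted by default
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:latest # illustrative image
    env:
    - name: DB_PASSWORD       # the container reads the secret from its environment
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

Note that base64 is an encoding, not encryption: anyone who can read Secret objects can read the value, which is why encryption at rest and tight RBAC around Secrets matter.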
Don’t let them wander in through the front gate, i.e., the API server
The Kubernetes API server acts as the front gate to any Kubernetes cluster. It is exposed on virtually every deployment, since it's required for managing the cluster. That's why safeguarding the Kubernetes API is so important. Fortunately, most Kubernetes deployments require authentication on this port. But it's still possible to expose it unintentionally, as Tesla discovered when it left the dashboard that forms part of its main Kubernetes API service exposed to the Internet without authentication.
The other way this can be exposed is if the "insecure API service" is enabled. As the name suggests, this isn't something you want reachable from an untrusted network. But we've seen it happen before, so it's definitely something you should check.
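On self-managed clusters, these are controlled by kube-apiserver flags; a hedged sketch of the settings you'd want to verify in the API server's static pod manifest (the insecure port was deprecated and has been removed entirely in recent Kubernetes releases):

```yaml
# Excerpt of a kube-apiserver static pod manifest (illustrative)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --insecure-port=0              # no plaintext, unauthenticated listener
    - --anonymous-auth=false         # reject unauthenticated requests outright
    - --authorization-mode=Node,RBAC # every request must pass authorization
```

Auditing these three flags is a quick first check before digging into finer-grained controls.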
You’re constantly vulnerable on the Internet
The first thing to recognize when securing cloud-based Kubernetes clusters is that attackers can find you on the Internet with remarkable ease. When you're spinning up a development or test system, it may feel as if there's no need to focus much on security, since you're not exactly broadcasting the system's existence. But with tools such as Shodan out there, it's trivial for attackers to locate promising targets.
It's easy for attackers to identify Kubernetes clusters, since they usually listen on a range of well-defined and fairly distinctive ports. A good illustration is etcd, which Kubernetes uses as its cluster database. It listens on port 2379/TCP, which is indexed by Shodan and therefore easily discovered.
Unauthorized connections between/to containers
Compromised containers can attempt to connect to other running pods on the same or other hosts to probe or launch an attack. Although Layer 3 network controls that whitelist pod IP addresses offer some protection, attacks over trusted IP addresses can only be detected with Layer 7 network filtering. Data exfiltration is often carried out using a combination of techniques, which can include a reverse shell in a pod connecting to a command-and-control server and network tunneling to hide confidential data.
The default behavior of many Kubernetes clusters, where a token granting access to the Kubernetes API is mounted into each container, can cause security issues, particularly if the token carries cluster-admin rights. This happens in clusters where RBAC isn't configured at all, or isn't configured properly. In such a configuration, an attacker who gains access to a single container in the cluster can easily escalate those privileges to gain full control of the entire cluster.
It is well worth locking this down as part of any production deployment. Ideally, not mounting a token at all is the best approach from a security standpoint. But if your setup really needs tokens, you must restrict the rights they have to cluster resources.
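Opting a pod out of token mounting is a one-line change in the pod spec (the pod name and image below are illustrative; the field can also be set on the ServiceAccount itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                  # illustrative name
spec:
  automountServiceAccountToken: false # no API token is mounted into the container
  containers:
  - name: app
    image: example/app:latest         # illustrative image
```

With this set, a compromised container has no credential to present to the API server at all.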
The other thing to watch here is the use of "privileged" containers in your Kubernetes deployments. A container running as privileged essentially disables the security mechanisms provided by Docker and allows code to run directly against the underlying system. It is therefore essential to avoid letting any part of your cluster run in this mode.
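A pod's securityContext is where these guarantees are expressed; a minimal sketch of a deliberately restricted container (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod                 # illustrative name
spec:
  containers:
  - name: app
    image: example/app:latest          # illustrative image
    securityContext:
      privileged: false                # never grant full host access
      allowPrivilegeEscalation: false  # block setuid-style escalation inside the container
      runAsNonRoot: true               # refuse to start if the image runs as root
      capabilities:
        drop: ["ALL"]                  # drop every Linux capability not explicitly needed
```

Most workloads run fine under these constraints; anything that doesn't deserves individual scrutiny.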
Lock it Down
Your cluster is only as secure as the systems running it: before digging into Kubernetes-specific security essentials, start with the systems Kubernetes runs on. If a host on which containers run (e.g., a Kubernetes worker node) is compromised, all kinds of bad things can happen, including privilege escalation to root, theft of secrets used for secure application or infrastructure access, and changes to cluster admin privileges.
Go through general OS hardening guidelines to secure the operating system itself.
Private Topology: If your infrastructure allows for private IP addresses, host the cluster in a private subnet and forward only the ports that are needed from the outside world, from your NAT gateway to your cluster. If you are running your cluster on a cloud provider such as AWS, this can be achieved with a private VPC.
Liberal Use of RBAC: In many cases, there is no better substitute for good security hygiene, especially when it comes to role-based access control in a Kubernetes environment. Ultimately, you want to apply the principle of least privilege: provide access only to the areas of your infrastructure that people need in order to do their jobs. Too much access invites unnecessary mistakes, even when the user has the best intentions.
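A least-privilege grant in RBAC pairs a Role with a RoleBinding; a sketch giving one user read-only access to pods in a single namespace (the namespace and user name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev                    # illustrative namespace: rights stop at its boundary
  name: pod-reader
rules:
- apiGroups: [""]                   # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create, update, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane                        # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Prefer namespaced Roles over ClusterRoles wherever the job allows it, so a leaked credential is contained to one namespace.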
Firewall Ports: This is a universal security best practice: never expose a port that doesn't need to be exposed, whether to the outside world or internally within your system.
Bastion Host: Do not provide direct public SSH access to each Kubernetes node. Use a bastion host setup, where you expose SSH on one specific host only and SSH from it into all other hosts. There are quite a few articles on the internet on how to implement bastions.
CIS Scans: The CIS Kubernetes Benchmark checks master and worker nodes and their control-plane components, producing specific guidance for securing your cluster setup. Running it should be the first step, before working through any specific Kubernetes security issues or improvements.
Network Policies: Network Policies are essentially firewall rules for Kubernetes clusters. If you are using a network provider that supports Network Policies, you should definitely use them to secure internal cluster communication and external cluster access. By default, there are no constraints in place to stop pods from communicating with each other.
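A common starting point is a default-deny policy per namespace, after which you whitelist only the flows each application needs. A sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: dev            # illustrative namespace
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                  # no ingress or egress rules are listed, so all traffic is denied
```

Additional NetworkPolicy objects then act as allow-list exceptions on top of this baseline.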
Pod Security Policies: Pod security policies govern the security-sensitive aspects of the pod specification. Most of your pods don't need privileged access or even host access, so any pod requesting such access should have to be whitelisted explicitly. By default, no one should be able to request privileges above the baseline, to avoid being exposed through misconfiguration or the malicious content of a Docker image.
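A sketch of a restrictive PodSecurityPolicy along these lines (the policy name is illustrative; note that PSP was later deprecated in favor of Pod Security admission, so treat this as era-appropriate rather than current guidance):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted                  # illustrative name
spec:
  privileged: false                 # refuse privileged pods outright
  allowPrivilegeEscalation: false
  hostNetwork: false                # no access to the node's network namespace
  hostPID: false                    # no access to the node's process table
  runAsUser:
    rule: MustRunAsNonRoot          # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                          # only safe, non-host volume types
  - configMap
  - secret
  - emptyDir
```

Pods that genuinely need more than this then get their own, narrowly scoped policy rather than a cluster-wide exception.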
Proper Configuration is Bliss: One of the biggest cloud security issues in general is misconfiguration. Research last year found that as many as 73% of companies had at least one critical cloud security misconfiguration, potentially leaving systems open to the entire internet.
The Kubernetes configuration best practices page provides a great summary of how to address top configuration issues, with supplementary documentation.
Kubernetes is a rapidly developing technology that is also quite complex, so secure default settings are vital to deploying it safely. As you go forward, follow the best practices listed above, make sure you understand what you're exposing, and apply good security practices when deploying your Kubernetes clusters.
This article originated from http://medium.com/devopslinks/kubernetes-security-are-your-container-doors-open-2c4b99c8d786
Gourav Gulati is a Tigera blogger. He is an accomplished IT Infrastructure Solution Specialist and IT Operations Consultant with extensive result-driven experience in Design, Implementation, and Management of Cloud infrastructure and DevOps methodology across multiple domains.