How to secure Kubernetes at the infrastructure level: 10 best practices

Getting infrastructure security right matters because it helps prevent attacks and, when an attack does succeed, limits the damage. This is especially important in a Kubernetes environment because many Kubernetes configurations are not secure by default.

Securing Kubernetes at the infrastructure level requires a combination of host hardening, cluster hardening, and network security.

  • Host hardening – Secures the servers or virtual machines on which Kubernetes is hosted
  • Cluster hardening – Secures Kubernetes’s control plane components
  • Network security – Ensures secure integration of the cluster with surrounding infrastructure

Let’s dive into each of these and look at best practices for securing both self-hosted and managed Kubernetes clusters.

Host hardening

There are many techniques that can be used to ensure a secure host. Here are three best practices for host hardening.

Use a modern immutable Linux distribution

If you have the flexibility to choose an operating system (i.e. your organization doesn’t standardize on one operating system across all infrastructure), use a modern immutable Linux distribution, such as Flatcar Container Linux or Bottlerocket. This type of operating system is specifically designed for containers and offers several benefits, including:

  • Immutability – This type of operating system is immutable (i.e. the root filesystem is locked and can’t be changed by applications). It is more difficult for malicious applications to compromise the host since applications are isolated from the root filesystem.
  • Newer kernels – Newer kernels means that recent vulnerability fixes and the latest implementations of newer technologies like eBPF are likely to be included.
  • Ability to self-update to newer versions – These operating systems can update themselves automatically, and their upstream projects promptly release new versions to address newly discovered security vulnerabilities.

Avoid running non-essential processes on the hosts

Since every host process running on your system is a potential attack vector, you should only run essential processes. This is especially important if you are not using a modern immutable Linux distribution optimized for containers, since there may be nonessential processes running by default. Unless a process is needed for running Kubernetes itself, or for host management or security, it should not be running.

Configure the host with local firewall rules

Host-based firewalling is recommended because it restricts the IP address ranges and ports that can communicate with the host. While there are some traditional Linux admin tools, such as firewalld configuration and iptables rules, that can help with this, defining and managing these rules over time can be both difficult and time consuming—especially if you’re using a modern immutable Linux distribution. For this reason, I recommend using a Kubernetes network plugin that has the ability to apply network policies, especially to the hosts themselves (instead of to Kubernetes pods only). Kubernetes network plugins are often operating-system independent and can significantly simplify securing the hosts in your cluster.
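As a sketch of what this can look like, Calico is one such plugin that can apply policy to host interfaces via its HostEndpoint resource. The node name, interface, and management subnet below are hypothetical, and a real policy would also need egress rules and allowances for other node traffic (DNS, NodePorts, and so on):

```yaml
# Register a node's interface as a Calico host endpoint. Once a host
# endpoint is defined, Calico default-denies traffic to it that no
# policy allows (failsafe rules keep ports such as SSH reachable).
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0          # hypothetical node and interface
  labels:
    role: k8s-node
spec:
  node: node1
  interfaceName: eth0
---
# Allow inbound API server (6443) and kubelet (10250) traffic only
# from a hypothetical management subnet.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: k8s-node-ingress
spec:
  selector: role == 'k8s-node'
  ingress:
    - action: Allow
      protocol: TCP
      source:
        nets: ["10.0.0.0/24"]   # hypothetical management subnet
      destination:
        ports: [6443, 10250]
```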

Cluster hardening

Hardening the cluster itself is just as important as hardening the hosts it runs on. Here are four best practices for hardening your cluster.

Secure the Kubernetes datastore and API server

Any security measures you have implemented within the cluster will be useless if you don’t take care to secure the Kubernetes datastore, etcd. The etcd datastore houses all data pertaining to cluster configuration and desired state, so if an attacker were to gain access, they would have full control over your cluster. The etcd datastore can be secured using etcd’s own security features, which use keys and certificates based on the X.509 public key infrastructure (PKI) to encrypt data in transit with TLS.

Additionally, you could restrict access to the etcd datastore so that only nodes on which the Kubernetes API server is hosted (control nodes) have access. This can be done using network-level firewall rules.
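For a self-managed etcd, for example, TLS for client and peer connections plus certificate-based client authentication are enabled with flags along these lines (all file paths are placeholders):

```shell
# Serve client connections over TLS and require client certificates,
# and do the same for peer (etcd-to-etcd) connections.
etcd \
  --cert-file=/etc/etcd/pki/server.crt \
  --key-file=/etc/etcd/pki/server.key \
  --client-cert-auth=true \
  --trusted-ca-file=/etc/etcd/pki/ca.crt \
  --peer-cert-file=/etc/etcd/pki/peer.crt \
  --peer-key-file=/etc/etcd/pki/peer.key \
  --peer-client-cert-auth=true \
  --peer-trusted-ca-file=/etc/etcd/pki/ca.crt
```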

The Kubernetes API server should also be secured using X.509 PKI and TLS. The API server is the core of Kubernetes’s control plane that facilitates communication between different parts of your cluster, end users, and external components. Since gaining access to the API server would allow an attacker to configure the entire Kubernetes cluster management lifecycle, its security should be a top priority along with the Kubernetes datastore.
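On the API server side, the corresponding kube-apiserver flags configure its own serving certificate, the CA used to authenticate client certificates, and the client credentials it presents to etcd (paths are placeholders):

```shell
# Serving certificate, client-certificate CA, and etcd client credentials.
kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```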

Secure user interactions within the cluster

To ensure user interactions within the cluster are secure, each user should have a separate account and the minimum access necessary to perform their role (the principle of least privilege). Role-based access control (RBAC) can be used to restrict user access, and an external authentication provider (e.g. a public cloud provider's IAM system or an on-premises authentication service) can be used to authenticate users. I recommend an external authentication provider because Kubernetes's built-in user authentication capabilities are limited. Many external providers also support credential rotation, which is another best practice for hardening your cluster.
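For example, least privilege might look like a read-only role scoped to a single namespace, bound to an identity asserted by the external authentication provider (the namespace and user name below are illustrative):

```yaml
# Grant read-only access to pods and their logs in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging        # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a single user, as authenticated externally.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods-alice
subjects:
  - kind: User
    name: alice             # hypothetical externally authenticated user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```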

Rotate credentials regularly

It is important to rotate credentials (TLS certificates, service account tokens, keys, etc.) often so that a compromised credential is only useful to an attacker for a short time. The shorter a credential's lifespan, the better; I recommend rotating credentials daily, or more than once a day if they are especially sensitive. Some degree of automation is recommended here, since rotating credentials manually on a regular basis is both difficult and error prone. As I previously mentioned, many authentication providers offer credential rotation capabilities, including control over rotation frequency. You could also invest in some custom DevOps development to fully automate this task.
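On kubeadm-managed clusters, for instance, control plane certificates can be renewed with a single command per control node, which makes it straightforward to wrap in a scheduled job (the static control plane pods must then be restarted to pick up the new certificates):

```shell
# Renew every control plane certificate kubeadm manages on this node.
kubeadm certs renew all
```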

Enable encryption of sensitive data at rest

It is important to configure encryption at rest for sensitive data, such as Kubernetes secrets. This ensures that, if an attacker gains access to the Kubernetes datastore (or an offline copy of it), the secrets remain safe. Kubernetes does not encrypt data at rest by default, but encryption can be enabled. Note that encryption is applied only when data is written to the Kubernetes datastore, so after enabling it you will need to rewrite all existing secrets to trigger their encryption.

Various encryption methods and providers are supported by Kubernetes, with AES-CBC with PKCS #7-based encryption being the method I would recommend because of the strength and speed of its encryption.
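A minimal sketch of the corresponding EncryptionConfiguration, passed to the API server via its --encryption-provider-config flag, might look like this (the key must be a locally generated, base64-encoded 32-byte value; the placeholder below is not usable as-is):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback for reading not-yet-encrypted data
```

Because encryption happens only on write, existing secrets can then be re-encrypted by rewriting them, e.g. `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`.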

In addition to these best practices, it is advisable to upgrade Kubernetes frequently to ensure your cluster is protected from newly discovered vulnerabilities.

Network security

Your cluster needs to be protected from attacks originating from outside the cluster. Likewise, the infrastructure outside the cluster needs to be protected from elements inside the cluster that could be compromised and pose a threat. Here are four network security best practices to ensure your cluster is securely integrated with surrounding infrastructure.

  • Restrict internet access to the cluster (if possible) – Sometimes a cluster needs to be internet accessible, either directly (due to a node’s public IP address) or indirectly (via a load balancer). When this is a requirement, the probability of an attack increases significantly. For this reason, I recommend not allowing access from the internet to the cluster unless absolutely necessary. If the cluster must be internet accessible, endeavor to expose only a small number of services and pods to the internet. Using an ingress controller is one way of accomplishing this.
  • Restrict workload and host communication to/from the cluster – Communication between the cluster and the surrounding infrastructure can be restricted using network policy within the cluster. Network policy is both workload aware and platform agnostic.
  • Restrict traffic to/from the cluster – I recommend using perimeter firewalls or their cloud equivalents (e.g. security groups). Keep in mind, though, that these tools are not workload aware, so their granularity is limited. Even so, they can play a valuable role as part of a defense-in-depth strategy (as opposed to on their own).
  • Restrict traffic within the cluster – Implementing zero-trust network policies will allow only traffic that is explicitly required by the application services. This will help limit a breach that has already occurred by restricting an attacker’s ability to move laterally into other, potentially more sensitive, components within the cluster.
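As a starting point for the zero-trust model described above, a namespace-wide default-deny policy blocks all traffic that other policies do not explicitly allow; per-service allow policies are then layered on top (the namespace name is hypothetical):

```yaml
# Deny all ingress and egress for every pod in the namespace unless
# another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production     # hypothetical namespace
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
```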


The best practices listed above for securing Kubernetes at the infrastructure level are by no means exhaustive, but they should give you a solid foundation. I recommend reading chapter 2 of Kubernetes Security and Observability: A Holistic Approach to Securing Containers and Cloud-Native Applications to learn about these best practices in further detail and to discover additional best practices for infrastructure security.

To learn more about securing containers and Kubernetes at the infrastructure level, read this free O’Reilly ebook.
