Kubernetes is a popular container management platform that supports containerized applications distributed across public, private, and hybrid clouds.
While Kubernetes doesn’t offer built-in solutions for securing ingress/egress traffic between internal clusters and external networks, there are specialized firewalls that can restrict traffic and prevent data leaks.
A Kubernetes firewall tracks and filters all inbound and outbound communication with production clusters. It should allow the necessary traffic, keeping specified default and custom ports open while blocking other data transfers to and from each Kubernetes cluster. Kubernetes firewalls protect clusters from the outside.
This is part of our series of articles about Kubernetes security.
When a Kubernetes cluster is running in production, all communication must be secured to prevent data exfiltration and other threats. Both ingress and egress network traffic must be monitored and controlled according to a set of security rules defined by the organization.
In particular, egress traffic should be limited to an absolute minimum of accessible ports and addresses to enable cluster maintenance operations and access by external integrated systems.
This is where a firewall comes in—it can restrict inbound and outbound traffic to and from a Kubernetes cluster. Here are some of the benefits a firewall can provide in a Kubernetes environment:
In a Kubernetes cluster, by default any pod can communicate with any other pod. This is detrimental to security, because it means that attackers who compromise one container can move laterally across the entire cluster without any restrictions. The Kubernetes NetworkPolicy resource allows you to limit traffic to and from pods.
Kubernetes networking capabilities, including NetworkPolicy, are powered by networking plugins. To use NetworkPolicy, you must have a plugin that implements it. Calico is one example of a popular open source networking plugin available for Kubernetes, which offers advanced monitoring, application-layer filtering, integration with cloud networks, and network policy enforcement.
While it is very important to implement NetworkPolicy in a Kubernetes cluster, it is not a replacement for firewalls. A NetworkPolicy controls traffic within the cluster (known as east-west traffic), while a firewall restricts ingress and egress traffic to or from the cluster (known as north-south traffic).
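As a minimal sketch of how NetworkPolicy limits pod-to-pod traffic, the manifest below applies a default-deny posture to a namespace (the namespace name is illustrative); more specific allow policies can then be layered on top:

```yaml
# Deny all ingress and egress traffic for every pod in the
# "production" namespace until more specific policies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}      # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Remember that this policy only takes effect if the cluster's networking plugin (such as Calico) implements NetworkPolicy.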
Related content: Read our guide to Kubernetes security policy
When deploying a Kubernetes firewall, you should be aware of the ports and protocols used by the Kubernetes control plane. These must be allowed to enable the cluster to function.
The following TCP ports are used by Kubernetes control plane components:

- 6443 — Kubernetes API server
- 2379-2380 — etcd server client API
- 10250 — kubelet API
- 10257 — kube-controller-manager
- 10259 — kube-scheduler
The following TCP ports are used by Kubernetes worker nodes:

- 10250 — kubelet API
- 10256 — kube-proxy
- 30000-32767 — NodePort Services
The above are the default ports defined by Kubernetes. If you configure a custom port for any of these components, your firewall rules should allow the custom port instead.
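If Calico is your networking plugin, host-level filtering of these ports can be expressed as policy as well. The following is a sketch of a Calico GlobalNetworkPolicy that allows only the default Kubernetes control plane ports on host endpoints; the label, policy name, and order value are illustrative assumptions:

```yaml
# Calico GlobalNetworkPolicy sketch: allow inbound TCP only on the
# default control plane ports for host endpoints labeled
# role == 'control-plane'. Label and name are illustrative.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-control-plane-ports
spec:
  selector: role == 'control-plane'
  order: 100
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports: [6443, 2379, 2380, 10250, 10257, 10259]
```

Policies like this apply to traffic entering the node itself, complementing a perimeter firewall rather than replacing it.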
It is important to protect both ingress and egress traffic in a Kubernetes cluster. The cluster must be protected from malicious inbound traffic, but must also be prevented from sending malicious outbound traffic to elements outside the cluster. Here are cybersecurity best practices to ensure that your cluster integrates securely with the surrounding environment:
Kubernetes datastores are the first and most critical asset you should secure in your clusters. The etcd datastore contains all data about the cluster configuration and desired state, so once an attacker gains access, they can take full control of the cluster.
An etcd datastore can be secured using etcd's own security features, which encrypt client and peer communication over TLS using X.509 certificates and keys. Additionally, you can restrict access to the etcd datastore so that only the nodes hosting the Kubernetes API server (the control plane nodes) can ever contact it. This can be done using network-level firewall rules.
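For example, etcd can be configured to require TLS client certificates through its configuration file. The excerpt below is a sketch; the file paths are illustrative assumptions:

```yaml
# Excerpt of an etcd configuration file (passed via --config-file).
# client-cert-auth forces clients (i.e. the API server) to present a
# certificate signed by the trusted CA. Paths are illustrative.
client-transport-security:
  cert-file: /etc/etcd/pki/server.crt
  key-file: /etc/etcd/pki/server.key
  client-cert-auth: true
  trusted-ca-file: /etc/etcd/pki/ca.crt
peer-transport-security:
  cert-file: /etc/etcd/pki/peer.crt
  key-file: /etc/etcd/pki/peer.key
  client-cert-auth: true
  trusted-ca-file: /etc/etcd/pki/ca.crt
```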
A compromised Kubernetes API server similarly gives attackers complete control over a cluster, allowing them to configure the entire Kubernetes management lifecycle. The API server is the heart of the Kubernetes control plane, facilitating communication between the different parts of the cluster, end users, and external components. The API server must therefore also be secured with TLS and X.509 certificates.
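On kubeadm-based clusters, these TLS settings appear as flags in the API server's static pod manifest. The excerpt below is a sketch; the image tag and certificate paths are illustrative assumptions:

```yaml
# Excerpt from a kube-apiserver static pod manifest (typically
# /etc/kubernetes/manifests/kube-apiserver.yaml on a kubeadm cluster).
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.29.0   # illustrative tag
      command:
        - kube-apiserver
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --client-ca-file=/etc/kubernetes/pki/ca.crt        # verify client certs
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt      # TLS to etcd
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```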
Cloud platforms (AWS, Azure, GCE, etc.) typically expose metadata services locally on compute instances (which, from the perspective of a Kubernetes cluster, are nodes). By default, these APIs are accessible by pods running on the instance and can expose configuration data such as cloud credentials or kubelet credentials for that node. Attackers can use these credentials to escalate privileges within the cluster or to other cloud services in the same cloud account.
When running Kubernetes on a cloud platform, you should restrict the permissions granted to instance credentials, use network policies to restrict pod access to metadata APIs, and never distribute configuration data or secrets using cloud provider metadata.
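One way to restrict pod access to metadata APIs is an egress NetworkPolicy that excludes the link-local metadata address (169.254.169.254, used by the major cloud providers). This is a sketch; the namespace name is an illustrative assumption:

```yaml
# Allow pod egress to any address except the cloud metadata endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
  namespace: production      # namespace is illustrative
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
```

Note that this policy also becomes the pods' only egress rule, so DNS and any other required destinations must be allowed by it or by additional policies.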
Tigera’s commercial solutions provide Kubernetes security and observability for multi-cluster, multi-cloud, and hybrid-cloud deployments. Both Calico Enterprise and Calico Cloud provide the following features for security and observability:
Security
Observability