Protecting the Entire Flock
Services at the cluster edge need love and protection too
By now, you’re probably familiar with network policy as a flexible, dynamic way to enforce security on your Kubernetes workloads. Most of the discussion to date, however, has focused on intra-cluster traffic: what about access to services from outside the cluster, i.e. to external service interfaces?
For access to services from within the cluster (say, a pod querying a service for a piece of data), network policy provides the same level of protection as it does for pod-to-pod traffic. However, external connectivity to a service is subtly different from intra-cluster connectivity to the same service.
In Kubernetes, more than one pod will generally be offering the same service, whether for scaling and resilience or during rolling upgrades. Rather than trying to advertise all of those pods to a service’s clients (either in the cluster or external to it), Kubernetes defines a single IP address for each service. That service address acts as an anchor for the service, which can then be advertised to both internal and external clients using techniques such as DNS.
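As a concrete sketch (the service name, label, and ports here are invented for illustration), a Service manifest like the following gives all pods matching a label a single, stable service IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flock-api          # hypothetical service name
spec:
  selector:
    app: flock-api         # every pod with this label backs the service
  ports:
    - port: 80             # the port clients use on the service IP
      targetPort: 8080     # the port the pods actually listen on
```

Clients then talk to the service IP (or its DNS name) and never need to know which pods are currently behind it.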
The Kubernetes environment then needs to provide a way for traffic that is directed at one of those service IPs to be routed to one of the actual pods providing that service. While there are many options to accomplish this, including the new service mesh offerings such as Istio, the default mechanism in Kubernetes is kube-proxy, and the standard mechanism kube-proxy uses to expose services for external access is the node-port. The node-port mechanism maps the service to a well-known port on the local node’s IP address.
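To expose the same (hypothetical) service outside the cluster via node-port, the Service type is set to NodePort, and Kubernetes reserves a port (by default from the 30000–32767 range) on every node; the specific port number below is invented:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flock-api          # hypothetical service name
spec:
  type: NodePort           # also expose the service on each node's IP
  selector:
    app: flock-api
  ports:
    - port: 80             # service-IP port for intra-cluster clients
      targetPort: 8080     # port the pods listen on
      nodePort: 30080      # illustrative port from the node-port range
```

External clients can now reach the service at `<any-node-IP>:30080`.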
Kube-proxy implements node-port by changing the network configuration on each host so that traffic destined for a specific service IP address is rewritten to target one of the working pods offering that service. Kube-proxy then keeps those rules updated as services are defined or destroyed, and as the pods offering a given service are created or destroyed.
The key mechanism used here is Destination Network Address Translation, otherwise known as DNAT. NAT is convenient, but there is no such thing as a free lunch. In this case, since it changes the addresses in the packet, it can conflict with Kubernetes network security policy, depending on how a given network security policy plug-in works.
Specifically, many Kubernetes network security policy plugins (including the most popular, Calico) rely on the source and destination addresses on a packet to enforce network policy. However, due to the way that DNAT is implemented in the Linux kernel, the destination address is changed before the Kubernetes network policy plugin has a chance to evaluate the packet. This means that it is difficult, if not impossible, to write a policy that filters traffic destined for a service that is externally exposed using the node-port method.
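The problem can be illustrated with a toy model (deliberately simplified Python, not real kernel or Calico code; all addresses and ports are invented): a policy that matches on the destination it sees *after* DNAT can no longer recognize the externally exposed node-port.

```python
# Toy model: why matching on the destination *after* DNAT fails
# to identify externally bound service traffic.

def dnat(packet, node_port, pod_ip, pod_port):
    """Rewrite the destination, as kube-proxy's NAT rules do for a node-port."""
    if packet["dst_port"] == node_port:
        packet = dict(packet, dst_ip=pod_ip, dst_port=pod_port)
    return packet

def policy_allows(packet, allowed_dst_port):
    """A policy that matches traffic by its (current) destination port."""
    return packet["dst_port"] == allowed_dst_port

# An external client hits the node on the service's node-port.
pkt = {"src_ip": "203.0.113.7", "dst_ip": "192.0.2.10", "dst_port": 30080}

# Before DNAT, the policy can still see the node-port...
assert policy_allows(pkt, allowed_dst_port=30080)

# ...but after kube-proxy's rewrite, the destination is a pod address,
# so the same match no longer fires.
pkt = dnat(pkt, node_port=30080, pod_ip="10.244.1.5", pod_port=8080)
assert not policy_allows(pkt, allowed_dst_port=30080)
print("post-DNAT destination:", pkt["dst_ip"], pkt["dst_port"])
```

A policy engine that evaluates packets only after this rewrite has lost the information it needs, which is exactly what pre-DNAT enforcement restores.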
Since services are the capabilities you are offering via your application, it stands to reason that those services should be protected just as well as, if not better than, the individual pods, irrespective of the mechanism of access (intra-cluster or external). So, how do we do that?
The kitten to the rescue
Project Calico supports protecting externally facing services like this by allowing you to apply policies to the node’s host interface, through which all external access arrives. To ensure that the DNAT processing applied by kube-proxy does not interfere with the enforcement of those policies, you’ll want to enable the pre-DNAT option, introduced in the Calico 2.4 release.
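As a sketch of what this can look like (written here in the later projectcalico.org/v3 schema rather than the calicoctl v1 syntax that shipped with Calico 2.4; the node name, interface, label, CIDR, and port are all invented), you declare the node’s interface as a host endpoint and attach a pre-DNAT policy to it:

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0                 # hypothetical node interface
  labels:
    host-endpoint: ingress
spec:
  node: node1
  interfaceName: eth0
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-nodeport-from-trusted
spec:
  selector: host-endpoint == 'ingress'
  order: 10
  preDNAT: true                    # evaluate before kube-proxy's DNAT
  applyOnForward: true             # required for pre-DNAT policies
  ingress:
    - action: Allow
      protocol: TCP
      source:
        nets: ["203.0.113.0/24"]   # hypothetical trusted client range
      destination:
        ports: [30080]             # the node-port being protected
```

Because the policy is evaluated before DNAT, it still sees the original node-port destination and can filter external service traffic accordingly.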
The upshot is that, using Calico for network policy, a rule can now apply to service access from outside the cluster as well as to normal intra-cluster traffic. Pretty cool!
For further information, and details on how to use this new feature, please refer to the pre-DNAT documentation, and, as always, let us know your thoughts and questions.