Calico has provided secure networking for Kubernetes clusters for a while now, as demonstrated in this blog post from earlier this year. Our annotation-based security policy allows fine-grained policy declaration at the pod level. We’ve recently been working to bring that policy to Kubernetes namespaces and services. The demo video below shows namespace and service policy enforced by Calico in a Tectonic Kubernetes deployment.
We think that Kubernetes namespaces are a logical choice for coarse-grained network policy. This allows you to secure groups of pods using broad strokes, eliminating the need to declare policy on each pod individually. In this design, namespaces can be declared as “open” or “closed”. Pods and services in open namespaces are accessible from anywhere, while pods and services in a closed namespace are accessible only from within that namespace. You can think of this as a logical firewall at the namespace boundary (though in Calico, each pod in that namespace is protected by its own iptables firewall). This allows you to do things like isolate applications, tenants, or development and production workspaces.
The demo video above uses two closed namespaces – client and webapp. Pods and services created in a closed namespace are accessible only to other pods within that same namespace, unless they are exposed using a service (see below). This allows you to secure your applications from the outside world.
A common use for an “open” namespace would be for services like DNS, which need to be accessible across the entire cluster.
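As a sketch, declaring namespaces open or closed might look like the manifests below. Note that the annotation key and values here (`projectcalico.org/policy: closed` / `open`) are hypothetical placeholders for illustration – they are not necessarily the exact marker used in the demo. Kubernetes namespaces accept arbitrary annotations, so only the shape of the idea is shown:

```yaml
# Hypothetical annotation key/values -- illustrative only,
# not necessarily the exact marker used in the demo.
apiVersion: v1
kind: Namespace
metadata:
  name: webapp
  annotations:
    projectcalico.org/policy: closed   # pods here reachable only from within "webapp"
---
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  annotations:
    projectcalico.org/policy: open     # cluster-wide services like DNS stay reachable
```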
A closed namespace isn’t particularly useful on its own, since nothing within the namespace is accessible to the outside world! That’s where services come in. Services allow you to expose pods outside of a closed namespace – essentially poking holes in the logical namespace firewall.
In the video above, we expose the frontend service by applying a special label – “projectcalico-policy=open”. This indicates that the service should be exposed outside of a closed namespace. Kubernetes services already know which ports and protocols should be accessible on the underlying pods, so we can open the precise IP / protocol / port combination to expose the service.
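Concretely, a service exposed outside its closed namespace carries the `projectcalico-policy=open` label from the demo. The definition below is a sketch – the names, ports, and selector are invented for illustration – but it shows where the label sits and how the service’s port/protocol information defines exactly what gets opened:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend               # illustrative name
  namespace: webapp            # a closed namespace
  labels:
    projectcalico-policy: open # expose this service outside its closed namespace
spec:
  selector:
    app: frontend              # illustrative pod selector
  ports:
    - protocol: TCP
      port: 80                 # only this protocol/port combination is opened
      targetPort: 8080
```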
We’d love to see this simple API implemented in Kubernetes, removing the need for special labels and replacing them with native fields in the API. We’ve proposed this approach to the Kubernetes community in this issue, and we’d love to hear your feedback.
While this demo is still a proof-of-concept, we imagine this API being built upon down the road, allowing for more general intent-based policy. For example, a Kubernetes API object could apply network policy across a selection of pods, or a label selector could expose services only to a specific group of pods or other services, allowing fine-grained control over who can access your services.