Calico IPAM: Explained and Enhanced
Managing IP addresses is an essential, but often overlooked, aspect of container networking. Each networking plugin has its own approach to IP address management (IPAM, for short). The simplest approaches, such as that built into Kubernetes, assume the static allocation of a fixed set of addresses to each node. More advanced solutions, such as Calico, provide users more control and allow much finer-grained, dynamic IPAM.
In our most recent release of Calico, v3.3, we introduced a collection of cool new IPAM features giving users even greater control. Today I’d like to take a closer look at these enhancements, what they’re capable of, and how they can be used together.
How does Calico’s IPAM work?
Before we get into the new features, let’s quickly go over how Calico’s IPAM works at a high level. It’s a pretty cool feature of Calico, even if it doesn’t usually make the headlines. Its primary goal is to provide efficient usage of the cluster’s IP address space in a way that’s flexible enough to meet a variety of deployment architectures.
At a high level, Calico uses IP pools to define which IP ranges are valid to use for allocating pod IP addresses. IP pools are configured by cluster administrators and applied using calicoctl. If using Calico’s overlay mode, they can be any private network IP range. Many users don’t use an overlay, however, and in that case the IP pools must use addresses that are routable on the underlying network environment.
Within Calico’s IPAM engine, these IP pools are subdivided into smaller chunks – called blocks – which are then assigned to particular nodes in the cluster. Blocks are allocated dynamically to nodes as the number of running pods grows or shrinks. In particular, this means that Calico is much more efficient in its use of IP addresses when only a few pods are running on a node, and at the same time doesn’t impose any upper limit on the number of pods per node.
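To make the pool-to-block arithmetic concrete, here is a small sketch (plain shell arithmetic, not Calico code) using the block size we’ll configure later in this post:

```shell
# Sketch: how a pool subdivides into fixed-size blocks (pure arithmetic,
# not Calico code). A pool with prefix length P, split into blocks with
# prefix length B, yields 2^(B-P) blocks of 2^(32-B) addresses each.
pool_prefix=26    # a /26 pool: 64 addresses
block_prefix=29   # /29 blocks: 8 addresses

num_blocks=$(( 1 << (block_prefix - pool_prefix) ))
addrs_per_block=$(( 1 << (32 - block_prefix) ))

echo "${num_blocks} blocks of ${addrs_per_block} addresses each"
# → 8 blocks of 8 addresses each
```

With Calico’s default /26 blocks, a node’s first block covers 64 pods before another block must be claimed; with /29 blocks, nodes claim address space in much smaller increments.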
So what’s new?
Calico v3.3 introduces two new enhancements to Calico IPAM:
- Configurable block sizes: Until now, the number of IP addresses in each block has been fixed at 64 (or “/26” in CIDR notation). This default was chosen because it works well for most users. However, for some users under intense IP address pressure, or those with special-case needs, a smaller IP pool and block size may be required.
- Per-namespace IP pools: Sometimes it is useful to define multiple pools of addresses within your cluster. Calico now allows you to assign a given IP pool to one or more Kubernetes namespaces. One way to make use of this is for assigning separate IP spaces to particular teams, users, or applications within a Kubernetes cluster, allowing external firewalls to be configured with static rules based on specific IP ranges. This extends Calico’s existing support for specifying IP pools on a per-pod and per-node basis.
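As a reminder of the existing per-pod mechanism mentioned above, the same ipv4pools annotation can be set directly on a pod at creation time. A sketch (the pod name and pool name here are illustrative):

```yaml
# Sketch: selecting an IP pool for a single pod via annotation.
# "example-pod" and "external-pool" are illustrative names.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    "cni.projectcalico.org/ipv4pools": '["external-pool"]'
spec:
  containers:
  - name: nginx
    image: nginx
```

The annotation must be present when the pod is created, since the IP is assigned at that point.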
Let’s take a look at these new features in action. For this example, I assume you already have a Kubernetes cluster running with Calico v3.3 installed. One quick way to get one is by following the Calico quickstart guide. You’ll ideally want at least a couple of nodes.
Suppose we want to provide a limited set of externally available IP addresses to applications in an “external” namespace, but want applications in the “private” namespace to use private IPs. We can do this by creating two small IP pools and assigning them to particular namespaces.
Step 1: Create the IP pools
Let’s start by creating the IP pools for our cluster – one for each namespace we intend to use. In this example, we’ll create two.
To do this, create a manifest file “pools.yaml” with the following contents:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: external-pool
spec:
  cidr: 172.16.0.0/26
  blockSize: 29
  ipipMode: Always
  natOutgoing: true
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: internal-pool
spec:
  cidr: 10.50.0.0/24
  blockSize: 29
  ipipMode: Always
  natOutgoing: true
Then, use the calicoctl CLI tool to configure the pools in Calico:
calicoctl apply -f pools.yaml
We just created two new IP pools. The external pool is limited to 64 addresses in total. The pools have the blockSize option set to 29, meaning that blocks allocated from those pools will be /29 CIDR blocks containing 8 addresses each.
Step 2: Assign each pool to a namespace
Now that we’ve created the pools, we can assign each one to a different Kubernetes namespace.
First, create two namespaces using kubectl:
kubectl create namespace external-ns
kubectl create namespace internal-ns
Then annotate each namespace, telling Calico to use only the specified pools:
kubectl annotate namespace external-ns "cni.projectcalico.org/ipv4pools"='["external-pool"]'
kubectl annotate namespace internal-ns "cni.projectcalico.org/ipv4pools"='["internal-pool"]'
(As an aside, note that you can now reference the pool explicitly by name – that is also an enhancement in Calico v3.3.)
Step 3: Create some pods
Now that we’ve configured the new namespaces, let’s launch some pods in each. In this example, we’ll launch three nginx pods in each namespace.
kubectl run nginx --image nginx --namespace external-ns --replicas 3
kubectl run nginx --image nginx --namespace internal-ns --replicas 3
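Note that the --replicas flag was later removed from kubectl run; on newer clusters you can create an equivalent Deployment manifest instead. A sketch for the external namespace (repeat with namespace internal-ns for the other):

```yaml
# Equivalent Deployment for newer kubectl versions, where
# "kubectl run --replicas" is no longer available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: external-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

Apply it with kubectl apply -f nginx.yaml.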
Using kubectl, you can now view the assigned IP addresses – you’ll see that the pods in external-ns have IPs from 172.16.0.0/26, whereas pods within internal-ns have IPs from 10.50.0.0/24.
kubectl get pods -o wide -n external-ns
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-65899c769f-8pvlc   1/1     Running   0          2m    172.16.0.32   casey-crc-kadm-node-0
nginx-65899c769f-lrr2l   1/1     Running   0          2m    172.16.0.34   casey-crc-kadm-node-0
nginx-65899c769f-qt6nn   1/1     Running   0          2m    172.16.0.33   casey-crc-kadm-node-0

kubectl get pods -o wide -n internal-ns
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-65899c769f-jxdd6   1/1     Running   0          2m    10.50.0.1     casey-crc-kadm-node-0
nginx-65899c769f-xqzsc   1/1     Running   0          2m    10.50.0.2     casey-crc-kadm-node-0
nginx-65899c769f-zbbm5   1/1     Running   0          2m    10.50.0.3     casey-crc-kadm-node-0
Calico already had some of the most advanced IPAM features of any container networking solution. With the new features in v3.3, Calico now provides even richer controls for cluster operators. For most users, the Calico IPAM defaults will continue to meet their needs well. For those who need the flexibility, you can now very easily control block sizes and assign IP pools on a per-namespace, per-node, and per-pod basis.