Enable IPv6 on Kubernetes with Project Calico

The following is a guest blog post by Valentin Ouvrard, a member of our MVP Program, on his experiences running IPv6 on Kubernetes. You can find more writings by Valentin on OpsNotice and follow him on Twitter @Valentin_NC. And be sure to stay tuned for future updates on the status of IPv6 support in Kubernetes and official Calico documentation.


Kubernetes was originally designed to work with IPv4 only, but with newer versions, IPv6 support is becoming an important goal, and we can hope that Kubernetes v1.9.x will support IPv6 across all components (Services, NodePort, …).

Today, with Project Calico, you already have the ability to enable dual-stack networking in a Kubernetes cluster using the Calico CNI plugin. This great feature brings IPv6 to the pod side, which lets you reach pods directly from the Internet and lets pods talk to external IPv6 services.

In this quick post, we will see how to enable IPv6 in a Kubernetes cluster using the Calico CNI plugin.

IPv6 on CNI side

The first step in enabling IPv6 on a Kubernetes cluster is to use the Calico CNI plugin.

If you don't use it already, you can get the latest release from the official GitHub repository.

You need to download the calico and calico-ipam binaries into your CNI folder (/opt/cni/bin) and then create your CNI config file (/etc/cni/net.d/10-calico.conf):

{
    "name": "calico-k8s-network",
    "type": "calico",
    "etcd_endpoints": "http://:2379",
    "etcd_ca_cert_file": "/var/lib/kubernetes/ca.pem",
    "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
    "policy": {
        "type": "k8s"
    },
    "kubernetes": {
        "kubeconfig": "/var/lib/kubelet/kubeconfig"
    }
}

As you can see, we use assign_ipv4 and assign_ipv6 to enable dual-stack networking on our cluster.

If you weren't using Calico before, you need to configure your kubelet to use CNI networking and then delete any old CNI configuration:

--network-plugin=cni \
  --network-plugin-dir=/etc/cni/net.d \
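Clearing out any old configuration might look like this (a minimal sketch, assuming the standard CNI config directory; check the path on your hosts before deleting anything):

# Remove leftover CNI config files from a previous network plugin,
# keeping only the 10-calico.conf we just created
sudo find /etc/cni/net.d -name '*.conf' ! -name '10-calico.conf' -delete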

IPv6 on Calico-Node side

As you probably know, Calico doesn't use an overlay network; instead it uses the power of the Linux routing table and the BGP protocol to share these routes between workers.

Calico Node is the service that creates the Linux routes and peers with the other workers to share them.

If your workers already have dual-stack networking, Calico Node normally detects it automatically and enables a dual-stack node-to-node mesh between your workers.

To check this point, just run calicoctl node status on any worker:

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.200.0.200 | node-to-node mesh | up    | 07:07:04 | Established |
| 10.200.0.201 | node-to-node mesh | up    | 07:07:04 | Established |
| 10.200.0.203 | node-to-node mesh | up    | 07:07:02 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
+-----------------------------------+-------------------+-------+----------+-------------+
|           PEER ADDRESS            |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------------------------+-------------------+-------+----------+-------------+
| fdb8:a9c2:3c97:64df::200          | node-to-node mesh | up    | 07:07:04 | Established |
| fdb8:a9c2:3c97:64df::201          | node-to-node mesh | up    | 07:07:04 | Established |
| fdb8:a9c2:3c97:64df::202          | node-to-node mesh | up    | 07:07:06 | Established |
| fdb8:a9c2:3c97:64df::203          | node-to-node mesh | up    | 07:07:06 | Established |
+-----------------------------------+-------------------+-------+----------+-------------+

If you see only the IPv4 node-to-node mesh, you will need to check your Calico Node configuration to make sure that it is able to detect the IPv6 address of your host.
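If you run Calico Node as a container, you can steer this detection through its environment variables. Below is a minimal sketch using calico/node's IP6 and IP6_AUTODETECTION_METHOD options; the interface name eth0 is an assumption, so use whichever interface carries your hosts' IPv6 addresses:

# Excerpt from a calico/node container spec
env:
  - name: IP6
    value: "autodetect"
  - name: IP6_AUTODETECTION_METHOD
    value: "interface=eth0"  # assumed interface name; adjust for your hosts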

Manage IpPools

By default, Calico provides a default IPv4 and a default IPv6 IpPool.

You can replace them by creating different IpPools to suit your needs:

$ calicoctl get ippool 
CIDR
192.168.0.0/16
fd80:24e2:f998:72d6::/64

$ calicoctl delete ippool <ippool-to-delete>

$ cat <<EOF | calicoctl create -f -
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: fd0e:c226:9228:fd1a::/64
  spec: {}
EOF

Calico gives you the ability to choose a specific IpPool for an application deployment just by annotating your Kubernetes resources (Pod, Deployment, ReplicationController, …) with a simple IpPool annotation:

annotations:
  "cni.projectcalico.org/ipv6pools": "[\"fd0e:c226:9228:fd1a::/64\"]"

Check IPv6 on the pod side

To check that IPv6 works in your Kubernetes cluster, you can create a pod and then verify that it receives an IP from our new IpPool:

$ kubectl run -i -t busybox --image=busybox --restart=Never

If you don’t see a command prompt, try pressing enter.

/ # ip -6 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qlen 1
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 fd0e:c226:9228:fd1a:01bd:cf5a:88ba:515e/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::dc83:33ff:fef9:f8ca/64 scope link
       valid_lft forever preferred_lft forever

/ # ip -6 route show
fd80:24e2:f998:72d6:83de:4a50:bd0b:9b45 dev eth0  metric 256
fe80::/64 dev eth0  metric 256
default via fe80::7c25:38ff:fe40:cd83 dev eth0  metric 1024
ff00::/8 dev eth0  metric 256
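You can also test pod-to-pod IPv6 connectivity from inside the pod; the target below is a placeholder for the address of another pod in your cluster:

/ # ping6 -c 3 <another-pod-ipv6-address>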

Go Further

At this point, we're able to get IPv6 on the pod side, but if you try to ping6 an IPv6 website like google.com, it will not work, because we are using a private IPv6 range (ULA) and Calico doesn't NAT our traffic until we enable it.

To allow public IPv6 traffic, you have two options:

  • Add nat-outgoing: true to your IPv6 IpPool definition (see the sketch after this list).
  • Use a public IPv6 range (provided by your cloud provider, for example) and peer with an external router to announce your public Kubernetes IPv6 addresses to the world.
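For the first option, here is a minimal sketch that reuses the IpPool created earlier (calicoctl apply replaces the existing definition):

$ cat <<EOF | calicoctl apply -f -
- apiVersion: v1
  kind: ipPool
  metadata:
    cidr: fd0e:c226:9228:fd1a::/64
  spec:
    nat-outgoing: true
EOF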

To implement the second option, you will need to create a bgpPeer resource:

apiVersion: v1
kind: bgpPeer
metadata:
  peerIP:
  scope: global
spec:
  asNumber: 64510

By connecting your Kubernetes cluster to a BGP router, you allow each worker to announce its own IPv6 routes, which gives better high availability than static routes pointing at any single Kubernetes node.

Calico's default behavior is to assign a /122 IPv6 range to each worker (64 addresses per range) and share these routes between them in a node-to-node mesh (all the workers peer together).

A last point about security: if you peer your cluster with a BGP router, every pod will be publicly reachable. Typically, this is not a good practice. But by using the Calico policy controller, you can restrict access to your pods (IPv4/v6) and make sure, for example, that only your front-end application can talk to your back-end.
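As a sketch of that last idea, a standard Kubernetes NetworkPolicy like the following (enforced by Calico; the frontend and backend labels are assumptions for illustration) lets only front-end pods reach the back-end on port 80:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend  # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend  # assumed label on the back-end pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # assumed label on the front-end pods
      ports:
        - protocol: TCP
          port: 80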

If you want more information about the current status of IPv6 support in Kubernetes, especially for Services and other resources, I suggest you check out this GitHub repo.
