Introduce Your Kubernetes Services to the World

How to connect Kubernetes pods to on-premises infrastructure

Unless you’ve been living under a rock for the last few years, you probably know that Tigera’s Project Calico and Tigera Secure Enterprise Edition (TSEE) use BGP to connect the pods in your on-prem Kubernetes cluster to the rest of your infrastructure. This means you can run your Kubernetes clusters without the complexity and overhead of an overlay or tunneled network. This has been a big win for many of our users and customers, and it continues to be a key decision point in our favor.

Even if you haven’t been living under a rock, however, you may have missed the news that we’ve made some substantial improvements to this part of the solution in Calico v3.4, improvements that will also be available in the next version of TSEE.

Many of our users have asked if we could advertise Kubernetes Service virtual IPs via BGP as well. We’ve listened, and as of v3.4, you can. Before I tell you what we’ve done, here’s a bit on why we’ve done it.

BGP is a way for a router or a switch to learn from another router, switch, or Calico node which IP addresses can be reached via that remote ‘peer.’ Simply put, this means that router B, or a Calico node, can tell router A that B can be used to reach a given set of IP addresses. Normally, if a router hears multiple routers telling it they can all reach a given IP address, that router will pick one of those ‘routes’ to use and save the rest in case the one it selected fails. Interestingly, however, this behavior can be changed: you can tell a router to use or load-balance across all the routes to a given destination, provided they have the same cost or weight. This is called ECMP (Equal-Cost Multi-Path) routing.
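To make that concrete, here’s a minimal sketch of what an ECMP route looks like on a Linux box using iproute2 (the prefix and next-hop addresses here are purely illustrative):

      # Install one route with two equal-cost next hops;
      # the kernel will balance traffic across both paths.
      $ ip route add 10.96.0.0/24 \
          nexthop via 192.168.1.1 weight 1 \
          nexthop via 192.168.1.2 weight 1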

Similarly, in almost all circumstances, an IP address points to a single entity such as a server or a container. ECMP routes therefore usually spread the load among multiple paths to a SINGLE destination. However, this is not guaranteed. It is possible for multiple servers, pods, etc. that offer the same service to share the same IP address. This is called ANYCAST; for example, it’s how Google’s DNS service (8.8.8.8) works. There isn’t one honking huge server somewhere in the bowels of Google answering all of the DNS queries pointed at 8.8.8.8; there are lots (thousands) of servers all listening on 8.8.8.8.

ECMP and ANYCAST are often intertwined, but they can exist independently.  You don’t need to have ECMP enabled to use ANYCAST, and you don’t need to have ANYCAST destinations to use ECMP.  However, when you put them together, something kinda magical happens.

Let’s think of a service that is autoscaling in Kubernetes.  Maybe it’s not an actual service, but possibly a set of Kubernetes Ingress controllers or load balancers.  Kubernetes can scale them up and down based on demand, but how do you tell the rest of the network that there are now 5 ingress controllers for a given service rather than 4?

The Service VIP is already an ANYCAST address: anything that is answering for that service will respond on that Service VIP address. What we want to do is tell the network about that Service VIP and allow the network to spread the traffic among all of the instances of that service, whether those are the Ingress controllers that front it or the service pods themselves. By using BGP to announce those Service VIPs to the network, and turning on ECMP on the network’s switches and routers, the network will automatically distribute the request traffic to all the instances of that service, no matter how many or few exist, and no matter how dynamic they are, without ANY OTHER configuration. Pretty cool, eh?
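What this looks like on the network side varies by vendor, but as a rough sketch, enabling ECMP for BGP-learned routes on a router running FRRouting might look something like this (the AS number and peer address are placeholders):

      router bgp 64512
       ! Peer with a Calico node (placeholder address)
       neighbor 192.168.1.10 remote-as 64512
       address-family ipv4 unicast
        ! Install up to 32 equal-cost iBGP paths for the same prefix
        maximum-paths ibgp 32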

That’s the why; now let’s talk a bit about the how.

The details for configuring this can be found in the Project Calico documentation, but I’ll provide a brief overview below.

    1. Set the service cluster IP range (the default is 10.0.0.0/24) in the Kubernetes API server:
      $ kube-apiserver --service-cluster-ip-range <service CIDR range>
    2. Set the environment variable CALICO_ADVERTISE_CLUSTER_IPS in the calico-node DaemonSet:
      $ kubectl patch ds -n kube-system calico-node --patch \
          '{"spec": {"template": {"spec": {"containers": [{"name": "calico-node", "env": [{"name": "CALICO_ADVERTISE_CLUSTER_IPS", "value": "10.0.0.0/24"}]}]}}}}'

At this point, Calico will announce that service CIDR range from all Calico nodes in the cluster.
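A quick way to sanity-check this (assuming calicoctl is installed on a cluster node) is to confirm that each node’s BGP sessions with your routers are established:

      # Show the state of this node's BGP peerings
      $ sudo calicoctl node status

On the router side, your platform’s usual route-inspection command should now show the service CIDR with one next hop per Calico node.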

If a service has its external traffic policy set to Local, Calico will also announce that specific service’s IP from the nodes that are actually hosting that service’s instances. Since IP routing always prefers a more specific route, the /32 (or /128 in the case of IPv6) service IP will be preferred over the cluster’s less specific CIDR block, ensuring that the external network will only send traffic for that ‘local’ service to the nodes that host it.
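For reference, a service with its external traffic policy set to Local looks something like the following (the name, selector, and ports are made up; note that Kubernetes only accepts externalTrafficPolicy on NodePort and LoadBalancer services):

      apiVersion: v1
      kind: Service
      metadata:
        name: my-ingress                # hypothetical service name
      spec:
        type: NodePort                  # externalTrafficPolicy requires NodePort or LoadBalancer
        externalTrafficPolicy: Local    # only advertise from nodes hosting this service
        selector:
          app: my-ingress               # hypothetical label
        ports:
          - port: 80
            targetPort: 8080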

We have more plans in this space, so stay tuned.  If you have any thoughts or feedback, I’d love to hear it.

All of us here at Tigera want to wish you Very Happy Holidays!
