Using Calico with Kubespray

In the Kubernetes ecosystem there are a variety of ways for you to provision your cluster, and which one you choose generally depends on how well it integrates with your existing knowledge or your organization’s established tools.

Kubespray is a tool built using Ansible playbooks, inventories, and variable files—and also includes supplemental tooling such as Terraform examples for provisioning infrastructure. If you’re already using Ansible for configuration management, Kubespray might be a good fit, and there’s even documentation for integrating with your existing Ansible repository.

There are other reasons Kubespray might be a good solution: maybe you want to use the same tooling to deploy clusters on both bare metal and in the cloud, or you might have a niche use case where you have to support different Linux distributions. Or perhaps you want to take advantage of the project’s composability, which allows you to select which components you’d like to use for a variety of services, such as your container runtime or ingress controller, or—particularly relevant to this blog post—your CNI.

In this post, we’ll go over enabling Calico when following the Kubespray quick start, the guide to setting up your first cluster, or the Vagrant-based local deployment, as well as how to configure your Calico deployment using the Ansible variable files that are part of Kubespray.

If you’re following one of the tutorials in the Kubespray documentation, you’ll find supplementary information about enabling Calico in each of them later in this post.

First, let’s talk generally about how you can configure your Calico installation using Kubespray.

Calico configuration in Kubespray

Ansible uses variables to allow for conditional playbooks that can cover a variety of use cases and configurations. You can define your variables in many ways using Ansible; in Kubespray, they are most often defined in files located in your inventory. You can find examples of these in the git repository under inventory/…/group_vars/.
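
For reference, the sample inventory that ships with Kubespray is laid out roughly as follows (file names vary a little between releases):

inventory/sample/
├── inventory.ini
└── group_vars/
    ├── all/
    │   ├── all.yml
    │   └── ...
    └── k8s_cluster/
        ├── k8s-cluster.yml
        ├── k8s-net-calico.yml
        └── ...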

You’ll find the setting to select your CNI under the k8s_cluster sub-directory, in the main configuration file, k8s-cluster.yml. In that file there is a parameter called kube_network_plugin, which you can set to Calico as follows:

kube_network_plugin: calico

Calico-specific configuration variables can be found in the file k8s-net-calico.yml in that same directory. There are a number of options available in that file which we won’t cover explicitly in this blog post, but you can find complete configuration documentation here. For now, let’s talk about some specific settings you might want to consider changing.

Datastore

Calico stores its operational and configuration state in a central datastore, and lets you choose between keeping that data in the Kubernetes datastore or in an etcd cluster. In most cases, we recommend using the Kubernetes datastore, which is simpler to manage and allows you to take advantage of Kubernetes role-based access control (RBAC) and audit logging.

Using an etcd datastore, on the other hand, allows you to run Calico on non-Kubernetes platforms, or across multiple clusters and bare-metal instances. This also lets you scale the Calico datastore independently from your other resources. For most users, these features aren’t necessary, but if you’re interested in exploring you can read more about the datastore in our Calico the Hard Way guide.

If you’re getting started and following along with the Kubespray documentation, you can update your configuration file to use the Kubernetes datastore instead of etcd by editing k8s-net-calico.yml and uncommenting the following line:

calico_datastore: "kdd"
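
Once the playbook has finished, a quick way to confirm that Calico is backed by the Kubernetes datastore is to look for the Calico custom resource definitions it creates (a rough check; exact resource names can differ between Calico versions):

# Calico's configuration should show up as CRDs in the projectcalico.org group
$ kubectl get crds | grep projectcalico.org
$ kubectl get ippools.crd.projectcalico.org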

Typha

Since we’re now using the Kubernetes datastore, we’re also dealing with the Kubernetes API, so there are other considerations we should take into account. Typha is the fan-out proxy used by Project Calico. It sits between the many instances of Calico (specifically the Felix component) and the datastore. It can filter out updates that aren’t relevant to Felix, reducing Felix’s load, while allowing hundreds of Felix instances to share a single datastore connection, which reduces the load on the datastore.

We recommend using Typha any time you are using the Kubernetes datastore instead of interacting with etcd, but this becomes a hard requirement in clusters with more than 50 nodes.

You can enable Typha by uncommenting the typha_enabled parameter in k8s-net-calico.yml and setting it to true:

typha_enabled: true
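
Once the cluster is deployed, you should be able to see the Typha pods running alongside calico-node (a quick check that assumes Kubespray’s default namespace and labels):

$ kubectl get pods -n kube-system -l k8s-app=calico-typha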

If you’re following along with the Kubespray guides, this setting should be sufficient, but if you’re looking to deploy clusters into production, or clusters with over 50 nodes, you might want to configure additional settings. You might enable TLS, or set the number of Typha replicas and the maximum number of connections each instance can handle. You’ll find the following additional settings in the configuration file:

# Generate TLS certs for secure typha<->calico-node communication
# typha_secure: false

# Scaling typha: 1 replica per 100 nodes is adequate
# Number of typha replicas
# typha_replicas: 1

# Set max typha connections
# typha_max_connections_lower_limit: 300
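
As a purely illustrative example, a cluster of around 200 nodes might uncomment and adjust those settings along these lines (the values are hypothetical; tune them for your own environment):

# illustrative values for a ~200-node cluster
typha_secure: true
typha_replicas: 3
typha_max_connections_lower_limit: 300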

Overlay Networks

If possible, we recommend running Calico without an overlay network; this gives you the best performance and also the simplest network, which can be valuable when you need to troubleshoot. In many environments, however, that is not a possibility, and so Calico supports two types of encapsulation: VXLAN and IP in IP, which can be configured using the settings below:

# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_ipip_mode: 'Always'

# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_vxlan_mode: 'Never'
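
As an example, in an environment where nodes in the same subnet can route pod traffic directly but traffic between subnets cannot, you might encapsulate only when crossing subnet boundaries (an illustration, not a recommendation for every environment):

calico_ipip_mode: 'CrossSubnet'
calico_vxlan_mode: 'Never'
# depending on your Kubespray version, IP in IP may also require the bird backend
# calico_network_backend: bird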

The topic of overlay networks can be complicated, and requirements vary from one environment to the next. Visit our documentation to learn more about overlay networks.

Enabling Calico in the Kubespray guides

There are a number of paths in the Kubespray documentation that take you through provisioning and deploying a cluster. In the sections below, we’ll cover what is required to enable Calico in each of those scenarios.

Kubespray Quick Start

The main page of the documentation contains quick-start instructions that have you build your inventory using the Kubespray inventory builder and a set of IP addresses. The instructions have you copy an inventory from the inventory/sample directory, which already has Calico defined as the network plugin. If you need to make any additional changes to the configuration, you can update the parameters in inventory/sample/group_vars/k8s_cluster/k8s-net-calico.yml (or the equivalent file in the inventory you copied).
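
At the time of writing, that quick start boils down to roughly the following commands (the inventory name mycluster and the IP addresses are only examples; check the Kubespray README for the exact, current steps):

$ cp -rfp inventory/sample inventory/mycluster
$ declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml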

Setting up your first cluster with Kubespray

This document provides step-by-step instructions for bringing up a working cluster on Google Cloud Platform (GCP). Like the quick start guide, this document has you copy the sample inventory to use in your own deployment, which means that Calico is already configured as the CNI, and additional configuration can be done in inventory/sample/group_vars/k8s_cluster/k8s-net-calico.yml.

Vagrant

The main page of the documentation also has an example for bringing up a cluster using Vagrant, with more detailed documentation available here. While you will use the same sample inventory as the base of this project, there are Vagrant-specific variables and configuration that control which CNI is used during the installation. In order to use Calico, you’ll want to add your own configuration overrides. To do this, you will need to create a vagrant directory at the root of the project and create a config.rb file:

$ mkdir vagrant
$ touch vagrant/config.rb

Then, add the following lines to your config.rb:

$network_plugin = "calico"
$multi_networking = "False"
$inventory = "inventory/my_lab"

Now when you run vagrant up, the resulting cluster will use Calico for networking.
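
Once provisioning finishes, you can check that the calico-node pods came up by connecting to one of the nodes (a sketch; the node name depends on your Vagrant settings, and the kubeconfig path assumes a kubeadm-based install, which is what Kubespray uses):

$ vagrant ssh k8s-1
$ sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system -l k8s-app=calico-node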

