Contributing Cool Community Content to Calico

It’s right there on our community page—the statement that “Project Calico is first and foremost a community.”

With that in mind, we wanted to make it easier for new contributors to get involved. It’s a win-win: developers face less friction getting their work done and having their contributions considered, and the project benefits from those contributions.

Recently, we have been doing a lot of work to simplify the contribution process, and to encourage, recognize, thank, and reward contributors. For example, earlier this year we announced our Calico Big Cats ambassador program and began using a single monorepo architecture. Read on and we’ll dig into that more.

Until now, in my role as Lead Developer Advocate for Project Calico, whenever I wanted to fix a bug or improve something, I had to feed it back to the development team to implement. In this blog post, though, I’m going to test out the new contribution process myself, document it for others, make improvements, and see what I can learn.

The Project Calico home page is a great place to find a contribution to make, so I headed there. Following the “Find a good first issue” link took me to a curated list of potential first issues. I wanted to fix a “real” valuable issue, so I chose this issue requesting improved IPAM metrics. There are slightly easier and harder issues to tackle, so you should be able to find something that is a match for your skillset, needs, and available time.

The proof is in the PR{udding}

As we simplify the contribution process, one of the most significant changes is moving to a single monorepo architecture (i.e., the code and associated files for each Calico component now live within a folder in its repository). This makes it much easier to check out the code and have a useful environment to start contributing.

I decided that in light of these recent improvements, it was time for me to revisit the contribution process. I wanted to test things from the perspective of a new contributor and see if I could:

  • Get a working development environment
  • Run baseline unit tests (UTs), functional verification tests (FVs), and system tests (STs)
  • Figure out the correct process for contributing code
  • Test locally in an in-laptop cluster
  • Submit a PR
  • Run end-to-end (E2E) tests and get my changes into a release

I’ll be using these goals as the structure for the rest of this article.

Setting up a development environment

The first step to being able to contribute code to Project Calico is to be able to build it yourself, unmodified, and edit it in your favorite development environment.

I’ll assume you have a GitHub account and you’re all set up and logged in. First, head to that “monorepo” site I talked about above, and use the Fork button to create a fork in your own GitHub account. A fork is simply a copy of the Project Calico code that you can work on independently without the fear of being shouted at when someone’s cluster breaks!
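If you prefer the command line, the GitHub CLI can do the same thing (this assumes you have gh installed and authenticated; the web UI works just as well):

# Fork projectcalico/calico into your own account without cloning it yet
gh repo fork projectcalico/calico --clone=false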

Once you have the fork in your own GitHub account, grab the SSH path for the codebase from the GitHub page for the fork. Mine appears in the clone command below, but yours will be different.

Next, it’s just a case of cloning the repository to your local laptop from a terminal. That could look something like this:

chris @ chris-work ~
└─518─▶ cd ~/repos/
chris @ chris-work ~/repos
└─521─▶ git clone git@github.com:cdtomkins/calico.git cdtomkins_calico
Cloning into 'cdtomkins_calico'...
remote: Enumerating objects: 186740, done.
remote: Counting objects: 100% (593/593), done.
remote: Compressing objects: 100% (345/345), done.
remote: Total 186740 (delta 243), reused 469 (delta 209), pack-reused 186147
Receiving objects: 100% (186740/186740), 123.39 MiB | 6.67 MiB/s, done.
Resolving deltas: 100% (124814/124814), done.
chris @ chris-work ~/repos
└─522─▶ cd ~/repos/cdtomkins_calico/
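With the clone in place, a quick look at the top level shows the per-component layout I mentioned earlier (folder names as of the time of writing; the exact list may evolve):

# Each Calico component lives in a top-level folder of the monorepo
ls
# api  apiserver  app-policy  calico  calicoctl  cni-plugin  confd  felix
# kube-controllers  libcalico-go  node  pod2daemon  typha  ...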

Calico Open Source is primarily written in Golang. I am wise enough not to run through the minefield of recommending the best environment for you to set up, but to give you some ideas, you’re going to need a code editor, perhaps Visual Studio Code.

You’ll also need to install some prerequisites. Those are documented in the Developer Guide, which is stored alongside the code in the repository.

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─540─▶ sudo apt update && sudo apt install -y git docker make
<...>
make is already the newest version (4.2.1-1.2).
docker is already the newest version (1.5-2).
git is already the newest version (1:2.25.1-1ubuntu3.2).
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.

You need Golang, too. For that, you’ll want to follow the official Go install instructions.
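The official instructions boil down to something like the following (the version number here is only an example; check the Go site for the current release):

# Download and unpack Go, then put it on the PATH (see https://go.dev/doc/install)
wget https://go.dev/dl/go1.17.8.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.17.8.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version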

At this point, you can open your editor and get coding. If you’re following along and using Visual Studio Code, it will recognize the programming languages in use and offer you some useful plugins. Discussion of those is beyond the scope of this post, but suffice it to say that you probably want to use them!

Here’s how it looks:

So far, if you have any development experience, you might well be thinking that this is all just common sense, and you’d be right! Things are this straightforward thanks to the recent process improvements. For example, before the change to a single repository, building Calico meant cloning and configuring seven or more repositories, not just one.

Now, with the exception of the Tigera/operator codebase, everything can be done in just one place. This is especially beneficial when making changes that touch the codebase in many places.

Baselining UTs, FVs, and STs

Like most quality open-source software, Calico Open Source includes the following three types of tests alongside the code in the repository:

  • UTs test the functionality of specific small sections of the code.
  • FVs, sometimes called integration tests, test a single component in its entirety, without detailed knowledge of individual functions within that component.
  • STs test a completely integrated system to verify that the system meets its requirements.

There is a fourth class of tests: E2Es. These are treated differently and are not run until the proposed change has a pull request submitted. At that point, a maintainer can issue the /sem-approve bot command, which is required for all tests coming from external contributors; the command triggers UTs, FVs, STs, and an E2E test run. E2Es are full end-to-end tests, so they require spinning up cluster(s) that closely resemble real-world deployments, which carries a cost. That’s why they are not run in the same way as the other tests.

You can read a lot more about software testing on Wikipedia.

I find it’s a good idea to grab the output of the UT, FV, and ST tests before making any changes to the codebase. That way, I know what a healthy run looks like and can easily establish whether my changes have caused any regressions or new issues.

However, the tests are run individually for each Calico component, not holistically. That’s a good thing because no time will be wasted running tests for Calico components totally unrelated to the code being worked on. Of course, the E2E tests should still discover any unexpected interactions at a system level once they are run. So, before I can establish the baseline output for the tests, I need to know what code I’m testing.

Luckily, the particular issue that I’m addressing is very easy to localize within the codebase. A simple grep through the repository for the code mentioned in the issue that I’m looking to improve points me right at the three files I’m interested in examining as a starting point:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─505─▶ grep -lr ipam_allocations_per_node *
calico/reference/kube-controllers/prometheus.md
kube-controllers/tests/fv/metrics_test.go
kube-controllers/pkg/controllers/node/ipam.go

Let’s look at these three individually to figure out how they are involved. I will explain the thought process I went through for each file:

  • calico/reference/kube-controllers/prometheus.md: This is a Markdown file, commonly used for documentation. Examining it in Visual Studio Code, I can clearly see it’s the documentation for Calico’s Prometheus statistics. I will need to update this to properly document my change.
  • kube-controllers/tests/fv/metrics_test.go: The path to this file, and the filename, give me some big clues. This file contains FV tests for Calico’s metrics output, and at least one is related to the code I want to change.
  • kube-controllers/pkg/controllers/node/ipam.go: This is some of the actual code that implements the Prometheus statistics. I can start figuring out my changes here.

As I mentioned earlier, each Calico component lives in a folder within the top-level repository. The code I’m interested in modifying lives in the kube-controllers folder, which maps directly to the Calico component of the same name. So, now I’ll baseline the tests for kube-controllers by running its UTs and FVs. This will take a few minutes, so I also put the kettle on:

chris @ chris-work ~/repos/cdtomkins_calico/kube-controllers [master] 
└─512─▶ make test

As you can see, I have not included all of the output here as the command is extremely verbose. It’s a good idea to copy and paste or redirect this output to a file for later comparison. However, it’s really the final few lines that I care most about. If my environment is set up correctly and all is well, it should look something like this:

••••••••••• JUnit report was created: /home/chris/repos/cdtomkins_calico/kube-controllers/report/fv_suite.xml
Ran 53 of 55 Specs in 888.090 seconds 
SUCCESS! -- 53 Passed | 0 Failed | 0 Pending | 2 Skipped 
PASS
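One way to keep that baseline around is to tee the test run to a file and diff it against a run made after your changes (the filenames are just examples):

# Capture a baseline run, then compare it against a post-change run
make test 2>&1 | tee /tmp/kube-controllers-baseline.log
# ...make your code changes, then:
make test 2>&1 | tee /tmp/kube-controllers-modified.log
diff /tmp/kube-controllers-baseline.log /tmp/kube-controllers-modified.log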

It’s time for full disclosure before moving on! The issue was more complex than I expected and my coding was not up to the task (in Golang, at least!), so my colleague Pasan kindly took over the coding work. Thanks Pasan! With that admission made, please read on.

The purpose of this blog post is not to teach Golang, so I will not dive into the details of the technical fix. Thanks to the wonders of open-source software development, you can dig into it yourself here if you’d like.

However, the output from tig below shows that I initially identified the right files.

commit a9ea2ad91f735c5d8decf8cf3bde5c3c51695152
Refs: [HEAD], v3.19.0-20892-ga9ea2ad91
Author:     pasanw <[email protected]>
AuthorDate: Thu Mar 24 09:56:38 2022 -0700
Commit:     GitHub <noreply@github.com>
CommitDate: Thu Mar 24 09:56:38 2022 -0700

    Pool-based IPAM Metrics (#5706)
---
calico/_includes/charts/calico/templates/calico-kube-controllers-rbac.yaml |  10 +++-
calico/maintenance/monitor/monitor-component-metrics.md                    |   2 +-
calico/reference/kube-controllers/prometheus.md                            |  23 ++++++--
kube-controllers/pkg/controllers/node/controller.go                        |   6 +-
kube-controllers/pkg/controllers/node/ipam.go                              | 355 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------
kube-controllers/pkg/controllers/node/ipam_allocation.go                   |   5 ++
kube-controllers/pkg/controllers/node/ipam_test.go                         | 215 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
kube-controllers/pkg/controllers/node/pool_manager.go                      | 141 ++++++++++++++++++++++++++++++++++++++++++++
kube-controllers/pkg/controllers/node/syncer.go                            |   3 +
kube-controllers/tests/fv/metrics_test.go                                  | 636 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------------------------------------------------
node/tests/k8st/infra/calico-kdd.yaml                                      |  11 +++-
11 files changed, 1135 insertions(+), 272 deletions(-)

Testing Locally

The exact method for testing a change depends on the nature of the change (of course)! The various tests that I previously described should take care of a diverse range of deployment scenarios, so I realized that for my change, my objective with local testing should be to test a basic scenario with a cluster that:

  • Is running the latest Calico release version
  • Is running a recent Kubernetes version
  • Has a simple deployment
  • Has all components “vanilla” except calico-kube-controllers, which is substituted with my modified version

There are many ways to achieve this. In fact, running make kind-cluster-create in the Calico repository will build a cluster with Calico installed using components built from the checked-out code!
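If you take that quick path, it is as simple as this (run from the top of the repository, with Docker available; the target drives kind under the hood):

make kind-cluster-create
kubectl get pods -A   # Calico pods built from your checkout should come up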

I decided to do it in a slower way, to show a little more of how things are working. First, I followed the instructions on building a single Calico component image; the result lands in the local Docker image store:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─537─▶ make -C kube-controllers image
make: Entering directory '/home/chris/repos/cdtomkins_calico/kube-controllers'
<...>
make: Leaving directory '/home/chris/repos/cdtomkins_calico/kube-controllers'
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─538─▶ docker image ls
REPOSITORY                                    TAG            IMAGE ID       CREATED         SIZE
calico/flannel-migration-controller           latest         548cfd50eb4b   2 minutes ago   179MB
calico/flannel-migration-controller           latest-amd64   548cfd50eb4b   2 minutes ago   179MB
flannel-migration-controller                  latest-amd64   548cfd50eb4b   2 minutes ago   179MB
<none>                                        <none>         841e7ad12e49   2 minutes ago   103MB
calico/kube-controllers                       latest         49db6cf70bcb   2 minutes ago   132MB
calico/kube-controllers                       latest-amd64   49db6cf70bcb   2 minutes ago   132MB
kube-controllers                              latest-amd64   49db6cf70bcb   2 minutes ago   132MB
<none>                                        <none>         c1a5d0e730b1   2 minutes ago   103MB
calico/go-build                               v0.65          3d9d3cb48117   2 weeks ago     5.46GB
registry.access.redhat.com/ubi8/ubi-minimal   latest         0e1c0c70dbc5   6 weeks ago     103MB

Next, I built a vanilla minikube (local) cluster with 3 nodes and 2 IP pools (note that I first specify that I don’t want a CNI, then I remove the Kindnet CNI that is spuriously installed by minikube when adding the additional nodes). I installed Calico using the recommended Tigera Operator install method:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─557─▶ minikube start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.0.0/16
😄  minikube v1.25.1 on Ubuntu 20.04
▪ KUBECONFIG=/home/chris/.kube/config
✨  Automatically selected the docker driver. Other choices: none, ssh
❗  With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ kubeadm.pod-network-cidr=192.168.0.0/16
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─558─▶ kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─559─▶ kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─560─▶ minikube node add
😄  Adding node m02 to cluster minikube
❗  Cluster was created without any CNI, adding a node to it might cause broken networking.
👍  Starting worker node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m02 to minikube!
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─561─▶ minikube node add
😄  Adding node m03 to cluster minikube
👍  Starting worker node minikube-m03 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m03 to minikube!
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─561─▶ kubectl delete ds -n=kube-system kindnet
daemonset.apps "kindnet" deleted
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─562─▶ calicoctl apply -f ~/2022/2022_01/5430_improved_ipam_metrics/second-ipv4-ippool.yaml
Successfully applied 1 'IPPool' resource(s)
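The contents of that file aren’t shown above, so here is a hypothetical equivalent for illustration; the resource name matches the file, but the CIDR and options are my assumptions, not the actual values used:

# Illustrative only: create and apply a second, non-overlapping IPv4 pool
cat > second-ipv4-ippool.yaml <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: second-ipv4-ippool
spec:
  cidr: 10.48.0.0/24
  ipipMode: Always
  natOutgoing: true
EOF
calicoctl apply -f second-ipv4-ippool.yaml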

The cluster is now running the latest release version of Calico, including the latest release version of calico-kube-controllers:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─640─▶ kubectl get pods -A -o wide | grep -i calico-kube-controllers
calico-system      calico-kube-controllers-7dddfdd6c9-vk5mx   1/1     Running   0               3m6s    192.168.120.66    minikube       <none>           <none>

Next, I load my modified calico-kube-controllers image into minikube’s image store, alongside the release version:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─642─▶ minikube image ls | grep kube-controllers
docker.io/calico/kube-controllers:v3.21.4
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─643─▶ minikube image load calico/kube-controllers
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─644─▶ minikube image ls | grep kube-controllers
docker.io/calico/kube-controllers:v3.21.4
docker.io/calico/kube-controllers:latest

Now, I can instruct the Tigera Operator to ignore the calico-kube-controllers deployment by adding an annotation, so that I can adjust the deployment without interference.

NOTE: This is a developer option and not something you should be doing anywhere near a production cluster or without a clear understanding of the need, as is specifically noted in the Tigera Operator repository README!

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─645─▶ kubectl annotate deployments.apps -n=calico-system calico-kube-controllers unsupported.operator.tigera.io/ignore="true"
deployment.apps/calico-kube-controllers annotated

Now, I can patch the deployment for calico-kube-controllers to specify that the image tagged latest should be run, rather than the version tagged with v3.21.4, and confirm that it has worked by checking the pod:

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─646─▶ kubectl patch deployment -n=calico-system calico-kube-controllers -p'{"spec":{"template":{"spec":{"containers":[{"name":"calico-kube-controllers","image":"docker.io/calico/kube-controllers:latest"}]}}}}'
deployment.apps/calico-kube-controllers patched
chris @ chris-work ~/repos/cdtomkins_calico [master]
└─647─▶ kubectl get pod -n=calico-system -l=k8s-app=calico-kube-controllers -o yaml | grep "image:"
image: docker.io/calico/kube-controllers:latest
image: calico/kube-controllers:latest

That’s it—the cluster is now running my modified version of calico-kube-controllers! I can test that it is still responding to metrics requests, too. Spoiler: everything looks fine.

chris @ chris-work ~/repos/cdtomkins_calico [master]
└─651─▶ minikube ssh
Last login: Wed Jan 26 15:34:39 2022 from 192.168.49.1
docker@minikube:~$ curl -s http://192.168.205.195:9094/metrics | tail -n 3
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
docker@minikube:~$ exit
logout
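Since the point of the change is new IPAM metrics, a natural follow-up is to filter the same endpoint for them (the ipam_ prefix matches the metric names we grepped for earlier; the exact names depend on the final change):

curl -s http://192.168.205.195:9094/metrics | grep ipam_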

Now, I can iterate as much as I need to on my code and test locally, without needing to push to my branch or submit a PR upstream.
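An inner loop for that can be assembled from the steps above (the rollout restart is my addition; it assumes the pod picks up the newly loaded image on restart, so you may need to adjust the image pull policy):

# Rebuild the image, reload it into minikube, and restart the deployment
make -C kube-controllers image
minikube image load calico/kube-controllers
kubectl rollout restart deployment -n calico-system calico-kube-controllers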

Submitting a PR

Here is Pasan’s pull request again.

The process to submit a PR is straightforward and is captured in the Contributing to the Calico Codebase document stored alongside the code in the repository. You should check that document for updates rather than following the steps here, but at the time of writing, the process was as follows (a minimal git sketch follows the list).

  1. Create a personal fork of the repository.
  2. Pull the latest code from the master branch and create a feature branch off of this in your fork.
  3. Implement your feature. Commits are cheap in Git, so split your work into many small ones; it makes reviewing easier and merging saner.
  4. Make sure that existing tests are passing and that you’ve written new tests for any new functionality. Each directory has its own suite of tests.
  5. Push your feature branch to your fork on GitHub.
  6. Create a pull request using GitHub, from your fork and branch to projectcalico master.
    1. If you haven’t already done so, you will need to agree to our contributor agreement. See below.
    2. Opening a pull request will automatically run your changes through our continuous integration (CI) pipeline. Make sure all pre-submit tests pass so that a maintainer can merge your contribution.
  7. Await review from a maintainer.
  8. When you receive feedback:
    1. Address code review issues on your feature branch.
    2. Push the changes to your fork’s feature branch on GitHub in a new commit—do not squash! This automatically updates the pull request.
    3. If necessary, make a top-level comment along the lines of “Please re-review,” notifying your reviewer, and repeat the above.
    4. Once all the requested changes have been made, your reviewer may ask you to squash your commits. If so, combine the commits into one with a single descriptive message.
    5. Once your PR has been approved and the commits have been squashed, your reviewer will merge the PR. If you have the necessary permissions, you may merge the PR yourself.
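In plain git terms, steps 1 through 6 might look something like this (branch and remote names are examples; “upstream” is assumed to point at projectcalico/calico and “origin” at your fork):

# Sync master, branch, commit, and push to your fork
git checkout master
git pull upstream master
git checkout -b improved-ipam-metrics
# ...edit, run the tests, make several small commits...
git push origin improved-ipam-metrics
# then open the pull request from your fork's branch on GitHub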

Running E2Es

As noted earlier, E2Es are run once a maintainer decides a PR is ready to be reviewed and all other tests are passing. If the maintainer decides an E2E run is required, they will issue the /sem-approve bot command in GitHub and the tests will be performed; the results will appear on GitHub. Our change did not need an E2E run, so here is one from another PR, for completeness.

You can also run make e2e-test to execute the Kubernetes Network Special Interest Group conformance tests locally against your checked-out code, installed on a local kind cluster.
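For example (run from the top of the repository; Docker and kind need to be available, and the Makefile is the authority on exactly what each target provisions):

make kind-cluster-create   # local kind cluster built from the checked-out code
make e2e-test              # run the conformance tests against it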

What About After Your First PR?

Even a single minor PR, such as a documentation error or a small bug fix, is a valued contribution. But what if you got a taste for it, and you’d like to contribute more, or you have an idea for a significant change? Well, the best thing to do (as so often in life) is to start a conversation! Join other contributors, maintainers, and expert users in the monthly Calico Community Meeting, or in the #contributors channel on the Calico Users Slack. There are other paths you can take, too! If you have an idea, we can discuss it and help support you; if not, we can help direct you towards a challenge that’s just right for your available time, inclination, and skillset. Every contribution is valued.

So what are you waiting for? Become a Calico contributor today!

 
