One of the key features of Calico is that it is simple and universal – you can use it to network any virtualized environment, whether with VMs or containers. One of the key features of Apache Brooklyn is its ability to orchestrate any combination and kind of workload through simple but powerful blueprints.
So what would you get if you put the two together? Seamless networking and automation of VM and container workloads – allowing you to build applications using a combination of VMs and containers, mixing and matching depending on what best suits each workload. Brooklyn takes care of the automation; Calico takes care of the networking.
So much for the theory – how does this all work in practice? Over the past couple of weeks I’ve been working with Andrew Kennedy and Csaba Palfi at Cloudsoft to configure a Brooklyn / Clocker deployment to manage applications spread across OpenStack and Docker. This blog post describes how it works, and how you can do this yourself.
First we built a simple 5 host deployment. We used 2 of the hosts to run OpenStack VM workloads and the other 3 hosts to run Docker containers. We used the standard Calico OpenStack plugin for the OpenStack workloads and the standard calico-docker code for the Docker workloads. Brooklyn was responsible for orchestrating the workloads across all of the hosts (using Clocker to control the Docker workloads).
Everything works as you’d expect. You can create VMs and containers through OpenStack and Docker as normal, create and manage arbitrary OpenStack security groups, and assign the VMs and containers to any of those security groups. When you do so, the correct routes get created and propagated through BGP to allow connectivity, while the security group rules are applied to both VMs and containers as normal (and yes, adding a container to a security group does allow it to match on rules that filter on security group).
The summary? A few extra steps in Calico setup (see below for what to do manually), but it all just worked. Nobody likes manual steps, and that’s where Clocker and Brooklyn come in, allowing the automated install of extra Docker hosts, and of applications and security profiles spanning both VMs and containers. Andrew Kennedy will be showing off this demo at the Cloudsoft booth at DockerCon. Project Calico will also have a booth; come and talk to us there to find out more.
The rest of this blog discusses the Calico networking in more detail, but we encourage you to also check out the Brooklyn and Clocker projects that made the higher level automation possible (and expect to see a cross-link here to their blog post on this topic for more details coming soon).
First, we used the same etcd cluster for all the Calico nodes. For simplicity, we also configured Calico to use the same BGP AS number on all the hosts for routing purposes.
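As a rough sketch (the etcd address below is illustrative), each Docker host was pointed at the shared etcd cluster before starting its Calico node; at the time, calicoctl read the cluster location from the ETCD_AUTHORITY environment variable:

```shell
# On each Docker host: point calicoctl at the shared etcd cluster
# (address illustrative) and start the calico-node container.
export ETCD_AUTHORITY=172.18.0.10:4001
sudo -E calicoctl node --ip=172.18.0.21

# Because every host uses the same BGP AS, no per-host AS
# configuration is needed: BIRD forms a full iBGP mesh by default.
```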
The security groups were configured (via Brooklyn) using the standard OpenStack interfaces. (You could also use the Horizon GUI, the command line, or the OpenStack RESTful API.) As soon as a security group has been created, it appears as a Calico profile available to all types of workload. OpenStack VMs were assigned to profiles using Calico’s OpenStack plugin; containers were assigned to profiles using the standard calicoctl tool – all coordinated by Brooklyn and Clocker.
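For example (the group name, container name, and UUID placeholder below are illustrative), a security group created through the standard Neutron CLI shows up as a Calico profile that a container can then join:

```shell
# Create a security group via OpenStack as normal (name illustrative),
# with a rule allowing traffic from members of the same group.
neutron security-group-create app-tier
neutron security-group-rule-create --direction ingress \
    --remote-group-id app-tier app-tier

# On a Docker host, the group's UUID now appears as a profile...
calicoctl profile show

# ...and a container can be added to it (container name illustrative).
calicoctl profile <security-group-uuid> member add my-container
```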
Here’s what you have to do (with Calico) in more detail.
On each Docker host, set the interface prefix to “cali”:

etcdctl set /calico/v1/host/<docker host>/config/InterfacePrefix cali

OpenStack hosts expect interfaces starting with “tap”, while Docker hosts expect interfaces starting with “cali” – and this way you’ll be able to mix the two in the same deployment.
Configure the IP pools (using calicoctl pool add) on one of your Docker hosts, as documented here. You should use separate IP pools for OpenStack and Docker (so that OpenStack does not select IPs that are in the Docker range), but provision both of them using calicoctl so that they are included in the BIRD BGP configuration.
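As a sketch, with illustrative address ranges, that pool configuration might look like:

```shell
# On one Docker host: separate, non-overlapping pools for each
# workload type (CIDRs are illustrative).
calicoctl pool add 10.65.0.0/16    # range used for OpenStack VMs
calicoctl pool add 192.168.0.0/16  # range used for Docker containers

# Both pools are now part of the BIRD BGP configuration.
calicoctl pool show
```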
For each OpenStack compute host, record its BGP IP in etcd:

etcdctl set /calico/v1/host/<openstack host>/bird_ip <IP address of host>
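With two OpenStack compute hosts, this could look roughly as follows (hostnames and addresses are illustrative):

```shell
# Record each OpenStack compute host's BGP IP in etcd, so that the
# Docker hosts' BIRD instances know to peer with it.
etcdctl set /calico/v1/host/os-compute-1/bird_ip 172.18.0.11
etcdctl set /calico/v1/host/os-compute-2/bird_ip 172.18.0.12
```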
Edit /etc/bird/bird.conf on the OpenStack compute hosts, and restart BIRD. A good trick is to dump the config from one of the Docker hosts and use that as the basis for the OpenStack configuration, to reduce the amount of manual editing required:

docker exec calico-node cat /config/bird.cfg
OK, so now it’s all set up – how do you check that it’s working?
Run calicoctl profile show on a Docker host – you’ll see a list of profile IDs, each of which is the UUID of an OpenStack security group.
Use calicoctl member add to assign containers to the security groups.
Run ip route show on both the OpenStack compute hosts and the Docker hosts. If the routes to the containers and VMs show up on the owning host but nowhere else, you have probably made a mistake configuring BIRD.
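Once the routes have propagated, a quick connectivity check between a container and a VM in the same security group might look like this (the container name and addresses are illustrative):

```shell
# On a Docker host: confirm routes to the OpenStack VM range exist
# (CIDR illustrative, matching the pool configured earlier).
ip route show | grep 10.65.

# Ping an OpenStack VM (address illustrative) from inside a container
# that has been added to the same security group.
docker exec my-container ping -c 4 10.65.0.5
```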
It’s worth mentioning that this is still a work in progress, and there are some rough edges in the tools when doing this. For example, some calicoctl commands are a little confused by the OpenStack data they find in a mixed deployment: you can list profiles derived from OpenStack security groups using calicoctl, but you cannot use it to edit or view their contents; calicoctl shownodes does not work; and calicoctl node stop only works with the force option. None of these is a significant issue in practice, and we’ll be cleaning them up in future (at the same time as making the process above a little less manual).