3 Takeaways on the Future of OpenStack Networking from the Boston Summit

The Project Calico team is glad to see the OpenStack ecosystem move closer to the Kubernetes world, as evidenced by this week’s events at the OpenStack Summit in Boston. Here are three highlights for OpenStack networking, looking past the noise at the big picture.

1. Operators/Users are desperate for operational simplicity and scale

A common theme across many of the Neutron sessions was operators and users complaining about the operational complexity of Neutron with Open vSwitch and its multiple layers of bridges, tunnels, and overlays (even more so when additional elements such as complex SDN controllers are introduced). This stood in contrast with vendor participants at these sessions attempting to introduce ever more complex designs and features on an already convoluted foundation.

By comparison, a number of operators and users were quite surprised to discover the elegant simplicity and operational scalability of Project Calico’s pure-L3, overlay-free networking across both Kubernetes and OpenStack/Neutron, as evidenced by how often we were approached in the corridors and lunch hall after the sessions with enthusiastic expressions of interest.


2. OpenStack deployments need to integrate nicely with Kubernetes, whether alongside, underneath, or increasingly as a containerized application on Kubernetes

The workshop, hosted by the AT&T Integrated Cloud team, with attendees deploying OpenStack on Kubernetes using OpenStack-Helm and leveraging Calico for simple, scalable networking

The broader theme of this Summit could be expressed as “OpenStack can play nicely with Kubernetes”. One of the trends highlighted at the event is the architectural shift in which OpenStack is increasingly being deployed as a containerized application on Kubernetes, where Kubernetes simplifies OpenStack’s deployment and operational lifecycle management, including autoscaling, in-place non-disruptive upgrades between major versions, and other key functions. An illustration of this was the impressive demo by the AT&T Integrated Cloud team at a session together with Canonical and Tigera, where they showed a running OpenStack deployment on top of Kubernetes using OpenStack-Helm (leveraging Calico for simple, scalable networking) and then performed a live upgrade through Kubernetes without affecting applications (such as live video being streamed from OpenStack instances).

Even more impressively, the AT&T Integrated Cloud team hosted a hands-on workshop in a packed room where over 100 participants performed the same task of deploying and upgrading OpenStack on Kubernetes, leveraging Helm and Calico for networking, and completed the lab in mere minutes.


3. Complexity of Network Overlays for isolation and L2 primitives does not scale operationally in a Kubernetes + OpenStack (and other Cloud-Native Infrastructure) World

As the OpenStack world comes to terms with the realities of joint operation with Kubernetes and other cloud-native infrastructure at scale, it is becoming evident to infrastructure operators that the legacy primitives that attempt to isolate applications and users with complex network overlay technologies and L2 abstractions like bridges and vswitches impose significant operational and scalability handicaps. Operators are increasingly embracing the Project Calico approach: a simple yet scalable pure-L3 design for networking, combined with powerful Network Policy for isolation across Kubernetes, OpenStack, other cloud infrastructure orchestrators, and host Linux instances.
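To make the policy-based isolation concrete, here is a minimal sketch of a Kubernetes NetworkPolicy of the kind Calico enforces without any overlay. The namespace, names, labels, and port are hypothetical, chosen purely for illustration:

```yaml
# Hypothetical example: isolate "backend" pods so that only pods
# labeled role: frontend may reach them on TCP port 6379.
# Calico enforces this intent directly on a pure-L3 routed fabric,
# with no bridges, tunnels, or overlays required.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-isolation
  namespace: demo
spec:
  podSelector:
    matchLabels:
      role: backend        # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only frontend pods are allowed in
      ports:
        - protocol: TCP
          port: 6379
```

The isolation boundary here is expressed as intent (labels and ports) rather than as network topology, which is why it scales operationally where per-tenant overlays and L2 segments do not.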

The Project Calico team welcomes the OpenStack networking community and operators as they embrace the realities of the hybrid cloud-native world. Join us, and other teams (such as the OpenStack-Helm team), in helping architect a simple, powerful, and scalable network and security fabric across OpenStack and Kubernetes.
