OpenShift From Pilot To Production Matrix 23 Oct 2017

OpenShift from Pilot to Production, Part 1: Connectivity and Network Policy in the Hybrid, Multi-Cloud World

In this multi-part series, I will be looking at what it takes to get OpenShift from initial pilot to a production deployment meeting enterprise requirements for application security and connectivity.

So you’ve been sold on OpenShift’s enterprise wrappers around Kubernetes for continuous development to deployment workflows. Congratulations — you’ve made a great first step on the path to Cloud Native Nirvana!

However, as that new PaaS smell begins to fade, reality sinks in: you are now faced with the challenges of real-world connectivity and security.

  • Connecting applications in OpenShift to applications and infrastructure outside it within the cloud and on-premises across various regions.
  • Since OpenShift pods are dynamic and transient, securing your workload connectivity so that accidental misconfigurations, oversights, or compromised applications do not lead to the next Equifax-like incident.
  • Enabling traditional network security functions in the dynamic OpenShift environment, including policy-driven coarse-grained and fine-grained isolation, monitoring, in-depth audit/logging/capture, anomaly/threat detection and SIEM integration.
  • SELinux and SecComp can be leveraged in OpenShift for policy-guided compartmentalization of applications within an individual node. How can this be extended into application connectivity across the network, within the cluster, and to host/VM instances and services outside OpenShift?
  • Meeting regulatory compliance requirements (e.g., PCI, HIPAA, GDPR), each with specific requirements for network and application connectivity.
  • Providing connectivity and security while avoiding operational complexity.
  • Public cloud providers, managed hosting operators, and internal service providers may need additional security boundaries around the network policy controls they offer to individual projects/tenants.
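As a concrete illustration of the coarse-grained, policy-driven isolation described above, here is a minimal default-deny sketch using the standard Kubernetes NetworkPolicy resource (the namespace name is hypothetical):

```yaml
# Hypothetical default-deny ingress policy for an OpenShift project/namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-project     # hypothetical project name
spec:
  podSelector: {}           # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed, so all inbound traffic is denied
```

With this in place, traffic to pods in the project is denied by default, and connectivity must be explicitly whitelisted by further policies.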

In collaboration with the Kubernetes community (and sub-teams like SIG-Network), the team at Tigera has been working on architecturally elegant solutions to these challenges through stewardship of, and participation in, open source projects like Calico, Flannel, CNI and Istio.

In this blog series, we will discuss how OpenShift operators can architect their deployment to meet these critical real-world policy and network security requirements while simultaneously enabling simple, scalable connectivity within OpenShift, and to instances/VMs outside OpenShift running on-premises or in the cloud.


The Kubernetes Network Policy specification provides for compartmentalization of network flows with a rich, yaml-based declarative framework. Calico implements this specification, and in fact offers a superset of its capabilities. (The Calico team helped with the design of Kubernetes Network Policy, and Calico was deemed to be the reference implementation during early testing.)
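To make the declarative framework concrete, here is a sketch of a fine-grained policy that whitelists a single flow between application tiers. All names, labels, and the port are illustrative assumptions, not taken from a real deployment:

```yaml
# Hypothetical policy: allow only pods labeled role=frontend to reach
# pods labeled app=db on TCP 5432; all other ingress to the db pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: my-project     # hypothetical project name
spec:
  podSelector:
    matchLabels:
      app: db               # policy applies to the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 5432           # and only on this port
```

Because selection is label-based rather than address-based, the policy continues to apply correctly as pods are created, destroyed, and rescheduled.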

The Tigera Essentials Toolkit for OpenShift builds on Calico Network Policy, enhancing policy management and operations for production deployments. It facilitates in-depth instrumentation and monitoring of application connectivity flows, as well as policy overrides that enable common enterprise and service provider workflows for business, security, and regulatory compliance requirements.

In Part 2 of this blog series, we will discuss Calico Network Policy features along with the advanced policy management and operations features enabled by the Tigera Essentials toolkit, and their use cases in real-world deployments at scale, leveraging best practices gained from operational patterns at organizations (including Google Container Engine, IBM, Box, and numerous others) that have deployed Kubernetes into production at scale.


Given that Kubernetes network policy enables application isolation by using a rich, declarative syntax, the traditional Gen1 SDN approach of attempting to isolate applications and tenants from each other using operationally-complex and performance-inhibiting L2 network overlays is no longer required.

Calico networking treats every node in the OpenShift cluster as a router: it programs each node's routing table with the appropriate routes, so each node forwards traffic using the standard Linux routed data path. This clean IP-routed architecture enables easy connectivity at scale, borrowing from the same designs that have scaled IP connectivity across the Internet.

Alternatively, when deploying in a cloud like Google Compute Engine (GCE), Microsoft Azure, or similar environments that already provide network connectivity to container addresses, Calico networking can be disabled altogether, and Calico can function in policy-only mode.
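As a rough sketch of what policy-only mode looks like, the calico/node agent can be told to skip route programming via its networking-backend setting. The exact manifest layout varies by install method and version, so treat this fragment as illustrative only:

```yaml
# Fragment of a calico/node container's env list (illustrative sketch):
- name: CALICO_NETWORKING_BACKEND
  value: "none"   # disable Calico's routing/BGP; the cloud fabric carries traffic,
                  # while Calico continues to enforce network policy on each node
```

In this mode the underlying cloud network handles pod-to-pod connectivity, and Calico's role is reduced to programming policy enforcement on every node.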

Part 3 of this blog series will introduce some of the common network connectivity features of Calico networking as applied to OpenShift deployments, and contrast the simple elegance and operational scale of this approach versus more complicated overlay-based approaches.


Part 4 of this blog series will share deployment topologies and reference architectures for OpenShift, leveraging best practices from numerous production deployments of Kubernetes with Calico.

These will demonstrate how Calico and Tigera Essentials enable a seamless architecture with scalable IP connectivity, simplified troubleshooting, and policy-driven network security and operations within an OpenShift deployment.


There’s more to getting OpenShift up and running than meets the eye, particularly when it comes to securely connecting your workloads. In this post we covered some of the highlights of how Calico can help. Watch for the next installment, where we dig into the network policy aspects in more depth.