This blog is the second in a four-part series explaining how to deploy secure application connectivity in an OpenShift environment.
- Part 1: Connectivity and Network Policy in the Hybrid, Multi-Cloud World
- Part 2: Security and Policy as Code
- Part 3: Scalable, simple, non-overlay IP connectivity across on-premises and public clouds
- Part 4: Real-world recipes from production OpenShift deployments with declarative policy-guided security and connectivity
OpenShift provides an automated, declarative platform for applications. Infrastructure-as-code and intent-based networking enable automated, declarative infrastructure. Yet most organizations struggle with static, brittle, manual layers sandwiched in between for application connectivity and security: legacy perimeter firewalls, IPS/IDS appliances, WAFs, antiquated VM-era policy controllers, and complex SDN controllers. Heck, many organizations also require “Bob” from the InfoSec team to review firewall rules before the app rollout is committed to production (no intent to point fingers at real-world Bobs in InfoSec).
If you’re a typical organization embracing agile processes, automation, and a declarative infrastructure platform, and are wondering why app deployments still take weeks, it’s usually because of this brittle, antiquated layer. Why are these organizations locked into a layer rooted in processes developed during the days of VMs and manual intervention? Because this is how the typical organization achieves enterprise and LoB control, security oversight, and compliance.
Even more concerning, rules aren’t actively maintained when apps are decommissioned, leaving behind cruft that accumulates over time.
Enter Tigera CNX.
CNX enables organizations to embrace a declarative, automated, policy-driven approach to application connectivity and security (and associated operations) across containers, VMs, bare metal, and cloud instances. CNX leverages Project Calico and Istio as foundational open source building blocks.
Calico is the gold standard for network policy across Kubernetes deployments. Calico policy is used in the majority of Kubernetes production deployments implementing policy-based security, across various networking plugins (including Flannel, Calico itself, and the native underlay networks of the major hosted Kubernetes cloud services). It has been adopted by the largest public cloud providers as part of their hosted Kubernetes services and embraced by leading commercial Kubernetes distributions, as well as upstream Kubernetes installers (and is frequently the default choice within these).
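To make this concrete, here is a minimal sketch of a standard Kubernetes NetworkPolicy of the kind Calico enforces. The namespace and labels (`shop`, `role: db`, `role: api`) are hypothetical, purely for illustration:

```yaml
# Hypothetical policy: only pods labeled role=api in the same
# namespace may reach the database pods, and only on port 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
  namespace: shop            # assumed namespace
spec:
  podSelector:
    matchLabels:
      role: db               # assumed label on the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api      # assumed label on the API pods
      ports:
        - protocol: TCP
          port: 5432
```

Because the policy selects pods by label rather than by IP address, it keeps working as the orchestrator reschedules and scales the workloads it covers.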
Likewise, Istio is rapidly becoming a standard control plane component of microservices deployments, contributing observability, traffic management, security, policy, and other functions by leveraging the Envoy sidecar proxy injected alongside the application.
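As a small illustration of how unobtrusive the sidecar model is: with Istio's automatic injection enabled, opting a namespace in is a single label, and the Envoy proxy is added to each pod at admission time. The namespace name below is hypothetical:

```yaml
# Labeling a namespace for automatic sidecar injection: Istio's
# admission webhook then injects an Envoy proxy into every pod
# created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                  # assumed namespace
  labels:
    istio-injection: enabled
```

Application teams do not modify their deployments; the proxy (and the observability, traffic management, and security it brings) arrives transparently.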
Application Connectivity & Security: Moving to a Declarative, Automated Paradigm
Stay tuned for an upcoming whitepaper that will provide a detailed walkthrough of how CNX enables an automated, declarative, policy-driven workflow for secure application connectivity, for all critical stages of an organization’s transformation to agile microservices-based applications. The paper will also explain how various roles across development, cloud architecture, SRE/Operations and InfoSec can all eliminate many common sources of friction (and tension) that delay the move to agile microservices.
CNX enables automated, declarative, policy-guided connectivity workflows for a number of use cases:
- Across pods within OpenShift (and other enterprise Kubernetes platforms)
- Between virtual machines
- Within orchestrated VM/cloud platforms like OpenStack (where Calico runs as a Neutron plugin)
- Across cloud instances and bare metal hosts
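One way this plays out in practice: a single Calico GlobalNetworkPolicy applies to both container workloads and host endpoints, so the same declarative rule protects pods, VMs, and bare metal hosts alike. The sketch below uses the `projectcalico.org/v3` API; the specific rule (blocking legacy telnet everywhere) is a hypothetical example:

```yaml
# Hypothetical cluster-wide rule: deny inbound telnet to every
# endpoint Calico manages (pods and host endpoints), allow the rest.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-legacy-telnet
spec:
  selector: all()            # applies to all endpoints
  types:
    - Ingress
  ingress:
    - action: Deny
      protocol: TCP
      destination:
        ports: [23]          # telnet
    - action: Allow          # everything else falls through
```

Because the selector is not tied to any one platform, the policy travels with the workloads wherever they run.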
Since the large public cloud providers have already integrated Calico into their Kubernetes offerings (such as Amazon EKS, Microsoft ACS Engine, Google GKE and IBM Cloud), organizations are able to benefit from a consistent policy implementation with CNX on OpenShift.
CNX enables numerous advanced network policy use cases, such as:
- Providing hierarchical enterprise policy control that is declarative and automated. This helps Bob from InfoSec evolve from a manual security reviewer into an enabler of automated, declarative policy-as-code that adapts to dynamic orchestrated workloads, while remaining aligned with the InfoSec team’s priority within the enterprise organizational hierarchy.
- Automating common operations workflows for connectivity and security, converting tasks previously requiring manual Operations/SRE/DevSecOps intervention for monitoring, audit, alerting, fixing, and forensics into automated, declarative policy-driven workflows.
- Enabling developers to automate the creation and tracing/troubleshooting of network policy
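The hierarchical control described above can be sketched with tiered policy. Tiers are a CNX capability; the fields below follow the `projectcalico.org/v3` API, and the tier and selector names are hypothetical. An InfoSec-owned tier evaluates first, and traffic it does not decide on is explicitly passed down to developer-owned policy:

```yaml
# Hypothetical InfoSec-owned tier that is evaluated before
# developer policy tiers (lower order = higher precedence).
apiVersion: projectcalico.org/v3
kind: Tier
metadata:
  name: security
spec:
  order: 100
---
# A policy in that tier: deny traffic from low-trust workloads,
# then hand everything else to the next tier for app-level rules.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: security.block-untrusted   # tiered policy names carry the tier prefix
  namespace: shop                  # assumed namespace
spec:
  tier: security
  order: 10
  selector: all()
  types:
    - Ingress
  ingress:
    - action: Deny
      source:
        selector: trust == "low"   # assumed workload label
    - action: Pass                 # defer remaining traffic to later tiers
```

InfoSec guardrails are enforced first and cannot be overridden by application teams, yet developers retain self-service control within their own tier.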
All of these advanced workflows integrate seamlessly with OpenShift, leveraging integration hooks such as API aggregation. Operators manage users/groups and RBAC permissions through OpenShift (or Kubernetes), and can use existing OpenShift and Kubernetes frameworks (whether it is ‘oc’, ‘oadm’, ‘kubectl’, the dashboard, or even custom integrations) to access the advanced, declarative policy-driven workflows and automation provided by CNX.
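For instance, delegating network policy management to a development team is just standard Kubernetes RBAC. This sketch grants a hypothetical OpenShift group edit rights over policy resources in its own namespace (the group and namespace names are assumptions):

```yaml
# Hypothetical RBAC: let the dev-team group manage network policy
# (both Kubernetes-native and Calico resources) in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: netpol-editor
  namespace: shop              # assumed namespace
rules:
  - apiGroups: ["networking.k8s.io", "projectcalico.org"]
    resources: ["networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-netpol
  namespace: shop
subjects:
  - kind: Group
    name: dev-team             # assumed OpenShift group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: netpol-editor
  apiGroup: rbac.authorization.k8s.io
```

No separate policy console or credential store is needed; the same ‘oc’ or ‘kubectl’ identity that deploys the application governs its policy.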
There are many more advanced workflows enabled by Tigera CNX, Project Calico and Istio (covering them all would turn this blog post into an e-book). We’ve been thrilled by the number of organizations collaborating with us to use CNX to evolve past the slow, brittle, manual processes alongside their OpenShift deployments.
In Part 3 of this blog series, we will dive into the connectivity portion of Tigera CNX and Project Calico, and explore multi-cloud and hybrid connectivity across containers, pods, VMs, host instances, and workloads running within the major public cloud providers. In the meantime, give Tigera CNX a test drive with your OpenShift deployment – and automate the previously manual, brittle application connectivity and network security layers that hinder the move to agile microservices.