Container Network Interface (CNI) is a framework for dynamically configuring networking resources. It consists of a specification and a set of supporting libraries, written in Go. The plugin specification defines an interface for configuring the network, provisioning IP addresses, and maintaining connectivity across multiple hosts.
When used with Kubernetes, CNI can integrate smoothly with the kubelet to enable the use of an overlay or underlay network to automatically configure the network between pods. Overlay networks encapsulate network traffic using a virtual interface such as Virtual Extensible LAN (VXLAN). Underlay networks work at the physical level and comprise switches and routers.
Once you’ve specified the network configuration type, the container runtime defines the network that containers join. The runtime adds the interface to the container namespace via a call to the CNI plugin and allocates the connected subnetwork routes via calls to the IP Address Management (IPAM) plugin.
CNI supports Kubernetes networking, and can also be used with other Kubernetes-based container orchestration platforms such as OpenShift. CNI uses a software-defined networking (SDN) approach to unify container communication throughout clusters.
Kubernetes is an open-source container orchestration platform originally developed by Google. It is used to manage and automate application container deployments across clusters of machines. Kubernetes allows you to operate, schedule, monitor, and maintain containerized workloads.
Kubernetes networking also lets administrators move workloads between different cloud infrastructures, including public, private, and hybrid clouds. It allows developers to quickly package and deploy applications on their preferred infrastructure, which is useful when rolling out new versions.
With Kubernetes networking, Kubernetes components can communicate with different applications and with each other. Kubernetes differs from other networking platforms in that it has a flat network structure, which means that host ports don’t have to be mapped to container ports. It allows you to run a distributed system, with machines being shared between applications without the need for dynamic port allocation.
Initially, containers (pods) don’t have a network interface. To create one, the container runtime sends an ADD command to the CNI plugin (the runtime can call the plugin with commands such as ADD, DEL, and CHECK). The runtime passes the network configuration to the plugin as a JSON payload on standard input, and the plugin replies with a JSON result describing the interface it created.
Both Linux containers and container networking technology continue to evolve to meet the needs of applications running in various environments. CNI is a Cloud Native Computing Foundation (CNCF) project that specifies the configuration of Linux container network interfaces.
CNI was created to make networking solutions integrable with a range of container orchestration systems and runtimes. Rather than each networking solution building its own integrations, CNI defines a common interface standard between the networking layer and the container execution layer.
CNI focuses on the connectivity of container networks and the removal of allocated resources upon the termination of containers. This focus makes CNI specifications simple and allows them to be widely adopted. The CNI GitHub project provides more information about the CNI specifications, including the third-party plugins and runtimes that use it.
CNI has a multitude of supported plugins, and major container orchestration frameworks such as Kubernetes have implemented it. Plugins address various container networking functions and must conform to the standards defined by the CNI specification.
CNI offers specifications for multiple plugins because networking is complex, and user needs may differ. It is essential to choose the right plugins for your project and use case.
CNI networks can be implemented using an encapsulated or unencapsulated network model. VXLAN is an example of an encapsulated model, while Border Gateway Protocol (BGP) is an example of an unencapsulated model.
This model encapsulates a logical Layer 2 network over an existing Layer 3 network topology that spans multiple Kubernetes nodes. Because the Layer 2 network is isolated, there is no need for route distribution. The cost is a modest processing and packet-size overhead, since the overlay encapsulation wraps each packet in an additional outer IP header.
Encapsulated traffic travels between Kubernetes workers over UDP, and the network control plane distributes the mapping information needed to reach the correct worker MAC addresses. Examples of common encapsulated network models include VXLAN and Internet Protocol Security (IPsec).
Put simply, this model provides a bridge that connects Kubernetes workers and pods. Within pods, communication is managed by Docker or another container engine. The model suits use cases that prefer a Layer 2 bridge, but it is sensitive to Layer 3 latency between Kubernetes workers. For data centers in separate geographic locations, latency between them must be kept low to prevent network segmentation.
Examples of CNI network providers that follow this network model include Canal, Flannel, and Weave.
This model provides a Layer 3 network for routing packets between containers. There is no isolated Layer 2 network and no encapsulation overhead, but this comes at the expense of the Kubernetes workers, which must manage any required route distribution. A routing protocol connects the Kubernetes workers, using BGP to distribute routing information for the pods. Within pods, communication with workloads is managed by Docker or another container engine.
In effect, this model extends a network router across the Kubernetes workers, and that router provides the information on how to reach the pods. Unencapsulated networks suit use cases that prefer a routed Layer 3 network. Routes for Kubernetes workers are dynamically updated at the operating-system level, reducing latency.
Examples of providers that use an unencapsulated network model include Romana and Calico.
Calico’s flexible modular architecture supports a wide range of deployment options, so you can select the best networking approach for your specific environment and needs. This includes the ability to run with a variety of CNI and IPAM plugins, and underlying network types, in non-overlay or overlay modes, with or without BGP.
In addition to providing both network and IPAM plugins, Calico also integrates with a number of other third-party CNI plugins and cloud provider integrations, including Amazon VPC CNI, Azure CNI, Azure cloud provider, Google cloud provider, host local IPAM, and Flannel.