Microsegmentation was initially conceived as a means of moderating server-to-server traffic within a network segment, but has since expanded to include traffic between segments. Modern microsegmentation can regulate if and how servers or applications in one network segment can communicate with those in another network segment.
You can base microsegmentation policies and permissions on the identity of a resource, which makes them independent of the underlying infrastructure. By contrast, traditional network segmentation depends on the IP addressing of the networks. Microsegmentation is thus a great way to create intelligent groupings of the workloads in your data center according to their characteristics.
With the massive adoption of containerized and cloud-native architectures, microsegmentation is changing. Traditional segmentation solutions cannot scale and do not effectively enforce security in a containerized environment, such as a Kubernetes cluster. New microsegmentation technologies are emerging that can discover endpoints in cloud-native environments, define policies, and apply policies dynamically to constantly changing cloud-native networks.
Traditionally, companies defended their network perimeter with a variety of security tools deployed at the network edge (primarily the firewall). The main focus was on screening north-south traffic between the network and external traffic sources. These tools were like the moat protecting a castle, and inhabitants inside the “castle” (the network perimeter) were inherently trusted.
With the advent of the cloud and trends like bring your own device (BYOD), things have become a lot more complex. There has been major growth in east-west traffic within the data center and between distributed systems. It is becoming increasingly difficult to defend the perimeter, and there is a need to create multiple micro-perimeters around each valuable asset the organization needs to protect.
This is where microsegmentation comes in. It allows network administrators to create secure “islands” within their distributed infrastructure and control access to those islands for all types of users, whether they are outsiders, customers, or employees logging in from a variety of locations.
Microsegmentation is central to implementing a zero trust security model. Specifically, it is a core component of zero trust network access (ZTNA) solutions. With microsegmentation, network segments can be protected with a handful of consistent identity policies, rather than hundreds of unwieldy IP-based rules.
Application segmentation ring-fences applications to protect sensitive communication. This includes controlling traffic between applications (whether they run on containerized workloads, hypervisors, or bare metal) in private data centers and public or hybrid cloud environments.
High-value applications that provide critical services, contain sensitive or personal data, or are subject to regulations (such as HIPAA, SOX, or PCI DSS) must be protected. Organizations can leverage application segmentation to improve their application security and maintain compliance.
In a traditional network, assets are dynamically placed throughout the development, staging, testing, and production environments, and across public or hybrid clouds, making it difficult to control and protect them. Environmental microsegmentation isolates various deployment environments from each other, restricting their communication.
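In Kubernetes, for example, this kind of environment isolation can be sketched with a NetworkPolicy that admits traffic only from namespaces carrying the same environment label (the namespace, policy name, and labels here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-production     # hypothetical name
  namespace: production            # hypothetical namespace
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        # only namespaces labeled as production may send traffic in;
        # staging, testing, and development namespaces are blocked
        - namespaceSelector:
            matchLabels:
              environment: production
```

For the selector to match, the namespaces themselves must carry the `environment` label, and enforcement requires a network plugin that supports NetworkPolicy.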
It is common for an N-tiered application to have application, web, and database tiers, which might need to be protected from each other through segmentation. Microsegmentation at the application tier level prevents unauthorized lateral movement between workloads by dividing them based on roles.
A segmentation policy could, for example, permit the processing tier to communicate with the database tier but not with the web or load balancer tier. This helps reduce the attack surface.
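In a Kubernetes environment, for example, such a tier policy could be sketched as a NetworkPolicy that admits ingress to the database tier only from the processing tier (the `tier` labels, policy name, and port number are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-processing-only   # hypothetical name
spec:
  podSelector:
    matchLabels:
      tier: database               # policy applies to database-tier pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: processing     # only the processing tier may connect
      ports:
        - protocol: TCP
          port: 5432               # hypothetical database port
```

Traffic from the web or load balancer tier is dropped because it matches no allow rule.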
You can achieve a higher level of granularity in your segmentation by dividing the environment into smaller units, which reduces the attack surface. This means you treat each virtual machine (VM) or bare metal server as an individual unit. You can implement this for on-premises data centers as well as cloud environments.
To implement process-based nano-segmentation, you need to dynamically program precise outbound and inbound security rules for each workload. The platform can then create an adaptive perimeter designed around each computing instance, which creates a segment of one.
You can define permitted workload interactions for your applications, without network dependencies, by combining process-based nano-segmentation with an allowlist policy model. You can then drill down further into your segments. For example, you can separate two instances of the same process running on the same machine into two isolated segments.
User segmentation limits application visibility to members of specified groups, using identity services like Microsoft Active Directory. Group membership and user identity form the basis of user segmentation, so there is no need to make changes to the infrastructure. With user segmentation, each user in your VLAN might have a different policy providing different access permissions.
Containers might hold sensitive data or critical business information, and are often deployed as microservices within a Kubernetes cluster. If the containers are not segmented, sensitive information may be exposed to anyone with access to the network.
Kubernetes allows you to use network policies when implementing network segmentation. You can use the NetworkPolicy API provided natively in Kubernetes, which is enforced by the cluster's network plugin (CNI), or infrastructure layers like service meshes.
By default, Kubernetes does not restrict communication between containers, pods, and nodes; these components can communicate freely within a namespace or across namespaces. You can use network policies to restrict this communication, starting by adding a policy that denies all communication and then specifying what communication is allowed.
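A deny-all starting point of this kind can be expressed with an empty pod selector that covers every pod in a namespace (a sketch; the policy name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all           # hypothetical name
spec:
  podSelector: {}                  # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress or egress rules are listed, so all traffic
  # to and from the selected pods is denied by default
```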
Kubernetes pods need to communicate with each other. When defining network policies, you should systematically list all the pods each given pod must communicate with. You should also explicitly define which ingress from and egress to the public internet is allowed and which is restricted.
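Allowed pod-to-pod communication can then be opened selectively. As a sketch (the `app` labels, policy name, and port are hypothetical), the following policy permits only pods labeled `app: web` to reach pods labeled `app: api`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api           # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api                     # policy applies to the api pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web             # only web pods may connect
      ports:
        - protocol: TCP
          port: 8080               # hypothetical service port
```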
The network security threat landscape is continuing to evolve, with organizations using ever more complex networks. To minimize their risk of exposure to cyber threats, organizations must implement a zero-trust approach to security. Zero-trust policies restrict access to the organization’s resources and systems, so that employees and applications can only access the data and services they need to complete a task.
After an employee is authenticated, any requests they submit to the network must be evaluated according to predefined access control policies and either permitted or blocked. Likewise, applications should only be provided access according to business logic and limited to the minimum required privileges.
For zero trust to be effective, you must be able to enforce it. This is where microsegmentation comes in, as it allows you to create boundaries between all workloads and enforce tight access controls. In the event of a network breach, an attacker’s ability to exploit your organization’s systems will be greatly reduced.
To implement microsegmentation in your organization, you need to design an architecture that suits your security and productivity needs, and implement it gradually to avoid disrupting normal network operations.
Microsegmentation is most effective when it uses well-defined boundaries. Define the objectives of your applications according to your business needs and the categorization or identification of end users. This allows you to identify the desired boundaries for your applications, which determine the type of information that can be exchanged.
The next step is to create the boundaries for each application. Establish context-based visibility for each application and define any internal or external communication required. Identify the users that need to access the application, as well as the specific data or application services they require.
Applications often have tiers for services that are relevant to a particular group of users. Use a least-privilege approach, starting with the lowest privilege level and applying additional privileges for each user group and service.
Once you have identified the infrastructure assets that need to be protected, group these assets according to logical definitions. Structure your rollout so that you first select a group of assets (such as applications, datasets, servers, or users) and then define how segmentation will be implemented for that group. Evaluate your methodology and verification process at each stage to strengthen enforcement.
Identify assets and attribute them to enable effective policies. Applications and components should be labeled and grouped accordingly. Create policies based on labels and application visibility.
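In Kubernetes, for example, such labeling might look like the following (the label keys, values, and image are hypothetical); policies can then select workloads by these labels rather than by address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api               # hypothetical workload
  labels:
    app: payments                  # application grouping
    environment: production        # deployment environment
    data-classification: pci       # compliance scope
spec:
  containers:
    - name: payments
      image: registry.example.com/payments:1.0   # hypothetical image
```

A segmentation policy could then select, say, all pods carrying `data-classification: pci` without referencing their IP addresses.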
You can author and configure policies to enhance your visibility into the communications between assets such as servers and applications. This granular level of visibility allows you to customize your microsegmentation policies according to your business requirements.
The best way to ensure your security policies are effective is to test them using simulations. This allows you to identify and address gaps in the implementation policy.
Calico Enterprise and Calico Cloud provide a unified, cloud-native segmentation model and single policy framework that works across all of your existing environments—including hosts, VMs, containers, Kubernetes components, and services—while automatically scaling with your microservices environment.
Calico enables full workload portability and the ability to define segmentation policies for multi-cloud and hybrid connections. It is built for cloud scale and provides you with the ability to roll out security policy changes in milliseconds, while legacy segmentation tools take hours.
Key features and capabilities include: