Why Choose Calico?

Calico powers over 2M Kubernetes nodes across 166 countries, and all of the major Kubernetes platform vendors and managed Kubernetes service providers use it in their own Kubernetes environments.

About Tigera

Tigera provides the industry’s only active security platform with full-stack observability for containers and Kubernetes. Our platform prevents, detects, troubleshoots, and automatically mitigates the exposure risks that lead to security breaches.

We deliver our platform as a fully managed SaaS (Calico Cloud) or a self-managed service (Calico Enterprise), and our open-source offering, Calico Open Source, is the most widely adopted container networking and security solution.

Tigera’s platform specifies security and observability as code to ensure consistent enforcement of security policies, which enables DevOps, platform, and security teams to protect workloads, detect threats, achieve continuous compliance, and troubleshoot service issues in real time.

Active security for containers & Kubernetes

Tigera’s active, zero-trust based security for containers and Kubernetes enables you to prevent, detect, and mitigate threats. We focus on threat prevention by reducing the attack surface, then layer on threat detection and threat mitigation capabilities.

Reduce attack surface with zero trust

  • Zero-trust workload access
  • Identity-aware microsegmentation for workloads
  • Universal firewall integration
  • Envoy-based application-level security

Detect known and unknown threats

  • Protect workloads from container- and network-based threats
  • Workload-based WAF, IDS/IPS with deep packet inspection
  • ML-based zero-day workload threat identification
  • Protection from vulnerabilities and malware

Automatic risk mitigation

  • Dynamic Service and Threat Graph
  • Security policy recommender
  • Admission Controller
  • Alert on, pause, quarantine, or terminate compromised workloads

Security and observability as code


Cloud-native applications deployed in Kubernetes have ephemeral components with dynamic IPs that are distributed across multiple clusters, clouds, and hybrid environments. This makes it impossible to secure and troubleshoot these applications using traditional approaches. We solve this by enabling DevOps teams to specify security and observability as code (SOaC). SOaC means configuring security and observability at deployment time using Kubernetes primitives and declarative models, under the same version control that DevOps teams use for source code. Following the principle that the same source code generates the same binary, a SOaC approach ensures that any Kubernetes component generated from the code has exactly the same security and observability configuration, regardless of the deployment model, type of distribution, or container type.
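For illustration, here is a minimal sketch of what security as code can look like in practice: a network policy written as declarative data (using the standard Kubernetes NetworkPolicy schema, which Calico enforces) that lives in version control next to the application source and is applied by the same pipeline on every cluster. The namespace, labels, and port below are hypothetical, and the Python kubernetes client is just one way to apply the policy; kubectl works equally well.

```python
# Minimal security-as-code sketch: a declarative policy, kept in git and
# applied by CI/CD. Requires the official 'kubernetes' Python client and
# access to a cluster; the namespace, labels, and port are illustrative.
from kubernetes import client, config

# Intent: only "frontend" pods may reach "api" pods, and only on TCP 8080.
# Because a policy now selects the "api" pods, all other ingress is denied.
api_ingress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "api-allow-frontend", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "api"}},
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="shop", body=api_ingress_policy
    )
```

Because the policy is plain declarative data, the same definition yields the same enforcement wherever it is applied, which is exactly the property the SOaC approach is after.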

Kubernetes-native architecture for security and observability

We are Kubernetes-native and offer rich security and observability functionality by integrating deeply with Kubernetes at its core. We provide this functionality in Kubernetes clusters by adding new custom APIs and controllers, as well as infrastructure plugins for the core components of networking, storage, and the container runtime. Being Kubernetes-native, we work with the Kubernetes command-line interface (kubectl) and integrate seamlessly with Kubernetes features such as role-based access control (RBAC), service accounts, and audit logs.

Calico offers a number of additional custom resource definitions (CRDs) that extend Kubernetes APIs. Examples include GlobalNetworkPolicy, GlobalThreatFeed, GlobalAlerts, PacketCapture, StagedNetworkPolicy, and HostEndpoint.

Since Calico is Kubernetes-native, all of its security and observability features can be accessed via the Kubernetes API server, making it possible to configure functionality programmatically.
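As a rough illustration of that programmatic access, the sketch below lists Calico GlobalNetworkPolicy resources through the ordinary Kubernetes API machinery, just as kubectl would. It assumes a cluster where Calico’s CRDs are served under the crd.projectcalico.org/v1 API group, which is typical for open-source installs.

```python
# Illustrative sketch: reading Calico custom resources via the Kubernetes API
# server. Assumes Calico's CRDs are served as crd.projectcalico.org/v1.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# GlobalNetworkPolicy is cluster-scoped, so list at cluster scope.
gnps = custom.list_cluster_custom_object(
    group="crd.projectcalico.org",
    version="v1",
    plural="globalnetworkpolicies",
)

for item in gnps.get("items", []):
    name = item["metadata"]["name"]
    selector = item.get("spec", {}).get("selector", "<all endpoints>")
    print(f"{name}: selector={selector}")
```

Because these are ordinary Kubernetes resources, the usual RBAC rules, service accounts, and audit logs apply to them as well.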

Being Kubernetes-native means that the same functionality will work across multiple clusters, distributions, and deployment models.

Commitment to open source

We are committed to developing, cultivating and supporting open source projects and communities.

Project Calico: We are the creator and maintainer of Project Calico, which delivers open source Calico, the most widely adopted solution for container networking and security, powering 2M+ nodes daily across 166 countries.

eBPF, Envoy, and WireGuard: We actively use and promote popular open-source projects like eBPF, Envoy, and WireGuard. Calico provides a pluggable data-plane architecture enabling support for multiple data planes, including standard Linux, eBPF, and Windows. Calico also integrates with Envoy to provide observability functionality, and uses WireGuard to encrypt all in-cluster communications.
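As one concrete example of how these integrations are driven by configuration rather than bespoke tooling, in-cluster WireGuard encryption is typically switched on via Calico’s FelixConfiguration resource. The sketch below is a hedged illustration, assuming the resource is served as a crd.projectcalico.org/v1 custom resource named "default" and that the relevant field is spec.wireguardEnabled.

```python
# Hedged sketch: enabling Calico's WireGuard encryption by merge-patching the
# default FelixConfiguration. Assumes the crd.projectcalico.org/v1 group and
# the spec.wireguardEnabled field; recent versions of the Python client send
# a dict body like this as a JSON merge patch.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

custom.patch_cluster_custom_object(
    group="crd.projectcalico.org",
    version="v1",
    plural="felixconfigurations",
    name="default",
    body={"spec": {"wireguardEnabled": True}},
)
print("Requested WireGuard encryption for pod-to-pod traffic")
```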

Loved by the community

The global Calico community is large and growing. We deliver more than 100 free technical training sessions annually to thousands of community members. We also offer free, self-paced Calico certification programs.

Trusted by companies all over the world

Tigera's platform is used by leading companies, including AT&T, Discover, Merck, NBCUniversal, HanseMerkur, Allstate, Box, Siemens Healthineers, Playtech, Royal Bank of Canada, and Bell Canada.


Since I’m a new member of the Calico team, Ed thought I might be in a good position to explain why Calico is a good idea.

So what is the point of Calico? To answer that, we need a bit of background.

Calico is a networking approach for interconnecting virtual ‘workloads’. I’m deliberately using the word ‘workloads’ here instead of Virtual Machines/Containers/etc. because Calico could apply to any or all of them.

A virtual infrastructure (OpenStack, Docker, etc.) needs to allow its workloads to be interconnected. Usually, though, users do not want every workload to be able to talk to every other workload in the data centre – it is more likely that they will want a few ports on a few workloads open to the internet (e.g. port 80 on their web front-end or load balancer) and further ports open between specific workloads (perhaps to allow their web server to reach the SQL database). In addition, a virtual infrastructure may have many different users (imagine Amazon’s cloud) who each need to be presented with an experience as if they were the only user of the system – in other words, one user’s internal workloads shouldn’t be able to communicate with another’s.
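To make that intent concrete, here is a toy sketch (not Calico code, just an illustration of the whitelist just described): the internet may reach the web tier on port 80, the web tier may reach the database on its SQL port, and everything else is denied by default. The tier names and the port 5432 are hypothetical.

```python
# Toy model of the connectivity intent described above: default deny plus a
# small whitelist. Tier names and the SQL port are illustrative only.
ALLOWED_FLOWS = {
    ("internet", "web", 80),  # port 80 on the web front end is public
    ("web", "db", 5432),      # only the web tier may reach the SQL database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow is permitted only if it is explicitly listed."""
    return (src, dst, port) in ALLOWED_FLOWS

assert is_allowed("internet", "web", 80)
assert not is_allowed("internet", "db", 5432)  # the database is never public
assert not is_allowed("web", "db", 80)         # and only on the SQL port
```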

Traditionally, virtual infrastructures have offered a LAN-like (layer 2) experience to users configuring multiple workloads. This is what most users setting up small systems in the real world are used to. But the approach has some drawbacks: the infrastructures have had to implement several layers of virtual LANs, bridging and tunnelling to make layer 2 networking work across multiple physical host machines. The bridging is messy (more things to go wrong), and the virtual LANs and tunnels use up resources on your hosts and place restrictions on the network between the physical hosts. And the bigger your cloud infrastructure is (the more hosts and data centres you have), the uglier the solution looks.

So what’s the alternative? We suggest Calico: a layer 3 approach. The Calico team looked at the current layer 2 based solutions and asked how you might design a large network if you started from scratch. The obvious design inspiration is the internet – the biggest network there is! The internet connects many smaller networks together using routers. Routers talk amongst themselves and learn the current shape of the internet using protocols such as BGP. Firewalls control which computers can talk to each other and on which ports. So can we do something similar with our virtual workloads?

In a system using Calico, compute hosts run a virtual router (the one already built into the Linux kernel) and connect their workloads to it. This router then shares routes with the routers on other compute hosts (using BGP – just like the internet), allowing all workloads to be connected to each other. Calico uses iptables (as used in many firewalls across the internet) to restrict which workloads can talk to which other workloads. I think this gives Calico networks advantages in the following areas (there is also a small toy sketch of the route-sharing idea after the list):

Better use of resources: Layer 2 networking is based on broadcast/flooding, and the cost of broadcast grows rapidly with the number of hosts, because every host has to process every broadcast. Once you get to a certain size (think 100s or 1000s of servers), you end up with lots of network bandwidth being eaten up simply by hosts issuing periodic messages reminding everyone else that they’re still there – let alone the proactive broadcast queries to find the location of destinations. Worse, because of the way that virtualization works, these broadcast messages don’t get restricted just to the network: they end up being processed on the CPUs of the compute servers themselves, eating up CPU that should be used for workloads. If you use VLANs to interconnect compute hosts, you can only have 4096 VLANs (12 bits in the header). This may sound like a lot, but if you’re assigning a VLAN to each user of the virtual infrastructure, you limit yourself to that many users. (It’s also a real pain when you’re trying to bridge between networks, as VLAN IDs tend to be locally assigned and you have to work out how to translate between them.) People have tried to ameliorate these problems with technologies that layer on top of L2, like GRE or VXLAN. These solutions help a bit, but introduce other problems (such as increased encapsulation overheads).

Scalable: The Calico approach to building networks is exactly the same as the techniques used on the internet: BGP and L3 routing. The internet is bigger than any data centre you are likely to deploy, so it’s clear that the Calico approach has the potential for enormous scale.

Simpler and easier to debug: As there is no encapsulation, there are fewer steps between a packet leaving a VM and appearing on the wire, which means fewer things to configure and to go wrong (no messing about with different MTU values to work around encapsulation, for example). Because it works like the internet, you probably already know how to debug it – tools like Wireshark will just work (which is not the case for some layer 2 technologies).

Fewer requirements on your data centre: VLANs only work within an L2 network, while Calico only needs hosts to be routable (layer 3) to each other, so it is easier to have your data centres geographically distributed.

Flexibility: The Calico model is not dependent on any particular virtualization model.  It is designed to work equally well for VMs, containers, white box devices or even a mixture of all three.

There are other benefits as well, but these are the ones which have really resonated with me during my ramp-up on the team.
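To make the route-sharing idea from earlier a little more tangible, here is a toy model (purely illustrative; real Calico uses BGP and the Linux kernel’s routing table, not Python dicts). Each compute host advertises a /32 route for every workload it hosts, and every other host installs those routes with the advertising host as the next hop. The host names and addresses are made up for the example.

```python
# Toy model of Calico-style route sharing. Illustrative only: in a real
# deployment BGP distributes these routes and the kernel installs them.
workloads_by_host = {
    "host-a": ["10.65.0.2", "10.65.0.3"],
    "host-b": ["10.65.1.2"],
}

def build_routing_table(local_host: str) -> dict:
    """Learn a /32 route per remote workload, next-hopped to the compute host
    that advertised it (the job BGP does in a real Calico deployment)."""
    table = {}
    for host, ips in workloads_by_host.items():
        if host == local_host:
            continue  # local workloads are directly attached, not routed
        for ip in ips:
            table[f"{ip}/32"] = host
    return table

print(build_routing_table("host-a"))   # {'10.65.1.2/32': 'host-b'}
```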

In conclusion – if you’re setting up a virtual infrastructure for an open environment like OpenStack or Docker with more than a handful of hosts on a single site, you should probably take a look at Calico.
