
Cloud Native Architecture: Pros, Cons, and Basic Principles

What Is Cloud Native Architecture?

A cloud native architecture is an application architecture explicitly built for the cloud. Cloud native deployments use cloud platforms by design, unlike lift-and-shift deployments that move on-premise applications as-is to a cloud environment.

Cloud native architectures allow organizations to scale applications easily by adding or removing server nodes. The ability to scale up and down is crucial for accommodating temporary surges in demand.

In traditional on-premise architectures, monolithic applications were typically deployed on one server. Cloud native applications are horizontally scalable and are decomposed into microservices. A microservices architecture splits applications into smaller functionality segments, deployed separately but connected via APIs.

Resilience is important for cloud native architectures, especially when they use microservices—with applications deployed across distributed nodes, the system must be resilient to a failure affecting one node. A well-designed cloud native application should have the ability to keep running or recover quickly in the event of node failures.
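The idea of surviving a single node failure can be sketched in a few lines. The sketch below is illustrative, not any particular library's API: a caller tries each replica of a service in turn, so one unreachable node does not fail the whole request.

```python
def call_with_failover(replicas, request, retries=3):
    """Try replicas in turn, retrying on failure, so one node
    going down does not take the whole service with it."""
    last_error = None
    for attempt in range(retries):
        node = replicas[attempt % len(replicas)]
        try:
            return node(request)
        except ConnectionError as exc:
            last_error = exc  # this node failed; fall through to the next replica
    raise RuntimeError("all replicas failed") from last_error

# Hypothetical replicas: one node that is down and one healthy node.
def failed_node(req):
    raise ConnectionError("node unreachable")

def healthy_node(req):
    return {"status": "ok", "echo": req}

print(call_with_failover([failed_node, healthy_node], "ping"))
# → {'status': 'ok', 'echo': 'ping'}
```

In production this pattern is usually provided by a load balancer or service mesh rather than hand-written, but the principle is the same: no single node is a single point of failure.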

This is part of our series of articles about cloud native security.


Pros and Cons of Cloud Native Architecture

Cloud native architectures offer flexibility and scalability that make them attractive to organizations that embrace a DevOps approach. Advantages of a cloud native architecture include:

  • Customizability — Using loosely coupled services rather than technology stacks enables DevOps teams to select the optimal framework, system, and language for their projects.
  • Portability — Containerized microservices are portable, enabling organizations to move between cloud environments. They can avoid vendor lock-in as they don’t rely exclusively on a single vendor.
  • Improved resilience — Cloud native architectures work well with container orchestration tools like Kubernetes, allowing teams to recover quickly from issues in a specific container or instance without affecting the availability of the whole application.
  • Optimization — Microservices operate independently, allowing developers to optimize each service individually to provide the best end-user experience.
  • CI/CD adoption — Microservices-based application development helps organizations implement continuous integration and continuous delivery strategies, enabling faster development cycles with automation and reducing the risk of human error.
  • Efficiency — Organizations can leverage container orchestrators to allocate resources and schedule tasks automatically according to demand.
  • Low-impact updates — Microservices architectures enable developers to update an individual service or add new functionality while maintaining the availability of the application.

However, it is important to note that cloud native architectures also present some challenges. Issues to consider when deciding to go cloud native include:

  • Dependencies — Microservices often require specific software, hardware, or operating system dependencies (e.g., GPUs or SSDs), limiting their flexibility. For example, dependencies can tie an application to a specific operating system.
  • Security — Containerized cloud native architectures usually require updates to existing security systems or the adoption of new security technologies. Container technology creates new attack surfaces that can be challenging to protect.
  • DevOps adoption — While DevOps is a powerful and efficient software development approach, adopting new DevOps processes can be a challenge, especially for organizations that have not yet embraced agile practices. Adopting a new cloud native architecture usually requires extensive training and cultural change to enable Dev and Ops teams to work together.


Cloud Native Application Architecture Principles

The specifics of cloud native architectures may differ, but most designs incorporate the following concepts:

Stateless Processing

Stateless processing is a core concept in cloud native applications, enabling high scalability with inherent fault tolerance. It involves a transaction processing system split into two layers. One layer comprises the variable number of transaction elements that don’t retain long-term state information, while the other layer contains a scalable storage system. The storage system uses various elements to store state data securely and redundantly.

The transaction processing element processes state data from the state storage system to perform transactions, writing the new state data (updated during the transactions) to the storage system.
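The two-layer split above can be made concrete with a short sketch. Everything here is illustrative (the account names, the in-memory store): the point is that the processing function keeps no state of its own, so any number of identical replicas can serve any request.

```python
class StateStore:
    """Stands in for the scalable storage layer; in practice this would
    be a replicated database or key-value store, not a dict."""
    def __init__(self):
        self._data = {}

    def read(self, key, default=0):
        return self._data.get(key, default)

    def write(self, key, value):
        self._data[key] = value

def process_transaction(store, account, amount):
    """A stateless transaction element: it holds no long-term state,
    reading current state from the store and writing the result back."""
    balance = store.read(account)      # fetch current state
    new_balance = balance + amount     # perform the transaction
    store.write(account, new_balance)  # persist the updated state
    return new_balance

store = StateStore()
process_transaction(store, "acct-1", 100)
print(process_transaction(store, "acct-1", -30))  # → 70
```

Because the processing layer is stateless, scaling it is as simple as adding replicas, and a crashed replica loses nothing: the state lives entirely in the storage layer.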

Microservices

Microservices are an architectural approach that breaks down complex applications into smaller, more manageable parts. A microservice is an independent process that communicates with others via a language-agnostic API. Each service is fully isolated, dedicated to completing one specific task.

Microservices architectures enable a modular application building approach, making it easier to reuse and redesign individual elements. This modularity also supports technological diversity and enables easier development, deployment, and scaling.
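A minimal sketch of two such services, assuming hypothetical names and payloads: each service does one task, and they communicate only through a language-agnostic JSON API rather than shared memory or a shared database.

```python
import json

def inventory_service(request_json):
    """Independent service with one job: report stock levels."""
    request = json.loads(request_json)
    stock = {"widget": 5, "gadget": 0}  # hypothetical in-memory stock
    item = request["item"]
    return json.dumps({"item": item, "in_stock": stock.get(item, 0) > 0})

def order_service(request_json):
    """Independent service that accepts or rejects orders; it talks to
    inventory only through its JSON API, never its internals."""
    request = json.loads(request_json)
    reply = json.loads(inventory_service(json.dumps({"item": request["item"]})))
    status = "accepted" if reply["in_stock"] else "rejected"
    return json.dumps({"item": request["item"], "status": status})

print(order_service(json.dumps({"item": "widget"})))  # accepted
print(order_service(json.dumps({"item": "gadget"})))  # rejected
```

In a real deployment each function would be its own process behind an HTTP endpoint, possibly in a different language, which the JSON boundary makes possible.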

Containerization

Containers have become the ubiquitous method for building cloud native applications. Containers leverage Linux namespaces, a long-standing kernel feature, to maintain isolation between file systems, network stacks, and processes. Each container is a secure partition based on the namespace mechanism, running one or multiple Linux processes, all supported by the Linux kernel on the host.

Containers work similarly to virtual machines (VMs), although containers are much more flexible. A VM must run a full guest operating system, whereas a container packages only the application and its dependencies, sharing the host's kernel. This packaging approach allows developers to add applications easily. Another important difference is that containers are lighter weight than VMs, requiring fewer resources and less maintenance. They can start faster, deploy easily, and offer higher portability.
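The packaging approach is usually expressed in a build file. The Dockerfile below is a minimal, illustrative example for a hypothetical Python service (the file names are assumptions); it shows how a container image bundles the application with its dependencies and nothing else.

```dockerfile
# Illustrative image for a hypothetical Python service.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# One container, one main process: the service itself.
CMD ["python", "service.py"]
```

Because the image contains everything the service needs, the same artifact runs unchanged on a laptop, a CI runner, or any cloud provider's container platform.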

Communication and Collaboration

Cloud native services must have the ability to communicate and interact with each other and with external services. APIs (typically RESTful APIs) enable communication between cloud native applications and legacy or third-party applications.

Microservices can facilitate management and internal communications by building a dedicated infrastructure layer, called a service mesh, that handles these communications. The main role of a service mesh is to secure, connect, and monitor services in cloud native applications. Although Istio is the most widely used service mesh, several open source implementations are available.

Automation

Cloud native architectures facilitate infrastructure automation, allowing developers to implement CI/CD pipelines and accelerate tasks such as deployment, scaling, and recovery. A CI/CD pipeline automates the development lifecycle’s building, testing, and deployment phases. Cloud native systems also support automating processes such as rollback, recovery, canary deployment, and monitoring.
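The canary deployment mentioned above reduces to a simple automated decision: route a small share of traffic to the new version, watch its error rate, and promote or roll back accordingly. The sketch below is a toy version of that decision logic; the threshold and sampled rates are illustrative.

```python
def canary_decision(error_rates, threshold=0.05):
    """Promote the canary if its observed error rate stays under the
    threshold for every monitoring sample; otherwise roll back."""
    peak = max(error_rates)
    return "promote" if peak <= threshold else "rollback"

# Hypothetical error rates sampled while the canary serves a small
# fraction of live traffic.
print(canary_decision([0.01, 0.02, 0.01]))  # → promote
print(canary_decision([0.01, 0.12, 0.03]))  # → rollback
```

Orchestrators and progressive-delivery tools automate exactly this loop, wiring the monitoring data to the rollout controller so no human has to watch dashboards during a deploy.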

Defense in Depth

Traditional application architectures rely predominantly on a security perimeter to prevent infiltration. However, regardless of how much an organization hardens its network, the reliance on a single line of defense is insufficient to prevent internal and external attacks. Another challenge for the traditional network perimeter approach is opening up the network to remote access and mobile devices to accommodate user demands.

Cloud native architectures use Internet services by definition, so they must have built-in protections to mitigate external attacks. Defense in depth is a security approach that emphasizes the system’s internal security, based on the military strategy of slowing down an attack with multi-layered defenses. In computing environments, this works by implementing authentication challenges between all components within the network, eliminating implicit trust between components.

Cloud native architectures can extend the principle of defense in depth to script injection and rate-limiting in addition to authentication. Every design component must protect itself from all other components, even when part of the same architecture. This approach makes cloud native architecture more resilient and enables organizations to deploy services in cloud environments without a trusted network between the users and the service.
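"Authentication challenges between all components" can be as simple as requiring every request between services to carry a verifiable signature. The sketch below uses HMAC from Python's standard library to show the idea; the shared key and messages are illustrative, and real systems typically use mutual TLS with keys issued by a secrets manager or service mesh.

```python
import hashlib
import hmac

# Shared secret for one service pair; an assumption for this sketch.
SECRET = b"demo-key"

def sign(message: bytes) -> str:
    """Caller signs its request so the receiver can verify its identity."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def handle_request(message: bytes, signature: str) -> str:
    """Every component authenticates its callers, even 'internal' ones:
    no implicit trust inside the network."""
    if not hmac.compare_digest(sign(message), signature):
        return "rejected: bad signature"
    return "processed: " + message.decode()

msg = b"debit acct-1 30"
print(handle_request(msg, sign(msg)))           # authenticated caller
print(handle_request(msg, "forged-signature"))  # unauthenticated caller
```

Note the constant-time comparison (`hmac.compare_digest`), which avoids leaking signature information through timing, one more layer in the same defense-in-depth spirit.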

Cloud Native Security with Calico

Calico Enterprise and Calico Cloud offer several features for zero-trust workload security for cloud-native applications. These include:

  • Identity-aware microsegmentation for workloads – Deploy a scalable, unified microsegmentation model for hosts, VMs, containers, pods, and services that works across all your environments.
  • Zero-trust workload access controls – Securely and granularly control workload access between Kubernetes clusters and external resources like APIs and applications.
  • Intrusion detection and prevention (IDS/IPS) – Detect and mitigate Advanced Persistent Threats (APTs) using machine learning and a rule-based engine that enables active monitoring.
  • Universal firewall integration – The Calico Egress Gateway provides universal firewall integration, enabling Kubernetes resources to securely access endpoints behind a firewall. This allows you to extend your existing firewall manager and zone-based architecture to Kubernetes for cloud-native architecture.
  • Encryption – Calico utilizes WireGuard to implement data-in-transit encryption. WireGuard runs as a module inside the Linux kernel and provides better performance and lower CPU utilization than IPsec and OpenVPN tunneling protocols. Calico supports WireGuard for self-managed environments such as AWS, Azure, and OpenShift, and managed services such as EKS and AKS.
