By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. This momentum presents a significant opportunity for companies that can meet the challenges of the burgeoning industry.
As digitalization continues pushing applications and services to the cloud, many companies discover that traditional security, compliance, and observability approaches do not transfer directly to cloud-native architectures. This is the primary takeaway from Tigera’s recent report, The State of Cloud-Native Security. With 75% of companies surveyed focusing on cloud-native application development, it is imperative that leaders understand the differences, challenges, and opportunities of cloud-native environments to ensure they reap the efficiency, flexibility, and speed that these architectures offer.
Containers: Rethinking security
The flexibility that container workloads provide makes the traditional ‘castle and moat’ approach to security obsolete. Cloud-native architectures do not have a single vulnerable entry point; their increased attack surface exposes many potential attack vectors. Sixty-seven percent of companies surveyed named security as the top challenge slowing deployment cycles. Further, 69% identified container-level firewall capabilities, such as intrusion detection and prevention, web application firewalls, denial-of-service protection, and deep packet inspection, as the top network security need for cloud-native applications.
To counter these threat vectors, companies must implement a zero-trust approach early in development to reduce the attack surface. This approach should start with a deny-all posture that permits communication between workloads only where and when it is necessary.
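In a Kubernetes environment, that deny-all starting point can be expressed with standard NetworkPolicy resources. The sketch below is illustrative, not from the report; the namespace, labels, and port are hypothetical placeholders:

```yaml
# Default-deny: blocks all ingress and egress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments            # hypothetical namespace
spec:
  podSelector: {}                # empty selector matches all pods
  policyTypes:
    - Ingress
    - Egress
---
# Explicit allow: only the orders service may reach the payments API on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-orders-to-payments
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api          # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: orders
      ports:
        - protocol: TCP
          port: 8080
```

Starting from default-deny and layering in narrowly scoped allow rules means every permitted path is an explicit, auditable decision rather than an accident of default openness.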
A zero-trust strategy reduces the attack surface and limits the blast radius of any potential intrusion by preventing bad actors from weaseling their way deeper into more vulnerable and sensitive areas of the application, data, and infrastructure. With this security foundation in place, IT teams can confidently move forward to deployment and layer in additional mitigating controls to bolster their defenses further.
The importance of observability
To independently troubleshoot Kubernetes microservices issues today, DevOps and SRE teams must stitch together an enormous amount of data from multiple disparate systems that monitor infrastructure and service layers. Troubleshooting this way is a significant time sink for already stretched-thin DevOps teams. This challenge is reflected in Tigera’s report, which found that nearly all (97%) survey respondents experience observability challenges when trying to secure their cloud-native applications, with 51% citing a lack of actionable insights, such as root cause and resolution recommendations, as the top challenge.
The difficulty of processing container-level data also plays a crucial role in meeting compliance requirements. More than 6 out of 10 (63%) respondents indicated that they must provide container-level information for compliance needs, but finding and correlating all relevant container data is a challenge that 77% of respondents faced when trying to meet container-level compliance requirements.
The complex nature of Kubernetes microservices deployments and the overwhelming amount of data generated make it nearly impossible for humans to make sense of the data without machines to help diagnose and troubleshoot. This problem is only getting worse by the day, given the accelerating density of applications and the dynamic nature of cloud-native environments.
It’s time we recognize that existing tools are inadequate and reimagine the solution to this critical observability problem. This can only be done effectively by applying machine learning and artificial intelligence (AI) to observability; in effect, deploying machines to debug machines. By automating dynamic monitoring processes, for example, we can create intelligent observability that converts telemetry data into actionable insights. We can use AI to analyze this data to identify problem patterns and create unique observability “snapshots” that can be used to build reference templates, which can be cataloged and accessed by troubleshooting teams when issues arise. This will let DevOps and security teams redirect the time they spend troubleshooting toward more productive activities.
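As a minimal illustration of the idea, the Python sketch below flags anomalous telemetry samples with a simple rolling z-score heuristic and groups them into a per-service “snapshot.” The function names, window size, and threshold are hypothetical choices for this sketch, not anything prescribed by the report; a production system would use far richer models:

```python
import statistics
from collections import defaultdict

def flag_anomalies(samples, window=20, threshold=3.0):
    """Flag (service, value) telemetry samples whose value deviates
    sharply from the trailing window's mean (a simple z-score check)."""
    flagged = []
    for i, (service, value) in enumerate(samples):
        history = [v for _, v in samples[max(0, i - window):i]]
        if len(history) >= 5:  # need a baseline before judging
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            if abs(value - mean) / stdev > threshold:
                flagged.append((service, value))
    return flagged

def snapshot(flags):
    """Group flagged anomalies by service into a 'snapshot' that could
    seed a cataloged reference template for troubleshooting teams."""
    grouped = defaultdict(list)
    for service, value in flags:
        grouped[service].append(value)
    return dict(grouped)

# Usage: steady ~10-12 ms latencies, then one 200 ms spike.
samples = [("checkout", 10.0 + (i % 3)) for i in range(30)]
samples.append(("checkout", 200.0))
flags = flag_anomalies(samples)
print(snapshot(flags))  # the 200 ms outlier is flagged for "checkout"
```

The point is the shape of the pipeline, not the statistics: raw telemetry goes in, and what comes out is a small, structured artifact a human can act on.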
The future of cloud native
We are still early in the process of fully addressing the challenges that this evolution will bring. Just as these architectures continue to mature, so too do the intrusion techniques of bad actors. This makes the ideal cloud-native stack a moving target, and all stakeholders must be willing to adapt as we move forward.
That said, we have already learned a lot that will be instrumental in repelling bad actors. Many of these best practices come down to where one starts. We have already mentioned the importance of building from a base principle of zero trust. But even before that point, teams starting this journey should ensure they are working with partners that are tailor-made for cloud-native environments. Only these partners can understand and enable the collaboration needed between the many personas involved with developing, deploying, and securing cloud-native architectures.
Ultimately, the benefits of cloud-native architectures far outweigh the solvable challenges. Cloud-native will increase innovation velocity as enterprises push out applications and services faster using pre-built components. This will profoundly impact the entire software ecosystem and keep the current industry goliaths on their toes as they face competition from more agile disruptors. This is the ecosystem we are moving toward, and identifying and working through the challenges we’ve discussed here is a critical component of building that ecosystem in a healthy, sustainable way.
Learn how to adopt a holistic approach to container and cloud-native application security and observability by reading our free O’Reilly ebook.
This article originally appeared on DevOps Digest.