How Should An Enterprise Approach Hybrid Cloud in the Context of Cloud Native Technology?
The early adopters have begun to find a great degree of success, and it is now time for the more mainstream enterprise to get off the proverbial fence and begin exploring containers and other areas of the cloud-native landscape. However, the risk of adopting new technology needs to be mitigated or managed, as it introduces the dimension of change that accompanies any transformation. It is critical to ensure that customers (both internal and external) do not experience any downtime or lack of responsiveness.
I covered an extensive cloud maturity model in a previous blog post over at DZone.
The Need to Gradually Modernize Hypervisor-Based Infrastructure & Applications with Containers & Kubernetes
The path to modernizing your software delivery in the cloud-native era is that of moving from traditional VMs to container-based microservices and applications powered by Kubernetes. We ask ourselves: how can organizations running legacy hypervisor-based data centers take advantage of the Kubernetes revolution? How can they, too, embark on their modernization journey?
As opposed to VMs, containers enable the creation of multiple self-contained execution environments on top of the same operating system. However, containers are not enough in and of themselves to drive large-scale cloud-native applications. An orchestration layer based on Kubernetes organizes groups of containers into applications, schedules them onto servers (such as ESXi hosts) that match their resource requirements, places the containers across complex network topologies, and so on. It also helps with complex tasks such as release management, canary releases, and administration. The actual tipping point for large-scale container adoption on VMware will vary from enterprise to enterprise. However, the common precursor to supporting containerized applications at scale is an enterprise-grade management and orchestration platform such as Kubernetes, which provides many benefits in one: a native container model, orchestration, self-service, increased visibility, and hybrid cloud capabilities.
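To make the "schedules them onto servers that match their resource requirements" point concrete, here is a toy sketch of the kind of first-fit placement an orchestrator performs. All names (`schedule`, the node and container dictionaries) are illustrative assumptions, not a real Kubernetes API; the actual kube-scheduler uses far richer filtering and scoring.

```python
# Toy sketch of resource-matching placement: put each container on the
# first node (e.g. an ESXi host) whose free CPU and memory can hold it.
# Illustrative only -- not a real Kubernetes API.

def schedule(containers, nodes):
    """Return a {container_name: node_name} placement using first-fit."""
    free = {n["name"]: {"cpu": n["cpu"], "mem": n["mem"]} for n in nodes}
    placement = {}
    for c in containers:
        for node_name, cap in free.items():
            if cap["cpu"] >= c["cpu"] and cap["mem"] >= c["mem"]:
                cap["cpu"] -= c["cpu"]   # reserve the requested resources
                cap["mem"] -= c["mem"]
                placement[c["name"]] = node_name
                break
        else:
            placement[c["name"]] = None  # unschedulable; would stay Pending
    return placement

nodes = [
    {"name": "esxi-1", "cpu": 4, "mem": 8},
    {"name": "esxi-2", "cpu": 2, "mem": 4},
]
containers = [
    {"name": "web",   "cpu": 2, "mem": 4},
    {"name": "db",    "cpu": 3, "mem": 6},
    {"name": "cache", "cpu": 2, "mem": 2},
]
print(schedule(containers, nodes))
# {'web': 'esxi-1', 'db': None, 'cache': 'esxi-1'}
```

Note that `db` cannot be placed once `web` has claimed part of `esxi-1` - exactly the bin-packing pressure that makes declarative resource requests, rather than manual VM sizing, so valuable at scale.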
So what fundamental benefits does a well-engineered Kubernetes deployment on a hypervisor such as VMware provide?
- Highly elastic – scale the gamut of VMware infrastructure (compute – VMs/bare metal/containers; storage; network – switches/routers/firewalls, etc.) up or down in near real-time: seconds as opposed to hours or days.
- Highly Automated – given the scale and multi-tenancy requirements, automation is needed at all levels of the stack (development, deployment, monitoring, and maintenance), including the applications running on VMware.
- Low Cost – operate at lower CapEx and OpEx than the traditional VMware stack, due to reliance on open-source technology and a high degree of automation. Workload consolidation further increases hardware utilization.
- Self Service Based on Standardization – Kubernetes enforces standardization and homogenization of deployment runtimes, application stacks, and development methodologies based on line-of-business requirements. This solves a significant IT challenge that has hobbled innovation at large financial institutions.
- Microservice-based applications – applications developed for container-enabled infrastructure are built as small, nimble processes that communicate via APIs, over infrastructure such as service mediation components. This offers huge operational and development advantages over legacy applications such as those running on legacy hypervisors. While one does not expect monolithic applications to move over to a microservice model anytime soon, customer-facing stateless and stateful applications that need responsive digital UIs will definitely need to consider such approaches.
- ‘Kind-of-Cloud’ Agnostic – done right using a managed service, Kubernetes does not enforce the concept of a private cloud; rather, it encompasses a range of deployment options – public, private, and hybrid. VMware-based infrastructure can simply be added as one such deployment target.
- DevOps friendly – when combined with CI/CD tools, Kubernetes enforces not just standardization and homogenization of deployment runtimes, application stacks, and development methodologies, but also enables a culture of continuous collaboration among developers, operations teams, and business stakeholders, i.e., cross-departmental innovation. PMK is a natural container for workloads that are experimental in nature and can be updated, rolled back, or rolled forward incrementally based on changing business requirements. This enables rapid deployment capabilities across the stack, leading to faster time to market of business capabilities.
- Governance – running Kubernetes on any underlying platform can help enforce strong governance requirements, for capabilities ranging from ITSM requirements – workload orchestration, resource limits for tenants, automatic sizing of workloads, seamless change management, and provisioning – to API-based integration with backend billing, chargeback, and accounting applications.
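The "resource limits for tenants" item above is the governance primitive Kubernetes exposes as ResourceQuota objects. As a minimal sketch (the `admit` function and `QuotaExceeded` exception are hypothetical names, not a Platform9 or Kubernetes API), the admission check boils down to:

```python
# Minimal sketch of a per-tenant quota check: a workload is admitted only
# if the tenant's cumulative usage stays within its quota.
# Illustrative names only -- not a real Kubernetes or Platform9 API.

class QuotaExceeded(Exception):
    pass

def admit(tenant_usage, tenant_quota, request):
    """Check a request against the tenant's quota; record usage if it fits."""
    for resource, amount in request.items():
        used = tenant_usage.get(resource, 0)
        limit = tenant_quota.get(resource)
        if limit is not None and used + amount > limit:
            raise QuotaExceeded(
                f"{resource}: {used} used + {amount} requested > limit {limit}"
            )
    # Every resource fits: record the new usage.
    for resource, amount in request.items():
        tenant_usage[resource] = tenant_usage.get(resource, 0) + amount
    return tenant_usage

usage = {"cpu": 6, "mem": 12}
quota = {"cpu": 8, "mem": 16}
admit(usage, quota, {"cpu": 1, "mem": 2})  # fits: usage becomes cpu=7, mem=14
try:
    admit(usage, quota, {"cpu": 2})        # would exceed the cpu limit of 8
except QuotaExceeded as e:
    print("rejected:", e)
```

The design point worth noting is that rejection happens before any usage is recorded, so a denied workload leaves the tenant's accounting (and any downstream chargeback integration) untouched.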
Roadmap to Modernization
Whatever stage you are at today, you will be further along in months or years, depending on the pressures in your business. With the easy availability of public cloud services, many larger enterprises carry significant “shadow IT” expense in addition to the cost of supporting existing internal infrastructure. How is “shadow IT” to be accommodated? At the same time, how are existing line-of-business applications to be supported with minimal disruption?
What then are some best-practice recommendations that firms at any stage of cloud maturity can take value from? The roadmap below captures the approach to modernization that I have seen work well at several successful cloud implementations.
I posit that there are six key recommendations.
- Continue to run existing applications and infrastructure on your VM technology as before, but move to a “cap and grow” model. Within the hypervisor space, move over to KVM-based platforms running in an OpenStack environment. The cost savings will help drive new investments in container-based applications.
- Consider a range of hybrid cloud architectures keeping the above maturity levels and architectures in mind. Introduce the public cloud but avoid lock-in to IaaS providers or to cloud stacks as much as possible. As a way of de-risking, invest in a private cloud strategy. The public cloud will never be a panacea.
- The biggest pain point in running a private cloud is typically OpEx maintenance cost. Consider adopting a SaaS-managed solution that deploys, monitors, troubleshoots, and seamlessly updates your private cloud, so you can rest assured you have the most advanced private cloud management at the lowest possible operational cost for years to come.
- Multi-cloud management is a challenge cloud admins will need to deal with, and something management needs to account for in the entire business case – economics, value realization, headcount planning, chargeback, etc. The ‘single pane of management’ is a worthy goal to aspire to. However, beware of vendors selling ‘integrated’ stacks: these are as much lock-in as the public cloud APIs are.
- Leverage successful blueprints and patterns around vertical industry adoption. How are leaders in your industry using the cloud for specific use cases common to everyone operating in the vertical?
- Investments in SaaS-based management planes across three important dimensions – private, public cloud and container native development – are key. These will serve as a way of de-risking your hybrid cloud and container management investments.
With cloud-native application development emerging as the key trend in digital platforms, containers are a natural choice for a variety of reasons within the development and operations disciplines. In a nutshell, containers are changing the way applications are architected, designed, developed, packaged, delivered, and managed. That is why container orchestration has become a critical “must have”: for enterprises to derive tangible business value, they must be able to run containerized applications at large scale.
For an enterprise deploying large-scale pure VMware- or KVM-based infrastructure, significant pressure on IT will build depending on the demands of your business. With the easy availability of public cloud services, many larger enterprises carry significant “shadow IT” expense in addition to the cost of supporting existing VMware-based internal infrastructure. Drive the business case for Kubernetes on VMware with economics and business value realization models in mind. The biggest pain points in running a VMware-based cloud are OpEx maintenance costs and inflexibility.
Consider a managed SaaS solution that deploys, monitors, troubleshoots, and seamlessly updates your private cloud, so you can rest assured you have the most advanced private cloud management at the lowest possible operational cost for years to come. Investments in SaaS-based management planes across three important dimensions – private cloud, public cloud, and containers – are key. These will serve as a way of de-risking both your legacy hypervisor and greenfield container investments.
This article originated from http://www.vamsitalkstech.com/?p=7661
Vamsi Chemitiganti is a Tigera guest blogger. Vamsi Chemitiganti is Chief Strategist at Platform9 Systems. Vamsi works with Platform9’s Client CXOs and Architects to help them on key business transformation initiatives. He holds a BS in Computer Science and Engineering as well as an MBA from the University of Maryland, College Park.