Welcome to Part 3 of the Cloud Native, Microservices, Security, & Scale Series.
The year is 2008. It’s been 11 years since that fateful Apollo 13 rental by Reed Hastings, and Netflix is quickly growing in popularity for its no-late-fee, keep-it-as-long-as-you-want, DVD-by-mail rental service.
While Blockbuster struggled, it seemed like nothing could slow down or disrupt Netflix’s popular service… at least not until August, when a major database corruption crippled the DVD giant for three days.
All of a sudden, the biggest threat wasn’t other movie rental services (including Blockbuster’s own attempt), but rather the very infrastructure and architecture Netflix was built on.
“That is when we realized that we had to move away from vertically scaled single points of failure, like relational databases in our datacenter, towards highly reliable, horizontally scalable, distributed systems in the cloud.”
– Yury Izrailevsky, Stevan Vlaovic and Ruslan Meshenberg from Netflix
But What Exactly is the Cloud?
When most people think of the cloud, they probably think of popular cloud hosting services such as Amazon Web Services, Google Cloud, or DigitalOcean. However, there are many different cloud providers, including companies that choose to build their own on-premise (or private) clouds.
But what is a cloud? The easiest way to understand the cloud is to start with traditional managed servers. With traditional hosting you would probably set up a few dedicated servers (for a small-scale deployment): for example, a load balancer, a web front-end, an application server, and a database server. And as your company grew, you would find yourself swapping these servers out for larger ones with more storage, more memory, etc.
But how do you know if you have the right resources to begin with? How do you manage spikes or dips in traffic? And what happens if one of these servers goes down or becomes corrupted?
The cloud, on the other hand, is a fleet of uniform, assignable servers — all managed behind the scenes. Instead of choosing and setting up three managed servers, you simply provision the instances you need (hosting, database) and the cloud takes care of the rest. More importantly, as your resource demands increase, the cloud can automatically scale, adding resources to meet your needs. This means you no longer have to manage individual servers for your applications, and you no longer have to worry about not having enough resources.
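To make that automatic scaling concrete, here is a minimal sketch using Kubernetes (just one example of such a platform) to scale a hypothetical web front-end; the resource names and thresholds are illustrative assumptions, not from this article:

```yaml
# Hypothetical autoscaling rule: keep between 2 and 10 copies of a
# "web-frontend" deployment running, adding instances when average
# CPU utilization passes 70% and removing them when it falls.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With a rule like this in place, the platform grows and shrinks the fleet for you — no individual server to babysit.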
Another way to think of this is with the Pets versus Cattle analogy crafted by Bill Baker:
Managed servers are like pets — you probably only have one or two, and you spend a lot of time taking care of them. There’s a significant investment and cost here. If your pet gets sick, you need to take them to the vet, and they’re nearly impossible to replace.
The cloud on the other hand is kind of like raising cattle. If one cow gets sick, you’re able to take immediate action to ensure that the rest of the herd is protected and your farm isn’t negatively impacted. Instead, you simply take advantage of the other healthy cows who are “outstanding in their field.”
This is what the cloud enables — the ability to stand up (and tear down) servers or instances quickly and as needed. There’s no longer a significant investment required, nor do you have to manage each individual server within your infrastructure yourself.
The cloud enables smooth, horizontal scaling instead of forcing you to focus on more legacy vertical scaling mechanisms.
What Does it Mean to be Cloud Native?
Being Cloud Native means designing and running your applications to take advantage of all the benefits of a cloud system, including focusing on horizontal design — and treating the instances of your applications or services as “cattle” instead of using the traditional “pet” model.
The Cloud Native Computing Foundation describes cloud native applications as those that have the following three properties:
- They are container packaged, running the application and processes as isolated units
- They are dynamically managed by a central orchestrating process such as Kubernetes or Apache Mesos
- They are microservices-oriented, having small, focused applications that are designed to be composable via service endpoints (such as REST)
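As a minimal sketch of those three properties together, the hypothetical Kubernetes manifests below package a small “catalog” microservice as a container, keep its replicas dynamically managed by the orchestrator, and expose it as a composable service endpoint (the service name, image, and ports are illustrative assumptions):

```yaml
# Container packaged + dynamically managed: Kubernetes keeps three
# replicas of the "catalog" container image running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: example.com/catalog:1.0   # illustrative image name
          ports:
            - containerPort: 8080
---
# Microservices-oriented: a Service gives the replicas one stable
# endpoint that other services can compose against (e.g., via REST).
apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  selector:
    app: catalog
  ports:
    - port: 80
      targetPort: 8080
```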
This approach allows enterprises to ship faster, reduce risk, and increase efficiency by building smaller, autonomous (yet composable) services that can be started and stopped on-demand with little to no consequence.
This means that should services start failing, or should you see a significant spike in traffic, your system is autonomously able to restart, shut down, or start up additional services to avoid negative consequences such as system downtime.
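One concrete form this self-healing can take — sketched here as a hypothetical Kubernetes container fragment, with an illustrative path and port — is a liveness probe: if the service stops answering health checks, the orchestrator restarts it automatically:

```yaml
# Hypothetical liveness probe on a container spec: after three failed
# HTTP checks against /healthz, the orchestrator restarts the
# container — no human intervention required.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```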
“Failures are unavoidable in any large scale distributed system, including a cloud based one.”
— Yury Izrailevsky, Stevan Vlaovic and Ruslan Meshenberg from Netflix
Beyond the ability to take advantage of modern cloud infrastructures (such as AWS, Google Cloud, or Azure) and to build self-healing systems, Cloud Native also lets enterprises take advantage of modern architectures and modern code: you don’t have to use COBOL, Java, .NET, PHP, or any other single language across your entire platform, since each system/service can be completely decoupled and isolated. It also puts the focus on automation over people, not only in scaling and self-healing, but also through CI/CD.
Essentially, Cloud Native is designed to bring teams together, letting them work autonomously, but with greater transparency and collaboration.
Next Week: What are Microservices ››
- Microservices vs. SOA
- The Glue Problem
- Containing Microservices
- Automating Container Management
- Scaling and Security Concerns
- Scaling and Securing with Project Calico
- Wrapping Up