A Multi-Part Microservice Primer for Infosec Professionals

In my role at Tigera, I spend a lot of time talking to adopters of cloud-native and microservice technologies. More and more frequently, the security teams are part of that mix, and they bring a very different viewpoint to the conversation. One thing that has stood out is that they are often brought into the discussion about a planned microservice application delivery platform fairly late in the cycle. They may not have had time to fully internalize the changes that this kind of environment brings, and more importantly, the differences between a microservice environment and a more established virtual machine or dedicated bare-metal environment. So, in this series of posts, I am going to discuss the good, the bad, the ugly, and the just plain different in a modern microservices environment.

It is tempting to view the new microservice and cloud-native application delivery models as just the next generation of virtualization. After all, didn’t virtualization promise many of the same benefits that the cloud fanatics now trumpet about containers, microservices, and Kubernetes? The tech industry continually recycles the same concepts, with the occasional new technical twist. So why is this time any different?

Let’s look at what is different this time around from the security team’s vantage point

In the dim and distant past, we had what are commonly referred to as “pets” in our data centers: racks of servers, each unique in its own way and dedicated to a specific mission. We fed and watered them, logged into them and patched them when they were sick, and called them by name. When one fell down, we rushed to its aid. Each one WAS the service, and we treated it with reverence. Because it was the service, we created twins of it for high availability, but that twinning was rarely complete, and always unique to the specific service being offered. We operated by runbook, and there was a runbook for each service.

These pets lived for years, in some cases a decade or more. The infrastructure supporting them was static: static IP addresses, static switch and firewall ports. Their configuration and software, however, were anything but static. They were the opposite of reproducible, and the antithesis of immutable.

Along came VMs. VMs brought some level of dynamism to the infrastructure, but it was a slow dynamism. VMs rarely came or went, and equally rarely changed the server they were hosted on. VMs were supposed to be “cattle”: immutable and reproducible, just fire up another instance of the gold-master image. However, we still treated them as pets, logging into them, patching them, and upgrading them. They might have been cattle, but we raised them for the county fair 4-H competition and gave them cute names. In short, we added the concept of dynamism to the infrastructure, but did not extend it to the workloads hosted on that infrastructure. One could argue that this made things more complex, not simpler.

Because the infrastructure dynamism was low, or at least slow, the security mechanisms we had put in place in the days of dedicated servers mostly continued to work. Yes, we introduced SDNs, but we still used firewalls whose rules were based around static IP addresses, addresses that now mapped to VMs. We were assured VMs would have fairly long calendar life cycles, and thus static IPs would be just fine.

We continued to honor Patch Tuesdays by logging into the VMs to correct for the latest CVEs, and continued to manage the servers as full-scale computer / operating system / application stacks. Nothing really changed, except that we had to assume the physical location of a VM might change without our knowledge. We watched the logs for unauthorized access attempts, and scanned for viruses. We monitored for anomalous activity. In short, it was mostly business as usual for the security and compliance teams.

So, why, you ask, is this microservice revolution going to be any different?

There are a number of reasons I will detail shortly, but here is the big one, and it’s not the one you think.

The VM revolution was driven by operations as a way to simplify the deployment of servers. It was a cost-control exercise, achieved through greater efficiency in hardware utilization. For developers, nothing changed: they still developed and deployed software in the same way. The application owners still saw “servers” and managed them as such. For them, this was a big “nothing sandwich”, and frankly, one they really didn’t care about.

Conversely, the development of the cloud-native or microservice model is being driven by a sea change in the way applications are developed and delivered. It is being driven by a demand from the business units, who are your customers, to be more agile and responsive, lest your business be overtaken by more agile entrants into your market. It is being driven by developers who are tired of “dependency hell” and want to re-use both internal and external code to make their lives easier. It is being driven by the application DevOps teams wanting to manage their infrastructure just like they manage their code. It is being driven by the desire to focus on the application, with higher-function languages, and not the underlying plumbing. In many respects, this has more in common with the Linux revolution, which was also driven by line-of-business developers, than with the VM revolution, which was driven by the corner office. I challenge you to think about which was the larger sea change for you: VMs, or the arrival of Linux and FOSS in your organization? That’s right: find the seat belts and buckle in. Like it or not, you are going for a ride.

The very things that are driving this revolution are the ones that are going to guarantee that the behavior of the environment will be very different from the pets and 4-H entrants. The infrastructure will become VERY dynamic and the workloads themselves will be VERY static during their lifetime. However, that lifetime will be measured by a clock, not a calendar.

Here are some interesting concepts that you should think about for a minute

  • Code can now come from a myriad of sources.
  • What is deployed is immutable.
  • You replace, you do not patch.
  • IP addresses, server location, etc. are now managed as general resources, rather than being dedicated to a function or application.
  • Infrastructure and the application environment are now managed as code (see the sketch after this list).
    • The entire lifecycle is automated and fast.
    • The trouble-ticket methodology of managing the environment is no longer a functional mechanism if applied at “workload time”.
  • With auto-scaling, the number of components deployed can vary dramatically over a period of seconds.
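
To make those concepts concrete, here is a minimal sketch of what “managed as code” can look like in practice, assuming a Kubernetes-based platform (the subject of the next article): a Deployment paired with an autoscaler, declared in YAML and kept in version control. Every specific here, the service name, the registry URL, the image tag, and the scaling thresholds, is a hypothetical illustration, not taken from any real system.

    # A sketch of “infrastructure as code”: the desired state of a service,
    # kept in version control and applied with kubectl apply.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend           # hypothetical service name
    spec:
      replicas: 3                  # desired count; the platform reconciles to it
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
            - name: web
              # Running containers are never patched. To fix a CVE, you build
              # a new image, bump this tag, and re-apply; old instances are
              # replaced, not upgraded in place.
              image: registry.example.com/web-frontend:1.4.2
              ports:
                - containerPort: 8080
    ---
    # Auto-scaling: the instance count can swing within seconds, so any
    # security control keyed to a fixed set of IP addresses cannot keep up.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-frontend
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-frontend
      minReplicas: 3
      maxReplicas: 50
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70

Note what is absent from that sketch: no IP addresses, no host names, no trouble tickets. The platform assigns those details at “workload time”, which is precisely why controls anchored to them stop working.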

We will explore this in more depth over the next three articles in this series.

  • First, we will look at Kubernetes as an example of a microservice-oriented application environment.
  • Second, we will look at the assumptions about the infosec environment that may no longer hold in a microservice or cloud-native environment, both the good and the bad.
  • Last, we will look at some of the ways the infosec environment can change to meet the new requirements and capabilities of the microservices environment.
