A Multi-part Microservice Primer for Infosec Professionals (Part 4)

This is the fourth (and final) part of a four-part series discussing the microservice revolution and how it impacts the InfoSec community. This series is not meant to be a definitive guide, but rather an overview that starts you on your microservice journey. The first three parts covered an introduction to microservices, what has changed, and how that may (negatively) impact a classical InfoSec architecture. If you’re jumping in here in the middle, I suggest you go back and read those three posts now; they’re not long, and I’ll grab a beer while I’m waiting.

You’re back; that’s good. You don’t scare easily, do you? OK, now let’s talk about how this same microservices revolution might not only give you tools to mitigate those impacts, but actually make your job easier and give you a more secure, more responsive infrastructure, while at the same time changing the perceived ‘role’ of the InfoSec team from being the long pole to being an enabler of the business.

Render your intentions in real-time

If you remember, I spent a bit of time in the third installment talking about the “make it so” or “Captain Picard” model. It takes a set of intentions, assesses whether the application environment is in compliance with those intentions, and, if it is not, renders the changes necessary to bring it as close to compliance as possible in near real time. However, rendering those changes requires the proper tools.

This means that you should use security platforms (e.g., network segmentation or firewall platforms) with the following characteristics:

  • Native integration with the orchestrator, or at least a well-defined API that enables orchestrator integration
  • An understanding of the descriptive metadata that the orchestrator uses as the “key” or “ID” of entities under its control, and the ability to map those IDs into locally relevant data in ‘orchestrator time’
  • The ability to render intent in advance of need (i.e., at orchestration time, not packet time)
  • The ability to gracefully handle the same scale and dynamism (or churn) that the orchestrated system itself handles, as well as the same “response” or rendering rate

If you do not use platforms with the characteristics listed above, you introduce a substantial impedance mismatch into your system.
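
To make this concrete, here is a minimal sketch of what orchestrator integration can look like, assuming the ‘kubernetes’ Python client and a hypothetical push_rule() adapter for your security platform’s API (neither the adapter nor its name comes from any real product). The point is the shape: policy is keyed by the orchestrator’s own labels and rendered as workloads are scheduled, not when the first packet arrives.

```python
# Minimal sketch: watch the orchestrator for workload churn and render policy
# in 'orchestrator time'. Assumes the 'kubernetes' Python client; push_rule()
# is a hypothetical adapter for whatever security platform you use.
from kubernetes import client, config, watch

def push_rule(workload_id: str, labels: dict) -> None:
    """Hypothetical adapter: map orchestrator labels into the security
    platform's locally relevant policy objects."""
    print(f"rendering policy for {workload_id} with labels {labels}")

def main() -> None:
    config.load_kube_config()   # use load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    # Stream pod events so intent is rendered in advance of need.
    for event in watch.Watch().stream(v1.list_pod_for_all_namespaces):
        pod = event["object"]
        push_rule(f"{pod.metadata.namespace}/{pod.metadata.name}",
                  pod.metadata.labels or {})

if __name__ == "__main__":
    main()
```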

A corollary to the ‘make it so’ model is the observation that the orchestrator will do ‘whatever you just asked for’, and at a blindingly fast rendering pace. The only thing worse than having a ‘what I meant, not what I said’ event in your distributed system is having one that renders across thousands of nodes and hundreds of thousands of endpoints in milliseconds. Safeguards can be put in place to prevent such events, or at least limit their blast radius. Most of these safeguards are considered part of a Continuous Integration and Continuous Deployment (CI/CD) pipeline. A full discussion of CI/CD is beyond the scope of this blog, but a few ideas are listed below:

  • Have your CI/CD pipeline run functional or end-to-end tests before deploying, and pause any deployment that does not pass (a sketch of such a gate follows this list).
  • Use security platforms that have a ‘try before enable’ model, so that proposed changes can be evaluated in a live system BEFORE they are activated. The ‘try before enable’ model might be one way of executing the tests mentioned above.
  • Use some form of canary deployment to watch for effects in production, but without rolling the change out everywhere.
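
As a sketch of the first and third ideas, the gate below assumes a hypothetical canary Deployment, a hypothetical stable Deployment, and a test suite run with pytest; none of these names come from a real system, so substitute your own. The change goes to the canary first, and the full rollout proceeds only if the tests pass.

```python
# Minimal CI/CD gate sketch: deploy to a canary, run tests, promote or roll back.
# 'deploy/checkout-canary', 'deploy/checkout', and the container name 'app' are
# all hypothetical; substitute your own workloads and test suite.
import subprocess
import sys

CANARY = "deploy/checkout-canary"      # hypothetical canary Deployment
STABLE = "deploy/checkout"             # hypothetical stable Deployment
NEW_IMAGE = sys.argv[1]                # image tag produced by the CI build

def set_image(target: str, image: str) -> None:
    subprocess.run(["kubectl", "set", "image", target, f"app={image}"], check=True)

def end_to_end_tests_pass() -> bool:
    # Any test runner works here; only the exit code matters to the gate.
    return subprocess.run(["pytest", "tests/e2e"], check=False).returncode == 0

set_image(CANARY, NEW_IMAGE)           # try the change on the canary only
if end_to_end_tests_pass():
    set_image(STABLE, NEW_IMAGE)       # promote to the rest of the fleet
else:
    subprocess.run(["kubectl", "rollout", "undo", CANARY], check=True)
    sys.exit("end-to-end tests failed; deployment paused")
```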

Use the same identifiers

Remember in the last installment, I pointed out that reserving resources (say, blocks of IP addresses) to accommodate legacy platforms has some substantial limitations. Fortunately, there is a way out of this predicament. The microservice world and its associated orchestrators make substantial use of metadata, or labels, not only to control the behavior of workloads, services, etc., but also to identify them. This is one of those concepts that, if it did not exist, would make the development of these orchestrators and systems almost impossible. So, instead of trying to continue to use your existing identifiers as the anchor of security policy or enforcement, maybe you should just use the same metadata and labels that the orchestration system uses. There are a couple of nice effects from this:

  • There is no longer a necessary translation between the orchestrator ID and the security platform ID. Those translation methods are never entirely in sync nor faultless, and failures in them lead to horrendous diagnostic issues.
  • Reservations of resources are no longer necessary, because metadata is, for all intents and purposes, dimensionless and unlimited.
  • By utilizing the same metadata that developers must use to make their functions, services, and applications accessible, scalable, connected, etc., you put the developer in the position of ensuring that the metadata is correct. The developer is incentivized to identify and label the workload correctly to get it to function, rather than doing the bare minimum at the end of the development cycle to get the ‘security checkbox’ checked. (A short sketch of selecting workloads by those shared labels follows this list.)
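
Here is a minimal sketch of what “use the same identifiers” looks like in practice, assuming the ‘kubernetes’ Python client and a hypothetical role=ldap-server label. The security side selects workloads by the very label the developer already applied to make the service work; no IP-block reservation or ID translation is involved.

```python
# Minimal sketch: security tooling keys off the orchestrator's own labels.
# The 'role=ldap-server' label is hypothetical; use whatever labels your
# developers already apply to make their services function.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Select workloads by label, in any namespace, at any scale; no reserved
# address blocks and no separate security-platform identifier to keep in sync.
ldap_servers = v1.list_pod_for_all_namespaces(label_selector="role=ldap-server")
for pod in ldap_servers.items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.pod_ip)
```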

Code review as a function of the CI/CD chain

Earlier, we pointed out that manual code review processes are not compatible with a rapid development and deployment model. However, there are many useful tools out there that can automate much of the code review process.

  • Want to know if any code is tainted by a specific license? There are tools for that.
  • Want an inventory of all of the FOSS libraries and modules in your code, their versions, their licenses, and any outstanding CVEs? There are many tools for that.
  • Want to do static code analysis? Yup, tools exist.

In fact, almost anything you would want to automate in a code review process can be automated. You can also use repositories or container registries that already perform those analyses for you, either on your own code or on other published code and containers. You can set up your CI/CD chain to prefer code from those repositories and/or run those analyses itself, raising an exception and stopping the deployment only when a possible issue is found.
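
As a sketch of what such a gate can look like, the snippet below assumes two hypothetical scanner commands, license-scan and cve-scan; they stand in for whichever license, FOSS-inventory, CVE, or static-analysis tools you adopt. The shape is the point: each check is a command whose non-zero exit stops the deployment.

```python
# Minimal sketch of automated review as a pipeline stage. 'license-scan' and
# 'cve-scan' are hypothetical placeholders for real scanners; each one simply
# signals problems through its exit code.
import subprocess
import sys

CHECKS = [
    ["license-scan", "--fail-on", "copyleft", "src/"],    # hypothetical license check
    ["cve-scan", "--severity", "high", "image.tar"],      # hypothetical CVE/FOSS check
]

failed = [check for check in CHECKS
          if subprocess.run(check, check=False).returncode != 0]
if failed:
    sys.exit(f"automated review failed: {failed}; deployment stopped")
```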

Stopping the rot with micro-segmentation and micro-policy

As we discussed previously, a perimeter firewall does nothing to provide blast containment. Once the perimeter is breached, the opposition has free rein to explore, document, and exploit the infrastructure. Your only hope is that the attackers tell you when they start and stop exfiltrating your data, so that you at least know something happened. Relying on notification from attackers is not a good state of affairs.

However, what if we could use metadata labels to identify which workloads have which characteristics or ‘personalities’, such as being an LDAP Server, an LDAP Client, a front-end load balancer, etc.? You could then write policies stating that LDAP Servers should allow LDAP traffic from LDAP Clients, or that front-end load balancers should accept HTTPS traffic from anywhere on the network.

The orchestrator could then tell a network security platform that a given workload is both a front-end and an LDAP Client, and a policy could be rendered that allows that workload to send LDAP traffic to an LDAP Server and to receive HTTPS traffic from the Internet.
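
A minimal sketch of the first of those policies, using the standard Kubernetes NetworkPolicy objects via the ‘kubernetes’ Python client, might look like the following; the ‘personality’ label key and port 389 for LDAP are illustrative assumptions, and a Calico policy could express the same intent with richer selectors.

```python
# Minimal sketch: render "LDAP Servers accept LDAP traffic only from LDAP Clients"
# from 'personality' labels (a hypothetical label key) using the standard
# Kubernetes NetworkPolicy objects in the 'kubernetes' Python client.
from kubernetes import client, config

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="ldap-servers-from-ldap-clients"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(
            match_labels={"personality": "ldap-server"}),
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"personality": "ldap-client"}))],
            ports=[client.V1NetworkPolicyPort(port=389)],   # LDAP
        )],
        policy_types=["Ingress"],
    ),
)

config.load_kube_config()
client.NetworkingV1Api().create_namespaced_network_policy("default", policy)
```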

This kind of approach, something that we pioneered here at Tigera, allows you to create a network environment that practices a zero-trust or least-privilege model, one that blocks traffic even if the opposition manages to penetrate the perimeter. You give them no ‘free ride.’

Immutability and ‘fixing’ things

Since these environments are supposed to be immutable, there is no need to give anyone the ability to ‘log in’ to a piece of infrastructure or a container. This means that no changes are made to production resources that do not go through the CI/CD chain and its associated RBAC/AAA and logging. The number of entry points you need to watch just went down by at least a few orders of magnitude.
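
A small sketch of what “immutable by construction” can look like, again assuming the ‘kubernetes’ Python client; the image reference is hypothetical. With a read-only root filesystem and privilege escalation disabled, even someone who does get a shell cannot alter the running artifact, so real changes have to flow back through the CI/CD chain.

```python
# Minimal sketch: declare a container that cannot be modified in place.
# The image reference is hypothetical; the security_context fields are
# standard Kubernetes settings exposed by the 'kubernetes' Python client.
from kubernetes import client

container = client.V1Container(
    name="api",
    image="registry.example.com/api:1.4.2",        # hypothetical image built by CI
    security_context=client.V1SecurityContext(
        read_only_root_filesystem=True,            # no in-place file changes
        allow_privilege_escalation=False,
        run_as_non_root=True,
    ),
)
```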

Next steps

We hope you have found this series useful, at least in provoking some thinking and conversations in your organization. This is not a complete guide to InfoSec in microservice environments; the intent is to get you thinking about these issues and how you might address them in your own environment. If done right, you will not only have a more secure environment, but your organization will be more responsive to your customers (and your customers’ customers), and the InfoSec team will be seen as an enabler of the business, not the long pole.
