Meeting PCI DSS Network Security Requirements in Kubernetes Environments

 

Compliance standards such as PCI DSS have long assumed that the traditional characteristics and behaviors of the software development and delivery model would remain constant. With the container/Kubernetes revolution, that set of assumptions is no longer entirely correct. Attend this webinar to learn what's changed, how those changes weaken your compliance and control environment, and what you can do to adjust to the new reality.

Michael Kopp: Hello everybody, and welcome to today's webinar, Meeting PCI DSS Network Security Requirements in Kubernetes Environments. Welcome, thank you for coming, and I am pleased to introduce today's speaker. We have with us Vince Lau. Vince has over a decade of experience helping healthcare, financial services, government, telecom and retail organizations manage cybersecurity risks. He is responsible for Tigera's content and go-to-market strategies. He earned his MBA from Santa Clara and has a Bachelor's in Computer Engineering from Cal Poly San Luis Obispo, my alma mater also, so go Cal Poly. He is also a CISSP.

Michael: If you have any questions, please ask them as they come up, and we will try to answer them in real time. We also will have a Q&A session at the end, where we will handle any and all questions that come at us. Without further ado, let’s get to the main presentation… Vince?

Vince Lau: Thank you Michael, go Mustangs. Right?

Michael Kopp: Yeah.

Vince Lau: All right. Thank you again for the introduction. With that said, I just want to go over a couple of interesting stats that I came across preparing for this webinar. Right. The first one being, and I know some of you have probably seen this, the cost of non-compliance has significantly increased over the past few years. The issue keeps getting more serious. This comes from a report on the cost of compliance versus the cost of non-compliance, issued by the Ponemon Institute.

Vince Lau: At the current moment, it's over two and a half [inaudible 00:01:38] two and a half times more expensive to be non-compliant than compliant. Now, you must be wondering, hey, what are some of the reasons why, right? There are tons of reasons, but I think one of them is that there isn't really a firm understanding of compliance, especially within the Kubernetes space. I think, essentially, this situation will continue without further education on both the practitioner side and the auditor side. But don't take my word for it; let's take a look at an actual recent news article.

Vince Lau: Here's an interesting story, right? Anna Caitlin, a systems engineer at Paybase, explained how they went through an end-to-end payment audit, a compliance exercise, so to say, for PCI DSS Level 1. Level 1 is extremely difficult. This is the top-tier certification that you get from PCI, and it basically applies to entities handling the greatest amount of cardholder information. Now, she ran this on Google GKE and all this stuff. Besides addressing some of Kubernetes' security shortcomings, another crucial factor, like we mentioned before, was addressing the challenge of the status quo PCI requirements.

Vince Lau: As you've probably read by now from this particular extract that I pulled from the article, she basically was doing the exact right thing, right? You can see that she was using network and pod security policies. But she had to persevere and convince, so to say, the auditor that, hey, what the PCI spec defines as a server is actually a pod, right?

Vince Lau: So we can kind of see that from both a practitioner's perspective and an auditor's perspective, there is a level of understanding that needs to be achieved in order to do proper compliance. So, ladies and gentlemen, I think we're at the forefront of, so to say, coming to an agreement on how some of the compliance specs apply to Kubernetes versus traditional workloads in this new environment. The good news is that achieving compliance can all be addressed through a proper understanding of the specs, right? Also understanding how Kubernetes environments behave differently and, most importantly, some of the tools that can help solve these issues.

Vince Lau: So, like Paybase did it, so can we. This is kind of the part where I said earlier, hey, you know what, don't take my word for it [inaudible 00:04:12]. This is the point where, hey, do take my word for it and let's go on this journey. I count myself as having been both a practitioner and an auditor, and I think having that level of understanding from both perspectives is very important. With that said, this is the journey we're going to be going on today. First, what's changed in the Kubernetes environment, so let's just understand, hey, like Anna was getting at: server, pod, what's the deal, right?

Vince Lau: Then we're going to actually go through the PCI requirements and understand some of the challenges that we've seen, some of the challenges you're probably facing. Then for each one of these, we'll actually go through the solutions and tool sets for meeting some of these requirements. Right?

Vince Lau: Alright, let's jump right into what's changed, right? If you look at the slide here, prior to Kubernetes we'd basically been working with very static environments, right? You can see there's a bunch of servers here, racks of servers that ran applications, and traditional networking devices: switches, firewalls, all these things. And they, quite frankly, didn't change a whole lot, right? Even when we went into the more recent IT approach, so to say, with virtualization, taking virtual machines, things got more dynamic, but they still existed long enough to basically have accountability, especially when it comes to audit. Right? So accountability is key, right? You have something that [inaudible 00:06:12] and it didn't change over the course of an audit, between cycles.

Vince Lau: Things used to be measured in calendar quarters, potentially years, based on what your audit requirements are, obviously doing it more or less frequently. So you could basically do a compliance report, evaluate your environment, see what's plugged into it, see what [inaudible 00:06:33] rules exist, and go do an audit of the environment. Most audits involve spending time, and I'm sure many of you have been there before, [inaudible 00:06:42] collecting the data. Because things didn't change that much, you come back and say, hey, this is where we are, and, you know, we should be good to go for maybe another cycle. Essentially, compliance was built for auditing and reporting that assumes longevity; it would be sufficient to say that your findings keep applying over a certain period of time.

Vince Lau: Now let's take a look at how Kubernetes changes that from a compliance perspective. This is the comparison between old traditional workloads and containerized workloads orchestrated by Kubernetes. Modern applications, on the right, are very different from traditional applications; you can see just from the number of boxes, that alone gives you an idea, hey, there are a lot of components here. Like I said before, traditional applications used to have long-lived workloads kept for months, years, even longer. Now, things are very short-lived. Workloads only run when they need to; they don't just sit there idle when they're not needed at all.

Vince Lau: Traditional applications used to be more monolithic, right? Most of the work was done with maybe in-memory function calls that just reside on a physical box. Your functions never really went over a network. Now, with modern applications, things are all on the network. I'm probably preaching to the choir: everything is done via API calls over the network. So this is a completely different environment. The overall view here is that we have a lot more things in motion, and microservices are much more dynamic than the old traditional monolithic applications. So what does this really mean? Right, so let's move on. Alright, this is the implication here.

Vince Lau: Just some facts, so you get an idea of how dynamic a container environment is, right? Containers are lightweight, with much faster [inaudible 00:08:41] start times. They have a much shorter lifetime as well; they just come and go whenever they're needed. Right? Whereas VMs and servers used to be measured in months or years, lifetimes are now potentially measured in seconds and minutes, and you're looking at something like a 900x difference if you compare the average startup times of a container versus a virtual machine.

Vince Lau: And now you've got this thing where you might have 80, a hundred, maybe we've seen 1,000 containers per server. Whereas in the past, you've seen, I don't know, maybe 8 to 20 VMs per server. I mean, we're talking orders of magnitude different here. We can reasonably assume that you have tons more components here, basically opening up and, again, increasing your attack surface, right? So if you do the math here, there's much greater churn; endpoints are changing much more frequently. So what does that mean for the compliance aspect, right? If you think about it, this is a really important point: if your environment is changing this frequently, how do you know if things are still compliant after you do an audit? It's just, how do you know, right? Your application was great 10 minutes ago, which brings us to this slide.

Vince Lau: And now they're like, hey, there's new code that went out there, and I don't know if I'm compliant anymore, right? So in this very dynamic environment, you can't just rely on the fact that you can do maybe an audit or two in a year and think that, hey, you're compliant. You have to start thinking, hey, you know what? Compliance is actually a continuous process here. What was compliant 10 minutes ago might not be compliant 10 minutes later, with a new push of code from your developers through the CI/CD pipeline.

Vince Lau: Okay, so that's the space [inaudible 00:10:36] things on. Let's jump into the specifics with regard to PCI, right? This webinar, obviously, is about the PCI aspects. Just a quick bit of overview. The PCI standard basically applies to any entity that stores, processes, or transmits cardholder information, anything that... you all know: your credit card numbers, basically card numbers, billing information, your personal information. Right?

Vince Lau: You don't want that out; cardholders and the card networks don't want that information out, because obviously malicious or fraudulent charges [inaudible 00:11:08] occur when that happens, and obviously privacy, all that stuff. Basically, PCI DSS covers systems that handle cardholder data, whether in motion or at rest. The way they do this is by segregating the environment, right? They say, hey, you have to define what they call the CDE, the cardholder data environment, and control access to the CDE, right? So it's this zone where it's like, okay, hey, this is where the very sensitive, important data lives. Let's do everything we can to protect that information.

Vince Lau: The next slide here. So here are the detailed specs behind what we saw from an overall perspective. There were 10 items; don't worry, we're going to actually go through each one of those sections, and you'll see that overall slide again with the specific section we're going through. I just want to take a minute here to highlight the fact that there is a detailed spec below that, obviously. We won't go into that level of detail here; that's not the intent of this webinar. But what we've done is consolidate these into groups, so that it's easier to understand, hey, from a broad PCI DSS perspective, what the challenges are and what the solutions are, right? This is exactly what we mentioned before with the journey, what we want to do.

Vince Lau: Alright, so we're going to start off with the first group. We've done a bit of mapping here in regard to some of those overall themes in PCI DSS. This one is around building and maintaining a secure network, right? Basically, here you can see an image of a firewall; essentially, PCI requires network segmentation, isolating the CDE from the remainder of the entity's network. You can see it clearly spells out firewalls all over the place in the spec, and there's nothing wrong with that; I want to make sure we're not saying firewalls are bad or anything. This is how the specs were written. The thing is, the PCI specs have been around for a while, and containers are new. So it's subject to interpretation, right? If you look at how traditional firewalls handle these workloads, it's difficult.

Vince Lau: Difficult to configure, it’s difficult to manage because of the fact that it leverages static IP pods and ranges, right? We’ll go back to our old traditional environment, where things are more static. You can define basically identity around IP, a port. Most importantly, IP’s rarely change, right? Those of you who are technical, you go into your computer. You go, hey, the sign is DC, DHTCP, IP, and the least time, much longer, right?

Vince Lau: You know, with containers they change dramatically. Containers rarely have static IP’s. It will fail fast, reschedule quickly. Basically, the static IP thing is out the window. Then this basically brings a whole different class of connectivity and security issue [inaudible 00:14:30] in the field, the fact that a lot of times its DevOps teams will need to push to a new set of code, there’s changes’ to the Kubernetes environment, and they submit a change ticket with firewalls. Often it takes a couple weeks to just get pods and IP open. That becomes a real bottleneck. [inaudible 00:14:50] scene to alleviate this condition, why IP ranges for firewalls, basically letting steps of IP addresses through a Kubernetes perspective. These things are potentially not ideal.

Vince Lau: And here's another one we're seeing lots of our customers actually deploy, a very common approach to handling PCI given some of the limitations and challenges firewalls have in the Kubernetes environment. We often see separate clusters being used just to address the PCI requirement; they completely build another cluster just for PCI, right? Now this definitely achieves the objective of segmentation, isolation, all that good stuff. But it does become very cumbersome, essentially a problem, when you have this type of environment, right? You basically have a lot of duplicate resources, essentially doubling whatever you had in your prior environment just to make this work: extra databases, extra storage, introducing lots of operational complexity, right? I don't even need to go into the cost alone: hardware, software, all that stuff, manpower, just to meet this requirement.

Vince Lau: If you look at the next slide here, here's the state in which you really want to operate, taking a really simple example with a single Kubernetes cluster. Essentially, you want to be [inaudible 00:16:32] within that particular environment, but be able to say, hey, I want the things that are doing PCI to only be able to talk to PCI, and the things that are not part of PCI to be separated and not communicate, right? I mean, that's the concept of isolation.

Vince Lau: What we want to do is, we want to be able to, one, identify everything that’s covered. Be able to block all traffic between these two, right? Or between the ones that PCI and non-PCI and most importantly, allows [inaudible 00:17:01] flow freely between the PCI workload. Now the question here is exactly how we get there, right? And this is the concept of Kubernetes network policy, right? The policy is code concept here.

Vince Lau: Basically, a network policy is specification on how groups of pods are allowed to communicate with each other and other network endpoints. Similar to a traditional firewall, the concept’s not that much different, is essentially being able to restrict access. Basically, with this approach, you could write a firewall rule. But basically, you have to get an IP with the traditional firewall, right? So, in this sense, in a Kubernetes environment, we’re going to take a really simple example here. Say, hey, you know what, instead of IP, we’re going to have to obviously use IP. But let’s make use of labels, right? Let’s label the pots with tags that’s just like PCI

Vince Lau: When things are dynamic, right? When you label, all these pods that come and go and you have a PCI label, even when they’re dynamic, they’re constantly changing. An IP address [inaudible 00:18:12]. You need to be able to apply this set of policy rules to things that come and go. This is what we mean by decorative as in regard to, hey, doesn’t matter how many things get spun up, shut down, whatever it is.

Vince Lau: That policy basically follows everything that’s labeled with metadata of PCI label, right? So, that’s what we mean by the declarative. Last week here, we want to make sure that when we describe these rules, that they can be repeated in automation. I want to be able to use them basically in a decorative manner, basically, in the CICD pipeline, right? The entire configuration, we want to be able to describe it in like a file, basically configuration file like a Yammer file. And basically, let the underlying network, plumbing or plugin or what you will call it, take the steps in enforcing those functions. As you can see here, essentially, this entire concept of policies code basically creates policies that can be introduced into the CICD pipeline and you basically have control over how you want to dictate and drive traffic based on a concept of policies and label.

Vince Lau: This is a part where you say, hey, don’t take my word for it, right? You’ve actually seen this in the second slide we presented. You can see Anna from Paybase, they basically implemented it. The challenge area is, hey, you know what, so they obviously understand this is the right approach to do this. Challenge area, if you know, with some auditors, hey, you know what, let’s treat pods as servers. These are not as traditionally, the same as traditional workloads. And so, this is the education part where I think could be useful for both the practitioner and also the auditor, right?

Vince Lau: Okay, with that said, let’s move along. So second component or another grouping within building and maintaining a secure network, is maintaining sort of say, a critical diagram. Now, this network documentation is very key to a PCI assessment. So someone who’s coming into the audit, right? It is one of the first requirement listed in the PCI data. And why? Obviously, they want to make sure they validate that, hey, the current diagram exists and people know where, what connections actually can set up or configured to a CDE environment. There’s a process here, they want to know what the process in place to keep this diagram current. Because if things change, whoever is on the side of securing that data, would’ve been able to know where that data is going to. It does not land in unprotected space, where it could become potentially compromised or jeopardize.

Vince Lau: Now as you can see, there’s a diagram here. This is typical networking environment, just like how we mentioned. It started off in the slides before. The challenge here is, how can a static diagram like this keep up with containers like we mentioned, right? How can things stay current, when there’s so much churn? All the pods comes and go, in a matter of minutes, potentially seconds. What do we do here? This is the part where you really need to take another approach with looking at some other tools where you have better visibility in regard to understanding both the network and also the flow, right?

Vince Lau: This is one of, so to say, one of the biggest impacts of container. Which is why it’s important to make use of that network policy we mentioned before to find things with labels. So even when things come and go and move around, you have the tool sets to follow no matter what happens. Now, this approach is actually powered by the data that actually comes from defining the network policy and the labels, right? You can see that this provides you a network diagram in regard to understanding the name spaces, the labels, the metadata behind all the Kubernetes workload. This one you can actually drill down and look at, all right, you know what, I’ve got [inaudible 00:22:29] pods in a particular namespace.

Vince Lau: I got a set of pods to a particular label, and you can understand all the pods that falls in your PCI range. You can see something along the lines here, you can see there’s width, with regard to the graphs here. Basically, that’s the data flow, how much data is flowing between different components here, right? How much data is flowing between names, how much data is flowing between these PCI workloads. Documenting data flow actually gives all of us, a much better understanding of where card data stored, processes transmitted environment, right? As well as identifying, all supporting and connecting system in device. This is something that actually helps troubleshoot connectivity issues, any security issues that might arise.

Vince Lau: In essence, without a proper level diagram with Kubernetes context, all of us risk potentially having entire microservice environment being in scope for audits. And that’s something that could be quite daunting. Not only would there potentially be a change when having major or minor findings in your audit. You could end up doing a ton of work, which would be very unfruitful. Let’s skip this bill, we spoke about that piece. Or you will go back to this higher level in a view here, we’re going to move into the second piece about protecting cardholder data.

Vince Lau: This one is fairly quick. One of them talks about, for that one talks about data and trends. Fortunately, Kubernetes does not come with any type of encryption between pods at the moment. You’re basically left with using tools or third party tools. I mentioned this is a very quick one. We have a capability to encrypt port to port traffic and this is done via MTLS. So this is very easy. The benefit here is that you can turn this on without changing anything with the application and it can be set globally on a node, node by node basis, right? This is something that obviously very important to do, simple, in terms of understanding concepts, and let’s make use of simple tools like I mentioned before.

Vince Lau: We move on to the third piece here, maintaining a vulnerability management program. If you look at this section, this consulting section, it boils down to using a IDS and also reviewing logs.

Michael Kopp: For what?

Vince Lau: Anomalous and suspicious activity, one of my favorite things in the security space: diving deep into the logs of IDS and IPS. Now, we can actually see that traditional logs only provide very limited information here. As you can see on the right-hand side, it's basically five-tuple information; this is obviously not rocket science, and [inaudible 00:25:21] know that, right? If you look at this particular log here, I know the screen's a little bit... the screen image here at the bottom might be a little bit small. This is actually a screenshot from one of my favorite IDS tools called [inaudible 00:25:32], right? An open-source [inaudible 00:25:33] tool. You can see [inaudible 00:25:35] information pretty much everywhere, right? You've got source, destination, port, and then obviously, because it's an IDS, it actually analyzes the packet to see, hey, what's going through, and it provides a description of what type of [inaudible 00:25:50], what type of activity was happening in your environment to trigger that alert.

Vince Lau: So now we see this, in a Kubernetes environment. This information is somewhat useful but not entirely useful. You don’t have any idea, which port did what. It’s just IP information and we know for a fact that IP are very ephemeral in a Kubernetes environment. So essentially, these logs that you get would become fairly incomplete and also an accurate, right? Where connections in the nine Kubernetes [inaudible 00:26:26] appear to be accepted, which is not what you want.

Vince Lau: So what do you want to really want to do is take advantage of more data points, which is very important. You need to have much better context. Like potentially with a flow log like this one. It’s full, I actually provide way more information compared to five couple information that we saw earlier. It’s 24 piece of information here. That’s I guess, if you do the math, you play with math, that’s about five times more data, so five times higher the resolution.

Vince Lau: Essentially, what you want to be doing here, you want to love the actual context in regards to what the port’s actually doing. So, that you get much more accurate understanding of what’s going on without having to worry about the false positive information I mentioned before. Something that potentially [inaudible 00:27:19] was actually denied the port level, right? Flow log like this provides the deep visibility and it basically provides you with information like I mentioned before. Basically, source destination port, namespace, labels, what policies were applying, policy action, actual python, I think connection counts. So these are things that are very crucial when it comes to understanding containers, especially in a Kubernetes environment.

Vince Lau: Now, let’s take this further. We’re doing it together here in terms of, I mentioned here [inaudible 00:27:59]. We also actually monitor for anomalous traffic. Based on this information, on our end, we see any type of port scans, any type of IP sweeps, these are reconnaissance activities when you’re cluster. We will basically set up an alert and send it to whatever learning medium of your choice, email, dashboard, Slack, whatever it is, so that you’re aware of it. We also integrate some of these threat feeds so that we can block [inaudible 00:28:32] traffic from going out. We actually, in order to verify, have both the slow information, the detail high resolution context, and also the capability to help you detect these anonymous behaviors. Again, you see that in a prior example, looking at IDS logs of store, you can get the context in regards to, hey, some type of attack that happened but what about the context. Which port did what, we have absolutely no idea, right?

Vince Lau: Within the maintaining a vulnerability management programs, the calls out protecting and preventing web attacks, right? Walk through explanation of this images here. This is absolutely something that needs to be done six foot six basically said, hey, you know what, installed web application firewall. Those of you who are not familiar with application firewall. They do sit in front of a public facing or any web applications, to monitor and detect, prevent web based attacks. This solutions don’t perform the functions like network segmentation, but they actually provide protection for web traffic itself. We’re talking stripping access to resources, scripting, packs like that. This is all great. But again, the challenge here trickles down from the prior slide, the visibility aspect of it is going to be lacking.

Vince Lau: Obviously, the web’s going to be giving you a lot of context in regards to the HTTP traffic. But then, what about the actual port context? The incoming connection come landed a web application and it reached a port. Which port to begin with? I mean, a lot, right? I know I’ve worked with some of these problems, and I don’t know. I’ve never been… it’s really hard to trace. You have to do a lot of correlation. On the right hand side here, the policy and policy, right? Obviously, you think about the concept of installing the web, in your [inaudible 00:30:34] network. Components of network segmentation, you basically have two separate policies of potentially multiple policies. We all know, for fact, that the more disparate system’s out there, the more policies you have, the harder it is to have a unified security approach. It’s just to make things harder for security teams to manage and also get a good signal pane of glass, in regards to understanding what’s going on, right? These are the complexities that a lot of our customers and prospects face.

Vince Lau: Basically, you want to leverage a policy that unifies both application and network, right? This is something that you would then only use policy, instead of two. [inaudible 00:31:21] only, the service that communicate with each other, but also for my application perspective. What URL was it in, what methods are allowed. So this is something that we Tigera, provide in regards to unifying both the application and network policy into one. This is something we call Enterprise Calico policies for those of you who know Calico. Then multi-layer checks will basically perform automatically, there’s no need to write multiple policies. These controls are very effective determining who is allowed to do what, under what condition, right? For example, a user can set a policy that allows service, let’s say a service called PCI audited.

Vince Lau: If we get a request, we don’t need to maybe a web pass like flash card data. Then they can do that, while denying service to all other services. And with this approach, early, we mentioned about visibility. This is one completely unified, holistic approach in regards to managing client access to both network and application. Obviously, we capture layer seven information. You getting context [inaudible 00:32:32] the network level, layer three, all the way to layer seven. It makes it a lot easier when you’re troubleshooting and managing these to have all the Kubernetes context associated with both the network and application logs, right? Makes it just like said before.

Vince Lau: Now, I’ve been in the trenches trying to figure out a lot of times what happened to what and and the last thing you want to do is have the security and go like, hey, is this what it is? I’m trying to figure it out, correlate. It’s a time consuming process. It’s an error prone process. It [inaudible 00:33:04] be this possible. That’s the intent with unified policies.

Vince Lau: Okay, we’re moving down to I guess the fourth bucket here, implementing strong access control, right? This section essentially talks about more people have access to cardholder data, the more risk with your CD. Limiting access to those with a legitimate business reason, health and organization, prevents handling accidental compromise of cardholder data, right? We spoke about leveraging a network policy in the beginning of this presentation to find a dynamic pizza environment. Now, let’s take that concept further, right? Let’s take that network policy further. Because, that’s actually part of a zero trust model. You want to really leave a Heine environment wide open, where anyone can just walk in and do what they want.

Vince Lau: That’s not what you want there. I mentioned zero trust security, network security, right? Essentially, what we’re talking about here is, we establish trust on individual pods and services. This trust, based on multiple sources of identity data. We spoke about the fact before, in the past, IP was the main source of identity data. We’re moving way beyond that now, in a containerized environment. You have to work off of multiple source. Now, what we do here is, we look at both cryptographic identity of the workload and also the Kubernetes identity of the workflow itself, which is like a lot of things in terms of what we spoke about before, right? Network policies, what part policy to belong into namespace, label, all that stuff before we can establish trust.

Vince Lau: And so, PCI talked about [inaudible 00:34:57] a motto of least privilege and the way we do that is provide what we saw before. A unified policy model where you define fine grained security, access all the way from layer three to seven for your applications so that you can go as granular as you want in your security architecture. All the way to that URL, web methods, finding some of these white list rules.

Vince Lau: So going a little deeper here, in terms of the defense in depth architecture, how we do that is we do enforcement of these white lists of rules with multiple points of infrastructure. You can see on the diagram and left, right? This is done that hose pods, and the container level. So what’s the point here being you say, Hey, you know that, you know, that’s great. It’s a lot well, moving not because you think about it in the unfortunate events, infrastructure gets compromised. You know, like, let’s say somehow someone breach fake identity of a pod. What happens then, right?

Vince Lau: With this approach your application space secure. There’s that cryptographic identifier we spoke about before in regards to expandable nine cert, that belongs to the application. So even if either one of those things get compromised, there’s multiple layers of assessment here that prevents that attacks or that attacker or that attack compromise pod, from gaining access to whatever resource there is. This makes it a much, much safer environment in regards to restricting access in terms of these privileges, all that very important security concepts.

Vince Lau: We’re not going to cover 12 so this is the last section will be covering basically regular monitoring and testing networks, right? This piece here, you take a merge a little bit of concept here. We spoke about the concept of like this, confines being out of date, maybe within minutes. So this [inaudible 00:37:02] continuous compliance, right? In this way to tie this into the regular monitoring and testing central time into report. You can see here a map tells us a few requirements for PCI in regards to some of these things that requires reporting. Right? There’s a formal process for proving and testing on network connections, this is something that obviously you need to understand what workloads will come in scope. Next one talks about hey, how do you know manage access between TV and internet? Prove that. What is connected to what?

Vince Lau: At least if there is a internet reachable connection that’s demonstrate that you can show where’s it going. On inventory, obviously, that’s very important understand what workloads are in scope work, what those are not in scope. Lastly, record audit trail for system components. So this particular piece here with our customers and prospects, as an activity that takes quite a bit of time, right? I’ve been there myself, trying to generate some of this information for an audit. And it’s not easy A lot of times, especially when you’re working in a Kubernetes environment, which encompasses traditional networking technology, like firewalls, right? So this is a very time consuming process.

Vince Lau: What you want to really do is leverage specific compliance reports, when it comes to audit, right? And what we’ve done here and make things very easy solution is to break them down by three different types of reports, right inventory report, network access reporting and policy audit reports. Inventory report basically, you think about it, would help you drive to understand what things are in scope, not in scope. So I’ve mentioned before it’s very bad if [inaudible 00:38:59] in scope. Your entire network or all your assets become under scrutiny for PCI audit. If you can show what things are protected by essentially what network policy, right even if you delete him, the labels in place you can show it up to you to be monitored, and answer that question.

Vince Lau: Now, this is the one kind of mentioned before with a, you know, where’s this traffic flowing? Which workloads have access to and from the internet? Which workloads have access to other names faces. And lastly, air to what endpoints have encryption, right? We talked about encryption being one of the PCI requirements. So these are things that you can pull immediately to see what’s in scope Are you complying is not compliant. And lastly, policy audit is really important. The policy level policy we spoke of, essentially acts are providing segmentation segregation, right? So you want to know, hey, what happened there? Changed, for this meeting was modify was deleted.

Vince Lau: So with that, going back to the prior slide, in regards to some of the requirements, you can see that you could easily map from those requirements to the policies. So report types that we just spoke about. Excuse me. So these are some of the things that make the lives of, you know, client team and security teams a lot easier, a lot less error prone and reduce the amount of time required when it comes to the audit. And that’s, good news when it comes to making things easy. And more accurate.

Vince Lau: So I’m going to wrap up here. I know we spoke a lot about a lot of our capabilities. Essentially, the capabilities that we spoke about math into three core buckets that helps address PCI network security. So the first here being, zero trust that was security where you can essentially white lists all rules and communications for your Kubernetes space workloads. This hinges on the workload identity concept, leveraging multiple points of identity data, both cryptographic. Also, from the network perspective regards label with namespace, all that stuff.

Vince Lau: To truly establish a zero trust environment, restricting access, least privileges and also providing a defense in depth approach so that even when things get compromised, not all of it gets compromised, right? You to have different layers protecting access to applications that were just compromise or the other way around. Second piece here is, we provide the visibility protection for the clusters. This is where you saw the 24 points of data compared to the traditional five couple data, that you could get from [inaudible 00:41:53]. You know that, that diagram that you saw earlier regards to the Kubernetes [inaudible 00:42:00] diagram, and the workflow is powered by the 24 points of data that’s provided, right?

Vince Lau: You have an accurate understanding of what pause what components are communicating. And also Most importantly, is that workflow, understanding how much data is flowing between, you know, these components to you writing too. Security issues, such as hey, maybe some [inaudible 00:42:24] compromise. It also performance tuning and troubleshooting.

Vince Lau: So not a lot least, we’ve got continuous compliance, which unlike the traditional approach, where you do a snapshot based assessments of your compliance into continuous enforcement and visibility across the infrastructure. And you saw that with the types of report that we can provide, and those you can put any time to get an idea of what happened. Even if you’ve done the audit, and things change, and next heavens, you come back and say, hey, look, I know exactly what happened. I know these workloads came up. These workbooks shut down and went down. And that gives you all the information that you need in regards to understanding, hey, what happened at any given moment in time. So, that’s it. We’re going to hand it back over to Michael to see if there’s any questions that the audience might have?

Michael Kopp: Sure. Yeah, we have a couple of questions from the audience, actually. But before we do that, I just want to remind everybody that you can learn more about PCI compliance: we have a paper that you can go get. In fact, there's a link in the BrightTALK interface that gives you direct access to that PDF without having to fill out a form. Also, our next webcast is coming up June 19, in two weeks, we do one every two weeks, and it is on Kubernetes network policy and Tigera together for security auditing. So let's get to the questions.

Michael Kopp: So we have a question here on the [inaudible 00:43:57] verbatim. So, some policy beta, ready and authenticated MLM & PLS, right? Is provided by his service mashes such as STLA. How do you compare the production provided by Calico Tigera, versus a service mesh, regardless what it is.

Vince Lau: [inaudible 00:44:15] complementary?

Michael Kopp: Yeah. They're very complementary. In fact, the unified policy that you saw is [inaudible 00:44:22] part of our solution. The service mesh is part of the Tigera solution. When we provide that capability, you can see that the mesh is still great in terms of providing capable security functions at the application level. We actually enhance and build on top of that, giving more security control mechanisms and finer-grained control, taking it to the next level. So they're very complementary, and the mesh, like I said, does a great job of providing application-level security.

Michael Kopp: And it can be a lot better when you combine the two solutions to get a more holistic zero-trust network security solution.

Speaker 3: Actually, I have a question. I've been sitting in on a few of these webinars over the past year or so. We talked about PCI compliance, but there are a lot of other compliance regimes, right? So what about companies having to address multiple compliance regimes at the same time? How do you deal with that?

Vince Lau: Yeah, that's a great question. So we spoke about the whole concept of using network policies and the zero-trust model to address the network security aspect of it. And essentially, this is something that you've seen Paybase do to meet the PCI compliance requirements. It's not going to be a lot different for other compliance regimes. What you see in this industry and this space is that most of them are going to call for segregation, least privilege, all these things.

Vince Lau: All these security components, in regard to network policies, you trust. They all apply to many compliance frameworks and requirements. In fact, in my past life, oftentimes you’ll be dealing with multiple. I’m pretty sure those of you on the phone when you’re dealing with one, you just won’t be different one you probably dealing with multiple, right? Something like ISO 2701. Potential GDPR. You know, obviously, PCI is very industry specific. HIPAA is very niche, industry specific, specific, so you wouldn’t have to have them together. But these things basically cut across many compliance frameworks. So you meeting this particular one for PCI and chances are you’ll be meeting the other ones for another framework that you’re working with. And the most important thing is like what Anna did, understand it well, and obviously work with your assessor, an auditor to understand the implications of binaries environment, right?

Vince Lau: Are you treating like server, the PC I expected call them server is server pod. I mean, essentially, you have to work with both your team and also the auditors who really have a firm level of common understanding around how this new technology affects. That’s been around for quite a while.

Michael Kopp: We have a question that came in earlier, actually. It said something around, can you explain what accurate logging is? You mentioned accurate logging. I understand that there are more data points beyond the five-tuple, right?

Vince Lau: Yeah.

Michael Kopp: You can go beyond the five-tuple.

Vince Lau: Correct. Yeah. So, we talked about accurate logging. This is something where you have to look at the stack and see where things have been stopped, right? The traditional five-tuple gives you information in regard to, hey, things being stopped at the network level. But if you look at a Kubernetes infrastructure and environment, things can also be stopped at the pod level. Here's where you could potentially be shooting yourself in the foot if you rely on five-tuple information.

Vince Lau: You can go to an auditor, if they say, hey, can you show me what information made it through your Kubernetes environment. And all the logs that you have, besides you say, yeah, allow traffic. But in reality, you could have proper blocking at the pod level. And you just misrepresent yourself and gotten yourself into hot water with a potential flying. So the concept of accurate logging is very important. Because, pipeline information not only gives you incomplete view of what’s going on, it also gives you an inaccurate view of what’s going on, in regard to traffic flow within your Kubernetes environment. That’s very important concept to understand.

Michael Kopp: Well, I don't see any further questions from the audience, so I'd like to thank you all for coming. And please do join us in two weeks for our next webinar. We have a contact-us form, and we also have a demo request form; if you'd like to see some of this in action, we'd love to show you. So once again, thank you for attending, and have a great rest of your day. Thank you.