Kubernetes is widely used to re-architect traditional applications. Many organizations first set up Kubernetes within their on-prem environment and then later expand to the public cloud. This hybrid environment often creates security and compliance challenges with workloads. Join this webinar to learn how to leverage universal security policy definition that works across a hybrid environment.
Michael: Hello everyone and welcome to today's webinar, Kubernetes: Securing Hybrid and Multi-cloud Environments. We're glad you all could join us. Today, our speaker is Christopher Liljenstolpe. He is the original architect behind Tigera's Project Calico, the open source version of our Tigera Secure products. He frequently speaks around the nation at nearly 60 meetups a year, educating people on networking and network security for microservices, modern applications, and Kubernetes. He also consults with Tigera's enterprise clients on security and compliance for their modern applications.
Christopher: Thank you very much, Michael. So we've had some requests from some of you to discuss hybrid and multi-cloud environments. We've been talking a lot about security and compliance in Kubernetes environments, but I'm sure most people on this call are not in a greenfield environment, right? You're not just standing up a brand new infrastructure for the first time, with no legacy environments, where everything in your estate is going to be Kubernetes on a given cloud provider or on-prem, and you don't need to worry about anything outside of the Kubernetes environment. That's a wonderful state to be in, but it's not a realistic state in most enterprises today, or even most organizations today.
Christopher: So there's a term being bandied around in the industry called hybrid environment, and sometimes it's called multi-cloud, sometimes it's called hybrid-cloud. The problem is that there are lots of different things that can mean.
Christopher: So the first thing I thought I'd try and piece out is what we mean when we talk about hybrid environments; we can then talk about how to secure them in a Kubernetes-centric manner. What does hybrid mean? And what does hybrid mean to you? We've got some examples up here.
Christopher: A hybrid, what a lot of people think of, maybe is public-private. So they have an enterprise data center, maybe a Kubernetes cluster in it, and then they have a public cloud infrastructure somewhere where they are also running, say, Kubernetes and other workloads. That might be on an AWS infrastructure or an Azure infrastructure or Google or IBM or Packet or any one of those providers out there. So that's one form of hybrid, where I've got a private infrastructure and then I'm also running infrastructure on public cloud.
Christopher: The other one is potentially running across multiple public clouds. Multiple public clouds could mean that you're running a Kubernetes cluster and other workloads in both Azure and AWS, or GCE and IBM. You could be running some of your infrastructure in a hosted Kubernetes offering like Amazon EKS or Azure AKS, and other bits of your infrastructure in self-managed Kubernetes, or other workloads in, say, ECS. That's another form of hybrid, where you're running multiple infrastructures in different public clouds. It could be different regions or different providers altogether.
Christopher: You can also have hybrid environments where you've got ... and all of these are common, but this is probably the most common one. You've got, say, a containerized environment like Kubernetes running somewhere in any of those environments we were talking about, and then you have ... I won't use the term legacy because that has bad connotations; you have heritage components. Like VMs or bare metal servers or even appliances. They could be filers, they could be Exadata racks or Oracle racks, or they could even be Z System mainframes. All of those things that exist in the enterprise, and I have customers who still have Sperry Unisys systems sitting buried in the corner somewhere.
Christopher: You have these heritage environments, and those environments provide data, critical data, that needs to be consumed by, say, a Kubernetes cluster, and vice versa. As data gets moved over to Kubernetes environments, those heritage components still need access to that data, so they'll now need access to that Kubernetes environment. So we have a number of different things: infrastructure in multiple public clouds; infrastructure in private cloud and public cloud; infrastructure that spans containerized environments and VMs and appliances and bare metal servers. All of these things are hybrid environments.
Christopher: Let's talk a little bit about what the problem there is. Today, before we have Kubernetes involved, most of these things are fairly static. Which means that you can build firewall rules: you know what the IP addresses of your VMs in AWS are, and you know what the addresses of your VMs or your bare metal servers in your private data center are, and you can build static firewall rules to provide that isolation. That's great until Kubernetes comes along. I'm not gonna go into all of the interesting challenges that a dynamic containerized environment like Kubernetes brings; you can go look at some of the earlier webinars that we've done on that topic. But suffice it to say that IP addresses are pretty ephemeral in Kubernetes. IP addresses aren't bound to a specific application or even a specific pod. They're just a resource that gets claimed and released as workloads come and go.
Christopher: So the same IP address, even in the matter of one day, might be used by four very different applications. Managing this kind of infrastructure with heritage firewalling that's mainly IP-address based is going to be incredibly problematic, and is incredibly problematic. So what do we do?
Christopher: One of the things we can do is just say, I'm going to build a perimeter firewall and I'm gonna trust everything inside that perimeter firewall to be a trusted entity inside the cordon sanitaire. We've talked in other webinars about why that might not be a wise choice from a security standpoint, and why you probably want to take more of a least-authority or zero-trust approach.
Christopher: The other option is to implement your security in a way that is integrated with the orchestration system of your containerized environment, like Kubernetes, such that as Kubernetes is driving these dynamic changes across your workloads, the security enforcement mechanisms are being automatically updated as well. That's great for Kubernetes workloads, and that's what we've talked a lot about in our webinars to date. The problem you have is that the Kubernetes orchestrator doesn't really know anything about all these other heritage components. So let's talk a little bit about the Kubernetes security model, and then we can talk about how we might be able to integrate that into some of these other environments.
Christopher: If you go to the next slide, I'm just gonna give you a walkthrough of what a Kubernetes network security policy is. In this case, for example, we have a very simple security policy we want to enforce in our infrastructure, which says that PCI-certified workloads can talk to other PCI-certified workloads and PCI-certified containerized databases, and that non-PCI workloads can't talk to those PCI workloads and databases. We're gonna create the network policy, and the first slide just shows a YAML fragment in our commercial platform. You can do this via the GUI or use these YAML files; Kubernetes is very YAML driven. So you have a network policy model.
Christopher: The first thing we do is we say, [inaudible 00:09:34], the next thing we do is we give that policy a name. We potentially apply it to a namespace, which is a way of enforcing tenancy or application segregation in Kubernetes. Then the next thing we say is that this policy, which we're gonna get to in a second, is applied to any workloads where PCI is equal to true. One of the things Kubernetes has is a concept of metadata labels that get attached to workloads and services and deployments. Everything in Kubernetes is managed by metadata. So in this case you will have defined a workload that, let's say, is a PCI data processing piece of code. In the Kubernetes definition of that application component you will have attached the label PCI is equal to true, i.e., this is a PCI-certified workload.
Christopher: It doesn't matter; you could have 50 different types of workloads, all with PCI is equal to true labels on them. Each of those could exist not at all, or one time, or a million times in your infrastructure, but any instances of applications with that label attached will also carry that label. What the spec selector line says is that anything labeled PCI true will attract this policy. This policy will apply to it.
Christopher: So let's look at what this policy says. It says that anything labeled PCI is equal to true, anything with this policy applied to it, will allow inbound traffic from any other thing in the infrastructure that is labeled PCI is equal to true. Now, you'll notice I said any other thing, not any other Kubernetes pod. In most Kubernetes environments, that would be a Kubernetes pod or a service, but we're gonna talk about some of the things we can do with Calico and our commercial product Tigera Secure that extend that a little bit. We'll get to that in a little bit.
Christopher: So we're basically gonna allow anything inbound from something else labeled PCI is true, and similarly we are going to make sure that we can only talk out to other things labeled PCI is equal to true. If you deploy this, then each PCI workload has belt-and-braces security, where it is prevented from talking to anything else that is not PCI enabled and will disallow any traffic from anything that is not PCI enabled. So we now have a fairly secure mechanism by which we control PCI to non-PCI traffic.
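The policy being described could be sketched as a Calico v3 NetworkPolicy like the following. The policy name, namespace, and exact label key are illustrative assumptions, not the fragment from the slide:

```yaml
# Illustrative Calico NetworkPolicy: PCI workloads may only exchange
# traffic with other PCI workloads. Name/namespace are assumptions.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: pci-isolation
  namespace: payments
spec:
  # Attract this policy to anything labeled pci == "true"
  selector: pci == "true"
  types:
    - Ingress
    - Egress
  ingress:
    - action: Allow
      source:
        selector: pci == "true"    # inbound only from PCI workloads
  egress:
    - action: Allow
      destination:
        selector: pci == "true"    # outbound only to PCI workloads
```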
Christopher: Again, you'll see there are no IP addresses here. There are no specific mentions of pods or anything else. It's basically just saying that if there's an object with this label on it, that controls the behavior of this policy. So let's talk some more about some of the other interesting things we can do with policies.
Christopher: What we were just showing you was a network policy. It's very closely related to Kubernetes network policy. That controls mainly layer three, layer four connectivity, based on an IP address and port combination. We've extended this recently within Calico and Tigera Secure to include some layer five through seven capabilities as well. We worked very closely with Istio to do this, and we've done a lot of work in the Istio community to enable what I'm gonna show you here. In a couple of weeks' time we're actually going to have a conversation about Istio policy. So if you're interested in layer five through seven policy, please tune in. If I haven't bored you to tears in this webinar, give me another chance to do so in the Istio webinar on the fourth or fifth, right Mike?
Michael: On the fifth, yes.
Christopher: The first thing we can do is use a concept in Kubernetes called service accounts. It's sort of like assigning a user, a synthetic user, to a workload. In this case what we're going to say is, anything labeled app details will attract this policy, and this policy will basically allow it to talk to anything on the egress side, but will only allow inbound traffic from other workloads that have the service account, the synthetic user, with the name of product page. The interesting bit is, if you have Istio installed as part of this, that service account instruction will also force a mutual TLS authentication of that flow. So if you do this and you're running Istio, what you'll then get is mandatory mTLS authentication of the traffic. So the traffic is TLS encrypted, and we'll do client-server authentication using TLS certificates, and that will all cooperate with the Istio infrastructure.
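A sketch of what that service-account rule might look like as a Calico policy; the names follow the Istio Bookinfo example the speaker appears to be referencing, but the exact identifiers are assumptions:

```yaml
# Illustrative: "details" workloads accept ingress only from endpoints
# running as the product-page service account; with Istio enabled,
# these flows are also mutually TLS-authenticated.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: details-from-productpage
  namespace: default
spec:
  selector: app == "details"
  ingress:
    - action: Allow
      source:
        serviceAccounts:
          names:
            - product-page     # assumed service account name
  egress:
    - action: Allow            # egress is left open, as described
```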
Christopher: Similarly, we can group service accounts. We can say there are a number of service accounts, Bob, Alice, Jane, Tom, and all of those service accounts might have a label attached to them, ratings=reader. It now allows you to use labels to group service accounts.
Christopher: ...group service accounts can say anything with the label of ratings=reader, again, will drive TLF, MTLF certification and only allow traffic from endpoints that are asserting that service account. Finally, we can start combining these things and we can say that we're only going to allow traffic from pods, from endpoints that are labeled app=productpage and have service account ratings=reader. What we've now done, is we've now done defense in depth and we're now enforcing this at layer five through seven with the service account and MTLS as well as at the network layer with the actual pod identifier, end point identifier labeled app=productpage. Finally, we can build on this even more and say, "Not only are we going to check at later three, later four, and the TLS certificates...", we're also going to say that if you pass all of that, the only thing you're going to be able to do is do an http "GET" on a URL that's prefaced by ratings."
Christopher: So we now have quite a bit of defense in depth. This policy is much more than layer three, layer four. It's layer three all the way up through layer seven, and that includes TLS sessions. Now, this is pretty much a refresher. We've gone over some of this before in webinars. This is a bit of a reminder about how network policy and application policy work, primarily in a Kubernetes environment. But again, the one thing you'll notice, and I'll keep on harping on this, is there's nothing here that identifies an IP address or a specific entity, or identifies pods specifically. This is all driven by metadata, and Kubernetes, for Kubernetes workloads, maintains an inventory of what workloads exist in the system, what their network addresses are, and what labels are attached to them. That can be dynamically adjusted, but Kubernetes maintains that mapping, and our Calico open source project and Tigera Secure commercial product use that mapping to actually build filtering rules that enforce the policies that you see here.
Christopher: Let's think a little bit about policy. You can now start thinking about policies differently; one of my colleagues, Cody, did a really nice webinar on this a while ago, on something called micro-policies. The traditional way is to say, "This thing, this IP address, is an X." An X needs to do all these things, and most of those things overlap with what Y and Z do as well, but for each of X, Y, and Z, I rewrite those rules saying what's allowed and what's not allowed: this is allowed to talk to LDAP servers, this is allowed to talk to clients, etc.
Christopher: Another way of doing this is leveraging those labels and saying, maybe labels say you're an LDAP producer or an LDAP consumer, and then there's a policy that says if you're labeled an LDAP producer or an LDAP server, you allow traffic in from things labeled LDAP consumers or LDAP clients, and it doesn't matter what you are. You could be a database server, you could be a front end, you could be whatever. If you have those labels, those policies will be attracted to you. So you now start thinking about policies in terms of behaviors or personalities. That's another very powerful construct here, but again, all of these constructs are driven by metadata labels. So I'm gonna go a little bit further on how the enforcement's done, then talk a little bit about how we can apply this to things that aren't necessarily Kubernetes, and then I'll go into how you can apply this across multiple clusters of Kubernetes environments; these build on each other.
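As a sketch, the behavioral "LDAP producer/consumer" idea could be expressed as a single cluster-wide policy keyed entirely off labels; the label keys and ports here are assumptions:

```yaml
# Illustrative "personality" policy: anything labeled as an LDAP server
# accepts LDAP traffic from anything labeled as an LDAP client,
# regardless of what the workload otherwise is.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: ldap-clients-to-servers
spec:
  selector: ldap == "server"
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: ldap == "client"
      destination:
        ports: [389, 636]      # LDAP and LDAPS
```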
Christopher: So the first thing you need to do is think about, as I sort of said before, how we make sure that we're applying these policies in the right places, and you can think of this a little bit like multiple attack-vector protection, or multiple authentication capabilities. The way we enforce these policies is we have a label-based policy model, and those labels can be driven by cryptographic identities, i.e., service accounts driving mTLS; by network identity, the current ephemeral IP addresses and ports of pods; and by other identities within the pod, like behavior. That drives the policy model that decides: is this pod, or is this endpoint, something that should be allowed to communicate or not? Detection and enforcement are done both within Kubernetes space, the pod namespace, as well as in the underlying host, so there are multiple places where this enforcement is done to make sure any single compromise can't violate your security environment.
Christopher: Where we do this enforcement, and this starts to get at what we can do for things other than Kubernetes, is within the Kubernetes pod itself, and between the Kubernetes pod and the underlying host operating system, so in the underlying kernel of the host. That host could be a physical server or a VM. We can also do the enforcement on the host's physical interface, not just on the virtual interface of the pod, so on the worker node itself, or on any host itself. That gives us defense in depth, but it also opens up some interesting capabilities. To date, I said that Kubernetes maintains an inventory of the endpoints in the system that Kubernetes manages, and the labels, the metadata, attached to them. However, with Calico and Tigera Secure we've extended this a little bit. One of the things we've added is the ability to put into our inventory, which is a copy of the Kubernetes inventory, something called a host endpoint.
Christopher: Now, what a host endpoint basically says is, "There is a host out there somewhere," and in this case its name is ServerR10P7.eth0. Host endpoints are actually interface endpoints. If you have multiple ethernet interfaces in that host, you would have multiple host endpoints for that host. And we basically say that on eth0 on ServerR10P7 there are some IP addresses that we expect to see on that host, there is an interface name we want to bind to, and there's a node name that we're going to show within our data store. But more importantly, I can now attach the same labels that I attach to Kubernetes endpoints to those host endpoints. So I can now, for example, say that this thing is of type production and is a DB server. I'm now attaching labels to servers that aren't managed by Kubernetes. These could be VMs, these could be bare metal servers, these could be desktops, these could be basically anything that I can have a network entity for. So that's what a host endpoint is, and this is, mind you, a Calico and Tigera Secure feature. This is not something available in standard Kubernetes.
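The host endpoint from the slide might be written roughly like this; the IP address and label values are assumptions, since the slide isn't reproduced here:

```yaml
# Illustrative Calico HostEndpoint for eth0 on a server that
# Kubernetes does not manage.
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: serverr10p7.eth0
  labels:
    type: production
    role: db-server
spec:
  node: serverr10p7        # name the host is known by in the datastore
  interfaceName: eth0      # host endpoints are per-interface
  expectedIPs:
    - 192.0.2.10           # assumed address for illustration
```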
Christopher: And it looks like we have a question. What's the question, Mike?
Michael: Can the network security policies be tied to SGT tag names? Security group tagging is enforced by next-gen firewalls. Or would the SGT tag names and the labels applied in the network policies be separate lists that would be separately managed?
Christopher: So the thing is, security group tag names are sort of a vendor-specific thing within the various firewall vendors. We have done integration, for example, with security groups in AWS: we have a product called Tigera Secure Cloud Edition that runs in AWS and allows you to use Amazon security groups as labels for policy enforcement and detection, and we're going to be extending that into the other cloud providers. We have not done that to date with any of the other firewall vendors. However, it would probably not be all that difficult to write some bridging logic to take whatever that firewall can kick out saying what endpoints belong to what security groups, and use that to generate the appropriate YAML fragments that would then get stored into the key-value store that I'm showing here. But whoever asked that question, if you have specific use cases, we'd love to hear from you. We're always looking to see what else we can do within the space.
Christopher: So this gives us the ability to attach labels to specific IP address and interface pairs, whether they're managed by Kubernetes or not, in the underlying infrastructure. The next thing we can do is handle things that we want to use labels for but that we can't actually run Calico or Tigera Secure on. They might even be things like address ranges. So, for example, we'll take the use case here of a network policy you might want to deploy to embargo a country.
Christopher: So let's say there's a country called Presnovia, and it's run afoul of financial controls and is to be embargoed by the country that's annoyed with Presnovia; you can't do business with them, you can't accept traffic from them. What you can now do, and this is showing a GUI interface in our commercial product to attach labels to IP address ranges, is define a network set. You can also do this with a YAML file that looks somewhat like the host endpoint YAML I just showed you. In this case, you identify a set of one or more CIDR ranges. They can be v4 or v6, and by the way, host endpoints can be v4 or v6 as well.
Christopher: And then we can attach labels as well. Let's say we attach a label to this called embargo = true. So basically what we've then said is, whenever we see traffic in our policy enforcement environment going to these address ranges, we know that the label embargo = true is attached to those IP addresses, and therefore we can have a policy that says you can't connect to or receive traffic from things labeled embargo = true.
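A sketch of the embargo example as a network set plus a matching deny-and-log policy. The CIDRs, names, and policy order are assumptions; in a real deployment you would slot this in alongside your allow policies:

```yaml
# Illustrative GlobalNetworkSet carrying the embargo label...
apiVersion: projectcalico.org/v3
kind: GlobalNetworkSet
metadata:
  name: embargoed-ranges
  labels:
    embargo: "true"
spec:
  nets:
    - 198.51.100.0/24      # assumed embargoed ranges
    - 203.0.113.0/24
---
# ...and a policy that logs, then denies, traffic to or from it.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: block-embargoed
spec:
  order: 10                # evaluated before later policies
  selector: all()
  egress:
    - action: Log
      destination:
        selector: embargo == "true"
    - action: Deny
      destination:
        selector: embargo == "true"
    - action: Allow        # everything else falls through this policy
  ingress:
    - action: Log
      source:
        selector: embargo == "true"
    - action: Deny
      source:
        selector: embargo == "true"
    - action: Allow
```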
Christopher: So instead of writing those IP addresses into all your firewall rules, you basically attach a label to them, and then you have a single policy that says anything labeled embargo=true, we're going to block traffic to or from it and log it, so we can prove to the regulators that we are indeed adhering to the legal requirements. We now have a way of attaching labels to individual endpoints as well as to ranges of addresses. Another example I see very frequently here, using network sets: I might have a NOC or management network somewhere in my organization, and the endpoints on that network should be the only things allowed to SSH into the servers or make changes in various pieces of infrastructure. I can then have a network policy that says I'm only going to allow SSH into things labeled production server from things labeled NOC Net. Then any SSH coming in from something on NOC Net will be allowed into production servers, but nothing else. There are many uses for these network sets and host endpoints.
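The NOC example could be sketched similarly, assuming the management network is labeled through a network set; the label keys and values here are assumptions:

```yaml
# Illustrative: only endpoints on the NOC management network may SSH
# into production servers.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: ssh-from-noc-only
spec:
  selector: role == "production-server"
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: network == "noc-net"   # label on the NOC network set
      destination:
        ports: [22]                      # SSH
```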
Christopher: The thing to keep in mind is that while Kubernetes manages the Kubernetes endpoint-to-label mappings, and that's all done dynamically, host endpoints and network sets are maintained external to an orchestrator, so it is your responsibility to make sure they are current: removing them when those servers are gone, or when the network set changes, etc. This can also be automated. We've seen customers, for example, use their Terraform or Puppet or Chef or Ansible or SaltStack environments, whatever they use to build out their servers, to make a call to our API as part of that process to create the host endpoint, or remove the host endpoint if they're removing the server, or change the labels, etc. It can be automated, but this is something that is going to have to be done from your side of an automation standpoint, not from ours.
Christopher: How does this actually work in practice? If we go to the next slide, the red stop signs here are places where policy is enforced. As I showed earlier, if you look at the left side, I've got two orchestrated hosts: orchestration host B is running OpenStack, the VM orchestration host, and orchestration host A is the Kubernetes host. We have Container X and VM Instance Y there, and network enforcement happening both on the physical interface as well as on the instance itself. Then we also have some bare metal servers. These could be VMs, cloud instances, or bare metal servers that we've created host endpoints for, and we've said that eth0 on Cloud Instance C is also labeled foo consumer.
Christopher: It's also labeled foo consumer, and that Baremetal Host D, is labeled foo producer. You then install Calico, or Tigera Secure on those hosts, just like it would be installed on a Kubernetes host, or on an OpenStack host. Those containers are labeled foo consumers, as well, so Container X and VM Y, Instance Y, are also foo consumers.
Christopher: And then, if we have a policy that says foo consumers are allowed to talk to things labeled foo producers, and it's an egress policy, then that policy will be enforced for Container X, for VM Instance Y, as well as Cloud Instance C and Baremetal Host D. So we have book-ended policies: only allowing egress to, in this case, Baremetal Host D from Container X, Instance Y, and Cloud Instance C, and Baremetal Host D is only allowing traffic from those three endpoints.
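The egress half of that book-ended foo policy might be sketched like this (label keys are assumptions; the matching ingress policy on the producers would mirror it):

```yaml
# Illustrative egress policy: foo consumers may only send traffic
# to things labeled as foo producers, whether those are pods, VMs,
# host endpoints, or network sets.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: foo-consumer-egress
spec:
  selector: foo == "consumer"
  egress:
    - action: Allow
      destination:
        selector: foo == "producer"
```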
Christopher: But we've also created a network set, in this case 198.51.100.0/24, and we've attached a foo producer label to that as well. So maybe there's a cluster of servers in there that are legacy appliances and we can't run Calico on them; you can still have a foo producer label there. The network set cannot protect itself, but the egress policies that are being enforced on Container X, Instance Y, and Cloud Instance C will only allow traffic to Baremetal Host D and the network set, so nothing else that's protected by Calico or Tigera Secure will be allowed to talk to that network set.
Christopher: We can take this a little bit farther by installing an intermediate Linux box, or a cluster of them, in your network, and installing Calico on that as well in a mode we call gateway mode, and in that case we can apply policies there. As long as traffic is directed through those boxes, we can provide filtering and policy enforcement even for those endpoints where we can't install Calico or Tigera Secure natively. So, though I don't show it here, I could have that gateway box sitting in front of the network set, and then that would be doing the policy enforcement for that network set as well. The network set would now have an ingress policy in front of it, as well as relying on the egress policy coming out of the other endpoints.
Christopher: So what we've now done is basically taken the same dynamic, easy-to-understand, label-driven Kubernetes policy model and extended it to things that aren't in Kubernetes, that Kubernetes is not natively aware of, and they're now part of the same security regime as your Kubernetes environment.
Christopher: So, this handles a number of the hybrid use cases, where you've got Kubernetes talking to legacy or heritage environments, be those VMs, be those bare metal servers, even potentially, in some cases, services being offered by public cloud providers like RDS, et cetera, and that's our Tigera Secure Cloud Edition product.
Christopher: Finally, there's another mechanism we can use, and it covers the part of hybrid that we haven't talked about yet: what if we have multiple Kubernetes clusters? With multiple Kubernetes clusters, you would have a separate Calico or Tigera cluster mapped one-to-one to each of those Kubernetes clusters, and that Calico/Tigera cluster basically stores two bits of data in its datastore, its control plane store. One is what I'll call inventory, endpoint identity: the hosts and the workloads and the labels associated with them, much like the host endpoints and network sets, and what Kubernetes natively tells us about in each cluster.
Christopher: It also stores the intent, the policies. Bobs can talk to Alices, but Freds can't talk to Alices. What we do in Tigera Secure, and this is a feature in our commercial product, not in the open source project Calico, we allow you to federate the inventory, that endpoint identity data, between clusters. So now, if you have a policy that says Bobs can talk to Alices, or LDAP producers can receive traffic from LDAP consumers, the policy enforcement in Cluster 1, where say there is an LDAP producer, will know about the LDAP consumers not only in Cluster 1, but also the LDAP consumers in Cluster 2, and will allow all the LDAP consumers, both in Cluster 1 and Cluster 2, to talk to the LDAP producer in Cluster 1, or vice versa.
Christopher: So, this is federation. This is a new feature in Tigera Secure, and this allows you to build a fairly complex set of Kubernetes clusters to have federation of identity between. And another interesting point to this is that federation will also carry over the network sets, and the host endpoints that you might have put in on either one of those clusters. So, this is now not just for Kubernetes, this is also for the host endpoints and network sets that you have identified.
Christopher: So, we now have the ability to cluster both heritage and Kubernetes environments, and have them all share the same concept of identity and labels across all of the clusters in your infrastructure.
Christopher: I mean, people tell me I have a loud voice, but I don't think it's going to be loud enough for whoever made that question to be hearing me directly as well as through the speaker, but there might be people here in this office who would disagree with that statement.
Christopher: So, basically, you now have the ability to have a Calico or Tigera Secure number of those clusters … Again, public cloud and private cloud, supporting legacy workloads, or heritage workloads and Kubernetes clusters, having them all share the same concept of identity for all workloads, and then applying policy to those as appropriate.
Christopher: If we go to the next one: say, for example, I have a policy in Cluster 1. This is an important point to make: we do not federate policy. In this case we just have a policy in Cluster 1 that allows LDAP servers to receive traffic from LDAP clients. So if there's an LDAP server in Cluster 1, and we've federated Cluster 1 and Cluster 2, the LDAP server in Cluster 1 will allow traffic from LDAP clients in Cluster 1 and from LDAP clients in Cluster 2. But if there were an LDAP server in Cluster 2, the policy isn't federated, so that LDAP server in Cluster 2 wouldn't allow traffic from either the LDAP client in Cluster 2 or the client in Cluster 1. In order to have that policy, you would need to load it into Cluster 2 as well.
Christopher: Since policies change substantially less frequently than identities, we don't see this as a real problem. What we needed to federate was the identities, because identities can change very, very frequently; they're changed automatically by Kubernetes, for example. Policy, on the other hand, is something a human is driving. A human makes the decision to create a policy, so that happens on human time, and you're probably using some form of CI/CD deployment model, which can just as easily deploy a policy into multiple clusters as into one.
Christopher: The reason we didn't federate policies is that different clusters might exist in different regulatory domains, or might represent different application spaces, et cetera, and you may not want the same policy in each cluster. So, you should decide which policies belong in which clusters, based on the environment they're in or what those clusters represent. What we federate is the identities, so that if a policy in a cluster relies on a specific set of identities, that cluster will know about those identities, no matter which cluster they originate in.
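The LDAP example might look like the following Calico policy. Because policy is not federated, you would apply the same manifest to Cluster 1 and Cluster 2 separately, for instance from the same CI/CD pipeline; the label names and ports here are illustrative assumptions:

```yaml
# Allows LDAP servers to receive traffic from LDAP clients.
# Apply this to EACH cluster where you want it enforced: identity is
# federated, but policy is not.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-ldap-clients
spec:
  # Applies to endpoints labeled as LDAP servers in this cluster.
  selector: role == 'ldap-server'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      # With identity federation, this selector also matches LDAP
      # clients whose identities were replicated from the other cluster.
      selector: role == 'ldap-client'
    destination:
      ports: [389, 636]    # LDAP and LDAPS
```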
Christopher: And, as I was identifying earlier … these are the configurations that turn on federation between multiple clusters. We can go into the architecture of how this works, but basically we stand up something that watches for changes in a given cluster and replicates them to another cluster, as far as identity is concerned.
Christopher: And this is how it gets configured, depending on which datastore you're using. So, at that point, I think we've now covered the concept of clusters in Kubernetes environments, or, to be more specific, Calico or Tigera Secure environments, in multiple locations: public cloud, private cloud, et cetera. And we've talked about unifying the Kubernetes view of the world and the heritage view of the world in a given location within the same Calico or Tigera Secure cluster.
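For a sense of what that per-datastore configuration looks like: in Tigera Secure, each cluster is told about its peers with a RemoteClusterConfiguration resource pointing at the remote cluster's datastore. This is a rough sketch assuming a Kubernetes datastore; the exact spec fields vary by product version, and the name and kubeconfig path are hypothetical:

```yaml
# Tells this cluster how to reach the remote cluster's datastore so it
# can watch and replicate that cluster's workload identities locally.
apiVersion: projectcalico.org/v3
kind: RemoteClusterConfiguration
metadata:
  name: cluster-2
spec:
  datastoreType: kubernetes
  # Credentials for read access to the remote cluster's API server.
  kubeconfig: /etc/tigera/kubeconfig-cluster-2
```

A matching resource would be created in Cluster 2 pointing back at Cluster 1, so identities flow in both directions.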
Christopher: So, we've now given you the ability to harmonize heritage environments with Kubernetes, or with the other containerized environments we support, like OpenShift, Docker EE, EKS, AKS, IKS, GKE, et cetera; when I say Kubernetes, that's interchangeable with any of those. And we've given you the ability to take that harmonized view and federate it across multiple instances of that harmonized view, across different infrastructure providers.
Christopher: So, with that, I've sort of exhausted … I haven't exhausted this topic, but I have exhausted it at this level of detail. So, I'd love to hear any questions you've got; otherwise, I'll turn it back over to you, Michael.
Michael: Yeah, great. We're now open for questions, based on our presentation today or anything else you may have. While you think about the questions you want to ask and type them into the interface, I just want to let you know about our next webinar … We do a webinar every two weeks, and the next one is Tuesday, December 4th-
Christopher: Which is more than two weeks away.
Michael: Which is a little bit more than two weeks away, but we get two a month in, so it's essentially two weeks. This is marketing. Let me market, please.
Christopher: We've got marketing here, and you've got CTO here, right? So I keep marketing honest, or try to.
Michael: Yeah, yeah. So, the next webinar is entitled "Enabling Zero Trust Security With Istio." We know that Istio is a hugely hot topic, as are service meshes in general. Because you've attended this webcast, there is no signup for you; all you have to do is go there and say you want to attend. There's no form to fill out, because you're already in. So, there's the link; it is in our channel, if you want to sign up for that webinar or look at all the past webinars we have done. There are a lot of great topics.
Michael: So, let's see here. We actually have not received any further questions. We're not the kind of company that sets up canned questions, so if you don't have any further questions, we'd like to thank you all for attending, and have a great day.
Michael: A copy of this presentation will be made available to you via email afterwards. This webinar will be posted immediately to BrightTALK, Tigera.IO, and YouTube, so if you want to review it or run through it again, you can.
Michael: So, thank you once again for attending, and we will see you next time.