Securing Kubernetes Applications in Google Cloud with Tigera


Tigera Calico was recently embedded into Google GKE On-Prem, and we will demonstrate how to implement security controls on GKE. Don’t miss this webinar, where we will be sharing some common network security challenges in the Kubernetes environment. In addition, we will explore the enterprise-grade Calico features provided in Tigera Secure, which enable enterprises to add network security support in hybrid cloud environments.

Michael: Hello everyone, and welcome to today’s webinar: Securing Kubernetes Applications in Google Cloud with Tigera. I am pleased to introduce today’s speaker, Amit Gupta, Vice President of Product Management at Tigera. Amit is responsible for the strategy and vision of Tigera’s products and leads the delivery of Tigera’s roadmap. He has expertise in building software products and services across cloud security, cloud-native applications, and public and private cloud infrastructure. He holds an MBA from the Haas School of Business and a Bachelor of Technology in Mechanical Engineering from IIT Kanpur.

Amit Gupta: Thank you, Michael. Good morning and good afternoon, everybody. I am looking forward to the discussion today: security and Kubernetes, two of my favorite topics. Here is the agenda for today: we will start with a very quick overview of network security and how you should plan and design network security for Kubernetes. Then I’ll walk you through our open source project, Tigera Calico. We will then jump into a hands-on tutorial where we will run a full demo on a cluster running in Google Kubernetes Engine. Then we’ll close out with the advanced network security scenarios that Tigera Secure addresses.

Let’s jump right into this. Before I go into the details of network security, I want to start by giving everybody a quick overview of Tigera. Tigera has been around for a few years, and we are currently the market leader when it comes to zero trust network security and compliance for Kubernetes platforms. Our flagship project, Project Calico, today secures more than 80,000 clusters across the globe, and that is only partial phone-home data; we make it easy for our users to turn off any kind of phone-home reporting. We are also the co-lead of the Istio Security Working Group and are really actively engaged there. It’s probably fair to say that Calico and Tigera technologies today are the de facto standard for network security.

I can very confidently say that there is probably nobody out there on the planet running production clusters on Kubernetes who is not either already using our technology for network security or has it on their roadmap. Another testament to that is our customer base, which today touches all segments of the market: major enterprises in financial services and manufacturing, some of the largest SaaS providers, and telecom companies. We have a whole set of media, gaming, and entertainment companies running their production applications on Kubernetes and securing them through Calico. And last but not least, there is obviously a whole bunch of software and technology firms who have relied on us for network security.

Now, it’s not just end users who have been using Tigera products for security; Tigera products have also been embraced by all major cloud providers. If you look at the managed Kubernetes offerings, including Amazon EKS, Microsoft Azure, Google GKE, where we will spend most of the time today, and the IBM Cloud Kubernetes Service, they have all embraced or embedded Calico as part of their network security solution. And if a user is deploying a commercial distribution, either on-prem or in the cloud, Red Hat OpenShift and Docker Enterprise for Kubernetes both come with Tigera batteries included in the deployment.

Today we get to spend most of the time talking about how we can leverage Tigera products on Google Cloud, specifically the GKE offering. Now, I’m very excited to share with you our announcement from last week at Google Cloud Next, where Tigera was included as the embedded technology in Google Cloud’s Anthos offering for GKE On-Prem. We’ve had a long-standing relationship with Google for the GKE offering, and they’ve now chosen Calico to be the baked-in technology for GKE On-Prem as well. Here is a quote from one of their lead product managers for networking, with whom we work very closely.

So, what I’m going to do today is walk you through some of the key principles for network security, and then we’ll go to a tutorial on GKE to do some hands-on demo there. Before we go into the specifics of network security, it’s probably worthwhile to look at what we are seeing with respect to application trends in the market. You are probably experiencing the same within your respective organizations. It’s fair to say that today’s modern applications are built and deployed vastly differently from how things were five, six, ten years ago. Big monolithic applications are now being decomposed into many, many small microservices.

Now, that has really different implications. Once you decompose a monolithic application into smaller microservices, these microservices are obviously stateless, ephemeral, and highly dynamic. But more important is that the communication between the various components of that large monolithic application, which typically happened as procedure calls, library calls, or inter-process communication, is now communication that happens over a network.

So, from that perspective, if you think about the attack surface of your application, it is, quite frankly, a lot larger. In the specific illustration on the slide, an application with four major components is decomposed into four different microservices, and all of these services are communicating over the network. So if, in the past, you were programming a security model where network security was done around the entire application, you now need to design that security model so that each of these individual microservices has policy enforcement. Policy enforcement done on each individual microservice gives you a very controlled environment and a better security posture.

Now, if you’re at the early stages of your Kubernetes journey and you are just starting to define the network security model for it, I have a few key points to call out, what I call the key building blocks for network security and compliance. This is where we will start with some of the basics, and towards the end of the presentation today we will go into some of the more advanced capabilities. The first thing that you want to think about and design for is identity for your workloads or your microservices in this environment. How are you going to identify the workloads? How are you going to authenticate them?

Typically, in a traditional model, most folks have done that relying on IP address-based information. Now, we all know that in a Kubernetes world, that is not a model you can rely on. So, think about what aspects you would use to identify the workloads in a Kubernetes cluster. You can use Kubernetes metadata constructs, or even go to a stronger security posture where you’re using certificate-based identity. We’ll talk more about that in the second half of the presentation.

The second aspect is that you run your workloads, your clusters, and your namespaces in what I call a default-deny model. Fundamentally, what that means is that you are essentially going to whitelist all communication, all traffic going in and out of a service. It is super important that you bake that assumption into your security design because, as we all know, most security breaches come from inside your own network. If that is indeed the case, you have to work in a model where you’re not relying on security based on a particular pod or workload sitting on a trusted segment of your network. You have to whitelist everything, and that’s the fundamental building block for a good network security posture.
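To make that concrete, here is a minimal sketch of a namespace-wide default-deny policy using the standard Kubernetes NetworkPolicy API; the namespace name is illustrative, not from the webinar:

```yaml
# Minimal default-deny sketch; the namespace name is hypothetical.
# The empty podSelector matches every pod in the namespace, and listing
# both policyTypes with no ingress/egress rules whitelists nothing.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

With this in place, every flow a service actually needs has to be explicitly whitelisted by a further policy.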

The third aspect that you should design for is automating the security model as part of your deployment. As we know, deploying Kubernetes workloads is all done in an automated fashion as part of your CI/CD pipeline. Make sure that the security model, the network policies you’re going to create, are all part of that deployment pipeline. One of the key aspects here is that these policies and security controls are deployed in an automated fashion. You don’t want to fall back on your traditional models where there used to be tickets or manual processes for creating security policies. Make these security controls and policies part of your CI/CD pipeline.
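As a sketch of what that can look like, here is a hypothetical pipeline step (GitLab-CI-style YAML; the stage name and paths are illustrative, not from the webinar) that applies the network policies alongside the application manifests on every deploy:

```yaml
# Hypothetical CI/CD deploy step: policies are versioned with the app
# and applied in the same automated step, never via manual tickets.
deploy:
  stage: deploy
  script:
    - kubectl apply -f k8s/policies/   # default deny plus whitelist policies
    - kubectl apply -f k8s/app/        # the application manifests themselves
```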

Then, last but not least, make sure you have granular visibility inside your clusters, into the communication. You should be auditing all traffic, all policy enforcement, all service usage and access logs, and so on. It’s super critical for you to have that visibility because, as you advance on your Kubernetes journey, you will probably be running some mission-critical, regulated workloads there, and it’s a critical requirement for meeting any compliance benchmarks or frameworks that you or your organization may be subject to. This is a very high-level but quite foundational set of building blocks for how you should think about security. There are obviously details and components that we’ll cover, but definitely plan for these things as early as possible in your design cycle.

Once we have these building blocks, I’m going to talk about how you can implement them using Tigera products, and we’re going to specifically focus on Google Cloud infrastructure. Tigera has two main offerings to solve these problems. The first is Tigera Calico, which is our open source tool, and we’ll talk more about that. The second is Tigera Secure Enterprise Edition, which essentially allows you to do zero trust network security and compliance across on-prem or cloud infrastructure and has advanced security capabilities. We’ll talk about both products today, but let me first start with Calico.

As I was mentioning earlier, Calico has been around for quite a few years, I believe about four years. Calico can also be used for virtual machines or bare metal servers; we have quite a few users who apply the same level of fine-grained access control that they use in network policies across their legacy infrastructure. With that, what I would like to do today is jump to a live screen. We will first set up a GKE cluster. For that, I’m going to my Google Cloud Console; I’m sure some of you have already used it. One of the key aspects here is what you do as you are setting up your GKE cluster, and we’ll walk through that right now. Let me zoom in a little bit so that folks can see this.

This is the standard Google GKE screen. We’ll start with a standard cluster; it comes with the name pre-populated, and I’m not going to change that. You can choose various different versions, but we’ll go with the default here. I’m going to set this to a two-node cluster and go with the default options. Now, this is where it is important that you click on this tab, where you will be able to choose some advanced options around availability and network security. We’ll leave the availability and the maintenance window as is, but then I want to scroll down to the networking aspect. We’ll keep using the default network, and this is where you want to make sure that you have enabled network policy.
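As an aside, if you prefer the CLI to the console, the equivalent cluster creation looks roughly like the following sketch (the cluster name and zone are illustrative); the --enable-network-policy flag corresponds to the checkbox discussed here:

```sh
# Create a two-node GKE cluster with network policy (Calico) enabled.
# The cluster name and zone are illustrative.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 2 \
  --enable-network-policy
```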

The moment you click this checkbox as you’re setting up your cluster, you have Tigera Calico embedded in it. I’ll click here so you can get to the Project Calico website as well and see all the documentation, but the main thing is to make sure you’ve enabled network policy. Go ahead and take the default options for security, and then we’ll scroll down. I’m going to skip some additional features, but you can see here that you can create a Kubernetes dashboard if you want a user interface for managing the workloads, and you can also enable Istio for service mesh capabilities. We’re not going to talk about that in this webinar, so go ahead and click create. Typically, it takes a few minutes to get the cluster going; however, for the purposes of the demo, I already have a cluster created.

What I want to do next is go through a simple tutorial on how to do network policies using Calico on GKE. These links, by the way, are included in the slides. If you go to the Project Calico website, under resources and documentation, there are quite a few simple tutorials available. I’ll skip the first one, which I think is really simple, and we’ll do the Stars policy tutorial instead. This is an example of a simple application that has … let me see if I have it running, there you go. I have the application running in the cluster.

There’s a management UI that you’re seeing for the application, and there’s a client service, a frontend service, and a backend service. Those are illustrated in this picture as C for the client, F for the frontend, and B for the backend. Now, in this namespace, everything is running in a default-allow mode, and that’s not a very good security posture. You want to run the namespace in a default-deny model. As we talked about earlier, you want to whitelist all traffic and then specifically allow only the traffic that is intended in the namespace. In this case, that means the client is allowed to talk to the frontend, and the frontend is allowed to talk to the backend. We’ll just go through a quick demo of that.

Now, to do that, I’m going to use the Google Cloud Platform console’s connect option. This is where you can open the Cloud Shell, and just so that you can see it, I’ll open the Cloud Shell in a separate window. What this does is give me a shell with kubectl already loaded. We can just do a kubectl get pods --all-namespaces to make sure everything is looking good. Then we’ll start implementing some of the policies. It looks like everything is in good shape. Let’s do the first set of policies, in which we are going to implement isolation. What this essentially does is, for the stars namespace and the client namespace, implement a default deny.
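For reference, the commands for this step look roughly like the following sketch (the manifest URLs are elided here; the exact ones are on the tutorial page):

```sh
# Sanity-check the cluster, then apply default deny to both namespaces.
kubectl get pods --all-namespaces
kubectl apply -n stars -f <default-deny manifest URL from the tutorial>
kubectl apply -n client -f <default-deny manifest URL from the tutorial>
```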

I copied the command from my browser and I’m going to paste it here, and right away we now have default deny in our model. If I go back to my UI and refresh, I see a blank screen, because the management UI has no access to any of the pods or anything else. That’s the starting point: nothing can talk to anything here. Then we start implementing policies one by one to enable certain traffic. The first thing I want to do is allow the management UI that I was showing you to access the various services running inside that namespace. Let’s copy this command and run it inside our Cloud Shell. That’s done. It takes a moment to update, and as you can see, my UI is now allowed to access the namespace.
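The policy behind that step looks roughly like this sketch: an ingress rule into every pod in the stars namespace from pods in the management UI’s namespace (the namespace label here is an assumption; the exact manifest is on the tutorial page):

```yaml
# Sketch of the allow-ui policy: let the management UI reach every pod
# in the stars namespace. The namespace label is an assumption.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ui
  namespace: stars
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui
```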

The shaded lines essentially indicate that these components are still not able to talk to each other, which means the traffic inside the namespace is restricted as per our design. Now we’re going to enable policies that allow specific traffic. Let’s go back to our tutorial, and the first policy that we will apply, and don’t worry, we’ll go into the actual details of the policy, I’ll walk you through that, is the one that allows traffic going from the frontend to the backend. Let’s run that command here. There you go. The backend policy is created; let’s refresh our UI one more time. It takes a few seconds to update, and now you can see the traffic is indeed flowing from the frontend to the backend.

We will now go back to the tutorial, and we’re going to apply the policy for the frontend service. Let’s copy the command and apply it here. At this point, my client should be able to talk to the frontend as well. Let’s go back and refresh the UI; it takes just a few seconds. The policies, by the way, are applied instantly: within milliseconds, the policies are applied to the application, to the services. While this is refreshing, I want to make sure we go through and look at some of these policies. Why don’t we take this policy YAML and open it in another window. Let’s go over here; let me zoom in a little bit.

This is just a quick anatomy of a policy inside Kubernetes. Let’s go back to the UI, which is now updated. You can see here that from the client, the traffic is flowing to the frontend, and from the frontend, the traffic is flowing to the backend, but everything else has not been whitelisted by our policies, and hence that traffic is denied. That was a very simple, quick tutorial that walks you through network policy. There are quite a few other tutorials on the Project Calico website, including a more advanced application layer policy tutorial where you can go through layer three to layer seven policies. We won’t go through all of those here, but I wanted to give you an idea of how to create a GKE cluster with Calico enabled and then run through a hands-on tutorial showing how to enforce some of these policies.

Now, let’s switch back to the slides. Give me one moment. There we go. You saw that we created a GKE cluster with Calico enabled on it. We went through the full tutorial where we whitelisted the traffic: we identified the workloads based on their Kubernetes identity and applied segmentation rules. Now, let me walk you through the policies that were used. Specifically, there were two policies that whitelisted the traffic.

Let’s start with the one on the left, which is the backend policy. Here you define the policy name and the namespace, but you can also see in the spec where you’re defining a podSelector. Fundamentally, this defines which pods the policy applies to, in this case strictly those where the role label is set to backend. Any pod labeled with role: backend is what this policy applies to.

The next aspect is the traffic rules we are applying. In this case, we are defining ingress rules; by the way, you can define both ingress and egress rules with Calico policies. Here we are defining an ingress rule that allows traffic from any pod that has the role frontend, and we are going to allow that traffic only to port 6379 over protocol TCP. That’s the configuration for this particular backend service: accepting traffic from frontend pods.
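Putting those pieces together, the backend policy described here looks roughly like the following sketch (reconstructed from the description above; consult the tutorial for the exact manifest):

```yaml
# Sketch of the backend policy as described: only frontend pods may
# reach backend pods, and only on TCP port 6379.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: stars
spec:
  podSelector:
    matchLabels:
      role: backend            # the pods this policy applies to
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # whitelist traffic from frontend pods only
      ports:
        - protocol: TCP
          port: 6379           # and only on this port
```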

Now let’s look at our frontend policy, which again is very simple: the policy applies to all pods that have the role set to frontend. Then there are ingress rules defined that accept traffic from any pod, any workload, where the role is client, and we are accepting that traffic only on port 80.
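In the same shape, the frontend policy described here looks roughly like this (again reconstructed from the description; the tutorial has the exact manifest):

```yaml
# Sketch of the frontend policy as described: only the client may
# reach frontend pods, and only on port 80.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: frontend-policy
  namespace: stars
spec:
  podSelector:
    matchLabels:
      role: frontend           # the pods this policy applies to
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client     # whitelist traffic from the client only
      ports:
        - protocol: TCP
          port: 80             # and only on port 80
```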

Again, I highly recommend that as you define policies for your Kubernetes applications and your infrastructure, you make sure you don’t leave them open on all ports, even if your service is only listening on one port. It’s just good design practice to define the policy all the way down to the protocol and port, and to only the specific workloads that are allowed.

That was a quick tutorial on Tigera Calico and how you can do some fine-grained segmentation on your GKE cluster. Now, I’m going to switch topics to Tigera Secure Enterprise Edition. Typically, we see that as our users advance on their Kubernetes journey, they start to put more regulated, more mission-critical workloads there, and they have additional requirements that come from their security and compliance teams. Typically, that means more fine-grained segmentation, all the way down to web methods and URLs in their rules. The security and compliance teams need to look into what’s happening inside the cluster in terms of flow logs, connectivity logs, and audit logs. It could be because the security teams want those logs consumed inside whatever your log aggregation platform is, your security operations center. Sometimes, you need those logs to be able to identify or troubleshoot connectivity to the services.

You may want to check, if a particular service is down, whether it’s a policy denying the traffic or a network fabric issue. When your security and compliance requirements are much more advanced, that’s when we recommend our users deploy Tigera Secure Enterprise Edition. There are four major value pillars to the solution. The first is the ability to deploy a full zero trust network security architecture. The second is deep visibility, observability, and traceability inside your clusters. The third is around how you actually enforce continuous compliance across your regulated workloads, build evidence reports and audit data, and troubleshoot at any moment. Last but not least, if you have clusters running across heterogeneous infrastructure, on-prem or in the cloud, and you define policies that red can talk to green, then red or green can be sitting in any part of your infrastructure, in any different clusters. You get a federated security model from Tigera Secure.

Those are the four main value pillars. What I’m going to do next is go into each of these in detail, and then at the end I’ll do a quick preview of the Tigera Secure product, though not a full demo. There are five key principles to the zero trust network security model that we propose to our customers. The first is workload identity: just like you probably don’t want to allow a human user to consume a cloud service without multi-factor authentication enabled, you want to follow the same model for your service-to-service communication inside the clusters. That’s very fundamental to our architecture: you have an identity, or trust, established for a service based on multiple sources of identity data.

The second aspect is that as you’re defining your network policy rules, you define them in a unified policy language, a unified policy model, where you use layer three to layer seven rules in the same policy. The third aspect is that you deploy your workloads in a least-privilege model: fundamentally, you’re defining authorization rules whitelisting only the traffic that you intend for that particular microservice. The fourth aspect is defense in depth, and it’s super critical, primarily in a highly dynamic environment, where we do policy enforcement at multiple enforcement points. The reason it’s a good security posture is that if, let’s say, your network gets compromised, your applications stay secure; or if a certain service that’s part of your application gets compromised, you are assured that it’s not going to be able to move laterally across your network and infrastructure.

Last but not least, you have full data-in-transit encryption enabled using Tigera Secure. Those are the building blocks of a good zero trust network security architecture. I’ll double-click on some of these. As I was saying, as you establish trust in a service, we rely on two sources of identity data: one coming from your Kubernetes identity, from your Kubernetes API server, and the second a cryptographic identity based on X.509 certificates. Then, as you’re defining your authorization rules, you can include your Kubernetes metadata, cloud metadata, and layer seven request attributes, which gives you the ability to define a true intent-based security model for your crown jewels.

The second aspect here is, as I was saying, we have a layer three to layer seven enterprise Calico policy model, where you can define controls all the way down to web methods or HTTP attributes, the specific URLs that a particular service is allowed to access. That gives you a very fine-grained security model. And as I was calling out earlier, we do enforcement at multiple enforcement points: at the host, at the pod, and through a sidecar inside your pod. That gives you multiple trust boundaries and significantly reduces or eliminates the attack surface of your application.

The second key aspect of Tigera Secure is visibility and traceability, and you need visibility for various different reasons. We often run into three primary use cases. One is network engineers trying to troubleshoot connectivity: there is a service incident and they’re trying to figure out whether it’s a policy that denied the traffic or the network fabric. The second is security teams looking for granular logs, primarily because they want to identify any anomalies, any indicators of compromise on the network, and then take corrective remediation actions. Last but not least are compliance teams, who need to maintain full, detailed logs of the traffic and connectivity inside your clusters.

Those are the three primary use cases we often run into; there are various sub-scenarios to these, whether for your audits and so on. Let me quickly walk you through some of the capabilities of Tigera Secure. One of the key gaps most users run into when they’re looking for connectivity logs from traditional tools or architectures is that you get basic five-tuple information, collected at either the perimeter or the cluster ingress, or through some appliance-based model. There are two primary issues with that. One is that you are most likely looking at inaccurate information: if you collected the logs through an appliance at a certain place in your architecture, and a more granular policy later denies the traffic, you will still see that traffic as accepted in your logs, and that’s not accurate. That’s the first problem.

The second is that these logs typically have just five-tuple information: source and destination IP, protocol, and port. Now, that’s great, but as we all know, in Kubernetes the workloads are highly ephemeral. An IP that was assigned to a frontend service today could be assigned to a backend service later in the afternoon. When you are looking at those log files, those IP addresses are quite useless, and there’s nothing you can do with that kind of log. Now compare and contrast that with what Tigera provides. As you can see on this slide, there are 24 different attributes, and we keep adding more and more metadata to your logs. Here you’re seeing source and destination pod and namespace, connection bytes, connection counts, traffic allowed or denied, and traffic direction.

There’s a vast amount of data available to you to do whatever you need to, whether you’re looking for policy mapping or which policies allowed what traffic. I’ll show you a quick demo of this at the end of the presentation, but this is super rich information that security teams just love when it comes to Kubernetes cluster visibility. In addition to the raw logs, we also provide a visualization in the product. The intent here is that you can troubleshoot and navigate through a vast amount of log data down to a specific workload and see what’s going on in terms of network connectivity, the network graph for that specific workload. It comes in really handy in troubleshooting scenarios.

In addition to that, we also provide a whole set of network metrics and connection metrics, and not just for each policy, but also for the individual rules that you may have defined in the policies. For example, let’s take the scenario of a compliance workload. You define a PCI policy where only PCI workloads are allowed to talk to PCI workloads. Now, if you set those rules and you want to be able to see any violations or alerts on them, you can set up your denied-packet metrics on that specific rule. As soon as traffic from a non-PCI workload tries to connect to a PCI workload, you will see the denied-packet count on that rule go up, and you can set up your thresholds and alerts on it and investigate any kind of rogue traffic that you don’t want to allow in your clusters.
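As a sketch of what such an alert can look like, here is a hypothetical Prometheus alerting rule; the metric name calico_denied_packets and its labels are assumptions to verify against the metrics your deployment actually exposes:

```yaml
# Hypothetical Prometheus alerting rule on a denied-packet counter.
# The metric name and labels are assumptions, not confirmed product names.
groups:
  - name: pci-policy-alerts
    rules:
      - alert: DeniedTrafficToPCIWorkloads
        expr: rate(calico_denied_packets[5m]) > 0   # any denies in 5 minutes
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Denied packets observed on the PCI whitelist rule"
```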

Again, it comes in very handy when you’re trying to monitor your infrastructure from a security and compliance perspective, and it helps you automate your security orchestration and incident response workflows. In addition to this base level of visibility, Tigera Secure also provides anomaly detection. We look at your baseline patterns for connectivity and traffic, and also your Kubernetes API server logs; we baseline your systems and will alert you if we see any kind of deviations: protocol deviations, volumetric deviations. If we see any kind of rogue connectivity inside your clusters, any port scans or IP scans happening, or probes coming from known bad IPs, we will track all of that. We will alert you so that you can build further context on it and take corrective actions to quarantine that workload or that traffic.

The third piece is around compliance. Oftentimes, when we work with our customers, there are various different teams involved in defining, implementing, and evaluating the various control sets for applications. Tigera Secure allows you to define a hierarchical policy model for individual teams. In this illustration, you’re seeing the security team defining higher-level policies for their specific security controls; these could be policies for traffic to embargoed countries, or compliance rules. Then, in the middle of the slide, you’re seeing the platform team, the team that’s actually managing and operating the Kubernetes cluster, defining rules for the logging infrastructure and DNS infrastructure. Last but not least, on the right, you see the development teams, the actual application or service owners, defining rules for the granular traffic of their specific applications. You can set all of that up in Tigera Secure in a tiered evaluation order.
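As an illustration of what a policy in one of those tiers can look like, here is a minimal sketch using the Calico v3 policy API with a tier field (the tier name, namespace, labels, and rules are illustrative; the exact schema is in the Tigera documentation):

```yaml
# Minimal sketch of a security-tier policy in the Calico v3 API.
# Tier name, namespace, labels, and rules are all illustrative.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: security.pci-whitelist   # tiered policies are named <tier>.<name>
  namespace: payments
spec:
  tier: security
  order: 100                     # lower order evaluates earlier in the tier
  selector: compliance == 'pci'  # applies to PCI-labeled workloads
  ingress:
    - action: Allow
      source:
        selector: compliance == 'pci'   # only PCI workloads may connect
    - action: Deny                      # everything else is denied here
```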

The last piece here is the multi-cloud aspect. Even though we spent most of the time today talking about how Tigera Secure does security across Kubernetes environments, we can actually extend this model to virtual machines and bare metal servers. But more importantly, we provide federated security. If you define a policy that red pods are allowed to talk to green pods, we will be able to enforce that security across any infrastructure. You could have clusters running in an OpenShift environment or a Docker Enterprise environment, you could be running open source Kubernetes on any kind of cloud infrastructure, or you could be using any of the managed Kubernetes offerings; we’ll be able to enforce security across all of that.

Now, what I’d like to do is give you a quick sneak peek into the Tigera Secure product, and then we’ll wrap up with some questions at the end. Let me go back to my screen share. Here is a quick view into the portal that comes with Tigera Secure. What we are looking at right now is a dashboard that tells you a little bit about your deployment: how many endpoints, how many policies. It also gives you real-time metrics on what’s happening inside the cluster. These green bars that you are seeing are essentially policies that are protecting workloads. Green means that everything is in good shape: we are only seeing traffic that is intended for these workloads. However, if there is any malicious activity or misconfiguration, these bars will turn red to show denied traffic, and you can track and manage that.

Here is a quick example of our policy board; I was talking about hierarchical tiers. Let me just do one thing: let me hide one of the tiers, but here you can see the three tiers I was talking about: the security tier, defining a quarantine policy and a PCI whitelist; the platform tier, where I am defining my logging policies; and the default tier, where I have all my application policies. On this policy board, you can see some additional information, including the traffic stats. You can see how many endpoints each policy is being applied to, connection counts, and allowed and denied traffic as well. If there’s no traffic, you’ll see “not applicable” here, but you can see all of the traffic stats for each policy.

Now, let’s do one thing and click on one of the policies here. We provide a nice UI editor: if your developers are comfortable with network policies and YAML, they can create those themselves, or they can use an editor like this, where you set the scope for the policy, set the labels for the endpoints the policy applies to, define ingress and egress rules, and set the rules accordingly. Once you’ve defined the policy in this UI, you can just download it from here, which gives you the YAML that you can integrate as part of your CI/CD pipeline. As the application gets deployed, these policies are deployed.

Here is the visualization I was talking about. I will not go into the details of it, but there are three ways to navigate this. The outer ring tells you the namespace; the middle ring is for when you want to navigate by the actual pod name; or, if you’re just looking for allowed and denied traffic, you can go by the inner circle there. Just one last thing before we go to questions: all of the log data is also available in a Kibana dashboard. For example, there are a couple of dashboards that come with the product; you can look at the flow logs here, with the detailed metrics I was talking about. If, let’s say, you want to look at a specific flow log from Tigera Secure, all those attributes I was talking about are available here, including policy name, tier, and action. You can see all the policies that were applied to this traffic and the policy action that was applied.

There’s a lot of rich information available here that customers can use for various different scenarios. Now, let me go back to the slides. That was the quick preview of Tigera Secure. At this point, I will hand it over to you, Michael, for any questions.

Michael: Thanks, Amit. That was great. I’m sure there is a lot more about this than you can cover in an hour. If any of you are interested in going further and maybe receiving a more customized demo, definitely let us know. You can pop a question into the feedback area, or you can always go to our website and fill out a contact request for a demo. While we’re waiting for questions to come in, I have to say I’m really impressed with you all. I have never had an audience so full of Kubernetes and Google Cloud experts that it has no questions, because you already know everything.

I’d like to thank you all for attending. Amit, thank you for presenting and everyone have a great rest of your day.