Securing Kubernetes Applications on AWS & EKS


Watch this exclusive AWS webinar co-hosted with Tigera. Tigera will demo how to implement turnkey compliance and security controls for Kubernetes in AWS and Amazon EKS environments. This webinar explores how to extract data required for IT audits and implement network segmentation and encryption to meet your security and compliance requirements.

The webinar covers:

  • Security and compliance challenges that users face when deploying Kubernetes in AWS and Amazon EKS environments
  • Why legacy security and compliance tools cannot provide sufficient coverage for Kubernetes
  • How DevOps teams can easily meet security and compliance objectives for Kubernetes
  • How security practitioners can better assist DevOps teams in achieving their security requirements
  • A live demonstration of key capabilities within Tigera Secure Cloud Edition

Host: Hello everyone and welcome to today’s webinar, Securing Kubernetes Applications on AWS & EKS, hosted by Tigera and Amazon Web Services. I’m pleased to announce today’s speakers, Carmen Puccio, principal system architect at Amazon Web Services, and Amit Gupta, vice president of product at Tigera. Before I hand the mic over to Carmen, I have a few housekeeping items I’d like to cover about this presentation and the webinar platform. First, today’s webinar will be available on demand immediately after the live session, and will be accessible through the same link that you’re using now. We’ve also added some attachments and links, which are available on the attachments tab at the bottom of your screen, where you can also find today’s deck and other related content. Next, we’d love to hear from you during the presentation, so if you have a question for our speakers, please feel free to send it through the “ask a question” tab at the bottom of your player.

We’ll be answering questions at the end of the session, and if we don’t get to your question during today’s webinar, we’ll be sure to follow up afterwards and answer it. And lastly, I’d like to encourage you to share today’s webinar if possible. So, without any further ado, I’d like to kick things off by welcoming our first speaker, AWS Principal System Architect, Carmen Puccio. Carmen, over to you.

Carmen Puccio: All right, thank you very much. Thank you for joining today’s webinar. So, let’s just get right into this. One of the things that I like to talk about, when we talk about applications moving to the AWS cloud, and coming from my background here at AWS, is that I spent the last two years as part of the AWS mass migrations team. And when we talk to our customers and our partners about the digital transformation journey, we typically talk about it in these stages. You’ll typically see them referred to as the stages of adoption. I just briefly want to go through each stage and talk about how customers are moving their workloads to the AWS cloud. Whether they’re container-based workloads or traditional monolithic-style applications, they typically follow the same principles when moving.

So, the first phase is essentially, I call it, the kicking-the-tires phase or the project phase, and this is really where we see our customers just starting to run their projects in AWS and starting to experience the benefits of the cloud. And typically, you know, this is where AWS is evaluated and vetted on a project-by-project basis, and it might be where you’re doing your POC workloads and trying to containerize your application as part of your migration strategy. The second stage is essentially the hybrid stage, and after experiencing the benefits and realizing the agility the cloud gives them, customers really start to build their foundation to scale out their cloud adoption. I typically talk about this as creating your landing zone, and this is really where you lay your foundation and the core constructs in AWS: you need to think about things like your multi-account structure, your network connectivity, and how you’re gonna handle federation. It’s also where I’d like to see teams put their cloud center of excellence in place, starting to think about their operations model and how they’re going to keep their applications secure and compliant as they move them over. And this is really where we start to see our customers adopt AWS at scale and move more and more workloads over.

And it tends to lead right into the third phase, which is the scaling phase, or just the scaling aspect of the journey. What we see here is that, as they’re migrating their existing applications, they start to think about their mission-critical applications, or maybe about their entire data center. And really, this is where they want to scale their adoption to essentially match the growing portion of their IT portfolio. The key thing to really call out here is that you don’t want to see what is essentially a great stall, meaning customers have migrated, perhaps they hit a blocker, and they haven’t retired the on-premises version of that workload. Maybe it’s no longer in use, but it’s still costing them money. You want to make sure you keep migrating at this phase and stay agile as you move, and as you think about retiring those legacy workloads.

And then the last stage, which I personally find to be the most fun stage, is where you’re operating in the cloud and focusing on reinvention, and you’re taking advantage of the capabilities and flexibility that AWS has, which gives customers the ability to transform their business, speeds up their time to market, and drives innovation. And perhaps now, this is where they start to explore serverless models for their applications. You’ll notice the adoption-versus-time component, and this is something key to call out here. Typically, we see customers weigh the various different Rs when it comes to a migration journey, and it’s important to understand the pros and cons of each. At the end of the day, the route you choose boils down to time, cost, and agility, which tends to tell a very good story around containerizing your applications as part of your move.

So the digital transformation outcomes, at a very high level, you see them right here, but I just want to highlight a few. For instance, the one platform for traditional and container-based applications. Whether customers are self-managing or running their containerized workloads on top of a service like Amazon ECS or EKS, at the end of the day, they’re really doing so by taking advantage of Amazon EC2. And, as you know, Amazon EC2 is a web service that provides the secure, resizable compute capacity that our customers need, and it’s really designed to make web-scale cloud computing easier for developers, to run essentially any type of workload using that common interface. The other thing that I’d like to highlight is accelerating delivery and modernization. When customers are looking to accelerate their application delivery while reducing costs, the container story is really compelling here because it allows them to be more effective and agile.

Infrastructure and operations teams are under pressure to deliver these applications more quickly, because the businesses realize that getting these software products and services to market faster essentially translates into gained market share. Using containers can help enterprises modernize legacy applications and create new cloud native applications that meet those needs; they’re scalable and they’re agile. Frameworks like Docker provide a standardized way to package your applications, including the code, the runtime, and the libraries, and to run them across the entire software development lifecycle.

And I think it was Gartner, at one point, that put a stat out there, something like: by 2020, more than 50 percent of global organizations will be running containerized workloads in production. That’s a tremendous uptick from where it was a couple of years ago, if you really think about it. And then one last thing I just want to highlight, not least because I have to hand this over to Amit, is the growing partner and community ecosystem. So, the benefit, regardless of the type of workload, is the ability to take advantage of the numerous different partners in the ecosystem. We have a variety of partners at AWS that provide software to help you run and manage your containers on AWS. Partners like Tigera, with their Secure Cloud Edition, really help provide the ability to extract data, perhaps for IT audits, and enable network segmentation, or encryption, or visibility into your network traffic, whatever your operational needs are. So it’s a fantastic partnership there.

So I want to highlight what Kubernetes is; we’re all here to talk about Kubernetes. The first thing when we talk about Kubernetes: it’s been around for a few years now, and it’s essentially taken the world by storm, especially over the last 12 months. It’s rapidly gained traction amongst AWS customers. At its core, it’s an open source container management platform, and it helps facilitate the declarative configuration and automation that our operational and development teams need. It helps you build and run your containers at scale, and it comes equipped with features and functions that really help you build that proper distributed, cloud native, 12-factor application pattern.

So you know, when we think about where you run it, it’s really about thinking about that functionality and about what the building blocks for microservices are. Kubernetes was essentially designed to help you build cloud native applications, and the quality of the underlying cloud platform is super important. So, you need to think about speed, and stability, and scalability, and integrations with the platform. All of these things impact the quality of the applications that you build, how much you have to do yourself, and ultimately how happy your customers are. Your customers are really perceiving the performance of your application, and they’re also perceiving how quickly new features are introduced. If your app is down when they need it most, that’s what they’re going to see. So, you have to think about that when you’re thinking about running Kubernetes or any application.

So you know, we talk about that 57% of Kubernetes workloads running on top of AWS, and our customers believe the tremendous advantage to running Kubernetes on AWS is essentially what you see here. The CNCF survey, I think it was last year or earlier this year, stated that 57% of Kubernetes workloads run on AWS today. And this is really entirely organic growth, fueled by a strong community of developers, and customers, and partners. But the one thing that we heard was essentially: running Kubernetes isn’t easy. Think about how hard it is to actually manage; we would rather spend our cycles focusing on our apps. If, as a customer, I didn’t have to think about what it takes to run the Kubernetes masters, or about deployment, or configuration, or etcd, it would make things a lot easier for me.

And this is really the premise of what started the Amazon EKS service. So, we listened to that, and this is why we built EKS. The Amazon Elastic Container Service for Kubernetes is a managed service that makes it easy for you to run Kubernetes on AWS, without needing to install or operate Kubernetes at the control plane level or at the management layer. We know how important it is to have a well-functioning service for our customers, so we didn’t build EKS haphazardly. There is a set of core tenets that we followed, which guided our decision-making process for how the service should work. So, to quickly highlight these tenets: tenet one, that you see here, is essentially that it’s a platform for enterprises to run production-grade workloads. We aim to provide features and management capabilities that allow enterprises to run real workloads at real scale.

So we want to be able to provide that reliability, visibility, scalability, and ease of management as essentially our core tenet, or one of our top priorities. Tenet two is that native and upstream experience. Any modifications or improvements that we make in our service must be transparent to the Kubernetes end user, and this means that your existing Kubernetes experience and know-how essentially apply directly to EKS. So your existing applications and investments in Kubernetes work right out of the box. The third tenet is that EKS customers are not forced to use additional AWS services, but if they want to, the integrations are meant to be seamless and eliminate that undifferentiated heavy lifting. We’re focused on making contributions to projects that will allow customers to use the AWS components they currently know and love with their applications in Kubernetes. And the last tenet is that the team is actively contributing back to the Kubernetes project to improve the overall experience for all AWS customers.

So at a really high level here, what you see is the Amazon EKS architecture. If you think about it, of the masters, etcd, and the worker nodes that you see here, the master nodes and etcd are managed by AWS, and the worker nodes are for you to manage. We’re taking the complexity of standing up your own Kubernetes control plane, and it’s now simplified. Instead of running those control planes in your account, you connect to a managed Kubernetes endpoint in the AWS cloud. This endpoint abstracts the complexity of the Kubernetes control plane, your worker nodes can check into a cluster, and you can interact with that cluster through the tooling that you already know and have experience with.

And then lastly is the shared security model, or shared responsibility model. I just want to reinforce the importance of the AWS shared responsibility model, because regardless of your application type or what kind of workloads you’re running in AWS, you really need to think about this. Security and compliance is a shared responsibility between AWS and the customer, and we always talk about security being our top priority. With services like EKS, AWS is responsible for the underlying infrastructure that runs your management nodes, and you, the customer, are responsible for the security of your applications and your worker nodes. Tools like Tigera can assist you in making sure that your workloads are secure by enforcing granular access control between Kubernetes and your AWS VPC resources, and it really helps you make sure that your applications are secure and compliant and that you meet that shared responsibility model. So with that said, I want to hand it over to Amit and let him tell you a little bit about the Tigera Secure Cloud Edition journey.

Amit Gupta: Great. Thank you, Carmen. Thanks for kicking off our presentation today. Those insights were very valuable in terms of application usage, but more specifically, Kubernetes usage on AWS-based infrastructure. We do believe EKS brings significant value for our joint users in terms of running these modern applications. So with that, let me switch the discussion to the second phase of our presentation. What I’m going to primarily focus on today is walking you through the top security and compliance requirements, or challenges, that operations teams face as they look to run a production Kubernetes cluster in an AWS-based environment, primarily in EKS, but essentially any Kubernetes infrastructure running on AWS. And if I can just summarize the primary feedback and comments that we hear from all the users we talk to, it can be grouped into four major buckets.

So, the first one is making sure that the applications you’re deploying in this Kubernetes-based infrastructure meet the critical security and compliance controls that are typically given to you by your compliance and security teams. And it often comes down to a couple of broad areas. One is, you want to make sure that you have fine-grained access control around your workloads, so you implement a model of least privilege and you control which pods are allowed to talk to which pods and, quite frankly, to other resources in your infrastructure. The second is making sure that you have full data-in-transit encryption enabled, which gets you a big check mark against the various compliance and regulatory frameworks that you or your organization may be subject to. So that’s the first aspect. The second aspect is, once you’ve got the controls in place, you want to make sure that you, as an ops team or a security team, have full visibility into what’s going on with respect to network traffic inside these Kubernetes clusters.

There could be many drivers for why you’re looking for that visibility. One, you want to troubleshoot connectivity between various pods and containers. Two, you may want to monitor that environment for any kind of anomalous behavior; you’re looking for any kind of indicators of compromise, and you want to make sure that’s done on a continuous basis. And last but not least, obviously, compliance audits require you to manage and monitor the infrastructure so that you can look at what the logs show, who’s talking to whom, and which critical controls are being met. There are specific compliance frameworks that require you to monitor your network logs on a periodic basis, sometimes even daily. And if you have a compliance audit coming up, there’s a whole bunch of work that you may have to do to get prepared for it. Again, those are some of the critical things that our ops teams have to deal with.

And last but not least is all of this control implementation, all of this flow information, all of this visibility data: you have to make sure that all of it is available to your existing security workflows and security systems, whether that’s a security incident and event management system or a log aggregation platform. So those are the primary areas where we are going to help you with Tigera Secure Cloud Edition, and the inference here is that if these are the requirements and capabilities that you’re looking to implement in your production Kubernetes clusters running inside AWS, then with Tigera Secure Cloud Edition you can actually get all of these enabled very natively in your infrastructure. I’m going to walk you through now how all these functionalities are available and how they’re addressed, and we will also do a live demo to showcase the capabilities.

So, there are four key tenets of Tigera Secure Cloud Edition. One is making sure that you can implement fine-grained access control for your pods, and services, and containers inside these Kubernetes clusters; you can also enable full data-in-transit encryption. That’s the first piece, making sure you get your security and compliance controls addressed right off the bat. The second aspect, which I’m going to walk you through, is how we provide a very unique set of capabilities around monitoring and managing your network traffic and the logs and metrics around it. The third piece will be around how you can enable and manage your compliance visibility and, quite frankly, even your compliance audits. And then finally I’ll walk you through how the entire-

Then finally, I’ll walk you through how the entire user experience with Tigera Secure Cloud Edition is very native to AWS and very native to Kubernetes. For the ops teams and the security teams, it is a very seamless experience as they implement these controls and manage the visibility of this infrastructure for their specific use cases and requirements. Let’s jump into the first set of capabilities around implementing fine-grained access control. Here’s an illustration of a typical environment.

Again, yours might be slightly different, but what we have typically seen is that, more often than not, inside a virtual private cloud, customers are running a set of Kubernetes nodes, where they’re actually running their Kubernetes pods and services. Then they also have a set of non-Kubernetes resources inside that VPC; oftentimes it’s databases running on EC2 instances, or they’re simply consuming database instances from AWS services, like RDS.

Step one is that you want to make sure that you implement network policies so that you whitelist all pod-to-pod communication inside your Kubernetes cluster. That is essentially saying that you can implement a default deny, and then you can explicitly whitelist traffic rules that allow specific pod-to-pod communication. That’s step one.
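To make that concrete, here is a minimal sketch of a namespaced default-deny policy using the standard Kubernetes NetworkPolicy API; the namespace name is illustrative, and you would apply one of these per namespace you want to lock down:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: my-namespace    # illustrative; apply one per namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is denied
```

With this in place, any pod-to-pod traffic you actually want has to be explicitly whitelisted by additional policies, which is exactly the model described above.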

The second aspect, which again is something that we have commonly seen in these infrastructures, is that now that you have fine-grained access control for pod-to-pod communication, you also need to do the same thing for communication going out from your pods to non-Kubernetes resources. I’ll walk you through how you can enable access from your pods to EC2 instances or RDS instances in a fine-grained access control model.

Then the third piece is around the reverse direction: now that you have enabled default deny on your pods and containers, if you want your VPC resources to be able to granularly talk to those pods and containers, we also add a capability where you can leverage security groups to annotate the network policy, so that the instances or resources that are part of a given security group are able to talk to those pods and containers.

The last piece around this, and I’ll build this slide out for you guys, is that any pod-to-pod traffic leaving the Kubernetes host is going to be IPsec encrypted. That’s something that we just enable by default on your cluster. It’s based on preshared keys that you, as the customer or user, can manage, and as part of the system processes, we rotate the encryption keys every hour, on a regular basis.

Some of these capabilities are configurable, so if you don’t want to rotate the preshared keys, you can turn that off; there are other options available there as well. At this point, what I’m going to do is switch to my screen share and walk you through a couple of areas. Basically, I want to talk through how you can enable fine-grained network policies in your clusters, and how you manage access from your pods to VPC resources.

For the purpose of the demonstration today, I’m actually going to use two [inaudible 00:22:55] interfaces. One, obviously, I’m going to use the EC2, or AWS, management console. I just want to make sure I’m still logged in, and nothing has logged out. That’s good. I’m also going to use a Kubernetes dashboard, which is directly linked to my EKS cluster that’s running. Let me just refresh this page to make sure everything is there. That’s good.

The third thing that I’m going to use today is this synthetic application, which is running as Kubernetes pods and services inside this EKS cluster. It’s a very simple application, just for the purpose of this demonstration. What you see here is three different nodes: C is for client, B for backend, F for frontend. They’re running in two different namespaces.

Let’s just go here and see our pods. I’ve got a backend pod that’s running in the stars namespace, a frontend pod that’s running in the stars namespace, and a client running in the client namespace. Right now, as you can see, they can all talk to each other. What we’re going to do is implement a workflow, or policy model, where my clients are allowed to talk to the frontend, and the frontend is allowed to talk to the backend. That’s it. Clients should not be allowed to talk to the backend.

To enable that, what I’m actually going to do is first enable a default deny on each of those namespaces. Then I will make sure that my management UI, which is the portal that we are seeing right now, has access to these namespaces. Once I implement default deny, as you would imagine, none of the pods are allowed to talk to each other.

That’s step one: you basically say that my clusters are set up in a default deny model. Now I’m going to whitelist specific policies. Let’s start with policies going from client to frontend. The way I’m going to set up these policies is I’m going to select all namespaces and go ahead and create here. I’ll write the policy text here. Just give me one moment. Let me get the policy information and walk you through how these policies are set up.

Here’s the policy depicted. As you can see, this is a Kubernetes network policy that will be applied to the namespace stars, and the policy is named frontend-policy. This policy applies to all pods that have this label, role equal to frontend. The policy basically says: allow any traffic coming from the client namespace to these pods. That’s a simple policy definition.
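Based on that description, the policy likely looks something like the following sketch; the label used to select the client namespace and the destination port are assumptions, so adjust them to match your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: stars
spec:
  podSelector:
    matchLabels:
      role: frontend          # applies to the frontend pods in the stars namespace
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: client    # assumes the client namespace carries this label
      ports:
        - protocol: TCP
          port: 80            # assumed frontend service port
```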

We’ll just click upload here, and as these policies are created, Tigera Secure Cloud Edition is going to apply them. Now you can see that we have enabled connectivity going from the clients to the frontend. Let’s do the next step, which is enabling connectivity from my frontend to the backend. We’re going to create a very similar policy there.

Let me get my policy text for you guys. Give me one moment. We’ll go back again to our dashboard and create a policy definition. For the purpose of the demo, I’m doing this in the UI. I expect that in a sustained deployment model, you guys would have the policies created as part of your infrastructure; then, as these labels are applied to pods showing up in your infrastructure through your CI/CD pipeline, Tigera Secure Cloud Edition will automatically enforce these policies.

Again, a very simple example of a network policy, again applying to the stars namespace, with the policy named backend-policy. This one applies to all pods that have this label, role backend. What it allows is ingress from any pods that have role frontend, which is what we wanted to do. Let’s go ahead and save this policy. It takes a quick second for these policies to apply. Now you can see that I do have traffic going from my clients to the frontend and from the frontend to the backend.
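Again, a sketch consistent with that description; the backend port is an assumption:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: stars
spec:
  podSelector:
    matchLabels:
      role: backend           # applies to the backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend  # only frontend pods in the stars namespace may connect
      ports:
        - protocol: TCP
          port: 6379          # assumed backend port
```

Because the podSelector in the ingress rule is not combined with a namespaceSelector, it only matches frontend pods in the same stars namespace, which matches the desired client-to-frontend-to-backend flow.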

That’s the first part of it. Now I’ve got at least my pod-to-pod communication secured. The second part of this demo that I want to go through is: let’s say in your infrastructure, in your application design, you want only your backend pods to have access to databases. In this case, your database instances are essentially running outside your Kubernetes cluster. We’ll take the example here. Let’s go to our management console.

I have this EC2 instance called DB instance; this is essentially running a database for me. I also have a PostgreSQL instance running, again, as an RDS instance. These are the two places where I’m running my databases. I want to make sure that, from my Kubernetes environment, my backend pods have access to these two databases, and the frontend does not.

Now this is where you need Tigera Secure Cloud Edition, and we’re going to show you how you can create fine-grained access control for resources that are actually secured using AWS security groups. Let’s go back to my EC2 console. I want to show you a couple of things there. All my database instances, both that DB instance and my RDS instance, are secured by the security group called DBSG.

If we go back and look at the rules inside DBSG, you can see that it allows traffic from a security group called access-db. That’s really the standard model that, let’s say, I’m using in this environment; you could have any number of those security groups. Now what I’m going to do is use the annotations, the feature that we talked about: we can apply an annotation on the backend pod for this access-db security group.
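For reference, that security-group-to-security-group rule is the standard AWS model; a minimal CloudFormation sketch, with hypothetical group IDs passed in as parameters and a PostgreSQL port assumed, might look like this:

```yaml
Parameters:
  DbSecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id      # the DBSG group protecting the databases
  AccessDbSecurityGroupId:
    Type: AWS::EC2::SecurityGroup::Id      # the "access-db" group that clients must belong to

Resources:
  DbIngressFromAccessDb:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref DbSecurityGroupId
      IpProtocol: tcp
      FromPort: 5432                        # PostgreSQL; adjust for your database engine
      ToPort: 5432
      SourceSecurityGroupId: !Ref AccessDbSecurityGroupId
```

Anything that is a member of the access-db group, whether an EC2 instance or, as shown next, an annotated pod, is then allowed to reach the databases.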

Then I’ll make sure my backend pod is allowed to talk to these two instances. Let’s run a shell inside this backend pod, just to confirm that, so far, it cannot access anything. Let’s get the internal IP for ... sorry, give me one moment. Let’s get the IP for the database instance. If I run a ping command from here, you’ll see that I don’t have any access. Similarly, let’s say I want to try to connect to that PostgreSQL instance I had.

I’m going to run a very simple PostgreSQL query there. Let’s close it and try it again, exec again. This is just running a quick PostgreSQL command. I set a timeout of three seconds, and it will time out because I was not able to get to the RDS instance. Let’s now do a couple of things. We are going to modify this pod, edit this pod. I’m going to add an annotation that will include security groups for this particular pod.

We’re adding an annotation called aws.tigera.io/security-groups. You can, as part of this annotation, enter as many security groups as you need. This will essentially make sure that this pod has access enabled to VPC resources, just like any other EC2 instance. Let’s go ahead and click update. You can see these annotations showing up here. Now let’s go back and run a PostgreSQL query against that RDS instance. Seems like I was able to get to the RDS instance; if I enter my password, my query succeeds.
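For reference, here is a hypothetical sketch of what that annotation could look like in manifest form; the exact annotation key and value format should be confirmed against the Tigera Secure Cloud Edition documentation, and the security group ID and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: stars
  annotations:
    # Hypothetical: attach the "access-db" security group to this pod so it can
    # reach the databases protected by DBSG, like any other member of that group.
    aws.tigera.io/security-groups: '["sg-0123456789abcdef0"]'
spec:
  containers:
    - name: backend
      image: example/backend:latest   # placeholder image
```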

Just by adding a simple annotation to your workload, you can now enable fine-grained access from your Kubernetes pods to VPC resources. Let’s see if I have access to the, sorry, give me one moment, database instance as well. Let’s ping here, not this. Sorry. Give me the IP address. Then I’m going to ... sorry, I can’t do two things at once, obviously. There you go. Now I have access to both my database instance and my RDS instance.

Those are the two main capabilities I wanted to talk about. Now let’s switch back to our slides. Great. The next area where Tigera Secure Cloud Edition provides significant improvements, in terms of your security and compliance capabilities for EKS clusters, is visibility and traceability. This is where I’m going to walk you through some key enhancements in our product capability.

Let’s start with the first one, which is network traffic visibility. Oftentimes, if you’re using either a host-based monitoring tool or any kind of appliance to capture network traffic, all you get is five-tuple information: source and destination IP addresses, protocol, and port information. We all know that in a Kubernetes-based environment, where the pods and containers are [inaudible 00:33:46], that information is quite useless.

When you actually want to look at the data after the fact, you can’t really just rely on the IP address; the entire context is gone. That’s one big gap. The second aspect is, if you are implementing network policy just like I showed you a few minutes ago, most of these tool sets are actually going to show you that the traffic was accepted. Now, that’s incomplete or inaccurate information: if the network policy is denying the traffic at the edge of the container, those tools won’t be able to capture that.

What looks like an allowed or accepted connection in your log systems is actually incomplete information. Those are the two primary problems we are solving here. Because we capture the data at the edge of the container, we give you true visibility into what’s going on in your environment. That’s one. The second key aspect is that in the flow logs we create, you get the full workload context.

You can see there are 24 different space-delimited fields in the CloudWatch logs. That includes your source pod name and namespace, the same for your destination pods, packet counts, byte counts, and direction. There is a lot of context right in your log files, so that your engineers, your [inaudible 00:35:16], can do what they need to do with respect to visibility. Let me walk through a couple of examples of how this looks.

Here’s an example; again, I’m showing you a screenshot from CloudWatch. Now that you have the entire context for your workload in the log files, things like investigating traffic connectivity between a couple of workloads or [inaudible 00:35:42] namespaces, or querying ingress from a particular namespace, well guess what? You can just run those [inaudible 00:35:50] in CloudWatch itself, without taking the log data to some other system and doing complex [inaudible 00:35:57] analysis on it.

Similarly, in addition to that, we’ve also added a set of network traffic metrics that show up as metrics within your CloudWatch environment. That’s where you can set up your thresholds and alarms. For example, if you are automating an anomaly detection workflow for your environment, you can set up your policies; now you know what the baseline is for the denied packet count in your environment, and you can set up your thresholds. It’s not just security metrics; there are a few operational metrics there as well.

Then you set up your alarms. Those alarms can feed threshold violations directly into your security systems, and that helps you automate your incident response workflows. With that, let’s switch back to my AWS management console. What I want to do is walk you through how these log files show up there. As I mentioned, all of these logs and metrics are things that you can find inside CloudWatch, natively in AWS.

I’ll go to my CloudWatch service and navigate to the logs. Here you will see a log group created, called Tigera flowlogs, and there will be one for each Kubernetes cluster that is being protected and monitored by Tigera Secure Cloud Edition. Let’s go down into this particular flow log group. Within the log group, there is a log stream for each worker node in your cluster. We don’t need to go down into each log stream; I can just go and search on the log group.

Let me pull up some of these events; this is essentially what I was showing in the slides. Now you can see that I have a lot of context in each of these flow logs around who’s talking to what, and so on. So for example, if I want to monitor all connectivity for a particular namespace, in this case let’s just say stars, I can filter my events by that, and it will show me all of the data around it, using the native capabilities in CloudWatch. If I want to look at all denied packets, or denied traffic, I can look at that. I just want to call out a couple of things here so you see all the information. This is what the traffic would look like if you were investigating it on an instance running inside your VPC, or with a host-based monitoring tool: you would actually see that the traffic was accepted. So this is something unique that you get here with Tigera Secure Cloud Edition.

Again, you can do all kinds of queries here. For example, if I want to see all traffic to the public internet, I can do that. I can look at the last 30 minutes or all traffic, with simple filters. Something that’s really hard to do with native logs; we’ve improved it significantly there.

Amit Gupta: So now let’s go to the metrics. Just like with the logs, in the metrics section we publish a whole bunch of metrics for various activities. What I want to show you is, I’m gonna go inside my cluster, and you can see there are a lot of metrics we publish around nodes being healthy and unhealthy, and so on. But the one that I want to highlight is really this denied packets metric, which helps you build a full workflow around your anomaly detection and so on. So let’s look at the last hour, and this is probably the point, at 10:20 a.m., when I actually started implementing some policies, because prior to that I had a rule in place where everybody was allowed to talk to everybody, and that’s when we implemented the default deny, so everything was stopped. So you can track all kinds of activity around this. And once you’ve got stable policy definitions for your environment, you can go ahead and set up your thresholds and alarms accordingly. So if you see rogue traffic in your environment, or a new policy being created that changes the posture of your application, you will get an alert right away and you can manage the workflow from there.
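As a hedged sketch of what alarming on that denied-packets metric could look like in CloudFormation: the metric namespace, metric name, and dimension below are assumptions, so check the metrics the product actually publishes for your cluster, and the threshold and SNS topic are placeholders you would tune and wire into your own incident response tooling.

```yaml
Resources:
  SecurityAlertsTopic:
    Type: AWS::SNS::Topic                  # placeholder destination for alerts

  DeniedPacketsAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Alert when denied packets exceed the expected baseline
      Namespace: Tigera                    # assumption: namespace the product publishes metrics under
      MetricName: DeniedPackets            # assumption: the denied-packets metric shown in the demo
      Dimensions:
        - Name: ClusterName                # assumption
          Value: my-eks-cluster
      Statistic: Sum
      Period: 300                          # evaluate in 5-minute windows
      EvaluationPeriods: 1
      Threshold: 100                       # derive from the baseline observed for your environment
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref SecurityAlertsTopic         # e.g. feed your SIEM or incident response workflow
```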

Those are the visibility capabilities that are available; again, all of this data natively goes into CloudWatch. So let’s switch back to the slides one more time. I want to walk through a couple of final pieces here, and then we can take some questions towards the end.

Amit Gupta: So the third aspect is how you manage your compliance for this infrastructure, and there are multiple aspects to it. Obviously the first thing is to make sure you can enforce all the policy controls that are coming from your compliance teams. We walked through earlier how you can create a declarative set of policies to make sure you have the right segmentation in place in your Kubernetes clusters. The second is being able to continuously monitor and audit those policies. And the last piece, which really comes down to being audit-ready, is having the log information that the auditors can potentially ask for, or any standard reports that they typically request. So what I’m gonna do now is take you to an interface where all this flow log data that we have been seeing so far is streamed to an Elasticsearch domain, again, that I’m running inside [inaudible 00:42:31]. And that’s where I’m gonna show you how you can create a specific compliance report, how you can create your queries, and how you can set up a dashboard. We’re gonna walk through some of those scenarios quickly.

So let’s go back to our screen share. What I’m gonna do is go back to an Elasticsearch domain. Let me go back to the management console to show you the Elasticsearch domain and how it’s set up. This is where I have a domain pre-created. Again, this is something that you may already have in your environment, or you may want to add; the Elasticsearch service does not come natively with Tigera Secure Cloud Edition, but we have seen users use Elasticsearch natively, or they want to take the data to some other log aggregation platform that may be running on-prem. It doesn’t matter where you’re aggregating the data; you can take all of the source information from CloudWatch. So let me just show you one more thing here: as you can see from my flow logs, I’ve got this integration enabled where I’m streaming all of this data to the Elasticsearch domain, which has a Kibana endpoint. So let me just go back here quickly and make sure it’s logged in ... that’s good.

Now let’s go to Discover. This is where all of that flow log information we created earlier is being directly streamed. Again, let me just run some queries here as an example. Let’s say you need to run a query where you want to see all traffic going to the public internet. I’m actually going to use a slightly different index. So let’s go with this, and the one thing here is that I’m running a query where the destination endpoint type is internet, so these are endpoints we don’t know about, and the traffic is going to the public internet. Then you just run the query. I added another variable that says the source is your pods and containers, so you can quickly get the endpoints that are currently talking to the public internet.

Again, if you were getting ready for your compliance audit, this is actually a common question your auditors will ask: “Hey, give me all outbound public internet logs for endpoints in this namespace.” That itself is a very simple query. Now let’s say an auditor comes and asks you, “Hey, can you show me all the connectivity going in and out of my namespace boundaries?” And that could be a production namespace. So in this case, I’m running a query for all destination traffic into my stars namespace, which could again be a production application, where the traffic is coming from outside that namespace, from any other namespace. It’s really common practice, and a common compliance control, that no non-production namespace should be allowed to talk to production. So this is where you can see all the traffic that is coming from outside the namespace. In this case, it’s largely coming from the management UI pod, this portal pod, and that’s actually allowed, so you’re in pretty good shape.

Again, addressing those compliance audit queries coming from your auditors, using all the workload context in your flow logs, is what I wanted to share here. Beyond that, there are some standard queries that you often get from your auditors, or reports that you have to generate for your security and compliance teams; with all of this data and these metrics, you can actually create those reports and have them ready, or have them refresh every few seconds, so you always have a current Kibana dashboard for your compliance scenarios. On this one, I really just have one simple example, which is: give me the number of endpoints with egress to the internet. I need that both for compliance and sometimes even for security reasons. You want to make sure that there is actual egress filtering happening on your end, and if somebody is talking to the internet, you want to check that. In this case, there’s only one endpoint with that access enabled, which is actually the Tigera controller. So again, it’s very easy for you to create those dashboards and queries.

Let’s switch back to our slides now. To recap the capabilities: as you saw through the demonstration in this session, your entire experience building, managing, and monitoring security and compliance for this infrastructure is all native AWS, all native Kubernetes. And that’s pretty cool, because we do believe our users will want to enable these policies and all of this monitoring as part of their deployment pipelines and processes. So one of the key aspects of the product is that everything is enabled natively: you continue to use your Kubernetes control plane, all of the log and metric data by default goes into CloudWatch, and from there, obviously, you can take it to Elasticsearch or your favorite tool set. So that’s, in a nutshell, the core set of capabilities for the product.

Just one last thing before we switch to the questions. Some of you may be familiar with the AWS Well-Architected Framework, and if you look at the design principles in that framework that are focused on security, with the capabilities that we just talked about you can actually go through each of those design principles and make sure you have a very solid security and compliance posture around the pods and containers running in these environments. That starts with implementing a model of least privilege for your containers, through making sure you have full visibility into metrics and logs. We’re also continuing to build out the product and add other enforcement points, as you will see. Obviously we enable this through policy-based models so you can automate the workflows. And last but not least, if you integrate these workflows, you will be able to automate workflows around incident response as well. So this is what you need to be able to deploy a well-architected security posture for your containerized applications running in Kubernetes.

Host: Well, that is the end of the presentation portion of this show. We’d like to go to Q&A now, so if you have questions for Amit or Carmen, please enter them into the question window and we’ll answer them. We’ve got a couple of questions. First up, for Carmen: “With AWS, what really is the strategic value of containers, and what do those containers enable your customers to achieve?”

Carmen Puccio: Yeah, yeah. Fantastic. Okay, so a vast majority of our customers, when you think about it, are either already deploying their applications on containers or they’re really starting to think about going down that path. And many of these applications are new, in terms of strategic cloud native applications, and this represents a valuable opportunity for us to act in a trusted advisor role and help guide new customers to adopt that new paradigm, to really achieve the agility that comes along with running your workloads inside of containers. Developers have loved developing and running their production applications at scale on AWS for years, and now we’re helping our customers make that transition to containers, and they’re continuing to benefit from all the new features and services that come along with running their containerized workloads on top of AWS.

Host: I think that probably answered him. Another question is “Why do customers want to run containers in AWS?” I think you may have answered a little bit of that, but do you have anything further to say?

Carmen Puccio: So when you think about our tenets, and the reasons people have been running workloads on top of AWS, regardless of whether it’s containers or serverless or just classic compute, a lot of it boils down to things like our track record for innovation. For containerized workloads, if you really think about it, we were the first ones to offer a managed Docker orchestration service with Amazon ECS, and we also offered a fully managed private container registry with ECR, and those were all the way back in 2014, which I believe was a year after the first release of Docker. And since then, we’ve continued to innovate. We always talk about how our roadmap is driven by what our customers ask us for, and we’ve continued to innovate in that space. In 2017, we launched Fargate, which is that production-ready technology for running containers without managing servers, and obviously in 2018, we announced Amazon EKS, which allowed us to be the first provider to offer two fully managed, production, cloud native container management services. So if you really think about that track record of innovation, that’s one of the reasons, and one of the cool reasons, customers want to run their workloads on top of AWS in that containerized space.

Host: Great. Okay. So let me see, we have a question here: “We see how to include pods in a security group; is it useful to allow pod access to RDS, for instance?”

Amit Gupta: Yeah, so I think we definitely walked through that particular scenario, where you can enable [inaudible 00:53:46] access from your pods to your RDS instances. Now, if you are working in a default deny model in your Kubernetes-based infrastructure, you may also have a need to control access from your VPC resources to pods and Kubernetes services. We do provide a model where you can annotate a network policy with security group matching, and that is also a capability that’s part of the product. I did not show that today in the demonstration, but it’s essentially part of the same access control model. You first implement policies for your pod-to-pod communications, then we add security group annotations to your pods that allow access from pods to VPC resources, and then, for your VPC resources to pod access, we allow you to create policies where you match labels to security groups. That’s actually well documented in the product documentation, and with those capabilities, you should have everything you need for fine-grained access control. Okay.

Host: Okay. Well, it looks like we have a service mesh question: “How does Istio fit into this picture? And how does it fit with Tigera and the AWS Kubernetes offering?”

Amit Gupta: That’s a great question, actually. Istio is a critical part of our policy and security model and the product architecture. For those of you that may not know, I believe at QCon last year, Tigera announced integration with Istio and application layer policies as part of our open source project, Calico. In the specific product that I showcased today, Tigera Secure Cloud Edition, we do not yet have the Istio integration in the current release, and what that comes down to is that you won’t be able to set application layer policies yet, but that’s something that we are actively working on, and you should see it added pretty soon.

Host: Great. Okay, well, we’ve come almost to an hour, so this is the end of our webinar. I’d also like to bring your attention to a special offer that we have: a free 90-day credit for Tigera Secure Cloud Edition on AWS, running on Amazon EKS. For more information, visit that link, which is www.tigera.io/promotion/, and you can request that credit, try out Tigera yourself for 90 days, and see how it works. So once again, I would like to thank our presenters for their time and thank you for attending. This webcast will be available on demand immediately after we finish, so if you want to review anything, you can. Once again, thank you for coming.