Kubernetes & Calico: Network Policies, Security, and Auditing

 

Of course, Tigera’s ability to provide Kubernetes pod networking and facilitate service discovery is extremely valuable, but its real superpower is that both Tigera’s commercial offerings and the open-source Project Calico can implement network security policies inside a Kubernetes cluster.

Most external network security operates at the perimeter or at the physical network layer beneath Kubernetes. Because Tigera’s Calico runs inside Kubernetes, it can provide policy and security based on Kubernetes constructs like namespaces and deployments.

In this webinar, Drew Oetzel, Senior Technical Solutions Engineer at Tigera, will show you examples of implementing these types of policies for several common security and compliance use cases.

He’ll also show you why implementing these types of security policies is so important to keeping your ever-expanding Kubernetes workloads secure.

Michael: Hello everyone. Welcome to Kubernetes and Calico: Network Policies, Security, and Auditing. Before I hand the webinar over to our presenter, I have a few housekeeping items I would like to cover about this presentation and the webinar platform in general. First off, today’s webinar will be made available on demand after the live session. It’ll be accessible through the same link that you’re using now, and that should take about 15-20 minutes after the presentation is over. Copies of the presentation can be made available; just email [email protected] and we will try to get those to you.


Michael: And lastly, we’d love to hear from you during today’s presentation. This is interactive, so please feel free to ask questions as they come up. We like to field questions as they come up on the relevant slides, while the topic is still fresh. That being said, we will have a Q&A session at the end to field any questions we didn’t get to, or any further questions you might have. You can ask questions by going to the Ask a Question tab in your BrightTALK interface.


Michael: With that, I am pleased to introduce today’s speaker, Drew Oetzel. Drew is a senior technical solutions engineer with Tigera. He’s been working in enterprise software since the 90s. So, he’s been around. He spent seven years at Splunk, two and a half years at Mesosphere, and then worked at Heptio. He is a master of distributed systems, containers, and everything that goes along with them. And we’re really pleased to have him presenting today. Outside of technology, he says to ask him about history, gardening, or what he’s doing to curb his Reddit addiction. So, I guess we’ll have to find out if there is a history-of-gardening subreddit somewhere, and see if he’s in there.


Michael: So, let me hand the webinar over to Drew to take you on a journey through Kubernetes and Tigera Calico. So Drew, it’s all you.


Drew Oetzel: All right. Hey, thank you very much, Michael. I appreciate that introduction. And after this webinar, I think I am going to start the History of Gardening subreddit. So, you’re not helping with my addiction. Welcome everyone. Good afternoon, good evening, good morning, depending on where you are in the world. Thanks for coming to our webinar. Today, we’re going to be talking about [inaudible 00:02:33] policies into [inaudible 00:02:37]. But before we do that, we need to talk a little bit about: Who is Tigera?


Drew Oetzel: So, that’s the question I get asked all the time: who is Tigera? My working theory was, for a while, that it was a super secret Pokemon. But unfortunately, that is not true, or fortunately, depending on your outlook. But we do have a conference room named after Meowth. And if you’ve never heard of Meowth, do what I did and ask your kids. All right. So, this is who Tigera really is. Tigera is the commercial enterprise that is attached to Project Calico. Many of you have probably heard of Project Calico if you’ve been using Kubernetes or other distributed systems. It provides the networking plane for lots and lots of distributed systems worldwide. It’s very widely deployed: 80,000-plus known Kubernetes clusters worldwide, for those that haven’t turned off the phone-home feature.


Drew Oetzel: And so, it provides both a CNI plugin and network policies for Kubernetes systems. Now, who is Tigera? As well, we also offer something called Tigera Secure, which is our enterprise product that is built on top of Calico. It extends the benefits and use cases of Calico even further. And we’d be happy to discuss that further with you as well, if you’d like, after the call. And you can see down here at the bottom, Calico and Tigera are in use across Silicon Valley and many other institutions: banks, manufacturing, automotive, healthcare, places that tend to have strict security and compliance requirements and are looking into Calico and Tigera Secure to meet their Kubernetes security needs. We also have strategic partnerships. We work very closely with all of the major cloud providers: Azure, AWS, Google, and IBM Cloud.


Drew Oetzel: We also work very closely with both Docker and Red Hat on their Kubernetes offerings. And in fact, for many of these offerings, we are the networking layer that they chose for their Kubernetes offerings. In both Google’s and Docker’s hosted and supported Kubernetes offerings, you’ll find Calico operating by default. Okay. So, I mentioned this a little bit already. But what is Calico? So, Calico is “networking” for containers. I put networking in scare quotes there, because it’s a whole lot more. But that’s where it starts. Calico provides the networking plane for your containers inside some kind of distributed system. Specifically today, we’re going to be focusing on Kubernetes. But you will see Calico in places like Mesos and OpenStack as well. But for today, we’re going to be focusing on Kubernetes.


Drew Oetzel: It is a fully open source project, a member of the Cloud Native Computing Foundation. And it is the most widely used CNI plugin for Kubernetes. If you don’t know what a CNI plugin is, we have a separate talk on that. So, maybe come back for the CNI talk. And you can see our documentation and information about the project on projectcalico.org.


Drew Oetzel: All right. So, that’s great. But what can Calico do for me, you ask? Well, Calico can efficiently hand out IP addresses for your pods inside Kubernetes. But it does more than just hand out IP addresses. You can do things like create individual subnets per namespace for your pod IPs. You can even create individual subnets per rack, per data center, or per VPC, what have you, for your containers within Kubernetes. And you can also, believe it or not, create static IPs for specific pods. So, if you want to make sure a single pod always has the same IP address, you can absolutely do that. That is the Calico CNI. So, one thing to keep in mind: there are two main parts to Calico. There’s the Calico CNI, which hands out the IP addresses. And there’s also Calico network policy.
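
For reference, here’s a minimal sketch of what that static pod IP request can look like when the Calico CNI and Calico IPAM are in use. The pod name, namespace, image, and address below are only illustrative, and the requested address has to fall inside one of your configured Calico IP pools:

apiVersion: v1
kind: Pod
metadata:
  name: fixed-ip-pod                  # illustrative name
  namespace: my-namespace             # illustrative namespace
  annotations:
    # Ask Calico IPAM to assign this specific address to the pod
    cni.projectcalico.org/ipAddrs: '["192.168.42.10"]'
spec:
  containers:
  - name: app
    image: nginx                      # placeholder image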


Drew Oetzel: And I’m really underlining the difference here because they are separate things. You can run the CNI without running Calico for policy. You can run Calico for policy without running the Calico CNI. So, they mix and match. You don’t have to run the CNI to get Calico policies. Many of our offerings on things like [inaudible 00:08:06] have their own underlying CNI, but we still allow you to set the network policies that we’re going to discuss in today’s call, even though you’re not using the Calico CNI. So, I just wanted to underline that important distinction here: the Calico CNI is not required to run Calico policies.


Drew Oetzel: Now, what do Calico policies do? Well, if I have to give a short answer, if I only have an elevator pitch, I would say Calico policies are like a firewall for Kubernetes. They’re a way for you to have explicit, detailed, complex firewall rules for your workloads inside Kubernetes. And this is very important. Your security team is going to love this, because they’re going to want to see the same kind of security that they’re used to outside of Kubernetes, inside Kubernetes. So, you can secure your east/west traffic. You can monitor your east/west traffic for compliance. And then, you can also block insider threats and insider error. I always like to say, sometimes the insider isn’t a threat. Maybe they just made a little mistake. So, there’s plenty of room for insider error to cause you network problems as well.


Drew Oetzel: All right. So, we’ll take a quick Calico anatomy review as it comes to running inside Kubernetes. My apologies for the slightly small diagram here. But this is basically how Calico runs. So, Calico runs as a DaemonSet on your Kubernetes cluster. Each one of your Kubernetes worker nodes is going to have a Calico pod sitting on it that will be facilitating both the networking and the network policies inside that cluster. Like the rest of Kubernetes, Calico uses etcd to store its information. With Calico you have the option: you can use the etcd that Kubernetes uses, or you can use a separate etcd. Now, in certain circumstances, for very high volume or very large clusters, it might make sense to move Calico to a separate etcd. Most of the time, for default installs, Calico is going to be using the etcd that Kubernetes is using.


Drew Oetzel: Calico then sits inside each one of your nodes. It builds the iptables rules that you specify inside each one of your nodes, and then facilitates that communication. As you see here on the slide, we have a firewall icon here at the pod level, because Calico can secure communication to individual workloads at the pod level. But we also have another firewall here, down at the node level, because Calico can also secure communication at the node level. So, if you’re wanting to control at the node level, absolutely. If you want to control at the pod level or [inaudible 00:11:25] level, the same as well. Then, on top of that, Calico runs on any of the cloud providers’ networking solutions, or your own in-house networking solution.


Drew Oetzel: But the key takeaway from this slide is that Calico is a distributed networking solution for inside your cluster. It runs as a DaemonSet, so the workload will be distributed evenly across your cluster, and it provides the most resilience as well. Okay. So, that’s the anatomy of a Calico install on Kubernetes. And that’s kind of who Tigera is. Now, let’s start to talk about declarative network policy. So, why do we need network policies inside Kubernetes? Sometimes, my customers do ask me that. Kubernetes was originally designed as a very open and forgiving networking environment. In a typical Kubernetes install, all of the pods can talk to all of the other pods if they need to. All east/west traffic is going to be wide open. By default, there are no policies and no limits on any of those things inside Kubernetes.


Drew Oetzel: And that’s great for developing things. It’s great for inventing stuff. But here’s the thing to keep in mind. If you have N services, N pods, running inside your cluster, that’s going to be roughly N squared possible connections. So, if I’m running nine services, that means there’s 81 possible connections that I’ve got to worry about with those nine services. I see here there’s a question: Is Calico running at layer three/four? Yes, absolutely. Calico is a creature of layer three. Is there any feature to encrypt the network in Calico, similar to Docker Swarm’s network encryption? No. There’s no encryption from open source Calico currently available. That’s definitely on some of the different feature requests that we see. But currently, Calico does not provide encryption. With Tigera Secure, you can absolutely start to get things like encryption, particularly along with Istio. If you run Istio along with Tigera Secure, you’ll definitely get a decent level… no, a very high level of encryption and authentication for your workloads.


Drew Oetzel: Sorry about that, those were two questions. But again, as I was saying, if you consider the number of workloads that you’re running, and then you square that, that’s the number of connections that are possible inside your Kubernetes cluster. And that’s great, until you think about one thing. If for some reason one of your pods, one of your workloads, gets corrupted, and an attacker is able to gain some control inside that pod, guess what? Now, they have access to all of those 81 connections. They can go anywhere they want once they get that beachhead inside your cluster. So, a wide-open cluster is maybe a great idea for dev. But even for dev, I would argue it’s not. It’s definitely not the way you’re going to want to run any kind of production Kubernetes cluster. You’re going to want to have a handle on these connections, and make sure your microservices are only talking to the microservices they’re supposed to talk to.


Drew Oetzel: In fact, you want to know when they’re trying to talk to things they’re not supposed to. We have another question here: How can Calico end up using the same IP address in different namespaces? When I was in Japan, this happened: pods in different namespaces, but with the same IP address, ended up on different cloud nodes. The result was the whole cloud went down, with the same IP address on two pods.


Drew Oetzel: I don’t know about that. I’d have to dig into that particular problem there. It sounds like a misconfiguration somewhere. I’ve never honestly seen that happen. But it definitely sounds like a misconfiguration. Ah! All right. So, again, like I was saying here, all of this wide-open networking is kind of a bonanza for any kind of hacker. Does Calico replace kubenet or work with kubenet? It works with kubenet. So, what we recommend at Tigera is for you to take a look and see: what are some of these possible unused connections? What possibly could happen if my load balancer has direct access to my database layer, and my middleware layer has direct access to my load balancer? Things like that, things that you would never allow in a traditional stack, you’re going to need to start to take a look at those. And then, start to limit those.


Drew Oetzel: So, that’s what we’re going to talk about today: how you can build rule sets inside Kubernetes to limit these types of connections, and ensure that your workloads are only talking to what you want them to talk to. So, one option that is very easy to build with Calico is namespace isolation. You build a global rule set for your Kubernetes cluster using Calico. And that global rule set would say, “All right. Namespace A is only allowed to talk to Namespace A. Any traffic that’s coming in from a different namespace will automatically be dropped.” And then, the same thing for Namespace B. This would be very handy, for example, if you wanted to keep separate dev and staging namespaces inside your cluster. Maybe you wanted to have staging and dev alongside each other. You could absolutely set up these namespace isolations to make sure that that kind of multi-tenancy would work very well.
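
One common way to express that kind of namespace isolation is a per-namespace policy that only allows ingress from pods in the same namespace. This sketch uses a plain Kubernetes NetworkPolicy rather than the Calico global rule set Drew describes, and the namespace name is illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces    # illustrative name
  namespace: namespace-a              # repeat this policy in each namespace you want isolated
spec:
  podSelector: {}                     # applies to every pod in namespace-a
  ingress:
  - from:
    - podSelector: {}                 # only allow traffic from pods in this same namespace

Because the policy selects every pod in the namespace, any ingress not matched by that single rule, which includes traffic arriving from other namespaces, is dropped.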


Drew Oetzel: But we recommend a step beyond that. Namespace isolation is fine. In fact, it’s an important thing for multi-tenancy and things of that nature. But we recommend going beyond that and doing more fine-grained policies within each namespace as well. Because if you have a wide-open namespace, as in the previous slide here, that still leaves a lot of places for a hacker to be able to gain access across that namespace. So, what we recommend is two-tiered. We recommend the namespace isolation, and then, within each namespace, custom rule sets for your microservices. You can set these rule sets such that, you know, microservice A can only talk to microservice B, and microservice B can only talk to the database, et cetera. So, that’s the kind of rule set you can absolutely, and we highly recommend you, create inside your namespaces.


Drew Oetzel: So again, like I was saying, some examples of these types of policies are multi-tenancy, separating staging from dev, and isolating prod from staging. You can translate traditional firewall rules here. So, you can literally use Calico to create the three-tiered architecture we’re all used to. You’ve got the DMZ on top. You’ve got the middle tier in the middle. And then, you’ve got the database layer on the bottom. You can absolutely recreate that infrastructure inside Kubernetes. You could have a DMZ namespace, a middleware namespace, and a database namespace, and recreate that same kind of three-tiered firewall security infrastructure that we all know and love.


Drew Oetzel: The other thing, of course, is if you’re offering some kind of SaaS operation, you can use namespaces and namespace isolation for tenant separation. Different paying customers really probably would not appreciate their data, their network traffic, mingling freely in a Kubernetes cluster. The other thing, of course, is that these give you fine-grained firewalls, so you can allow the communication you want and disallow the communication you don’t want. If you’re very careful and do your due diligence, you can even build what we call a zero trust network, so that the only traffic on your network is whitelisted traffic. That matters for things like PCI compliance or HIPAA compliance. If you have a namespace that is identified as PCI, you can write the rules that say, “Only things within the PCI namespace can talk to other PCI workloads.” So, any workload that’s not part of the PCI namespace that tries to communicate with the PCI namespace will automatically be dropped. You can show those rules to your auditors, which will of course make them happy.
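
Here is a hedged sketch of that PCI-style rule with Calico, assuming workloads that handle cardholder data carry a label such as compliance: pci. The label name and order value are illustrative, and a real deployment would also need allowances for things like DNS:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-pci                  # illustrative name
spec:
  order: 400                          # illustrative; lower order values are evaluated first
  selector: compliance == "pci"       # applies to every workload carrying this label
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: compliance == "pci"   # only other PCI-labeled workloads may connect in
  egress:
  - action: Allow
    destination:
      selector: compliance == "pci"   # and PCI workloads may only talk to each other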


Drew Oetzel: Besides namespaces, there’s one other key Kubernetes concept that I want to highlight here, and that is labels. So, workloads, secrets, any object that you create in Kubernetes can have a label. If you’ve got secrets, if you’ve got load balancers, if you’ve got different services, the different objects that you create in Kubernetes, you can put these labels on them. These are arbitrary key-value pairs that you set up. And at this point in the talk, I always like to take a little step back and kind of climb up on my Kubernetes soapbox here. So, forgive me. But whether or not you use Calico, whether or not you use Tigera, I want you to spend some time thinking about your Kubernetes taxonomy. I want you and your team to sit down and figure out your namespace taxonomy and your label taxonomy, because that is how you run a good, safe, secure Kubernetes cluster. Whether you’re using Calico or not, obviously I think you should. But even if you’re not, I want you to spend the time and do the due diligence on your namespace and label taxonomy, because they are so important to troubleshooting, to management, and to security.


Drew Oetzel: All of the security policies that we’re going to discuss hang off of namespaces and hang off of labels. So, they are the key things for you to set up in your Kubernetes cluster before you’re going to be able to build any of this other security around it. So, take the time to sit down and figure out your taxonomy. All right. Forgive me. I’m going to climb back down off my soapbox. Labels are arbitrary key-value pairs, and they are another way for you to establish rules for which workloads can talk to each other. So, you could establish a rule that says, “Label equals database.” And then, you can say, “Anything that has that particular label is allowed to talk to these microservices. If it doesn’t have that label, it can’t talk to those microservices.” That’s the kind of rule that you can then build inside your namespaces to limit that kind of communication. Now, when it comes to actually using these labels, you have multiple options in Kubernetes.


Drew Oetzel: You have equality-based selectors for your labels, so the double equals sign (==) or the bang equals sign (!=). And you also have set-based selectors, so you have “in”, “notin”, and “exists”. So, you could say, “I want this to apply to anything in production or QA,” or, “I want this to apply to anything not in the frontend or the backend, so my middle layer, for example.” You can use these two different types of label selectors when you’re building your policies inside Kubernetes. Now, out of the box, Kubernetes does offer some fairly simple network policies. Whether or not you’re running Calico, you can set up these policies. Kubernetes network policies are based on a pod label selector and a namespace label selector. Kubernetes policies are limited to a namespace. You cannot have a global Kubernetes policy. If you want to have a global policy, you’re going to have to run Calico.
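
As a small illustration of those set-based selectors, a Kubernetes NetworkPolicy pod selector can combine them roughly like this. The environment and tier keys, their values, the policy name, and the rule body are all just examples:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: select-middle-tier            # illustrative name
  namespace: my-namespace             # illustrative namespace
spec:
  podSelector:
    matchExpressions:
    - key: environment
      operator: In                    # anything labeled environment=production or environment=qa
      values:
      - production
      - qa
    - key: tier
      operator: NotIn                 # but not the frontend or backend, so the middle layer
      values:
      - frontend
      - backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}                 # illustrative rule: allow from pods in the same namespace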


Drew Oetzel: But you can set up namespace-specific policies, saying, “All right. This pod can talk to that pod. This pod can’t. This pod can talk to the other pod.” You can set up those types of rules with just Kubernetes policy. You can specify protocols. You can specify ports. And this is going to improve as Kubernetes improves as well. But again, if you want to have any kind of global policy, you’re going to have to leverage Calico for that. So, I want to go through a Kubernetes network policy example here.


Drew Oetzel: So, here in our example, we have a workload that has the label “role: frontend”. We have another workload that has the label “role: helper”. Helper and frontend work together to do something at the top layer of our cluster. But down here at the bottom, we have another workload, our database, with the label “role: db”. So, what we want to facilitate here is we want our frontend to be able to initiate communication with our database on TCP port 6379. We also do not want our helper workload to be able to talk to the database at all. So, to do this, as with everything in Kubernetes, we will create a policy YAML. Now, you can tell this is a Kubernetes network policy YAML, because it says “NetworkPolicy” under kind. So, how about that for truth in advertising? And then, here, let’s just go through this YAML and kind of dissect some of the different settings here.


Drew Oetzel: Obviously, at the top here, the kind tells us that it’s a NetworkPolicy. Then metadata: like everything else in Kubernetes, policies have names and can have labels. So, we’ve given it a name, “name: my-network-policy”. And it’s associated with the namespace “my-namespace”. And then here, we have our spec. This is how we are going to select which workloads this policy is going to apply to. And here it is: “podSelector: matchLabels: role: db”.


Drew Oetzel: So, we’re saying this policy is going to apply to any workload, within the namespace “my-namespace”, that has the “role: db” label added to it. So, that’s how we select which workloads this rule is going to affect. And then, down at the bottom, we have the actual rules. So, the rule here is “ingress”, so this is about accepting connections. And then we say “from”. And then, we have another pod selector. And that pod selector is “matchLabels: role: frontend”.


Drew Oetzel: So, this means that any workload in the namespace “my-namespace” with the label “role: frontend” will be able to communicate with the workload labeled “role: db” on port 6379, using TCP. So, that’s the anatomy of a typical Kubernetes network policy. The end result is, if the helper tries to talk to the database pods, it will be blocked. That traffic will not be allowed, because there’s no rule allowing it. Only traffic on TCP 6379 from the frontend will be allowed.
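
Putting the pieces of that walkthrough back together, the policy on the slide looks roughly like this. It is reconstructed from the description above, so treat the exact names as approximate:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      role: db                        # the policy applies to the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend              # only the frontend may initiate connections
    ports:
    - protocol: TCP
      port: 6379                      # and only on TCP 6379

Anything not matched by that ingress rule, such as traffic from the helper pods, is dropped for the selected database pods.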


Drew Oetzel: So, that’s the anatomy of a very typical Kubernetes network policy. Now, we’re going to build on top of that. Calico network policies use Kubernetes network policies, but they also build on top of them. Calico network policies allow you to set up global rules. Calico network policies also let you work across namespaces; to control communication across namespaces, you can use Calico network policies. There’s a question: The pod selector used to select all pods, regardless of namespace. Is this still the case? And if yes, are there any workarounds? No. The pod selector in this previous example was only selecting pods in the namespace “my-namespace”. So, no, that’s not the case anymore. How does this policy change when using an RDS instance for the database?


Drew Oetzel: It’s going to vary, depending on which particular database you’re running and what you’re trying to do with it. That was a very generic policy, just for an example here. The other thing that’s really handy with Calico policies, and that is not present in Kubernetes policies, is the ability to use service accounts. If you’re not familiar with service accounts, service accounts in Kubernetes are essentially user accounts for your workloads, if you want to look at it that way. They’re a way for your workloads to authenticate themselves. So, you can actually have service-account-based policies with Calico networking, which is something that doesn’t exist at all for Kubernetes policies. We also allow richer label expressions. We also allow port specifications, so you can have a range of ports. Another key one is that we have a deny rule, so you can just drop or deny certain types of traffic. That’s not available in Kubernetes network policies.
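
To make those extras concrete, here is a hedged sketch of a namespaced Calico policy that combines a service account match, a port range, and an explicit Deny. The service account name, namespace, labels, and ports are all illustrative:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: db-access                     # illustrative name
  namespace: my-namespace             # illustrative namespace
spec:
  selector: role == "db"              # the policy applies to the database workloads
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      serviceAccounts:
        names:
        - backend-sa                  # only pods running as this service account are allowed in
    destination:
      ports:
      - 6379
      - "7000:7999"                   # Calico also accepts port ranges
  - action: Deny                      # explicit deny for anything else reaching these workloads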


Drew Oetzel: Now, once we start getting global policies like this, one thing Calico is going to require is a policy order. Because if you’ve got a deny policy, you don’t want that too high in your list, otherwise you’re going to drop everything. So, you have a policy order. So now, we have a much more firewall-like construct here, where we have a list of rules with priority. And then, you could put a default deny down at the very bottom of that. I have a question here: What’s best, labels or namespaces? The answer there is going to be “it depends”. If your workloads rarely work together, or only work together in certain circumstances, it might make sense to put them in separate namespaces. If you have different teams that manage those different workloads, it absolutely makes sense to put them in different namespaces.


Drew Oetzel: But if you only have one team, and the workloads are talking to each other all the time, it probably makes sense to put them in the same namespace. So, namespaces are very plastic. But one thing to keep in mind is that role-based access control in Kubernetes is going to be tied to namespaces. So, if I’ve got a team that I only want to have access to a certain set of workloads, I want those workloads in a separate namespace, so I can build those RBAC policies around it. All right. So, we’ve got the deny rules, we’ve got the policy order. Another key feature of Calico network policies is network sets. So if you, for example, have a database that exists outside of Kubernetes, and you want to be able to allow your workloads to talk to that, you could set up a network set with the IP range that your databases are running on. And then, you can build a rule around that.
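
A network set is just a labeled set of CIDRs that your policies can then select. A minimal sketch, with an illustrative name, label, and address range:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkSet
metadata:
  name: external-databases            # illustrative name
  labels:
    role: external-db                 # policies select the set through this label
spec:
  nets:
  - 10.10.20.0/24                     # illustrative CIDR where the external databases live

An egress rule in a Calico policy could then allow traffic whose destination selector is role == "external-db", the same way it would select pods.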


Drew Oetzel: The other thing many people don’t know is that Calico will run on a standalone Linux node. So, if you’ve got a Linux server, you can install Calico on that server. It’s not running Kubernetes. Maybe it’s running a database. Maybe it’s running something else. You can install Calico on it, and then that node can take part in your Calico policies. You can then build that into your Kubernetes system. So, that’s one of the really cool features of Calico. That’s one of those Calico superpowers that I’m telling everyone about all the time. A lot of folks have Kubernetes running, but then they have some services that Kubernetes needs but that aren’t running inside Kubernetes. They’re running right next to Kubernetes, maybe in the same subnet, maybe in the same VPC. You can actually bring those workloads that are adjacent to Kubernetes under the Calico umbrella. So, that’s a huge, huge win for security inside Kubernetes.
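
When Calico runs on a standalone host like that, the host's interface is represented to policy as a host endpoint. A minimal sketch, where the node name, interface, address, and label are illustrative:

apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: db-server-eth0                # illustrative name
  labels:
    role: external-db                 # policies can now select this host by label
spec:
  node: db-server                     # must match the name Calico knows the host by
  interfaceName: eth0
  expectedIPs:
  - 10.10.20.5                        # illustrative address of the standalone database host

One caution: once a host endpoint exists, Calico starts enforcing policy on that interface, so in practice you would put your allow rules in place first (Calico's default failsafe ports help keep management traffic like SSH reachable).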


Drew Oetzel: We also have host policies available. So again, you can go all the way down. Kubernetes policies secure the pod level. Calico policies allow you to secure all the way down to the host level. All right. So, let’s take a look at a Calico policy example. In this example, we have a workload that something went wrong with. We have a workload that maybe was compromised. It’s behaving strangely. It’s doing things it shouldn’t be doing. We’ve noticed that in the log file. We’ve got [inaudible 00:36:10] or Splunk or something like that, and we notice this anomaly. It’s behaving strangely. What we can do, if we have Calico, is create a network policy called Quarantine. This network policy gets applied to a workload when you add the label “quarantine” to it.


Drew Oetzel: So, for example, I’ve got a pod running. It starts doing a port scan. It starts trying to connect to my web frontend. It starts trying to connect to my database. It’s acting strangely. It’s up to no good. There are definitely shenanigans. I can go into Kubernetes and add a label to that particular workload that says “quarantine equals true”. Once I add that label to that workload, this Calico policy is going to come into effect. And what does this policy do? This policy blocks all ingress and blocks all egress. That’s great. So now this thing can’t hurt anybody anymore. It’s been neutralized as far as its ability to communicate with, or receive further communications from, the outside world. But that’s not enough. We need more. We need to figure out what went wrong. What’s going on? Who is it trying to contact? Who is trying to contact it? We need to do some forensics on this. So, we’re also going to add a rule to log all egress attempts and to log all ingress attempts, so that we can then go and see, “All right. Look. It’s trying to contact this shady IP address. It’s trying to join this botnet. Et cetera.”


Drew Oetzel: Or, “It’s trying to port scan our databases again.” You can see what it’s up to by logging those deny packets. Now, you might say, “Well, why don’t you just kill the pod?” Well, you could. And that’s not a terrible idea. But when you kill the pod, all of the information about the hack goes away with it. So, in this circumstance, we keep the pod running. We can then go in and do forensics if we need to. We can even access [inaudible 00:38:34] the node that it’s running on, and dig deeper inside that way. [inaudible 00:38:39].


Drew Oetzel: So, that’s kind of our scenario here. Now, let’s take a look at the actual YAML. Oh! There’s a question here: Would you suggest installing Calico on non-Kubernetes nodes that exist in other data centers? Yes! If your Kubernetes cluster needs to talk to those nodes, it’s not a terrible idea to put Calico on there. I just mentioned being in the same subnet because that’s the most typical sort of use case I’ve seen. But absolutely, if you’ve got your database on another continent, but your Kubernetes cluster wants to talk to it, you could absolutely do that. There’s another question here: Is it important that logging is declared before deny? Yes! Absolutely! Order matters! If I try to log something after it was denied, it’s too late; it’s already dropped and gone. So, absolutely yes. Order matters. We want to log and then deny.


Drew Oetzel: If we tried to deny then log, there’d be nothing left to log. Great questions. All right. So, let’s go through our YAML. Starting at the top here, how do I know this is a Calico network policy? Because it says “projectcalico.org/v3” for the API version. So, that’s your big clue. Now, for kind here, we have “GlobalNetworkPolicy”. This is a global network policy. This is going to apply anywhere in this Kubernetes cluster. So, of course, we have our metadata name here. But here’s our spec. You’ll notice our spec is a little different. We have an order here, an order of 200, which puts this policy near the top of the list, since Calico evaluates policies with lower order values first. Now, quarantine…


Drew Oetzel: And then we have our label selector here, “quarantine == 'true'”. So, any pod that has that label added to it will immediately come under the control of this policy. And here’s what this policy is doing. Like I said, any ingress, any packet that’s coming to this pod, to this workload, will be logged and dropped. Any packet that this workload tries to send out will be logged and dropped. Question: Don’t you normally already have a log when packets behave strangely? Yes, but this is additional logging. This is not just a flow log. The Log action in Calico does more than a typical flow log; it’s going to give you more information about the packet than a typical flow log. So, the log is above and beyond.
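
Reconstructed from that description, the quarantine policy looks roughly like this. The label key and order value are as described on the slide; treat the rest as approximate:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: quarantine
spec:
  order: 200                          # evaluated ahead of policies with higher order values
  selector: quarantine == "true"      # grabs any workload that gets this label
  types:
  - Ingress
  - Egress
  ingress:
  - action: Log                       # log the packet first...
  - action: Deny                      # ...then drop it
  egress:
  - action: Log
  - action: Deny

In practice you would attach it with something like kubectl label pod <pod-name> quarantine=true, where the pod name is whatever workload you want to isolate.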


Drew Oetzel: All right. So, you’re like, “Okay. That’s great, Drew. I love Calico policies. How do I use them?” Well, Calico policies are applied using calicoctl, the Calico command line tool. calicoctl is tied directly to kubectl, so the same authentication, the same RBAC, that you use for kubectl, you will also use for calicoctl. And like kubectl, calicoctl runs on Mac, Windows, and Linux. A lot of customers also run calicoctl in a pod inside their cluster, so they don’t have to install it on their workstations. But you can absolutely install it on your workstation as well. You can download calicoctl from GitHub. And you can get all of the information on all of the different commands from our web page. It has almost identical syntax to kubectl. So, you’re going to be doing things like “calicoctl apply -f” and then the policy YAML. It’s going to be very, very similar syntax.


Drew Oetzel: Do you have Calico logs to show? No. But if you’d like to set up a meeting afterward, I’d be happy to show you those. I don’t have any Calico log files in this particular slide deck. So, that’s kind of your lightning tour of network policies in Kubernetes: the baseline Kubernetes network policies, and then, on top of that, the much more complex Calico policies. If you’re interested in doing a deeper dive, or want to talk to me one on one, I’m happy to set up a meeting. We can do deeper dives into these. I can show you Calico log files and everything. If you’re currently running Calico, you can take a look at the Calico log files. They’re in [inaudible 00:43:42] Calico, on your worker nodes. Or if you’re doing some kind of log aggregation, you probably have access to them there. But I’d be happy to meet with you and show them to you.


Drew Oetzel: But I also want to talk a little bit about Tigera Secure, because Tigera Secure is built on top of Calico. You get all of the goodness, all of the superpowers, of Calico. But beyond that, you also get tiered network policies. Calico network policies are one big long list. In Tigera Secure, we have what are called tiered network policies. So, you can have a set of golden rules that apply no matter what, a set of middle rules that apply sometimes, and then a set of rules that are workload specific. Your SecOps team can set up the high-level rules, like “no data ever goes to shady parts of the internet.”


Drew Oetzel: And then, for your developers, you can give them little sandboxes so they can build specific rules for their workloads. The other thing is that we also have a robust GUI in Tigera Secure. We have reporting and compliance capabilities in Tigera Secure. And it is, of course, a fully supported enterprise offering. We have more Tigera webinars. I’m giving talks at meetups all the time across the country, so come see me in person if you’d like, if there’s a meetup nearby. In fact, if you’re in Dallas, I’m delivering a talk at a meetup tonight here in Dallas. So, come on out. Otherwise, we have plenty more recorded webinars, and we’ll be offering more of these live webinars soon. All right.


Michael: Great. All right. Well, Drew, if that’s the end of your presentation, we’ll now open it up for a final Q&A if anybody has any final questions. Let me just go back to that slide that Drew was on previously. So, the next webinar we have coming up … We do two webinars a month. This one will be after the Fourth of July; it’s on the tenth of July. And it is on Kubernetes, Helm, and Network Security. A lot of people are talking about Helm. And so, we’re going to bring in a qualified expert on Helm. He actually helped co-author some of the Helm tooling that is in Red Hat OpenShift.


Michael: So, that will be a great webinar, very technical. All of our webinars are on BrightTALK. They are also on our website at www.tigera.io/webinars. One great webinar that is not on BrightTALK is one we did with AWS and Atlassian. It was a case study on Atlassian using Tigera and AWS, and being able to provide network security and migration to Kubernetes in an environment where people could execute arbitrary code, which is probably a nightmare scenario. They were able to secure that and keep it safe. So, that’s a really good webinar to watch. The gentlemen from Atlassian are excellent presenters and very technical. I definitely learned something, and I’m in marketing. So, there’s definitely something to learn. So, it looks like we have a couple more questions. Drew?


Drew Oetzel: Yeah. I see one here: What does Calico or Tigera Secure do better or worse than Istio? That’s kind of a false comparison. Calico, Tigera Secure, and Istio work together. Istio tends to be more of a creature of layer seven. Calico tends to be more of a creature of layer three. So, there is some overlap in their capabilities. But generally speaking, they work very well together. Now, if you have Tigera Secure and Istio, you can use Tigera Secure to control your Istio policy. So, they work very well together. They’re fully integrated.


Michael: Okay. Great. Well, I think that is all the questions we have, other than someone saying, “Great. Thank you so much for the presentation.” We always love those questions. So, without further ado: Drew, thank you so much for presenting the information. That was a great talk.


Drew Oetzel: All right. My pleasure.


Michael: And thank you, everybody, for attending. And we will see you, hopefully, at our next webinar on July 10th. Take care. Have a great day.