Containers are a great way to package applications with only the minimal set of libraries they require. They guarantee the same deployment experience regardless of where the containers are deployed. Container orchestration software pushes this further by providing the foundation needed to run containers at scale.
Linux and Windows support containerized applications and can participate in a container orchestration solution. There is an incredible number of guides and how-to articles on Linux containers and container orchestration, but these resources get scarce when it comes to Windows, which can discourage companies from running Windows workloads.
This blog post will examine how to set up a Windows-based Kubernetes environment to run Windows workloads and secure them using Calico Open Source. By the end of this post, you will see how simple it is to apply your current Kubernetes skills and knowledge to manage a hybrid environment.
Windows containers
A container is a lightweight packaging technique. Each container packages an application in an isolated environment that shares its kernel with the underlying host, which binds it to the limits of the host operating system. These days, everyone is familiar with Linux containers, a popular way to run Linux-based binary files in an isolated environment.
However, Windows also offers a container solution that allows users to package Windows-based applications in an isolated environment. Depending on your application’s framework and API calls, you can choose from several base images that Microsoft provides to create a Windows container. These base images range from full implementation of Windows APIs and services to a minimal version with a small footprint. It is worth noting that the build number of these base images must match your host Windows build number to run them on your operating system.
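As an illustration, a Windows container image can be defined with a Dockerfile much like a Linux one. The sketch below is hypothetical (the app.exe name is an assumption, and the ltsc2022 tag is only an example; the tag must match your host's build):

```dockerfile
# Windows Server Core base image; the tag must match the host's Windows build.
FROM mcr.microsoft.com/windows/servercore:ltsc2022
# Copy a hypothetical application binary into the image.
COPY app.exe C:/app/app.exe
# Run the application when the container starts.
CMD ["C:\\app\\app.exe"]
```

Microsoft also publishes smaller base images, such as Nano Server, when your application does not need the full Server Core API surface.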
Container orchestration
After creating a container image, you will need a container orchestrator to deploy it at scale. Kubernetes is a modular container orchestration software that will manage the mundane parts of running such workloads.
To make this post more interesting, I will share all the commands required to set up a hybrid Kubernetes cluster in Azure; you can open up your Cloud Shell window from the Azure Web Portal and run the commands if you want to follow along.
If you don’t have an Azure account with a paid subscription, don’t worry—you can sign up for a free Azure account to complete the following steps.
Resource group
To run a Kubernetes cluster in Azure, you must create multiple resources that share the same lifespan and assign them to a resource group. A resource group is a way to group related resources in Azure for easier management and accessibility. Keep in mind that each resource group must have a unique name.
The following command creates a resource group named calico-win-container in the australiaeast location. Feel free to adjust the location to a different region.
az group create --name calico-win-container --location australiaeast
Calico for Windows
Calico for Windows is officially integrated into the Azure platform, so every time you add a Windows node, it will come with a preinstalled version of Calico. To check this, use the following command to ensure EnableAKSWindowsCalico is in a Registered state:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAKSWindowsCalico')].{Name:name,State:properties.state}"
Expected output:
Name                                                State
-------------------------------------------------   ----------
Microsoft.ContainerService/EnableAKSWindowsCalico   Registered
If your query returns a Not Registered state, use the following command to enable the AKS and Calico integration for your account:
az feature register --namespace "Microsoft.ContainerService" --name "EnableAKSWindowsCalico"
After EnableAKSWindowsCalico becomes Registered, you can use the following command to add the Calico integration to your subscription:
az provider register --namespace Microsoft.ContainerService
Cluster deployment
A Linux control plane is necessary to run the Kubernetes system workloads; Windows nodes can only join a cluster as participating worker nodes. The following command creates an AKS cluster with a single Linux node and Calico as the network policy engine:
az aks create --resource-group calico-win-container --name CalicoAKSCluster --node-count 1 --node-vm-size Standard_B2s --network-plugin azure --network-policy calico --generate-ssh-keys
Windows node pool
Now that we have a running control plane, it is time to add a Windows node pool to our AKS cluster.
Note: Use Windows as the value for the --os-type argument.
az aks nodepool add --resource-group calico-win-container --cluster-name CalicoAKSCluster --os-type Windows --name calico --node-vm-size Standard_B2s --node-count 1 --aks-custom-headers WindowsContainerRuntime=containerd
Exporting the cluster key
Kubernetes implements an API server that provides a REST interface to maintain and manage cluster resources. Usually, to authenticate with the API server, you must present a certificate, username, and password. The Azure command-line interface (Azure CLI) can export these cluster credentials for an AKS deployment.
Use the following command to export the credentials:
az aks get-credentials --resource-group calico-win-container --name CalicoAKSCluster --admin
After exporting the credential file, we can use the kubectl binary to manage and maintain cluster resources. For example, we can check which operating system is running on our nodes by using the OS labels.
kubectl get nodes -L kubernetes.io/os
You should see a similar result:
NAME                                STATUS   ROLES   AGE     VERSION   OS
aks-nodepool1-64517604-vmss000000   Ready    agent   6h8m    v1.22.6   linux
akscalico000000                     Ready    agent   5h57m   v1.22.6   windows
Windows workloads
If you recall, the Kubernetes API server is the interface that we can use to manage or maintain our workloads.
We can use the same syntax to create a deployment, pod, service, or Kubernetes resource for our new Windows nodes. For example, we can use the same OS selector that we previously used for our deployments to ensure Windows and Linux workloads are deployed to their respective nodes:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/00_deployment.yaml
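The key part of such a manifest is a nodeSelector keyed on the OS label. The deployment referenced above likely contains something along these lines (the names, labels, and image are illustrative, not the actual contents of the file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-web-demo
  namespace: win-web-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-web-demo
  template:
    metadata:
      labels:
        app: win-web-demo
    spec:
      # Schedule these pods only on Windows worker nodes.
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: web
        image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
```

A Linux deployment would use the same pattern with kubernetes.io/os: linux, which is how a hybrid cluster keeps each workload on a compatible node.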
Since our workload is a web server built with Microsoft's .NET technology, the deployment manifest also includes a LoadBalancer service to expose the HTTP port to the Internet.
Use the following command to verify that the load balancer successfully acquired an external IP address:
kubectl get svc win-container-service -n win-web-demo
You should see a similar result:
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
win-container-service   LoadBalancer   10.0.203.176   20.200.73.50   80:32442/TCP   141m
Use the EXTERNAL-IP value in a browser, and you should see a page with the following message:
Perfect! Our pod can communicate with the Internet.
Securing Windows workloads
In the absence of a Kubernetes NetworkPolicy resource, the default behavior is to permit all traffic. While this is a great way to set up a lab environment, in a real-world scenario it can severely impact your cluster's security.
First, use the following manifest to enable the Calico API server:
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/01_apiserver.yaml
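Under the hood, a manifest like this typically asks the Tigera operator to deploy the Calico API server by creating an APIServer resource; a minimal sketch (assuming the operator-managed install that AKS uses) looks like:

```yaml
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```

Once this resource is reconciled, kubectl can manage Calico policy resources such as GlobalNetworkPolicy directly.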
Use the following command to get the API Server deployment status:
kubectl get tigerastatus
You should see a similar result:
NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      10h
calico      True        False         False      10h
Calico offers two security policy resources that can cover every corner of your cluster: the namespaced NetworkPolicy and the cluster-wide GlobalNetworkPolicy. We will implement a global policy, since it can restrict Internet addresses without the daunting procedure of explicitly writing every IP/CIDR into a policy.
kubectl apply -f https://raw.githubusercontent.com/frozenprocess/wincontainer/main/Manifests/02_default-deny.yaml
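A default-deny global policy is a short manifest. The one applied above is likely similar to this sketch (the selector is illustrative, not the actual contents of the file):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  # Match workloads in the demo namespace; because the policy lists
  # traffic types but no allow rules, matched traffic is denied.
  selector: projectcalico.org/namespace == "win-web-demo"
  types:
  - Ingress
  - Egress
```

Because a GlobalNetworkPolicy is not namespaced, a broader selector could apply the same default-deny posture across the whole cluster.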
If you go back to your browser and click the Try again button, you will see that the container is isolated and cannot initiate communication to the Internet.
Note: The source code for the workload is available here.
Clean up
If you have been following this blog post and did the lab section in Azure, please make sure that you delete the resources, as cloud providers will charge you based on usage.
Use the following command to delete the resource group:
az group delete -g calico-win-container
Conclusion
This post covered many reasons for running a containerized environment. If offering services at scale in an agile environment is your cup of tea, I recommend taking a look at Tigera's certification courses.
Calico courses are self-paced, step-by-step tutorials that prepare you to build containerized environments on different cloud platforms or local test environments. On top of that, you will learn about Calico integrations and security measures that will allow you to build a secure environment from start to finish. You can also find the instructions detailed in this blog in a webinar format: CNCF On-Demand Webinar: Securing Windows Workloads.
Ready to become an Azure expert? Enroll in our Calico Azure course now.