Using Kubernetes to orchestrate VMs

The benefits of containers and of using Kubernetes for container orchestration are well known. But what do you do if you have a workload that is not amenable to being containerized? Perhaps you have a third-party VM-based workload that you can’t easily containerize yourself, or one that requires a different kernel or base OS than your Kubernetes platform runs.

What you really want is a way for Kubernetes to orchestrate VMs alongside standard container-based pods, so that each VM looks and feels just like an ordinary pod. Two recent projects that aim to let you do just this are KubeVirt and OpenShift CNV.

In this blog I’ll get hands-on with KubeVirt in a step-by-step guide that you can follow yourself: add KubeVirt to your cluster using Calico networking, then use Calico network policy to secure the VMs.

Before we begin

I’m using Ubuntu 20.04 and two bare metal servers for my development cluster. Step 1 explains how to create a similar development cluster, but you can safely skip it if you already have a Kubernetes or OpenShift environment of your own.

Requirements:

  1. At least one host with 2 CPUs, 4GB of RAM, and 20GB of storage
  2. kubectl command line utility
  3. SSH client

Step 1: Create a cluster

Before we begin creating a cluster, let’s make our host suitable for Kubernetes. Check out this excellent tutorial and prepare your host; the typical steps are sketched below.
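If you don’t want to follow a separate tutorial, the preparation typically boils down to disabling swap and installing a container runtime plus the kubeadm tooling. Here’s a minimal sketch for Ubuntu 20.04 (the apt repository shown was the standard one at the time of writing; check the current Kubernetes docs before copying):

# Kubernetes requires swap to be disabled
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Install a container runtime (Docker, for simplicity)
sudo apt update && sudo apt install -y docker.io

# Install kubeadm, kubelet and kubectl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubeadm kubelet kubectl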

Let’s create a Kubernetes Cluster

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Execute the following commands to configure kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
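At this point kubectl should be able to talk to the cluster; a quick sanity check:

kubectl get nodes

The node will report NotReady until a pod network add-on is installed, which we’ll do in Step 2.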

Remove the taints on the master so that you can schedule pods on it.

kubectl taint nodes --all node-role.kubernetes.io/master-

It should return the following:

node/<your-hostname> untainted

Step 2: Install Calico

Install Calico using the latest manifest:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
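Before moving on, you can confirm the Calico pods come up cleanly. The label below matches the calico-node DaemonSet that the manifest installs into kube-system:

kubectl get pods -n kube-system -l k8s-app=calico-node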

Step 3: Install KubeVirt

Using namespaces, we can isolate resources into logical blocks and manage them more easily.

kubectl create namespace kubevirt

It is recommended to use a host that supports hardware virtualization. To check whether your host(s) are capable, you can use the virt-host-validate binary:

virt-host-validate qemu
QEMU: Checking for hardware virtualization                 : PASS

If this command is missing on your host, you can install it using your distro’s package manager, or simply check whether the KVM device is available using ls /dev/kvm.
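On Ubuntu, for example, virt-host-validate ships in the libvirt-clients package (the package name may differ on other distros):

sudo apt install -y libvirt-clients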


By default, KubeVirt tries to leverage hardware virtualization. However, this is not available in all environments; in that case you can enable software emulation using:

kubectl create configmap -n kubevirt kubevirt-config \
--from-literal debug.useEmulation=true

Apply the following manifests to run the KubeVirt operator, which automatically installs all the required resources:

kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.32.0/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.32.0/kubevirt-cr.yaml

You can check the progress of the KubeVirt installation with this command:

kubectl -n kubevirt wait kv kubevirt --for condition=Available
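Once the condition is met, all pods in the kubevirt namespace should be Running:

kubectl get pods -n kubevirt

Optionally, you can also download the virtctl command line tool, which is handy for accessing VM consoles and starting or stopping VMs. The asset name below follows the naming convention used on the KubeVirt releases page for v0.32.0:

wget https://github.com/kubevirt/kubevirt/releases/download/v0.32.0/virtctl-v0.32.0-linux-amd64 -O virtctl
chmod +x virtctl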

Step 4: Create a simple VM

First, let’s create a namespace to isolate our resources for this demo.

kubectl create namespace kv-policy-demo

Using the VirtualMachineInstance (VMI) custom resource, we can now create VMs that are fully integrated with Kubernetes.

kubectl create -f - <<EOF
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: vmi-cirros
  namespace: kv-policy-demo
  labels:
    special: l-vmi-cirros
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
    resources:
      requests:
        memory: 64M
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/cirros-registry-disk-demo:latest
EOF

Note: If you are using software emulation, starting up a VM can be very slow, and it might take 5-6 minutes for the VM to finish coming up with an IP address.
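You can watch the VMI’s phase and IP address while it boots:

kubectl get vmi -n kv-policy-demo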

Create a service to map to the SSH port:

kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: vmi-cirros-ssh-svc
  namespace: kv-policy-demo
spec:
  ports:
  - name: cirros-ssh-svc
    nodePort: 30000
    port: 27017
    protocol: TCP
    targetPort: 22
  selector:
    special: l-vmi-cirros
  type: NodePort
EOF
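A quick check that the service exists and exposes the node port we expect:

kubectl get svc -n kv-policy-demo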

Confirm that we can SSH into the VM via the service’s node port, using your node’s IP address. The default password is gocubsgo.

ssh cirros@<your-node-ip> -p 30000
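If the node port isn’t reachable, you can also connect to the VM’s serial console using the virtctl tool downloaded earlier, logging in with the same cirros/gocubsgo credentials:

./virtctl console vmi-cirros -n kv-policy-demo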

Confirm that the VM can access the outside world by pinging Google from your new VM:

ping www.google.com -c 5

Step 5: Add network security

Apply the following policy to isolate the VM in its namespace. This locks down all incoming connections to SSH only, and prevents the VM from making outgoing connections. (Depending on your VM, you’ll want a different policy, but this simple policy is good for this tutorial.)

kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-allow-ingress-ssh-to-vm
  namespace: kv-policy-demo
spec:
  podSelector:
    matchLabels:
      special: l-vmi-cirros
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - port: 22
EOF

SSH into the VM and try to ping Google again.

You will not be able to, since the policy prevents all communication originating from the pod to the outside world. This is pretty powerful – you can secure your VMs using the same paradigms you use to secure pods!
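If your VM does need some outbound access, you can extend the policy instead of abandoning it. As a sketch (adjust to what your VM actually needs), adding an egress rule like the following under spec would allow DNS lookups over UDP and nothing else:

  egress:
  - ports:
    - port: 53
      protocol: UDP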

Cleanup

To clean up the namespace and the VM used in this guide, run this command:

kubectl delete namespace kv-policy-demo
