Deploying an application on Kubernetes can require a number of related deployment artifacts or spec files: Deployment, Service, PVCs, ConfigMaps, Service Account — to name just a few. Managing all of these resources and relating them to deployed apps can be challenging, especially when it comes to tracking changes and updates to the deployed application (actual state) and its original source (authorized or desired state). Versions of the application are locked up in the Kubernetes platform, completely decoupled from the versions of the specs themselves (which are typically tracked in external source code management repos).
Additionally, static specs typically aren’t reusable outside a given domain, environment or cloud provider, yet they involve a significant time investment to author and debug. Tooling can provide string replacement based on matching expressions, but that kind of automation also needs to be authored or customized to perform the tasks we require and can be error-prone.
Helm solves these problems by packaging related Kubernetes specs into one simple deployment artifact (called a chart) that can be parameterized for maximum flexibility. In addition, Helm enables users to customize app packages at runtime in much the same way that the helm of a ship enables a pilot to steer (hence the name). If you are familiar with OS package managers such as apt or yum and packages such as deb or rpm, then the concepts of Helm and Helm Charts should feel familiar.
This blog is a tutorial that takes you from basic Helm concepts, through an example deployment of a chart, to modifying that chart to fit your needs; in the example we will add a network policy to the chart.
Prerequisites
Helm uses Kubernetes; you will need a Kubernetes cluster running somewhere, a local Docker client, and a kubectl client and kubeconfig pre-configured to talk to your Kubernetes cluster. Helm will use your kubectl context to deploy Kubernetes resources on the configured cluster. The cluster should be using an SDN that understands Kubernetes network policies, like Calico, which you can install from the Installing Calico on Kubernetes guide.
Helm’s default installation is insecure, so if you’re trying Helm for the first time, doing so on a cluster where you won’t adversely affect your friends and colleagues is best. A blog about how to secure Helm this is not.
Installing Helm
There are two parts to Helm: the Helm client (helm) and the Helm server (Tiller). To install Tiller on a Kubernetes cluster, we use the helm client; strictly speaking, we don’t have to use the helm client to deploy the Tiller server but, as you’ll see, it is convenient to do so.
The helm client can be installed from source or from pre-built binary releases, via Snap on Linux, Homebrew on macOS or Chocolatey on Windows. The Helm GitHub repo also holds an installer shell script that will automatically grab the latest version of the helm client and install it locally. The examples here use an Ubuntu 16.04 instance where Kubernetes was installed locally using kubeadm.
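If you would rather use one of those package managers than the installer script shown below, the commands at the time of writing looked roughly like the following (package names change over time, so treat this as a sketch and confirm against your package manager's index):

# Snap on Linux (assumes snapd is installed)
sudo snap install helm --classic

# Homebrew on macOS (the Helm 2-era formula was named kubernetes-helm)
brew install kubernetes-helm

# Chocolatey on Windows
choco install kubernetes-helm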
$ curl http://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7236  100  7236    0     0  29435      0 --:--:-- --:--:-- --:--:-- 29534
$
Make the script executable and run it to download and install the latest version of helm; this step will require sudo permissions.
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Downloading http://kubernetes-helm.storage.googleapis.com/helm-v2.12.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
$
We can use the version command with the client-only flag (-c) to make sure the client is available:
$ helm version -c
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
$
Without the client-only flag this command would hang, as helm would look for Tiller using our kubeconfig, and we don’t have Tiller just yet.
By default, the helm client wants to set up a port-forward to the tiller pod using socat (more information in this github issue). In our case, socat is already installed as part of the preliminary setup of the Kubernetes cluster using kubeadm.
You do not need to perform this step; the output below simply shows when socat was installed.
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  cri-tools ebtables socat
The following NEW packages will be installed:
  cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
...
Setting up kubernetes-cni (0.6.0-00) ...
Setting up socat (1.7.3.1-1) ...
Setting up kubelet (1.13.2-00) ...
Setting up kubectl (1.13.2-00) ...
Setting up kubeadm (1.13.2-00) ...
Processing triggers for systemd (229-4ubuntu7) ...
Processing triggers for ureadahead (0.100.0-19) ...
#
(If you need to install socat you can do it from apt: sudo apt-get install socat).
At this point we should be all set to deploy Tiller on our cluster.
Tiller
Tiller typically runs on your Kubernetes cluster as a Deployment. For development, it can also be run locally and configured to talk to a remote Kubernetes cluster–that’s handy!
The easiest way to install Tiller into the cluster is simply to run helm init. Helm will validate that helm’s local environment is set up correctly (or set it up if necessary), use the current-context of the kubeconfig to connect to the same cluster as kubectl, and install the tiller pod.
init has a bunch of options to influence its behavior:
- --canary-image – install the canary build of Tiller (test out the latest features)
- --client-only – configure helm locally, but do not install Tiller
- --kube-context – use the named context in place of the current-context from your ~/.kube/config file
- --node-selectors – specify the node labels required for scheduling the Tiller pod
- --override – manipulate the specified properties of the final Tiller manifest
  - accepts any valid value for any valid property in the Tiller deployment manifest
- --output – skip the installation of Tiller’s deployment manifest and simply output the manifest to stdout in either JSON or YAML format
- --tiller-image – use a particular Tiller version other than latest
- --upgrade – upgrade Tiller to the newest version
You can find even more of them here: http://docs.helm.sh/helm/#helm-init
Let’s take a look at what init is going to deploy by using the --output flag and telling it to output yaml (or json if you prefer):
$ helm init --output yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.12.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
...
$
Here we see the tiller deployment and its service; we could simply save these files and use kubectl to deploy but what fun would that be?
Note the two environment variables: TILLER_NAMESPACE, which can be influenced by the --tiller-namespace flag to use a namespace other than kube-system, and TILLER_HISTORY_MAX, which is used to limit the maximum number of revisions saved per release (0 means no limit). Having an unlimited number of revisions has performance impacts, so in practice it should be set to something reasonable using the --history-max flag.
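As a quick sketch of those two flags (we will run a slightly different helm init below, so don't run this now), capping the history and pointing Tiller at its own namespace could look like this; tiller-world is just an example namespace name:

# keep at most 200 revisions per release instead of unlimited history
$ helm init --history-max 200

# or run Tiller in a namespace other than kube-system (create the namespace first)
$ kubectl create namespace tiller-world
$ helm init --tiller-namespace tiller-world --history-max 200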
Tiller and RBAC
It is a good idea to limit Tiller’s ability to install resources to certain namespaces. When using RBAC, we can scope any application’s access to the Kubernetes API by giving it an identity (a Kubernetes service account) and assigning it scoped permissions using Kubernetes roles and bindings.
For this walkthrough we will keep the configuration to a minimum, assigning Tiller the cluster-admin cluster role so it can deploy to any namespace.
Don’t do this at home if your cluster isn’t a local or test cluster!
First, create the service account:
$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
$
Now create the cluster role binding, assigning the cluster-admin role to the tiller service account:
$ kubectl create clusterrolebinding tiller-cluster-role \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-role created
$
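If you prefer declarative specs over imperative kubectl commands, the same two objects can be expressed as YAML and submitted with kubectl apply -f; this is equivalent to the two commands above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system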
Deploy Tiller
Now we can deploy Tiller, using the --service-account flag to use our tiller service account:
$ helm init --service-account tiller
Creating /home/user/.helm
Creating /home/user/.helm/repository
Creating /home/user/.helm/repository/cache
Creating /home/user/.helm/repository/local
Creating /home/user/.helm/plugins
Creating /home/user/.helm/starters
Creating /home/user/.helm/cache/archive
Creating /home/user/.helm/repository/repositories.yaml
Adding stable repo with URL: http://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/user/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: http://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$
Helm automatically places its config files in ~/.helm; to put the helm client files somewhere other than ~/.helm, set the $HELM_HOME environment variable before running helm init. Tiller is then deployed with an important note:
Please note: by default, Tiller is deployed with an insecure ‘allow unauthenticated users’ policy.
To prevent this, run helm init with the --tiller-tls-verify flag.
For more information on securing your installation see: http://docs.helm.sh/using_helm/#securing-your-helm-installation
Tiller is insecure by default — have we mentioned that?
We can find Tiller on our cluster like any other Kubernetes resource:
$ kubectl -n kube-system get po,deploy,svc -l name=tiller
NAME                                READY   STATUS    RESTARTS   AGE
pod/tiller-deploy-dbb85cb99-fl929   1/1     Running   1          2m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/tiller-deploy   1/1     1            1           2m

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/tiller-deploy   ClusterIP   10.97.133.203   <none>        44134/TCP   2m
$
Running the version command without -c should reveal both helm and tiller versions and ensure that helm can find and talk to Tiller:
$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
$
Time to start Helming!
Explore Charts
If you recall, a chart is a collection of spec files that define a set of Kubernetes resources (like Services, Deployments, etc.). Charts typically include all of the resources that you would need to deploy an application as templates. The chart resource templates enable a user to customize the way the rendered resources are deployed at install time by providing values for some (or all) of the variables defined in the templates. Charts also include default values for all of the defined variables, making it easy to deploy the chart with little (or no) customization required.
As with other package managers, we want to get the latest list of, and updates to, charts from our configured repos using the update command:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$
Note that helm skipped the “local chart repository” but received an update from our only other repository, the “stable” repo. When you first install Helm, it is preconfigured to talk to a local repo and the official Kubernetes charts repository. The official repository (named “stable”) contains a number of carefully curated and maintained charts for common software like elasticsearch, influxdb, mariadb, nginx, prometheus, redis, and many others.
List your helm repos to show what has been configured:
$ helm repo list
NAME    URL
stable  http://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts
$
Other repos can be added at any time with the helm repo add command. To get us started we’ll use the stable repo.
The helm search command will show us all of the available charts in the official repository (since it is the only repo configured and updated):
$ helm search
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
stable/acs-engine-autoscaler    2.2.2           2.1.1         DEPRECATED Scales worker nodes within agent pools
stable/aerospike                0.2.1           v3.14.1.2     A Helm chart for Aerospike in Kubernetes
stable/airflow                  0.15.0          1.10.0        Airflow is a platform to programmatically author, schedul...
stable/anchore-engine           0.11.0          0.3.2         Anchore container analysis and policy evaluation engine s...
stable/apm-server               0.1.0           6.2.4         The server receives data from the Elastic APM agents and ...
stable/ark                      3.0.0           0.10.1        A Helm chart for ark
stable/artifactory              7.3.1           6.1.0         DEPRECATED Universal Repository Manager supporting all ma...
stable/artifactory-ha           0.4.1           6.2.0         DEPRECATED Universal Repository Manager supporting all ma...
stable/atlantis                 1.1.2           v0.4.11       A Helm chart for Atlantis http://www.runatlantis.io
stable/auditbeat                0.4.2           6.5.4         A lightweight shipper to audit the activities of users an...
stable/aws-cluster-autoscaler   0.3.3                         Scales worker nodes within autoscaling groups.
stable/bitcoind                 0.1.5           0.15.1        Bitcoin is an innovative payment network and a new kind o...
stable/bookstack                1.0.1           0.24.3        BookStack is a simple, self-hosted, easy-to-use platform ...
stable/buildkite                0.2.4           3             DEPRECATED Agent for Buildkite
stable/burrow                   1.0.1           0.23.3        Burrow is a permissionable smart contract machine
...
Note the use of stable/ prepending all of the available charts. In the helm/charts project, the stable folder contains all of the charts that have gone through a rigorous promotion process and meet certain technical requirements. Incubator charts are also available but are still being improved until they meet these criteria. You can add the incubator repository (like any other repo) using the helm repo add command and pointing it at the correct URL.
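For example, adding the incubator repo looks something like this (the URL shown is the one in use at the time of writing; check the helm/charts project README for the current location):

$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
$ helm repo update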
Also note the CHART VERSION and APP VERSION columns; the former is the version of the Helm chart and must follow SemVer 2 format per the rules of the Helm project. The latter is the version of the actual software and is freeform in Helm but tied to the software’s release rules.
With no filter, helm search shows you all of the available charts. You can narrow down your results by searching with a filter:
$ helm search ingress
NAME                              CHART VERSION   APP VERSION   DESCRIPTION
stable/gce-ingress                1.1.2           1.4.0         A GCE Ingress Controller
stable/ingressmonitorcontroller   1.0.48          1.0.47        IngressMonitorController chart that runs on kubernetes
stable/nginx-ingress              1.3.0           0.22.0        An nginx Ingress controller that uses ConfigMap to store ...
stable/external-dns               1.6.0           0.5.9         Configure external DNS servers (AWS Route53, Google Cloud...
stable/kong                       0.9.2           1.0.2         The Cloud-Native Ingress and Service Mesh for APIs and Mi...
stable/lamp                       1.0.0           7             Modular and transparent LAMP stack chart supporting PHP-F...
stable/nginx-lego                 0.3.1                         Chart for nginx-ingress-controller and kube-lego
stable/traefik                    1.60.0          1.7.7         A Traefik based Kubernetes ingress controller with Let's ...
stable/voyager                    3.2.4           6.0.0         DEPRECATED Voyager by AppsCode - Secure Ingress Controlle...
$
Why is traefik in the list? Because its package description relates it to ingress. We can use helm inspect chart to see how:
$ helm inspect chart stable/traefik
apiVersion: v1
appVersion: 1.7.7
description: A Traefik based Kubernetes ingress controller with Let's Encrypt support
engine: gotpl
home: http://traefik.io/
icon: http://traefik.io/traefik.logo.png
keywords:
- traefik
- ingress
- acme
- letsencrypt
maintainers:
- email: kent.rancourt@microsoft.com
  name: krancour
- email: emile@vauge.com
  name: emilevauge
- email: daniel.tomcej@gmail.com
  name: dtomcej
- email: ludovic@containo.us
  name: ldez
name: traefik
sources:
- http://github.com/containous/traefik
- http://github.com/helm/charts/tree/master/stable/traefik
version: 1.60.0
$
The keywords section of the traefik chart includes the keyword “ingress” so it shows up in our search.
Spend a few moments performing some additional keyword searches – see what you come up with!
Deploy a Chart (a.k.a. Installing a Package)
We’ll explore the anatomy of a chart later, but to illustrate how easy it is to deploy a chart we can use one from the stable repo. To install a chart, use the helm install command, which only requires one argument: the name of the chart. Let’s start by doing just that, using the containerized Docker registry available from the official helm repo; you can check it out here.
Deploy the registry:
$ helm install stable/docker-registry
NAME:   kissable-clownfish
LAST DEPLOYED: Tue Feb  5 16:39:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                                        TYPE    DATA  AGE
kissable-clownfish-docker-registry-secret   Opaque  1     0s

==> v1/ConfigMap
NAME                                        DATA  AGE
kissable-clownfish-docker-registry-config   1     0s

==> v1/Service
NAME                                 TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
kissable-clownfish-docker-registry   ClusterIP  10.107.153.56  <none>       5000/TCP  0s

==> v1beta1/Deployment
NAME                                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kissable-clownfish-docker-registry   1        0        0           0          0s

==> v1/Pod(related)
NAME                                                READY  STATUS   RESTARTS  AGE
kissable-clownfish-docker-registry-5ccc49955-rh7vj  0/1    Pending  0         0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=docker-registry,release=kissable-clownfish" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:5000
$
What just happened?
Helm renders the Kubernetes resource templates by injecting the default values for all of the variables, then deploys the resources on our Kubernetes cluster by submitting them to the Kubernetes API as static spec files. The act of installing a chart creates a new Helm release object; the release above is named “kissable-clownfish” (if you want to use your own release name, simply use the --name flag with the install command).
A Helm Release is a set of deployed resources based on a chart; each time a chart is installed, it deploys a whole set of Kubernetes resources with its own release name. The unique naming helps us keep track of how the Kubernetes resources are related and lets us deploy the chart any number of times with different customizations.
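For instance, if you would rather have a predictable release name than a generated one like “kissable-clownfish”, you could install the chart like this (my-registry is just an example name):

$ helm install stable/docker-registry --name my-registry
$ helm status my-registry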
During installation, Helm will print useful information about which resources were created; in our case a ConfigMap, Deployment, Secret, and a Service. To see it again you can use helm status with the release name. Let’s use our new registry server; the NOTES section of the install output has some clues to using it, so let’s try it out.
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=docker-registry,release=kissable-clownfish" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward $POD_NAME 8080:5000
Forwarding from [::1]:8080 -> 5000
Forwarding from 127.0.0.1:8080 -> 5000
At this point your terminal should be hijacked for the port-forward. Start a new terminal and use Docker to interact with the registry. From the Docker client on the Kubernetes host, pull a lightweight image like alpine:
$ docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
6c40cc604d8e: Pull complete
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
$
Now re-tag it, prepending the image repo name with the IP:Port of our port-forwarded registry and try pushing it:
$ docker image tag alpine 127.0.0.1:8080/myalpine
$ docker image push 127.0.0.1:8080/myalpine
The push refers to repository [127.0.0.1:8080/myalpine]
503e53e365f3: Pushed
latest: digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 size: 528
$
Verify that the registry has our image by querying the registry API:
$ curl -X GET http://127.0.0.1:8080/v2/_catalog
{"repositories":["myalpine"]}
$
Success!
That was easy but we’re only using the default configuration options for this chart. Likely you will want to customize a chart prior to deployment. To see what options are configurable for a given chart, use helm inspect values.
Kill your port-forward with Ctrl+C (^C) and then inspect the chart values:
$ helm inspect values stable/docker-registry
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0

podAnnotations: {}

image:
  repository: registry
  tag: 2.6.2
  pullPolicy: IfNotPresent
# imagePullSecrets:
#   - name: docker
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  labels: {}
...
$
There are a number of configuration changes we can make; most notably, we can deploy an ingress record for the registry (useful if we have an ingress controller deployed). We can also configure a number of different storage backends like an S3 bucket and related Secret for storing AWS access keys.
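As a sketch of that kind of customization (the exact keys come from the chart's values.yaml, so verify them against the helm inspect values output for your chart version), a custom values file might look like this:

# custom-values.yaml -- illustrative overrides for stable/docker-registry
ingress:
  enabled: true
  hosts:
    - registry.example.com    # example hostname, replace with your own
# switch the storage backend from the default filesystem to S3
storage: s3
s3:
  region: us-east-1           # example region
  bucket: my-registry-bucket  # example bucket name

You would then pass the file at install time with helm install -f custom-values.yaml stable/docker-registry.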
But where are these values coming from?
(Partial) Anatomy of a Chart
A chart is a collection of files inside a directory named for the chart. Thus far we have only deployed a chart from a remote repo, but if you looked at the link to the docker-registry chart on GitHub, you saw these files. When the chart is installed, Helm downloads the contents of the directory as an archive and caches it locally in the helm client’s workspace directory. The default location is ~/.helm:
$ ls -l ~/.helm
total 16
drwxr-xr-x 3 user user 4096 Jan 28 17:28 cache
drwxr-xr-x 2 user user 4096 Jan 28 17:28 plugins
drwxr-xr-x 4 user user 4096 Jan 28 17:28 repository
drwxr-xr-x 2 user user 4096 Jan 28 17:28 starters
$
The cache directory contains local clones of remote chart repositories in archive format:
$ ls -l ~/.helm/cache/archive/
total 8
-rw-r--r-- 1 user user 6316 Feb  5 16:39 docker-registry-1.7.0.tgz
$
If we want to explore a chart we can expand the archive ourselves, or better yet, use a helm command to do it for us!
Using the fetch command with the --untar argument results in an unpacked chart on our local system:
$ helm fetch stable/docker-registry --untar
$ ls -l docker-registry/
total 24
-rw-r--r-- 1 user user  391 Feb  5 17:42 Chart.yaml
-rw-r--r-- 1 user user   62 Feb  5 17:42 OWNERS
-rw-r--r-- 1 user user 7396 Feb  5 17:42 README.md
drwxr-xr-x 2 user user 4096 Feb  5 17:42 templates
-rw-r--r-- 1 user user 2595 Feb  5 17:42 values.yaml
The Helm docs are pretty good at explaining what most of these are; for now we are going to concentrate on values.yaml.
Previously we inspected the values with the helm inspect values command; looking at the values.yaml we can see exactly what we were shown:
$ cat docker-registry/values.yaml
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0

podAnnotations: {}

image:
  repository: registry
  tag: 2.6.2
  pullPolicy: IfNotPresent
# imagePullSecrets:
#   - name: docker
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  labels: {}
...
$
The values file is where the author(s) of a chart set the default values for all chart variables, so all you have to do is type helm install and the chart should work. Some charts have prerequisites, but those are typically documented so you know ahead of time. For example, the WordPress chart declares these prerequisites:
Prerequisites
- Kubernetes 1.4+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure
Now that we know what can be changed, let’s change something!
Update a Release
When you want to change the configuration of a release, you can use the helm upgrade command. Helm will only update things that have changed since the last release. Upgrade works with the same override flags as install so that you can customize a chart on initial install or sometime later.
Our original Docker Registry Service is of type ClusterIP, which is why we needed the port-forward:
$ helm inspect values stable/docker-registry |grep -B2 -A4 ClusterIP
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
$
To confirm it was deployed that way, list the Kubernetes Service:
$ kubectl get service kissable-clownfish-docker-registry
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kissable-clownfish-docker-registry   ClusterIP   10.107.153.56   <none>        5000/TCP   10m
$
Let’s update the Service to use a NodePort so that we can expose the registry to the outside world.
There are two ways to pass configuration data during an update or upon initial install:
- --values (or -f) – specify a YAML file with overrides (see the example after this list)
- --set – specify overrides on the command line
  - Basic: --set name=value is equivalent to name: value
  - Key value pairs are comma separated
  - Multiple and complex values are supported by --set; for example, --set servers.port=80 becomes:

    servers:
      port: 80

  - and --set servers[0].port=80,servers[0].host=example becomes:

    servers:
    - port: 80
      host: example
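To illustrate the --values route, the override we are about to make with --set could just as easily live in a small YAML file (the file name here is arbitrary):

# registry-overrides.yaml
service:
  type: NodePort

and be applied with helm upgrade -f registry-overrides.yaml kissable-clownfish stable/docker-registry. Below we use --set instead.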
We know that type is a child of service, so we can set its value with --set service.type=NodePort:
$ helm upgrade --set service.type=NodePort kissable-clownfish stable/docker-registry
Release "kissable-clownfish" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Feb  6 11:48:32 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                                        TYPE    DATA  AGE
kissable-clownfish-docker-registry-secret   Opaque  1     11m

==> v1/ConfigMap
NAME                                        DATA  AGE
kissable-clownfish-docker-registry-config   1     19h

==> v1/Service
NAME                                 TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
kissable-clownfish-docker-registry   NodePort  10.107.153.56  <none>       5000:31836/TCP  11m

==> v1beta1/Deployment
NAME                                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kissable-clownfish-docker-registry   1        1        1           1          11m

==> v1/Pod(related)
NAME                                                READY  STATUS   RESTARTS  AGE
kissable-clownfish-docker-registry-5ccc49955-rh7vj  1/1    Running  0         11m

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services kissable-clownfish-docker-registry)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
$
Based on the output above, Kubernetes has updated the configuration of our Service. The NOTES section has even changed, indicating that we can now access our docker-registry service via http://NODEIP:NODEPORT
We can use helm get values to see whether that new setting took effect (according to what helm knows).
$ helm get values kissable-clownfish
service:
  type: NodePort
$
Not much information is presented here; helm get values only shows the values we have overridden, not the chart's full set of defaults.
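If you want to see the fully merged values (chart defaults plus our overrides), Helm 2's helm get values also accepts an --all flag that dumps the computed values; for example:

$ helm get values kissable-clownfish --all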
Let’s see if it worked:
$ export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services kissable-clownfish-docker-registry)
$ export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
$ curl -X GET http://$NODE_IP:$NODE_PORT/v2/_catalog
{"repositories":["myalpine"]}
$
Success! Let’s test it by pushing an image.
By default, Docker will only trust a secure remote registry or an insecure registry found on the localhost. Since Kubernetes runs our registry in a container, even when the Docker daemon and the docker-registry are running on the same host the registry is considered “remote”. Our port-forward used localhost so Docker allowed us to push, but won’t let us this time around. Try it:
$ docker image tag alpine $NODE_IP:$NODE_PORT/extalpine
$ docker image push $NODE_IP:$NODE_PORT/extalpine
The push refers to repository [192.168.225.251:31836/extalpine]
Get http://192.168.225.251:31836/v2/: http: server gave HTTP response to HTTPS client
$
There are two ways to address this situation: one is to configure the registry server to support TLS; instead, we will tell Docker to trust our non-secure registry (only do this in non-production environments). This allows us to use the registry without setting up SSL certificates.
Doing this next step on the Kubernetes host has a high chance of breaking a kubeadm-deployed cluster because it requires restarting Docker and all the Kubernetes services are running in containers. So use a Docker installation that is external to your Kubernetes host; after all, that is why we exposed the registry as a nodePort service!
Configure our Docker daemon by creating a config file under /etc/docker that looks like this (replace the example IP with the IP of your node, which you stored in NODE_IP earlier):
$ sudo cat /etc/docker/daemon.json
{
  "insecure-registries": [ "192.168.225.251/32" ]
}
$
To put those changes into effect, you’ll need to restart Docker:
$ sudo systemctl restart docker
Now your Docker daemon should trust our registry:
$ docker image push 192.168.225.251:31836/extalpine
The push refers to repository [192.168.225.251:31836/extalpine]
503e53e365f3: Pushed
latest: digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 size: 528
$
See if it worked:
$ curl -X GET http://$NODE_IP:$NODE_PORT/v2/_catalog
{"repositories":["extalpine","myalpine"]}
$
Now you’ve got a registry that your team can share! But everyone else can use it too; let’s put a limit on that.
Network Policies
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods. Network policies are implemented by a CNI network plugin, so you must use a CNI networking solution which supports NetworkPolicy (like Calico).
By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Adding a NetworkPolicy to a namespace selecting a particular pod causes that pod to become “isolated”, rejecting any connections that are not explicitly allowed by a NetworkPolicy. Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.
Create a Blocking Network Policy
For our first network policy we’ll create a blanket policy that denies all inbound connections to pods in the default namespace. Create one that resembles the following policy:
$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Submit it to the Kubernetes API:
$ kubectl create -f networkpolicy.yaml
networkpolicy.networking.k8s.io/default-deny created
$ kubectl get networkpolicy
NAME           POD-SELECTOR   AGE
default-deny   <none>         1m
$
This policy selects all pods (podSelector: {}) and specifies the Ingress policy type with no allow rules. By creating any network policy that selects them, however, we automatically isolate all pods in the namespace.
From an external client, query the docker-registry for its stored images:
external-client:~$ curl -X GET http://192.168.225.251:31836/v2/_catalog
curl: (7) Failed to connect to 192.168.225.251 port 31836: Connection timed out
Perfect. The presence of a network policy shuts down our ability to reach the registry pod.
Create a Permissive Network Policy
To enable clients to access our registry pod we will need to create a network policy that selects the pod and allows ingress from a cidr. Network policies use labels to identify pods to target; the registry pod has the labels “app=docker-registry” and “release=kissable-clownfish” so we can use those to select it. However, these labels are from the current chart release and what we really want is a way to modify the chart with a NetworkPolicy template that uses a parameterized selector and an ingress rule that allows a user to customize an ingress cidr value.
A lot of what we need to author the NetworkPolicy spec file is in the existing templates directory of our chart. Let’s take a look at the local copy of the service.yaml template file as an example:
$ cat docker-registry/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "docker-registry.fullname" . }}
  labels:
    app: {{ template "docker-registry.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.service.annotations }}
  annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
spec:
  type: {{ .Values.service.type }}
{{- if (and (eq .Values.service.type "ClusterIP") (not (empty .Values.service.clusterIP))) }}
  clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      name: {{ .Values.service.name }}
      targetPort: 5000
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
{{- end }}
  selector:
    app: {{ template "docker-registry.name" . }}
    release: {{ .Release.Name }}
The metadata name and labels sections can be copied verbatim. We want our new policy file to match what is set here (if you examine the other template files they are identical). Many of these variable values are generated automatically by Helm; template uses a named template defined in a file (_helpers.tpl in the templates directory) that can be used in other templates. See the helm docs for more info on named templates.
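For reference, a typical fullname helper looks something like the following; the actual definitions in the docker-registry chart's _helpers.tpl may differ slightly, so check the file in your unpacked copy:

{{/* Expand the name of the chart. */}}
{{- define "docker-registry.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/* Create a default fully qualified app name from the release and chart names. */}}
{{- define "docker-registry.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}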
We can also use the Service’s selector to create the matchLabels for our NetworkPolicy spec.
Putting that all together in our new NetworkPolicy template looks like this:
$ cat docker-registry/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "docker-registry.fullname" . }}
  labels:
    app: {{ template "docker-registry.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  podSelector:
    matchLabels:
      app: {{ template "docker-registry.name" . }}
      release: {{ .Release.Name }}
  ingress:
    - from:
      - ipBlock:
          cidr: {{ .Values.networkPolicy.cidr }}
Finally, we will add a section to the values.yaml file that lets a user specify a cidr, like this:
$ tail docker-registry/values.yaml
fsGroup: 1000

priorityClassName: ""

nodeSelector: {}

tolerations: []

networkPolicy:
  cidr: 192.168.225.0/24
If we want to see what Helm does to render our new spec we can use helm template and pass it the --execute argument and the path to our new template file so that it only renders the single template:
$ helm template docker-registry/ --execute templates/networkpolicy.yaml
---
# Source: docker-registry/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: release-name-docker-registry
  labels:
    app: docker-registry
    chart: docker-registry-1.7.0
    release: release-name
    heritage: Tiller
spec:
  podSelector:
    matchLabels:
      app: docker-registry
      release: release-name
  ingress:
    - from:
      - ipBlock:
          cidr: 192.168.225.0/24
$
LGTM!
Run the upgrade command again, this time using the local version of the chart:
$ helm upgrade --set service.type=NodePort kissable-clownfish docker-registry/
Release "kissable-clownfish" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Feb  6 14:32:41 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                        DATA  AGE
kissable-clownfish-docker-registry-config   1     2h

==> v1/Service
NAME                                 TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE
kissable-clownfish-docker-registry   NodePort  10.107.153.56  <none>       5000:31836/TCP  2h

==> v1beta1/Deployment
NAME                                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kissable-clownfish-docker-registry   1        1        1           1          21h

==> v1/NetworkPolicy
NAME                                 POD-SELECTOR                                     AGE
kissable-clownfish-docker-registry   app=docker-registry,release=kissable-clownfish   0s

==> v1/Pod(related)
NAME                                                READY  STATUS   RESTARTS  AGE
kissable-clownfish-docker-registry-5ccc49955-rh7vj  1/1    Running  0         2h

==> v1/Secret
NAME                                        TYPE    DATA  AGE
kissable-clownfish-docker-registry-secret   Opaque  1     2h

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services kissable-clownfish-docker-registry)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
Notice our new NetworkPolicy in the release message!
Ask Kubernetes what network policies are in place:
$ kubectl get networkpolicy
NAME                                 POD-SELECTOR                                      AGE
default-deny                         <none>                                            83m
kissable-clownfish-docker-registry   app=docker-registry,release=kissable-clownfish    16s
$
From an external client, query the docker-registry once more:
external-client:~$ curl -X GET http://192.168.225.251:31836/v2/_catalog
{"repositories":["extalpine","myalpine"]}
$
Great, we have access once more! Now, try the same query from a test pod running on the Kubernetes cluster:
$ kubectl run client --generator=run-pod/v1 --image busybox --command -- tail -f /dev/null
$ kubectl exec -it client -- wget -qO - http://192.168.225.251:31836/v2/_catalog
wget: can't connect to remote host (192.168.225.251): Connection timed out
command terminated with exit code 1
$
What happened?
Our client pod doesn’t belong to the approved cidr, so it is not allowed to reach the registry pod. External Docker daemons in the given cidr can push and pull images, but pods on the cluster cannot talk to the registry. Our policy and chart release are working as expected. There is a bit more work to do to make this example chart friendlier and more reusable for others (see the sketch below), but these steps should get you started using charts and augmenting them to fit your needs.
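One such improvement (a sketch, assuming we add a new networkPolicy.enabled value that defaults to false) is to guard the template so users who don't want a NetworkPolicy can skip it entirely, then enable it at install time with --set networkPolicy.enabled=true,networkPolicy.cidr=10.0.0.0/8 or a values file:

# templates/networkpolicy.yaml -- wrap the whole manifest in a conditional
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
# ... rest of the template shown above ...
{{- end }}

# values.yaml
networkPolicy:
  enabled: false
  cidr: 192.168.225.0/24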
Happy Helming!
Christopher Hanson is a Tigera guest blogger. As a Cloud Native Consultant for RX-M LLC, he has taught hundreds of DevOps engineers at Fortune 100 companies how to successfully use Kubernetes, Docker, OpenShift, Docker Compose and Swarm to build, package, deploy and orchestrate microservice applications at scale.