How to Enable Serverless Computing in Kubernetes

In the first two articles in this series about using serverless on an open source platform, I described how to get started with serverless platforms and how to write functions in popular languages and build components using containers on Apache OpenWhisk.

Here in the third article, I’ll walk you through enabling serverless in your Kubernetes environment. Kubernetes is the most popular platform for managing serverless workloads and microservice application containers, and it uses a fine-grained deployment model to process workloads more quickly and easily.

Keep in mind that serverless not only helps you reduce infrastructure management while adopting a consumption model in which you pay only for actual service use, but it also provides many of the capabilities a cloud platform offers. There are many serverless or FaaS (Function as a Service) platforms, but Kubernetes is the first-class citizen for building a serverless platform because there are more than 13 serverless or FaaS open source projects based on Kubernetes.

However, Kubernetes doesn’t let you build, serve, and manage app containers for your serverless workloads in a native way. For example, if you want to build a CI/CD pipeline on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to bring your own release management tool and integrate it with Kubernetes.

Likewise, it’s difficult to use Kubernetes in combination with serverless computing unless you use an independent serverless or FaaS platform built on Kubernetes, such as Apache OpenWhisk, Riff, or Kubeless. More importantly, it’s still hard for developers to learn how the Kubernetes environment handles serverless workloads from cloud-native apps.

Knative

Knative was created so developers can build serverless experiences natively, without depending on extra serverless or FaaS frameworks and many custom tools. Knative has three primary components (Build, Serving, and Eventing) that address common patterns and best practices for developing serverless applications on Kubernetes platforms.

To learn more, let’s walk through the typical development process with Knative and see how it increases productivity and solves Kubernetes’ difficulties from the developer’s point of view.

Step 1: Generate your cloud-native application from scratch using Spring Initializr or the Thorntail Project Generator. Begin implementing your business logic using the 12-factor app methodology, and you can also check whether the function works correctly using local testing tools.
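
For example, you can generate a Spring Boot project skeleton from the command line (a minimal sketch; the artifact name and dependencies here are illustrative):

# Generate a Spring Boot project skeleton from Spring Initializr
curl https://start.spring.io/starter.zip \
  -d artifactId=greeter \
  -d dependencies=web,actuator \
  -o greeter.zip
unzip greeter.zip -d greeter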


Step 2: Build container images from your source code repositories via the Knative Build component. You can define multiple steps, such as installing dependencies, running integration tests, and pushing container images to your secured image registry, using existing Kubernetes primitives. More importantly, Knative Build makes developers’ daily work easier and simpler by taking over the tasks that are boring but difficult. Here’s an example of the Build YAML:

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: docker-build
spec:
  serviceAccountName: build-bot
  source:
    git:
      revision: master
      url: http://github.com/redhat-developer-demos/knative-tutorial-event-greeter.git
  steps:
  - name: docker-push
    image: gcr.io/kaniko-project/executor
    args:
    - --context=/workspace/java/springboot
    - --dockerfile=/workspace/java/springboot/Dockerfile
    - --destination=docker.io/demo/event-greeter:0.0.1
    env:
    - name: DOCKER_CONFIG
      value: /builder/home/.docker
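
Assuming the Knative Build component is installed in your cluster and the manifest above is saved as build.yaml, you can kick off the build and check its status with kubectl:

# Create the Build resource defined above
kubectl apply -f build.yaml

# Inspect the build's status conditions as the steps run
kubectl get build docker-build -o yaml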

 

Step 3: Deploy and serve your container applications as serverless workloads via the Knative Serving component. This step shows the beauty of Knative: it automatically scales your serverless containers up on Kubernetes, then scales them down to zero if there are no requests to the containers for a specific period (e.g., two minutes). More importantly, Istio will automatically handle ingress and egress network traffic for serverless workloads in multiple, secure ways. Here’s an example of the Serving YAML:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: dev.local/rhdevelopers/greeter:0.0.1
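
Once the service is deployed, you can invoke it through the Istio ingress gateway and watch the autoscaling behavior (a sketch, assuming the default example.com domain and the default namespace; your gateway address will vary):

# Find the Istio ingress gateway address (cluster-specific)
IP_ADDRESS=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Knative routes requests based on the Host header
curl -H "Host: greeter.default.example.com" http://$IP_ADDRESS

# Watch the greeter pods scale down to zero after the idle period
kubectl get pods -w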

 

Step 4: Bind your running serverless containers to a variety of eventing platforms, such as SaaS, FaaS, and Kubernetes, via the Knative Eventing component. In this step, you can define event channels and subscriptions, which are delivered to your services via a messaging platform such as Apache Kafka or NATS Streaming. Here’s an example of the event sourcing YAML:

apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: "* * * * *"
  data: '{"message": "Event sourcing!!!!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
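
For the sink above to deliver events, you also define the channel itself and a subscription that routes events from the channel to your service. Here’s a minimal sketch using the in-memory channel provisioner (the subscription name is illustrative):

apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: ch-event-greeter
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: event-greeter-sub
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: event-greeter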

 

Conclusion

Developing with Knative will save you a lot of time when building serverless applications in a Kubernetes environment. It also lets developers focus on developing serverless applications, functions, or cloud-native containers, which makes their jobs easier.

This article was originally published by me at http://opensource.com/article/19/4/enabling-serverless-kubernetes

It also appeared on LinkedIn at http://www.linkedin.com/pulse/how-enable-serverless-computing-kubernetes-daniel-oh/

Daniel Oh works as a principal technical product marketing manager at Red Hat and as a CNCF ambassador, encouraging developers’ participation in cloud-native app development at scale and speed.
