Kubernetes Operator vs. Controller
During the past few weeks, I’ve been trying to understand what a Kubernetes Operator is and what makes it different from a Kubernetes Controller. There is a lot of loose convention and unclear documentation around both terms, and I suspect many others are confused too. In this post, I’ll try to summarize what each of these patterns involves and list some examples.
The Kubernetes controller documentation starts with a short definition of a control loop; this is what the Wikipedia entry says:
A control loop is the fundamental building block of industrial control systems. It consists of all the physical components and control functions necessary to automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP).
So in the Kubernetes world, a controller monitors and measures the state of cluster resources and adjusts any resources that diverge from the desired state.
According to this definition, anything that automates a task that brings the cluster’s overall status to a defined desired state falls into this category.
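The control loop can be sketched in a few lines of Go. This is a minimal, hypothetical skeleton, not real controller code: observe() and act() stand in for Kubernetes API calls, and real controllers react to watch events rather than polling in a tight loop.

```go
package main

import "fmt"

// controlLoop repeatedly measures the process variable via observe() and,
// whenever it diverges from the desired set-point, calls act() to adjust it.
// observe and act are stubs standing in for real API reads and writes.
func controlLoop(desired int, observe func() int, act func(diff int), iterations int) {
	for i := 0; i < iterations; i++ {
		current := observe() // measure the process variable (PV)
		if diff := desired - current; diff != 0 {
			act(diff) // adjust toward the set-point (SP)
		}
	}
}

func main() {
	current := 1
	observe := func() int { return current }
	act := func(diff int) { current += diff } // pretend we created/deleted resources
	controlLoop(3, observe, act, 2)
	fmt.Println(current) // → 3: the observed state converges to the desired state
}
```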
The usual example is the Kubernetes ReplicaSet controller. The resource definition has a number of replicas defined in the resource’s spec, and the controller is in charge of providing as many Pods as stated in the spec, creating or deleting them as it monitors the cluster’s Pod resources.
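The core of that reconciliation can be reduced to a small decision function. This is an illustrative sketch, not the real controller code: given the replica count in the spec and the Pods currently observed, it decides how many Pods to create or delete.

```go
package main

import "fmt"

// podsToCreateAndDelete compares the spec's replica count with the number of
// Pods observed in the cluster and returns how many Pods the controller
// should create or delete to converge on the desired state.
func podsToCreateAndDelete(specReplicas, observedPods int) (create, del int) {
	if observedPods < specReplicas {
		return specReplicas - observedPods, 0
	}
	return 0, observedPods - specReplicas
}

func main() {
	create, del := podsToCreateAndDelete(5, 3)
	fmt.Println(create, del) // → 2 0: two Pods must be created, none deleted
}
```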
However, I can think of a few examples that may not seem that clear at first. A dumb controller that makes sure that all Pods are annotated with a controlled-by: dumb annotation would also fall into this category. The controller monitors the cluster’s Pod resources and automatically adds the annotation upon creation or modification (in case some other process removes the annotation). In this case, the desired state is that all Pods contain this dumb annotation.
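The reconcile step of that dumb controller can be sketched as a pure function over a Pod’s annotations. The wiring to real watch events is omitted; this only shows the “restore the annotation if missing” logic.

```go
package main

import "fmt"

const dumbKey, dumbValue = "controlled-by", "dumb"

// ensureAnnotation returns the reconciled annotation map and reports whether
// a change was needed (i.e. the annotation was missing or had been removed).
func ensureAnnotation(annotations map[string]string) (map[string]string, bool) {
	if annotations[dumbKey] == dumbValue {
		return annotations, false // already in the desired state
	}
	out := make(map[string]string, len(annotations)+1)
	for k, v := range annotations {
		out[k] = v
	}
	out[dumbKey] = dumbValue
	return out, true
}

func main() {
	got, changed := ensureAnnotation(map[string]string{"app": "web"})
	fmt.Println(got[dumbKey], changed) // → dumb true
}
```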
Another example can be a controller that enforces that no Service is of NodePort type. The controller monitors the Service resources and deletes any Service created or modified to be of NodePort type. In this case, the desired state is that no Service is exposed as NodePort (maybe for security reasons).
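The enforcement decision for that controller is a simple predicate over each watched Service. The types below are simplified stand-ins for the real Service API objects, just to make the sketch self-contained.

```go
package main

import "fmt"

// service is a simplified stand-in for the Kubernetes Service object.
type service struct {
	Name string
	Type string // "ClusterIP", "NodePort", "LoadBalancer", ...
}

// shouldDelete reports whether a Service violates the desired state
// (no Service exposed as NodePort) and must therefore be deleted.
func shouldDelete(svc service) bool {
	return svc.Type == "NodePort"
}

func main() {
	svcs := []service{{"web", "ClusterIP"}, {"debug", "NodePort"}}
	for _, s := range svcs {
		if shouldDelete(s) {
			fmt.Println("deleting", s.Name) // → deleting debug
		}
	}
}
```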
The operator term is the most confusing concept for me since it refers to a specific human role in IT. The Kubernetes documentation itself states the following:
The Operator pattern aims to capture the key aim of a human operator who is managing a service or set of services.
However, the term was originally coined by CoreOS with a very specific meaning:
An Operator is a method of packaging, deploying and managing a Kubernetes application.
Furthermore, the Kubernetes documentation also states a few requirements for a controller to fall into the Operator pattern category:
- Operators make use of the controller pattern.
- Operators make use of Custom Resources to extend the Kubernetes API.
- Operators are focused on a single application and its components.
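Putting those three requirements together, an operator’s core ingredients can be sketched as a custom resource type plus a reconcile function focused on one application. Everything below is illustrative: KafkaCluster and its fields are made up for this sketch and are not the Strimzi API.

```go
package main

import "fmt"

// KafkaClusterSpec and KafkaClusterStatus sketch a hypothetical Custom
// Resource for a single application (a Kafka cluster). Field names are
// illustrative, not the Strimzi API.
type KafkaClusterSpec struct {
	Brokers int
	Version string
}

type KafkaClusterStatus struct {
	ReadyBrokers int
}

type KafkaCluster struct {
	Name   string
	Spec   KafkaClusterSpec
	Status KafkaClusterStatus
}

// reconcileKafka applies the controller pattern to the custom resource:
// it compares the observed status against the spec and reports the next
// action needed to converge on the desired state.
func reconcileKafka(kc KafkaCluster) string {
	switch {
	case kc.Status.ReadyBrokers < kc.Spec.Brokers:
		return "scale up brokers"
	case kc.Status.ReadyBrokers > kc.Spec.Brokers:
		return "scale down brokers"
	default:
		return "in sync"
	}
}

func main() {
	kc := KafkaCluster{Name: "demo", Spec: KafkaClusterSpec{Brokers: 3, Version: "3.7.0"}}
	fmt.Println(reconcileKafka(kc)) // → scale up brokers (no brokers ready yet)
}
```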
We can now find many publicly available operators that help you provision applications in your cluster. For example, the Strimzi Operator provides a way to run an Apache Kafka cluster on Kubernetes or OpenShift. This operator automates the Kafka cluster installation process, but it also manages and monitors the deployed cluster.
My personal take is that a controller is any process that brings the cluster resources closer to a desired state. Thus, all Operators are controllers that use custom resources to manage the state of a single application and its components.
Another key takeaway is that both concepts are patterns and don’t involve language-specific implementations or frameworks. To write a Controller or an Operator, you need to follow the convention, but you’re free to use any language of your choice. A framework or SDK can help and will certainly save you from writing boilerplate code, but again, nothing stops you from implementing everything from scratch.