Build Kubernetes controllers with Fabric8 Kubernetes Client, Quarkus, and JKube
Introduction
Josh Long, one of my favorite Java champions and advocates, has recently published an article showing how to create a simple Kubernetes controller using Spring Boot Native and the Official Kubernetes Client.
Since this is one of my favorite topics, and I'm currently working on the Fabric8 Kubernetes Client, I thought it would be nice to create a port of his example using the Fabric8 Kubernetes Client, Quarkus, and Eclipse JKube instead. The structure of the original post has also been replicated so that the differences for each part can be easily spotted.
Please, don't take this post as an "xxx is better than zzz" article.
The intention of the article is to showcase the available alternatives so that developers can mix and match whatever they like.
I think we are living in a great moment for Java: having so many choices for anything cloud-related is making Java shine again.
Fabric8 Kubernetes Client
The Fabric8 Kubernetes Client is an automatically code-generated Java client for the Kubernetes API. Quarkus has a built-in extension that supports the Fabric8 client in both JVM and native mode. Your only concern should be adding the extension's dependency (and taking a little bit of care with generics).
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-kubernetes-client</artifactId>
</dependency>
Example KubernetesControllerApplication
package com.marcnuri.demo.booternetes.port;

import io.fabric8.kubernetes.api.model.ListOptionsBuilder;
import io.fabric8.kubernetes.api.model.Node;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;
import io.fabric8.kubernetes.client.informers.SharedInformerFactory;
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.runtime.annotations.QuarkusMain;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.inject.Singleton;
import java.util.Objects;

@QuarkusMain
public class KubernetesControllerApplication implements QuarkusApplication {

  @Inject
  KubernetesClient client;
  @Inject
  SharedInformerFactory sharedInformerFactory;
  @Inject
  ResourceEventHandler<Node> nodeEventHandler;

  @Override
  public int run(String... args) throws Exception {
    try {
      client.nodes().list(new ListOptionsBuilder().withLimit(1L).build());
    } catch (KubernetesClientException ex) {
      System.out.println(ex.getMessage());
      return 1;
    }
    sharedInformerFactory.startAllRegisteredInformers().get();
    final var nodeHandler = sharedInformerFactory.getExistingSharedIndexInformer(Node.class);
    nodeHandler.addEventHandler(nodeEventHandler);
    Quarkus.waitForExit();
    return 0;
  }

  void onShutDown(@Observes ShutdownEvent event) {
    sharedInformerFactory.stopAllRegisteredInformers(true);
  }

  public static void main(String... args) {
    Quarkus.run(KubernetesControllerApplication.class, args);
  }

  @ApplicationScoped
  static final class KubernetesControllerApplicationConfig {

    @Inject
    KubernetesClient client;

    @Singleton
    SharedInformerFactory sharedInformerFactory() {
      return client.informers();
    }

    @Singleton
    SharedIndexInformer<Node> nodeInformer(SharedInformerFactory factory) {
      return factory.sharedIndexInformerFor(Node.class, 0);
    }

    @Singleton
    SharedIndexInformer<Pod> podInformer(SharedInformerFactory factory) {
      return factory.sharedIndexInformerFor(Pod.class, 0);
    }

    @Singleton
    ResourceEventHandler<Node> nodeReconciler(SharedIndexInformer<Node> nodeInformer, SharedIndexInformer<Pod> podInformer) {
      return new ResourceEventHandler<>() {
        @Override
        public void onAdd(Node node) {
          // n.b. This is executed in the Watcher's WebSocket Thread
          // Ideally this should be executed by a Processor running in a dedicated thread
          // This method should only add an item to the Processor's queue
          System.out.printf("node: %s%n", Objects.requireNonNull(node.getMetadata()).getName());
          podInformer.getIndexer().list().stream()
            .map(pod -> Objects.requireNonNull(pod.getMetadata()).getName())
            .forEach(podName -> System.out.printf("pod name: %s%n", podName));
        }

        @Override
        public void onUpdate(Node oldObj, Node newObj) {}

        @Override
        public void onDelete(Node node, boolean deletedFinalStateUnknown) {}
      };
    }
  }
}
This is a very simple example application that iterates through the Pod instances and prints their names whenever a Node is added (or for the existing Nodes when the application starts).
It's a port of the original post's application; I tried to structure it in a very similar way to achieve the same result and keep the code visually similar.
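The comments in the reconciler warn that the real work shouldn't happen on the informer's WebSocket thread; the handler should only enqueue an item for a processor running on its own thread. Here's a minimal, self-contained sketch of that pattern (the `NodeEventProcessor` class and all of its names are hypothetical, not part of the Fabric8 API):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class NodeEventProcessor {

  // Handlers enqueue here; the worker thread drains it
  private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
  final List<String> processed = new CopyOnWriteArrayList<>();

  private final Thread worker = new Thread(() -> {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        final String nodeName = queue.take(); // blocks until an event arrives
        processed.add(nodeName);              // reconciliation logic would go here
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  });

  void start() {
    worker.setDaemon(true);
    worker.start();
  }

  // Called from the informer's event thread; returns immediately
  void onAdd(String nodeName) {
    queue.offer(nodeName);
  }

  public static void main(String[] args) throws Exception {
    final NodeEventProcessor processor = new NodeEventProcessor();
    processor.start();
    processor.onAdd("node-a");
    processor.onAdd("node-b");
    TimeUnit.MILLISECONDS.sleep(200); // give the worker time to drain the queue
    System.out.println("processed: " + processor.processed);
  }
}
```

In the example application, `onAdd(Node)` would call something like `processor.onAdd(node.getMetadata().getName())` and return, keeping the watch connection responsive while the reconciliation happens elsewhere.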
You can compile the application to a native executable by running:
mvn -Pnative clean package
If you run the application you will get an output similar to:
$ ./target/kubernetes-controller-0.0.1-SNAPSHOT-runner
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-12-21 10:59:46,834 INFO [io.quarkus] (main) kubernetes-controller 0.0.1-SNAPSHOT native (powered by Quarkus 2.5.2.Final) started in 0.020s. Listening on: http://0.0.0.0:8080
2021-12-21 10:59:46,834 INFO [io.quarkus] (main) Profile prod activated.
2021-12-21 10:59:46,834 INFO [io.quarkus] (main) Installed features: [cdi, kubernetes-client, resteasy-jackson, smallrye-context-propagation, vertx]
node: fv-az210-846
pod name: kube-apiserver-fv-az210-846
pod name: kubernetes-controller-c44477d9b-zd4pt
pod name: kube-proxy-9sz5l
pod name: storage-provisioner
pod name: etcd-fv-az210-846
pod name: kube-scheduler-fv-az210-846
pod name: coredns-64897985d-49w7x
pod name: kube-controller-manager-fv-az210-846
That's 20 milliseconds, or 20 thousandths of a second. In addition, the resulting application has a trivially small memory footprint: 23.5 MiB of RAM.
Creating a container image (Docker Image)
Quarkus has several extensions to create Docker/Container images. However, since I'm part of the team maintaining JKube, I'll show you how to get the full Kubernetes experience with JKube's Kubernetes Maven Plugin.
In this case, the only necessary step is to add the plugin to the project's plugins section:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.17.0</version>
</plugin>
You can now create the image by running:
mvn -Pnative k8s:build
JKube automatically detects the native profile and creates a tiny distribution using Red Hat's UBI minimal image.
Kubernetes resource manifest generation (YAML)
For this part, JKube also infers the project configuration and generates the required Kubernetes resource manifests (YAML files). For a standard project you don't really need to add any extra configuration.
Notice that JKube is also compatible with Spring Boot, so it could be easily integrated with the original article.
Cluster Role Binding and Role
However, in this case we need to access the underlying cluster API, so we need to authorize the Pod's service account via Role-Based Access Control (RBAC).
The original article binds the service account to the cluster-admin role, which can be dangerous. To keep things simple, we're just going to create a new cluster role with read access to Pods and Nodes and bind it to the default Service Account. It would be better to create a specific Service Account too.
To achieve this, we'll add two fragments for the Cluster Role and the Cluster Role Binding that JKube will pick up and merge with the rest of the generated resources:
Cluster Role src/main/jkube/kubernetes-controller-java-crole.yaml
rules:
- apiGroups: [""]
resources:
- nodes
- pods
verbs:
- list
- get
- watch
Cluster Role Binding src/main/jkube/kubernetes-controller-java-crb.yaml
subjects:
- kind: ServiceAccount
name: default
namespace: ${jkube.namespace}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-controller-java
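As mentioned, a dedicated Service Account would be preferable to binding the default one. A minimal sketch of what the extra fragment could look like (the file name and account name are assumptions; the Cluster Role Binding subject and the Deployment's serviceAccountName would also need to be updated to match):

```yaml
# src/main/jkube/serviceaccount.yaml (hypothetical fragment)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-controller
```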
Namespace
To keep things clean, we're going to create a dedicated Namespace for our application.
To achieve this with JKube we only need to provide the following Maven properties:
<properties>
<jkube.namespace>kubernetes-controller-java</jkube.namespace>
<jkube.enricher.jkube-namespace.namespace>${jkube.namespace}</jkube.enricher.jkube-namespace.namespace>
<!-- ... -->
</properties>
Deploying to Minikube
The original article deploys the application to a Google Cloud Kubernetes environment (GKE); however, I don't have a GKE cluster available, so I'll show you how to deploy to Minikube instead.
To skip pushing the image to a container image registry, we can share Minikube's Docker daemon and build the image directly on Minikube by invoking the following command:
eval $(minikube docker-env)
Next we can build, generate application manifests, and deploy them to our cluster by running:
mvn -Pnative k8s:build k8s:resource k8s:apply
Here's a trivial GitHub Actions workflow that performs the steps above and deploys the application to a dedicated Minikube cluster:
name: Deploy Tests
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
deploy-and-test:
name: Deploy and Test
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Setup Minikube-Kubernetes
uses: manusa/actions-setup-minikube@v2.4.3
with:
minikube version: v1.24.0
kubernetes version: v1.23.0
github token: ${{ secrets.GITHUB_TOKEN }}
- name: Setup Java 17
uses: actions/setup-java@v2
with:
java-version: '17'
distribution: 'temurin'
- name: Build
run: mvn -Pnative package
- name: Run and test from host
run: timeout 5s ./target/kubernetes-controller-0.0.1-SNAPSHOT-runner > out.txt || grep "node:" out.txt
- name: Deploy
run: mvn -Pnative k8s:build k8s:resource k8s:apply
- name: Test Deployment
run: |
kubectl wait --for=condition=available --timeout=60s --namespace kubernetes-controller-java deployments.apps/kubernetes-controller
kubectl logs --namespace kubernetes-controller-java --tail=-1 --selector app=kubernetes-controller | grep "node:"
- name: Print Application Logs
run: |
kubectl logs --namespace kubernetes-controller-java --tail=-1 --selector app=kubernetes-controller
Conclusion
As you can see, generating and deploying a Java Kubernetes controller with Quarkus and JKube can be even easier than with Go and just as performant thanks to GraalVM.
As Josh said in his original blog post, it's a great time to be alive. Again, I want to remark that this is not a Spring vs. Quarkus post. The intention is to show the amount of choices Java developers have these days and that Java is still alive and kicking.
You can find the full source code for this post at GitHub.