Kafka® (self-managed)¶
This page describes how to install a Customer Application.
You are solely responsible for Customer Applications.
If you are an Elastisys Managed Services customer, please review your responsibilities in ToS 5.2.
Specifically, you are responsible for performing due diligence for the project discussed in this page. At the very least, you must:
- assess project ownership, governance and licensing;
- assess project roadmap and future suitability;
- assess project compatibility with your use-case;
- assess business continuity, i.e., what will you do if the project is abandoned;
- subscribe to security advisories related to the project;
- apply security patches and updates, as needed;
- regularly test disaster recovery.
This page describes a preview feature: self-managed cluster-wide resources
For security purposes, Compliant Kubernetes restricts application developers from managing CustomResourceDefinitions (CRDs) and other cluster-wide resources. This means that, as an application developer, you cannot install applications that require such cluster-wide resources. As a trade-off, we are launching this preview feature, which allows self-management of specific cluster-wide resources required by certain popular applications. It is disabled by default, so please ask your platform administrator to enable it.
For Elastisys Managed Services Customers
You can ask for this feature to be enabled by filing a service ticket. This is a preview feature. For more information, please read ToS 9.1 Preview Features.
Apache Kafka® is an open-source distributed event streaming platform. To run an Apache Kafka® cluster on Kubernetes you can use an operator. This guide uses the Strimzi Kafka Operator.
Strimzi is a CNCF Sandbox project
This page will show you how to install Strimzi Kafka Operator on Elastisys Compliant Kubernetes. You can configure the operator to watch a single Namespace or multiple Namespaces.
Supported versions
This installation guide has been tested with Strimzi Kafka Operator version 0.38.0.
Enable Self-Managed Kafka¶
This guide requires the self-managed cluster resources feature to be enabled, so that the necessary CRDs and ClusterRoles for Strimzi Kafka Operator can be installed.
Strimzi Kafka Operator also requires the image repository quay.io/strimzi to be allowlisted. Ask your platform administrator to do this while enabling the self-managed cluster resources feature.
Set up CRDs and RBAC¶
In Kubernetes you will need to:
- Install the required CRDs.
- Create a Namespace for Strimzi Kafka Operator.
- Create Roles/RoleBindings for Strimzi Kafka Operator.
- Create a ServiceAccount and ConfigMap for Strimzi Kafka Operator.
CRDs¶
You need to apply the Custom Resource Definitions (CRDs) required by Strimzi Kafka Operator. This is typically not allowed in a Compliant Kubernetes environment, but with the self-managed cluster resources feature enabled, you can apply these yourself.
mkdir crds
# Fetches the Strimzi Kafka Operator CRDs for v0.38.0 and saves them in the crds directory
curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.38.0/strimzi-crds-0.38.0.yaml > crds/kafka-crds.yaml
kubectl apply -f crds/kafka-crds.yaml
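If you want a quick sanity check, you can list the CRDs that were just installed (optional):
# Lists the applied Strimzi CRDs, such as kafkas.kafka.strimzi.io
kubectl get crds | grep strimzi.io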
Namespace¶
You need to create a Namespace where Strimzi Kafka Operator will work. This Namespace should be called kafka. Create it as a sub-namespace under e.g. production.
kubectl hns create -n production kafka
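If you want to confirm that the sub-namespace was created, you can display the Namespace hierarchy (assuming production is the parent, as in the command above):
# Shows the Namespace tree under production; kafka should appear as a child
kubectl hns tree production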
Roles and RoleBindings¶
You need to create the necessary Roles for Strimzi Kafka Operator to function. This needs to be done in every Namespace that you want Strimzi Kafka Operator to work in.
Since Compliant Kubernetes uses the Hierarchical Namespace Controller, the easiest way to achieve this is to place the Roles and RoleBindings in the parent Namespace under which kafka was created. By doing so, all Namespaces created under the same parent Namespace will inherit the Roles and RoleBindings.
If you have multiple Namespaces that ought to be targets for Strimzi Kafka Operator, you can add the Roles and RoleBindings to more than one "parent" Namespace: for instance to staging, to get Strimzi Kafka Operator to work with the staging Namespace and any Namespace anchored to it.
mkdir roles
# Fetches the necessary Roles and RoleBindings and saves them in the roles directory
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/roles/kafka-role.yaml > roles/kafka-role.yaml
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/roles/kafka-rolebinding.yaml > roles/kafka-rolebinding.yaml
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/roles/kustomization.yaml > roles/kustomization.yaml
# If you created the Namespace kafka under a parent Namespace other than production, edit the Namespace in roles/kustomization.yaml
kubectl apply -k roles
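To verify the RBAC setup, you can check that the Roles and RoleBindings propagated into the kafka Namespace (optional):
# Lists the Roles and RoleBindings inherited from the parent Namespace
kubectl get roles,rolebindings -n kafka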
ServiceAccount and ConfigMap¶
You need to create the ServiceAccount and ConfigMap that Strimzi Kafka Operator will use.
mkdir sa-cm
# Fetches the ServiceAccount and ConfigMap and saves them in the sa-cm directory
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/sa-cm/kafka-sa-cm.yaml > sa-cm/kafka-sa-cm.yaml
kubectl apply -f sa-cm/kafka-sa-cm.yaml
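As before, you can optionally verify that the resources were created:
# Lists the ServiceAccount and ConfigMap in the kafka Namespace
kubectl get serviceaccounts,configmaps -n kafka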
Install Strimzi Kafka Operator¶
With the initial prep done, you are now ready to deploy the operator.
You can find the deployment manifest here. Deploying it on Compliant Kubernetes requires adding a securityContext. Edit the manifest and add the following under spec.template.spec.containers[0]:
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
After that, you can apply the manifest with kubectl apply -f 060-Deployment-strimzi-cluster-operator.yaml -n kafka.
Alternatively, you can fetch an already edited file:
mkdir deployment
# Fetches the edited operator deployment and saves it in the deployment directory
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/deployment/kafka-operator-deployment.yaml > deployment/kafka-operator-deployment.yaml
kubectl apply -f deployment/kafka-operator-deployment.yaml
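You can then watch the operator come up. Assuming the Deployment keeps its upstream name strimzi-cluster-operator:
# Waits until the operator Deployment is fully rolled out
kubectl rollout status deployment/strimzi-cluster-operator -n kafka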
To configure Strimzi Kafka Operator to watch multiple Namespaces (e.g. to run Kafka clusters in Namespaces other than the kafka Namespace), refer to Further reading.
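As a sketch of what that configuration involves: in the upstream deployment manifest, the set of watched Namespaces is controlled by the STRIMZI_NAMESPACE environment variable. Assuming you also want the operator to watch staging, the change would look like this (and, as described above, the Roles and RoleBindings must exist in every watched Namespace):
# In the operator Deployment, under spec.template.spec.containers[0].env,
# replace the valueFrom reference with an explicit comma-separated list:
- name: STRIMZI_NAMESPACE
  value: kafka,staging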
Deploy your Kafka cluster¶
You are now ready to deploy your Kafka cluster!
The example files provided by Strimzi here serve as a good starting point.
Compliant Kubernetes requires that resource requests are specified for all containers. By default, the Strimzi Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands.
Refer to Further reading for more information about resources.
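For illustration, resource requests are set per component in the Kafka custom resource. A minimal sketch, with placeholder values rather than recommendations:
# Excerpt of a Kafka resource; adjust the values to your workload
spec:
  kafka:
    resources:
      requests:
        cpu: 100m
        memory: 1Gi
  zookeeper:
    resources:
      requests:
        cpu: 100m
        memory: 512Mi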
You can fetch a modified persistent-single example that includes resource requests:
mkdir kafka-cluster
# Fetches the edited Kafka cluster example and saves it in the kafka-cluster directory
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/kafka-cluster/persistent-single.yaml > kafka-cluster/persistent-single.yaml
kubectl apply -f kafka-cluster/persistent-single.yaml
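You can then wait for the cluster to become ready. Assuming the example's default cluster name my-cluster:
# Blocks until the Kafka custom resource reports Ready (or times out)
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka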
Note
The example above has very low resource requests. It is recommended that you adjust these to match your cluster and workload.
Refer to Further reading to learn more about how you can configure your Kafka cluster.
Testing¶
After you have deployed your Kafka cluster, you can test sending and receiving messages to see if it works!
To do this, you can use a producer and a consumer as seen here, under the section "Send and receive messages". But since Compliant Kubernetes requires resource requests to be specified, simply copy-pasting those commands will not work.
You need to create a Pod manifest using the image quay.io/strimzi/kafka:0.38.0-kafka-3.6.0, and add your resource requests to it. You also need an initial sleep command in the Pod manifest to keep the container running for a while; otherwise the Pod instantly goes into the "Completed" state.
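A minimal sketch of what such a producer Pod manifest could look like; the resource values are illustrative:
# Producer Pod; the consumer Pod looks the same apart from its name
apiVersion: v1
kind: Pod
metadata:
  name: kafka-producer
spec:
  containers:
    - name: kafka-producer
      image: quay.io/strimzi/kafka:0.38.0-kafka-3.6.0
      # Sleep keeps the container alive so you can kubectl exec into it
      command: ["sh", "-c", "sleep 3600"]
      resources:
        requests:
          cpu: 100m
          memory: 128Mi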
Alternatively, you can download ready-to-use producer and consumer Pod manifests:
mkdir kafka-testing
# Fetches Pod manifests for a producer and a consumer and saves them in the kafka-testing directory
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/kafka-testing/kafka-producer.yaml > kafka-testing/kafka-producer.yaml
curl https://raw.githubusercontent.com/elastisys/compliantkubernetes/main/docs/user-guide/self-managed-services/kafka-files/kafka-testing/kafka-consumer.yaml > kafka-testing/kafka-consumer.yaml
kubectl apply -f kafka-testing/kafka-producer.yaml
kubectl apply -f kafka-testing/kafka-consumer.yaml
After the Pods have started, you can send and receive messages with kubectl exec.
kubectl exec -it kafka-producer -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
kubectl exec -it kafka-consumer -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
Note
If you are running the producer and/or consumer in a different Namespace than your Kafka cluster, make sure you specify the fully qualified name of the bootstrap service, e.g. "my-cluster-kafka-bootstrap.kafka.svc:9092" if the Kafka cluster is in the "kafka" Namespace.