Getting Started with Compliant Kubernetes

This documentation includes a user demo application that lets you quickly explore the benefits of Compliant Kubernetes. The provided artifacts, including a Dockerfile and a Helm Chart, help you get started on your journey to becoming an agile organization with zero compromise on compliance with data protection regulations.

Install Prerequisites

As a user, you will need docker, kubectl and helm installed before you get started with Compliant Kubernetes.

The easiest way to get access is to request a dev environment from a managed Compliant Kubernetes provider. You should receive:

  • URLs for Compliant Kubernetes UI components, such as the dashboard, container registry, logs, etc.
  • A kubeconfig file for configuring kubectl access to the cluster.
  • (Optionally) Static username and password. Normally, you should log in via a username and a password of your organization's identity provider.

Make sure you configure your tools properly:

export KUBECONFIG=path/of/kubeconfig.yaml  # leave empty if you use the default of ~/.kube/config
export DOMAIN=  # the domain you received from the administrator

To verify if the required tools are installed and work as expected, type:

docker version
kubectl version --client
helm version
# You should see the version number of installed tools and no errors.

To verify the received KUBECONFIG, type:

# Notice that you will be asked to complete browser-based single sign-on
kubectl get nodes
# You should see the Nodes of your Kubernetes cluster

To verify the received URLs, type:

curl --head https://dex.$DOMAIN/healthz
curl --include https://harbor.$DOMAIN/api/v2.0/health
curl --head https://grafana.$DOMAIN/healthz
curl --head https://kibana.$DOMAIN/api/status
# All commands above should return 'HTTP/2 200'

Prepare Your Application

To make the most out of Compliant Kubernetes, prepare your application so it features:

  • Container images that run as non-root.
  • Structured logging to stdout.
  • A /metrics endpoint exposing Prometheus-style metrics.

Bonus:

  • A Helm Chart, including a ServiceMonitor and a PrometheusRule for alerting.

Feel free to clone our user demo for inspiration:

git clone https://github.com/elastisys/compliantkubernetes/
cd compliantkubernetes/user-demo

Push Your Application Container Images

Configure container registry credentials

First, retrieve your Harbor CLI secret and configure your local Docker client.

  1. In your browser, type harbor.$DOMAIN where $DOMAIN is the information you retrieved from your administrator.
  2. Log into Harbor using Single Sign-On (SSO) via OpenID.
  3. In the top-right corner, click on your username, then "User Profile".
  4. Copy your CLI secret.
  5. Now log into the container registry: docker login harbor.$DOMAIN. (A non-interactive variant is sketched below this list.)
  6. You should see Login Succeeded.
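
If you prefer not to paste the secret interactively, Docker can read it from stdin. The variable names below are placeholders for your Harbor (SSO) username and the CLI secret you just copied:

HARBOR_USERNAME=     # your Harbor username, as shown in the Harbor UI
HARBOR_CLI_SECRET=   # the CLI secret copied in the previous step

echo "$HARBOR_CLI_SECRET" | docker login harbor.$DOMAIN \
    --username "$HARBOR_USERNAME" --password-stdin
# You should see 'Login Succeeded'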

Create a registry project

Example

Here is an example Dockerfile and .dockerignore to get you started. Don't forget to run as non-root.

If you haven't already done so, create a project called demo via the Harbor UI, which you have accessed in the previous step.

Clone the user demo

If you haven't done so already, clone the user demo:

git clone https://github.com/elastisys/compliantkubernetes/
cd compliantkubernetes/user-demo

Build and push the image

REGISTRY_PROJECT=demo  # Name of the project, created above
TAG=v1                 # Container image tag

docker build -t harbor.$DOMAIN/$REGISTRY_PROJECT/ck8s-user-demo:$TAG .
docker push harbor.$DOMAIN/$REGISTRY_PROJECT/ck8s-user-demo:$TAG

You should see no error message. Note down the sha256 of the image.
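
The digest is printed by docker push; if you missed it, you can also read it from your local Docker client. This assumes the image was built and pushed as above:

docker inspect --format '{{index .RepoDigests 0}}' \
    harbor.$DOMAIN/$REGISTRY_PROJECT/ck8s-user-demo:$TAG
# The output ends with the sha256 digest to compare in the next step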

Verification

  1. Go to harbor.$DOMAIN.
  2. Choose the demo project.
  3. Check if the image was uploaded successfully, by comparing the tag's sha256 with the one returned by the docker push command above.
  4. (Optional) While you're at it, why not run the vulnerability scanner on the image you just pushed?

Deploy your Application

Pre-verification

Make sure you are in the right namespace on the right cluster:

kubectl get nodes
kubectl config view --minify --output 'jsonpath={..namespace}'; echo
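
If the second command prints the wrong namespace, you can switch the current context to the right one; demo below is a placeholder namespace name:

kubectl config set-context --current --namespace=demo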

Configure an Image Pull Secret

To start, make sure you configure the Kubernetes cluster with an image pull secret. Ideally, you should create a container registry Robot Account that has only pull permissions, and use its token.

Important

Using your own registry credentials as an image pull secret, instead of creating a robot account, is against best practices and may violate data privacy regulations.

Your registry credentials identify you and allow you to both push and pull images. A robot account should identify the Kubernetes cluster and be only allowed to pull images.

DOCKER_USER="robot\$name"      # enter robot account name
DOCKER_PASSWORD=               # enter robot secret

Now create a pull secret and (optionally) use it by default in the current namespace.

# Create a pull secret
kubectl create secret docker-registry pull-secret \
    --docker-server=harbor.$DOMAIN \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD

# Set default pull secret in current namespace
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "pull-secret"}]}'

Note

For each Kubernetes namespace, you will have to create an image pull secret and configure it to be default. Aim to have a one-to-one-to-one mapping between Kubernetes namespaces, container registry projects and robot accounts.
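
As a sketch, a hypothetical second namespace staging with its own robot account and secret would be configured like this (the robot account name and secret variable are placeholders):

kubectl create secret docker-registry pull-secret \
    --namespace staging \
    --docker-server=harbor.$DOMAIN \
    --docker-username="robot\$staging-puller" \
    --docker-password="$STAGING_ROBOT_SECRET"
kubectl patch serviceaccount default --namespace staging \
    -p '{"imagePullSecrets": [{"name": "pull-secret"}]}'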

Deploy user demo

Example

Here is an example Helm Chart to get you started.

If you haven't done so already, clone the user demo and ensure you are in the right folder:

git clone https://github.com/elastisys/compliantkubernetes/
cd compliantkubernetes/user-demo

Ensure you use the right registry project and image tag, i.e., those that you pushed in the previous example:

REGISTRY_PROJECT=demo
TAG=v1

You are ready to deploy the application.

helm upgrade \
    --install \
    myapp \
    deploy/ck8s-user-demo/ \
    --set image.repository=harbor.$DOMAIN/$REGISTRY_PROJECT/ck8s-user-demo \
    --set image.tag=$TAG \
    --set ingress.hostname=demo.$DOMAIN
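
Before running the checks below, you can confirm that the Helm release was created:

helm list
# The release myapp should show STATUS deployed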

Verification

Verify that the application was deployed successfully:

kubectl get pods
# Wait until the status of your Pod is Running.

Verify that the certificate was issued successfully:

kubectl get certificate
# Wait until your certificate shows READY True.
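
If the certificate does not become ready within a few minutes, describing it usually reveals why:

kubectl describe certificate
# Check the Status and Events sections for the reason issuance is pending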

Verify that your application is online. You may use your browser or curl:

curl --include https://demo.$DOMAIN
# First line should be HTTP/2 200

Do not expose $DOMAIN to your users.

Although your administrator will set *.$DOMAIN to point to your applications, prefer to buy a branded domain. For example, register the domain myapp.com and point it via a CNAME or ALIAS record to myapp.$DOMAIN.
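
Once the record is in place, you can check the DNS setup with dig; myapp.com and myapp.$DOMAIN are the illustrative names from above. For an ALIAS record at the zone apex, query the A record instead of the CNAME:

dig +short CNAME myapp.com
# Should print something like myapp.$DOMAIN.
dig +short myapp.$DOMAIN
# Should print the address(es) your administrator configured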

Use topologySpreadConstraints if you want cross-data-center resilience

If you want your application to tolerate the outage of a whole zone (data center), you need to add topologySpreadConstraints by uncommenting the relevant section in values.yaml. A minimal sketch is shown below.

In order for this to work, your administrator must configure the Nodes with zone labels. You can verify whether this was done correctly by typing kubectl get nodes --show-labels and checking that the Nodes feature the topology.kubernetes.io/zone label.
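
For reference, a minimal topologySpreadConstraints block might look as follows. This is a sketch, assuming the Chart labels Pods with the standard app.kubernetes.io/instance label, as the ServiceMonitor below does:

topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/instance: myapp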

Search on Application Logs

The user demo application already includes structured logging: for each HTTP request, it logs the URL, the user agent, etc. Compliant Kubernetes further adds the Pod name, Helm Chart name, Helm Release name, etc. to each log entry.

The screenshot below gives an example of log entries produced by the user demo application. It was obtained by using the index pattern kubernetes* and the filter kubernetes.labels.app_kubernetes_io/instance:myapp.

Example of User Demo Logs
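
To narrow the search further, the filter can be combined with a free-text query in the search bar. The req.url field name is an assumption about the demo's structured request logs; adjust it to the field names you see in your own log entries:

kubernetes.labels.app_kubernetes_io/instance:myapp AND req.url:"/users"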

Note

You may want to save frequently used searches as dashboards. Compliant Kubernetes saves and backs these up for you.

Monitor your Application

The user demo already includes a ServiceMonitor, as required for Compliant Kubernetes to collect metrics from its /metrics endpoint:

{{- if .Values.serviceMonitor.enabled -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "ck8s-user-demo.fullname" . }}
  labels:
    {{- include "ck8s-user-demo.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
    {{- include "ck8s-user-demo.selectorLabels" . | nindent 6 }}
  endpoints:
  - port: http
{{- end }}

The screenshot below shows a Grafana dashboard featuring the query rate(http_request_duration_seconds_count[1m]). It shows the request rate for the user demo application for each path and status code. As can be seen, the /users endpoint is getting popular.

Example of User Demo Metrics
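
To break the rate down per path and status code explicitly, a PromQL query along the following lines can be used. The path and status_code label names are assumptions about how the demo labels its metrics; check the actual labels in Grafana's query editor:

sum by (path, status_code) (rate(http_request_duration_seconds_count[1m]))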

Note

You may want to save frequently used dashboards. Compliant Kubernetes saves and backs these up for you.

Alert on Application Metrics

The user demo already includes a PrometheusRule, to configure an alert:

{{- if .Values.prometheusRule.enabled -}}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: {{ include "ck8s-user-demo.fullname" . }}
  labels:
    {{- include "ck8s-user-demo.labels" . | nindent 4 }}
spec:
  groups:
  - name: ./example.rules
    rules:
    - alert: ApplicationIsActuallyUsed
      expr: rate(http_request_duration_seconds_count[1m])>1
{{- end }}
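
If you want the alert to fire only after sustained traffic, and to carry a human-readable message, the rule can be extended with for and annotations fields. This is a sketch, not part of the shipped Chart:

    - alert: ApplicationIsActuallyUsed
      expr: rate(http_request_duration_seconds_count[1m]) > 1
      for: 5m
      annotations:
        summary: The user demo receives more than one request per second.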

The screenshot below gives an example of the application alert, as seen in AlertManager.

Example of User Demo Alerts

Back up Application Data

Compliant Kubernetes takes a daily backup of all Kubernetes Resources in all user namespaces. Persistent Volumes are backed up if they are tied to a Pod. If backups are not wanted, add the compliantkubernetes.io/nobackup label to opt out of the daily backups.
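
For example, to opt a PersistentVolumeClaim out of backups (my-data is a placeholder name, and the label value is assumed not to matter as long as the label is present; check with your administrator):

kubectl label persistentvolumeclaim my-data compliantkubernetes.io/nobackup=true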

Application metrics (Grafana) and application log (Kibana) dashboards are also backed up by default.

By default, backups are stored for 720 hours (30 days).