
Standard Template for on-prem Environment

This document contains instructions on how to set up a new Compliant Kubernetes on-prem environment.


Decisions to be taken

Decisions regarding the following items should be made before deploying Compliant Kubernetes.

  • Overall architecture, i.e., VM sizes, load-balancer configuration, storage configuration, etc.
  • Identity Provider (IdP) choice and configuration
  • On-call Management Tool (OMT) choice and configuration
  1. Make sure you install all prerequisites on your laptop.

  2. Prepare Ubuntu-based VMs: if you are using a public cloud, you can create the VMs using the scripts included in Kubespray.

  3. Create a git working folder to store Compliant Kubernetes configurations in a version-controlled manner. Run the following commands from the root of the config repo.


    The following steps are done from the root of the git repository you created for the configurations.


    You can choose names for your Management Cluster and Workload Cluster by changing the values for SERVICE_CLUSTER and WORKLOAD_CLUSTERS respectively.

    export CK8S_CONFIG_PATH=./
    export CK8S_ENVIRONMENT_NAME=<my-ck8s-cluster>
    export CK8S_CLOUD_PROVIDER=[exoscale|safespring|citycloud|elastx|aws|baremetal]
    export CK8S_FLAVOR=[dev|prod] # defaults to dev
    export CK8S_PGP_FP=<PGP-fingerprint> # retrieve with gpg --list-secret-keys
  4. Add the Elastisys Compliant Kubernetes Kubespray repo as a git submodule to the configuration repo and install the prerequisites as follows:

    git submodule add https://github.com/elastisys/compliantkubernetes-kubespray.git
    git submodule update --init --recursive
    cd compliantkubernetes-kubespray
    pip3 install -r kubespray/requirements.txt  # this will install ansible
    ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-pass --connection local --inventory 127.0.0.1, get-requirements.yaml
  5. Add the Compliant Kubernetes Apps repo as a git submodule to the configuration repo and install the prerequisites as follows:

    git submodule add https://github.com/elastisys/compliantkubernetes-apps.git
    cd compliantkubernetes-apps
    ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-pass --connection local --inventory 127.0.0.1, get-requirements.yaml
  6. Create the domain name. You need a domain name to access the different services in your environment. Set up the following DNS entries (replace example.com with your domain name).

    • Point these domains to the Workload Cluster ingress controller (this step is done during Compliant Kubernetes apps installation):
      • *.example.com
    • Point these domains to the Management Cluster ingress controller (this step is done during Compliant Kubernetes apps installation):
      • *.ops.example.com
    If both Management and Workload Clusters are in the same subnet

    If both the Management and Workload Clusters are in the same subnet, you should also configure the following domain names to resolve to the private IP addresses of the Management Cluster's worker nodes. (Replace example.com with your domain name.)

    • *.ops.example.com
    • dex.example.com
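The DNS entries above can be generated as zone-file-style lines with a short shell sketch. The domain and load balancer IPs below are placeholders, and the `*.ops` naming for the Management Cluster is an assumption; substitute your own values:

```shell
# Placeholders: substitute your real domain and load balancer IPs.
DOMAIN="example.com"
WC_LB_IP="203.0.113.10"   # fronts the Workload Cluster ingress
SC_LB_IP="203.0.113.20"   # fronts the Management Cluster ingress

# Emit one wildcard A record per cluster, zone-file style.
records="$(printf '%s IN A %s\n' "*.${DOMAIN}" "${WC_LB_IP}" "*.ops.${DOMAIN}" "${SC_LB_IP}")"
echo "$records"
```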
  7. Create S3 credentials and add them to .state/s3cfg.ini.
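As an illustration, an s3cmd-style credentials file might look as follows. All values are placeholders; the exact keys needed depend on your S3 provider:

```ini
; Hypothetical example; adjust to your S3 provider's endpoints.
[default]
access_key = <your-access-key>
secret_key = <your-secret-key>
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
use_https = True
```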

  8. Set up load balancer

    You need to set up two load balancers, one for the Workload Cluster and one for the Management Cluster.
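As an illustration only, since the exact setup depends on your provider, a minimal HAProxy configuration for one of the load balancers could forward the Kubernetes API and ingress traffic like this (all names and IPs are placeholders):

```
global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend kube_api
    bind *:6443
    default_backend kube_api_servers

frontend https_ingress
    bind *:443
    default_backend ingress_nodes

backend kube_api_servers
    server cp0 10.0.10.11:6443 check

backend ingress_nodes
    server worker0 10.0.10.21:443 check
```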

  9. Make sure you have all necessary tools.

Deploying Compliant Kubernetes using Kubespray

How to change Default Kubernetes Subnet Address

If the default IP block ranges used by Docker and Kubernetes are the same as the internal IP ranges used in your company, you can change the values to resolve the conflict as follows. Note that you can use any valid private IP address range; the values are only examples.

* For Management Cluster: Add `kube_service_addresses:` and `kube_pods_subnet:` in the `${CK8S_CONFIG_PATH}/sc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` file.
* For Workload Cluster:  Add `kube_service_addresses:` and `kube_pods_subnet:` in the `${CK8S_CONFIG_PATH}/wc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` file.
* For Management Cluster: Add `docker_options: "--default-address-pool base=,size=24"` in the `${CK8S_CONFIG_PATH}/sc-config/group_vars/all/docker.yml` file.
* For Workload Cluster:  Add `docker_options: "--default-address-pool base=,size=24"` in the `${CK8S_CONFIG_PATH}/wc-config/group_vars/all/docker.yml` file.
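For example, the sc-config override could contain the following. These ranges are placeholders; pick any private ranges that do not collide with your network:

```yaml
# Example override in sc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml
# (placeholder ranges; choose private ranges that do not collide with your network)
kube_service_addresses: 10.178.0.0/18
kube_pods_subnet: 10.178.64.0/18
```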

Init Kubespray config in your config path.

  compliantkubernetes-kubespray/bin/ck8s-kubespray init $CLUSTER $CK8S_CLOUD_PROVIDER $CK8S_PGP_FP

Configure OIDC

To configure OpenID access for Kubernetes API and other services, Dex should be configured with your identity provider. Check what Dex needs from your identity provider.

Configure OIDC endpoint

Set kube_oidc_url in ${CK8S_CONFIG_PATH}/sc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml and ${CK8S_CONFIG_PATH}/wc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml based on your cluster. For example, if your domain is example.com, then set kube_oidc_url: https://dex.example.com in both files.
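Assuming the domain example.com and that Dex is served at dex.example.com (placeholders for your own setup), both files would contain:

```yaml
kube_oidc_url: https://dex.example.com
```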

Copy the VMs information to the inventory files

Add the host name, user and IP address of each VM that you prepared above in ${CK8S_CONFIG_PATH}/sc-config/inventory.ini for the Management Cluster and ${CK8S_CONFIG_PATH}/wc-config/inventory.ini for the Workload Cluster. Moreover, you also need to add the host names of the control plane nodes under [kube_control_plane], etcd nodes under [etcd] and worker nodes under [kube_node].


Make sure that the user has SSH access to the VMs.
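A minimal inventory.ini for a small cluster might look like this; all host names, user names, and IPs below are placeholders:

```ini
[all]
cp0 ansible_host=10.0.10.11 ip=10.0.10.11 ansible_user=ubuntu
worker0 ansible_host=10.0.10.21 ip=10.0.10.21 ansible_user=ubuntu
worker1 ansible_host=10.0.10.22 ip=10.0.10.22 ansible_user=ubuntu

[kube_control_plane]
cp0

[etcd]
cp0

[kube_node]
worker0
worker1
```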

Run Kubespray to deploy the Kubernetes clusters

  compliantkubernetes-kubespray/bin/ck8s-kubespray apply $CLUSTER --flush-cache


The kubeconfig for wc, .state/kube_config_wc.yaml, will not be usable until you have installed Dex in the Management Cluster (by deploying apps).

Set up Rook

Only needed for infrastructure providers that do not natively support Kubernetes storage.

Run the following command to set up Rook.

for CLUSTER in sc wc; do
  export KUBECONFIG=${CK8S_CONFIG_PATH}/.state/kube_config_${CLUSTER}.yaml
  ./compliantkubernetes-kubespray/rook/deploy-rook.sh
done


If the kubeconfig files for the clusters are encrypted with SOPS, you need to decrypt them before using them:

sops --decrypt ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml > $CLUSTER.yaml

Please restart the operator pod, rook-ceph-operator*, if some pods stall in the initialization state as shown below:

rook-ceph     rook-ceph-crashcollector-minion-0-b75b9fc64-tv2vg    0/1     Init:0/2   0          24m
rook-ceph     rook-ceph-crashcollector-minion-1-5cfb88b66f-mggrh   0/1     Init:0/2   0          36m
rook-ceph     rook-ceph-crashcollector-minion-2-5c74ffffb6-jwk55   0/1     Init:0/2   0          14m


Pods in pending state usually indicate resource shortage. In such cases you need to use bigger instances.

Test Rook

To test Rook, apply a manifest that creates a PersistentVolumeClaim, then check that it binds (replace the placeholder with your test manifest):

  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml apply -f <test-pvc-manifest>.yaml

  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml get pvc

You should see the PVC in Bound state. If you want to clean up the previously created PVC:

  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml delete pvc rbd-pvc
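The test manifest could look like the following sketch. The storage class name rook-ceph-block is an assumption; check `kubectl get storageclass` for the actual name in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block  # assumed name; verify with kubectl get storageclass
  resources:
    requests:
      storage: 1Gi
```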

Deploying Compliant Kubernetes Apps

How to change local DNS IP if you change the default Kubernetes subnet address

You need to change the default CoreDNS IP address in the common-config.yaml file if you changed the default IP block used for Kubernetes services above. To get the CoreDNS IP address, run the following command.

${CK8S_CONFIG_PATH}/compliantkubernetes-apps/bin/ck8s ops kubectl sc get svc -n kube-system coredns
Once you have the IP address, edit the ${CK8S_CONFIG_PATH}/common-config.yaml file and set the global.clusterDns field to that value.
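For example, if the command reports a CoreDNS service IP of 10.178.0.10 (a placeholder), common-config.yaml would contain:

```yaml
global:
  clusterDns: 10.178.0.10  # use the CoreDNS service IP reported by the command above
```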

Configure the load balancer IP on the loopback interface for each worker node

The Kubernetes data plane nodes (i.e., worker nodes) cannot connect to themselves with the IP address of the load balancer that fronts them. The easiest solution is to configure the load balancer's IP address on the loopback interface of each node. Create the file /etc/netplan/20-eip-fix.yaml and add the following to it. ${loadbalancer_ip_address} should be replaced with the IP address of the load balancer for each cluster.

  network:
    version: 2
    ethernets:
      lo0:
        match:
          name: lo
        dhcp4: false
        addresses:
        - ${loadbalancer_ip_address}/32
After adding the above content, run the following command in each worker node:

sudo netplan apply

Initialize the apps configuration

compliantkubernetes-apps/bin/ck8s init

This will initialize the configuration in the ${CK8S_CONFIG_PATH} directory, generating the configuration files common-config.yaml, sc-config.yaml and wc-config.yaml, as well as secrets.yaml with randomly generated passwords. It will also generate a read-only default configuration under the defaults/ directory, which can be used as a guide for available and suggested options.

Configure the apps and secrets

The configuration files contain some predefined values that you may want to check and edit based on your environment requirements. The files that require editing are ${CK8S_CONFIG_PATH}/common-config.yaml, ${CK8S_CONFIG_PATH}/sc-config.yaml, ${CK8S_CONFIG_PATH}/wc-config.yaml and ${CK8S_CONFIG_PATH}/secrets.yaml; set appropriate values for the relevant configuration fields. Note that the latter file is encrypted.

vim ${CK8S_CONFIG_PATH}/sc-config.yaml

vim ${CK8S_CONFIG_PATH}/wc-config.yaml

vim ${CK8S_CONFIG_PATH}/common-config.yaml

Edit the secrets.yaml file and add the required credentials:

sops ${CK8S_CONFIG_PATH}/secrets.yaml


The default configuration for the Management Cluster and Workload Cluster are available in the directory ${CK8S_CONFIG_PATH}/defaults/ and can be used as a reference for available options.


Do not modify the read-only default configurations files found in the directory ${CK8S_CONFIG_PATH}/defaults/. Instead configure the cluster by modifying the regular files ${CK8S_CONFIG_PATH}/sc-config.yaml and ${CK8S_CONFIG_PATH}/wc-config.yaml as they will override the default options.

Create S3 buckets

You can use the following command to create the required S3 buckets. The command uses s3cmd in the background and reads the configuration and credentials for your S3 provider from the ${CK8S_CONFIG_PATH}/.state/s3cfg.ini file.

compliantkubernetes-apps/bin/ck8s s3cmd create

Install Compliant Kubernetes apps

This will set up apps, first in the Management Cluster and then in the Workload Cluster:

compliantkubernetes-apps/bin/ck8s apply sc
compliantkubernetes-apps/bin/ck8s apply wc



Leave sufficient time for the system to settle, e.g., request TLS certificates from LetsEncrypt, perhaps as much as 20 minutes.

Check if all helm charts succeeded.

compliantkubernetes-apps/bin/ck8s ops helm wc list -A --all

You can check if the system settled as follows.

  compliantkubernetes-apps/bin/ck8s ops kubectl $CLUSTER get --all-namespaces pods

Check the output of the command above. All Pods need to be in Running or Completed status.
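To spot the Pods that still need attention, the STATUS column can be filtered. A small sketch on sample output (the sample rows below are made up for illustration):

```shell
# Sample `kubectl get pods -A`-style output (made-up rows for illustration).
sample='NAMESPACE   NAME                       READY   STATUS     RESTARTS   AGE
kube-system coredns-5d78c9869d-abcde   1/1     Running    0          5m
rook-ceph   rook-ceph-crashcollector   0/1     Init:0/2   0          24m'

# Print the NAME of every Pod whose STATUS is neither Running nor Completed.
not_ready="$(echo "$sample" | awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $2 }')"
echo "$not_ready"
```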

  compliantkubernetes-apps/bin/ck8s ops kubectl $CLUSTER get --all-namespaces issuers,clusterissuers,certificates

Check the output of the command above. All resources need to have the Ready column True.


After completing the installation step, you can test whether the apps are properly installed and ready using the commands below.

Start with the Management Cluster:

compliantkubernetes-apps/bin/ck8s test sc

Then the Workload Clusters:

compliantkubernetes-apps/bin/ck8s test wc