This page is out of date

We are currently working on internal documentation to streamline Compliant Kubernetes onboarding for selected cloud providers. Until those documents are ready, and until we have capacity to make parts of that documentation public, this page is out-of-date.

Nevertheless, parts of it are useful. Use at your own risk and don't expect things to work smoothly.

Compliant Kubernetes on OpenStack

This document contains instructions on how to set up a Compliant Kubernetes environment (consisting of a service cluster and one or more workload clusters) on OpenStack.

  1. Infrastructure setup for two clusters: one service and one workload cluster
  2. Deploying Compliant Kubernetes on top of the two clusters.
  3. Creating DNS Records
  4. Deploying Compliant Kubernetes apps

Before starting, make sure you have all the necessary tools. In addition to these general tools, you will also need:

  - OpenStack credentials (either an openrc file or the clouds.yaml configuration file) for setting up the infrastructure.
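
To sanity-check that the core tools are available on your PATH, you can run the following (a minimal sketch; the exact required versions are documented with the general tools):

terraform version
ansible --version
openstack --version
sops --version
kubectl version --client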

Note

Although the recommended OpenStack authentication method is clouds.yaml, it is more convenient to use the openrc method with Compliant Kubernetes, as it works with both Kubespray and Terraform. If you are using the clouds.yaml method, Kubespray will, at the moment, still expect you to set a few environment variables.
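
If you do go with clouds.yaml, a minimal sketch of selecting a named cloud entry for the OpenStack CLI and Terraform looks like this (the cloud name is an assumption, and Kubespray may still need the OS_* variables listed further down):

# select the cloud entry named "mycloud" from clouds.yaml
export OS_CLOUD=mycloud
openstack token issue   # verify that authentication works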

Note

This guide is written for compliantkubernetes-apps v0.17.0

Setup

Choose names for your service cluster and workload cluster(s):

SERVICE_CLUSTER="sc"
WORKLOAD_CLUSTERS=( "wc0" "wc1" )

export CK8S_CONFIG_PATH=~/.ck8s/<environment-name>
export SOPS_FP=<PGP-fingerprint> # retrieve with gpg --list-secret-keys
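
If you are unsure of your PGP fingerprint, one way to print the fingerprint of your first secret key is the following sketch (assuming GnuPG is installed):

gpg --list-secret-keys --with-colons | awk -F: '/^fpr:/ {print $10; exit}'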

Infrastructure setup using Terraform

Before trying any of the steps, clone the Elastisys Compliant Kubernetes Kubespray repo as follows:

git clone --recursive https://github.com/elastisys/compliantkubernetes-kubespray

Expose Openstack credentials to Terraform

Terraform needs access to your OpenStack credentials in order to create the infrastructure. More details can be found here. We will be using the declarative option with the openrc file.

For authentication, create or download the openstack-rc file from your provider and source it with source path/to/your/openstack-rc. The file should contain the following variables:

export OS_USERNAME=
export OS_PASSWORD=
export OS_AUTH_URL=
export OS_USER_DOMAIN_NAME=
export OS_PROJECT_DOMAIN_NAME=
export OS_REGION_NAME=
export OS_PROJECT_NAME=
export OS_TENANT_NAME=
export OS_AUTH_VERSION=
export OS_IDENTITY_API_VERSION=
export OS_PROJECT_ID=
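
Once the file is filled in, a quick way to verify that the credentials work before running Terraform (a minimal sketch):

source path/to/your/openstack-rc
openstack token issue   # should succeed and print a token if the credentials are valid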

Customize your infrastructure

Start by initializing a Compliant Kubernetes environment using Compliant Kubernetes Kubespray. All of this is done from the root of the compliantkubernetes-kubespray repository.

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  ./bin/ck8s-kubespray init "${CLUSTER}" openstack "${SOPS_FP}"
done

Configure Terraform by creating a cluster.tfvars file for each cluster. The available options can be seen in kubespray/contrib/terraform/openstack/variables.tf. There is a sample file that can be copied to get something to start from.

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  cp kubespray/contrib/terraform/openstack/sample-inventory/cluster.tfvars "${CK8S_CONFIG_PATH}/${CLUSTER}-config/cluster.tfvars"
done

Note

You really must edit the values in these files. There is no way to set sane defaults for which flavor to use, or which availability zones and networks are available, as these differ between providers. The section below provides some guidance and samples, but keep in mind that they may not fit your needs and setup.
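
To discover values that are valid for your provider, you can query OpenStack directly, for example:

openstack flavor list             # flavor names/IDs for flavor_k8s_master and flavor_k8s_node
openstack image list              # image names for the image variable
openstack network list            # external networks for network_name / external_net
openstack availability zone list  # zones for az_list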

Infrastructure guidance

The minimum infrastructure sizing requirements are at least three worker nodes with 4 cores and 8 GB of memory each, and we recommend at least 2 cores and 4 GB of memory for your control plane nodes.

Note

A recommended production infrastructure sizing is available in the architecture diagram.

Below are example cluster.tfvars files for a few selected OpenStack providers. The examples are copy-pastable, but you might want to change cluster_name and network_name (if Neutron is used!).

# your Kubernetes cluster name here
cluster_name = "your-cluster-name"

# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "Ubuntu 20.04 Focal Fossa 20200423"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "ubuntu"

# 0|1 bastion nodes
number_of_bastions = 0

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 0

number_of_k8s_masters_no_floating_ip_no_etcd = 0

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_master = "89afeed0-9e41-4091-af73-727298a5d959""

# nodes
number_of_k8s_nodes = 3

number_of_k8s_nodes_no_floating_ip = 0

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_node = "ecd976c3-c71c-4096-b138-e4d964c0b27f"

# networking
# ssh access to nodes
k8s_allowed_remote_ips = ["0.0.0.0/0"]

# List of CIDR blocks allowed to initiate an API connection
master_allowed_remote_ips = ["0.0.0.0/0"]

worker_allowed_ports = [
  { # Node ports
    "protocol"         = "tcp"
    "port_range_min"   = 30000
    "port_range_max"   = 32767
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTP
    "protocol"         = "tcp"
    "port_range_min"   = 80
    "port_range_max"   = 80
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTPS
    "protocol"         = "tcp"
    "port_range_min"   = 443
    "port_range_max"   = 443
    "remote_ip_prefix" = "0.0.0.0/0"
  }
]

# use `openstack network list` to list the available external networks
network_name = "name-of-your-network"

# UUID of the external network that will be routed to
external_net = "your-external-network-uuid"
floatingip_pool = "ext-net"

# If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1.
use_access_ip = 0

# Create and use openstack nova servergroups, default: false
use_server_groups = true

subnet_cidr = "172.16.0.0/24"

# --- Another example cluster.tfvars (different provider) ---

# your Kubernetes cluster name here
cluster_name = "your-cluster-name"

# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "Ubuntu 20.04 Focal Fossa 20200423"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "ubuntu"

# 0|1 bastion nodes
number_of_bastions = 0

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 0

number_of_k8s_masters_no_floating_ip_no_etcd = 0

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_master = "96c7903e-32f0-421d-b6a2-a45c97b15665"

# nodes
number_of_k8s_nodes = 3

number_of_k8s_nodes_no_floating_ip = 0

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_node = "572a3b2e-6329-4053-b872-aecb1e70d8a6"

# networking
# ssh access to nodes
k8s_allowed_remote_ips = ["0.0.0.0/0"]

# List of CIDR blocks allowed to initiate an API connection
master_allowed_remote_ips = ["0.0.0.0/0"]

worker_allowed_ports = [
  { # Node ports
    "protocol"         = "tcp"
    "port_range_min"   = 30000
    "port_range_max"   = 32767
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTP
    "protocol"         = "tcp"
    "port_range_min"   = 80
    "port_range_max"   = 80
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTPS
    "protocol"         = "tcp"
    "port_range_min"   = 443
    "port_range_max"   = 443
    "remote_ip_prefix" = "0.0.0.0/0"
  }
]
# use `openstack network list` to list the available external networks
network_name = "name-of-your-network"

# UUID of the external network that will be routed to
external_net = "your-external-network-uuid"
floatingip_pool = "ext-net"

# If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1.
use_access_ip = 0

# Create and use openstack nova servergroups, default: false
use_server_groups = true

subnet_cidr = "172.16.0.0/24"

# --- Another example cluster.tfvars (different provider) ---

# your Kubernetes cluster name here
cluster_name = "your-cluster-name"

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "ubuntu-20.04"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "ubuntu"

# 0|1 bastion nodes
number_of_bastions = 0

use_neutron = 0

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 0

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 1

number_of_k8s_masters_no_floating_ip_no_etcd = 0

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_master = "8a707999-0bce-4f2f-8243-b4253ba7c473"

# nodes
number_of_k8s_nodes = 0

number_of_k8s_nodes_no_floating_ip = 3

# Flavor depends on your openstack installation
# you can get available flavor IDs through `openstack flavor list`
flavor_k8s_node = "5b40af67-9d11-45ed-a44f-e876766160a5"

# networking
# ssh access to nodes
k8s_allowed_remote_ips = ["0.0.0.0/0"]

# List of CIDR blocks allowed to initiate an API connection
master_allowed_remote_ips = ["0.0.0.0/0"]

worker_allowed_ports = [
  { # Node ports
    "protocol"         = "tcp"
    "port_range_min"   = 30000
    "port_range_max"   = 32767
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTP
    "protocol"         = "tcp"
    "port_range_min"   = 80
    "port_range_max"   = 80
    "remote_ip_prefix" = "0.0.0.0/0"
  },
  { # HTTPS
    "protocol"         = "tcp"
    "port_range_min"   = 443
    "port_range_max"   = 443
    "remote_ip_prefix" = "0.0.0.0/0"
  }
]

# use `openstack network list` to list the available external networks
network_name = "public"

# UUID of the external network that will be routed to
external_net = "your-external-network-uuid"

# If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0 private IPs will be used instead. Default value is 1.
use_access_ip = 1

# Create and use openstack nova servergroups, default: false
use_server_groups = true

subnet_cidr = "172.16.0.0/24"

Initialize and apply Terraform

MODULE_PATH="$(pwd)/kubespray/contrib/terraform/openstack"

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  pushd "${MODULE_PATH}"
  terraform init
  terraform apply -var-file="${CK8S_CONFIG_PATH}/${CLUSTER}-config/cluster.tfvars" -state="${CK8S_CONFIG_PATH}/${CLUSTER}-config/terraform.tfstate"
  popd
done

Warning

The above will not work well if you are using a bastion host, due to some hard-coded paths. This is fixed in Kubespray release-2.17. If you are using an older version of Kubespray, you may link the kubespray/contrib folder to the correct relative path, or make sure your CK8S_CONFIG_PATH is already at a suitable place relative to it.

Deploying Compliant Kubernetes using Kubespray

Before we can run Kubespray, we will need to go through the relevant variables. Additionally we will need to expose some credentials so that Kubespray can set up cloud provider integration.

You will need to change at least one value: kube_oidc_url in ${CK8S_CONFIG_PATH}/${CLUSTER}-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml. Normally this should be set to https://dex.BASE_DOMAIN.
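
One way to set this for all clusters at once is sketched below; it assumes yq v4 is installed and that BASE_DOMAIN holds your base domain:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  # write kube_oidc_url into each cluster's group_vars file
  yq eval --inplace ".kube_oidc_url = \"https://dex.${BASE_DOMAIN}\"" \
    "${CK8S_CONFIG_PATH}/${CLUSTER}-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml"
done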

For cloud provider integration, you have a few options, as described here. We will be going with the external cloud provider and simply source the OpenStack credentials.

Setting up Kubespray variables

Below are some examples of ${CK8S_CONFIG_PATH}/${CLUSTER}-config/group_vars/k8s_cluster/ck8s-k8s-cluster-openstack.yaml for a few selected OpenStack providers. The examples are copy-pastable, but you will have to change some of the values.

etcd_kubeadm_enabled: true

cloud_provider: external
external_cloud_provider: openstack
calico_mtu: 1480

cinder_csi_enabled: true
persistent_volumes_enabled: true
expand_persistent_volumes: true
openstack_blockstorage_ignore_volume_az: true

## Cinder CSI is enabled by default along with the configuration options to enable persistent volumes and the expansion of these volumes.
## It is also set to ignore the volume availability zone to allow volumes to attach to nodes in different or mismatching zones. The default works well with both CityCloud and SafeSpring.
storage_classes:
  - name: cinder-csi
    is_default: true
    parameters:
      availability: nova
      allowVolumeExpansion: true
      ## openstack volume type list
      type: default_encrypted


## If you want to set up LBaaS in your cluster, you can add the following config:
external_openstack_cloud_controller_extra_args:
  ## Must be different for every cluster in the same openstack project
  cluster-name: "<your-cluster-name>.cluster.local"

## use `openstack subnet list` to list the available subnets
external_openstack_lbaas_subnet_id: "your-cluster-subnet-uuid"

## use `openstack network list` to list the available external networks
external_openstack_lbaas_floating_network_id: "your-external-network-uuid"

external_openstack_lbaas_method: "ROUND_ROBIN"
external_openstack_lbaas_provider: "octavia"
external_openstack_lbaas_use_octavia: true
external_openstack_lbaas_create_monitor: true
external_openstack_lbaas_monitor_delay: "1m"
external_openstack_lbaas_monitor_timeout: "30s"
external_openstack_lbaas_monitor_max_retries: "3"
external_openstack_network_public_networks:
  - "ext-net"

## if you have use_access_ip = 0 in cluster.tfvars, you should add the public ip address of the master nodes to this variable
supplementary_addresses_in_ssl_keys: ["master-ip-address1", "master-ip-address2", ...]

## --- Another example ck8s-k8s-cluster-openstack.yaml (different provider) ---

etcd_kubeadm_enabled: true

cloud_provider: external
external_cloud_provider: openstack
calico_mtu: 1480

cinder_csi_enabled: true
persistent_volumes_enabled: true
expand_persistent_volumes: true
openstack_blockstorage_ignore_volume_az: true

storage_classes:
  - name: cinder-csi
    is_default: true
    parameters:
      availability: nova
      allowVolumeExpansion: true
      ## openstack volume type list
      type: ceph_hdd_encrypted

external_openstack_cloud_controller_extra_args:
  ## Must be different for every cluster in the same openstack project
  cluster-name: "<your-cluster-name>.cluster.local"

## use `openstack subnet list` to list the available subnets
external_openstack_lbaas_subnet_id: "your-cluster-subnet-uuid"

## use `openstack network list` to list the available external networks
external_openstack_lbaas_floating_network_id: "your-external-network-uuid"

external_openstack_lbaas_method: "ROUND_ROBIN"
external_openstack_lbaas_provider: "octavia"
external_openstack_lbaas_use_octavia: true
external_openstack_lbaas_create_monitor: true
external_openstack_lbaas_monitor_delay: "1m"
external_openstack_lbaas_monitor_timeout: "30s"
external_openstack_lbaas_monitor_max_retries: "3"
external_openstack_network_public_networks:
  - "ext-net"

## if you have use_access_ip = 0 in cluster.tfvars, you should add the public ip address of the master nodes to this variable
supplementary_addresses_in_ssl_keys: ["master-ip-address1", "master-ip-address2", ...]

Note

At this point, if the cluster is running on Safespring and you are using Kubespray v2.17.0+, it is possible to create an application credential, which gives the cluster its own set of credentials instead of using your own.

To create a set of credentials, use the following command: openstack application credential create <name>

And set the following environment variables:

export OS_APPLICATION_CREDENTIAL_NAME=<name>
export OS_APPLICATION_CREDENTIAL_ID=<id>         # the ID printed by the create command
export OS_APPLICATION_CREDENTIAL_SECRET=<secret>

Run Kubespray

Copy the script for generating dynamic ansible inventories:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  cp kubespray/contrib/terraform/terraform.py "${CK8S_CONFIG_PATH}/${CLUSTER}-config/inventory.ini"
  chmod +x "${CK8S_CONFIG_PATH}/${CLUSTER}-config/inventory.ini"
done

Now it is time to run the Kubespray playbook!

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  ./bin/ck8s-kubespray apply "${CLUSTER}" --flush-cache
done

Correct the Kubernetes API IP addresses

Locate the encrypted kubeconfigs in ${CK8S_CONFIG_PATH}/.state/kube_config_*.yaml and edit them using sops. Copy the public IP address of the load balancer (usually one of the masters' public IP addresses) and replace the private IP address in the server field of each kubeconfig.

for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
    sops ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml
done
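
To double-check which API server address each kubeconfig now points at, you can for example run:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  sops exec-file "${CK8S_CONFIG_PATH}/.state/kube_config_${CLUSTER}.yaml" \
    'kubectl --kubeconfig {} config view --minify -o jsonpath="{.clusters[0].cluster.server}"'
  echo
done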

Test access to the clusters as follows

You should now have an encrypted kubeconfig file for each cluster under $CK8S_CONFIG_PATH/.state. Check that they work like this:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  sops exec-file "${CK8S_CONFIG_PATH}/.state/kube_config_${CLUSTER}.yaml" \
    'kubectl --kubeconfig {} get nodes'
done

Deploying Compliant Kubernetes Apps

Now that the Kubernetes clusters are up and running, we are ready to install the Compliant Kubernetes apps.

Clone compliantkubernetes-apps and Install Pre-requisites

If you haven't done so already, clone the compliantkubernetes-apps repo and install pre-requisites.

git clone https://github.com/elastisys/compliantkubernetes-apps.git
cd compliantkubernetes-apps
ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-pass --connection local --inventory 127.0.0.1, get-requirements.yaml

Initialize the apps configuration

export CK8S_ENVIRONMENT_NAME=my-environment-name
#export CK8S_FLAVOR=[dev|prod] # defaults to dev
export CK8S_CONFIG_PATH=~/.ck8s/my-cluster-path
export CK8S_CLOUD_PROVIDER=# [exoscale|safespring|citycloud|aws|baremetal]
export CK8S_PGP_FP=<your GPG key fingerprint>  # retrieve with gpg --list-secret-keys

./bin/ck8s init

This will initialise the configuration in the ${CK8S_CONFIG_PATH} directory, generating the configuration files sc-config.yaml and wc-config.yaml, as well as secrets.yaml containing randomly generated passwords. It will also generate read-only default configuration under the defaults/ directory, which can be used as a guide for available and suggested options.

ls -l $CK8S_CONFIG_PATH

Configure the apps

Edit the configuration files ${CK8S_CONFIG_PATH}/sc-config.yaml, ${CK8S_CONFIG_PATH}/wc-config.yaml and ${CK8S_CONFIG_PATH}/secrets.yaml and set the appropriate values for some of the configuration fields. Note that the latter is encrypted.

vim ${CK8S_CONFIG_PATH}/sc-config.yaml
vim ${CK8S_CONFIG_PATH}/wc-config.yaml
sops ${CK8S_CONFIG_PATH}/secrets.yaml

Tip

The default configuration for the service cluster and workload cluster are available in the directory ${CK8S_CONFIG_PATH}/defaults/ and can be used as a reference for available options.

Warning

Do not modify the read-only default configuration files found in the directory ${CK8S_CONFIG_PATH}/defaults/. Instead, configure the cluster by modifying the regular files ${CK8S_CONFIG_PATH}/sc-config.yaml and ${CK8S_CONFIG_PATH}/wc-config.yaml, as they will override the default options.

The following are the minimum changes you should perform:

# ${CK8S_CONFIG_PATH}/sc-config.yaml and ${CK8S_CONFIG_PATH}/wc-config.yaml
global:
  baseDomain: set-me         # set to $CK8S_ENVIRONMENT_NAME.$DOMAIN
  opsDomain: set-me          # set to ops.$CK8S_ENVIRONMENT_NAME.$DOMAIN
  # issuer: letsencrypt-prod # set as default for prod flavor, defaults to "letsencrypt-staging" for dev

objectStorage:
  # type: s3 # set as default for prod flavor, defaults to "none" for dev
  s3:
    region: set-me         # Kna1 for Karlskrona/Sweden, Fra1 for Frankfurt/Germany
    regionEndpoint: set-me # https://s3-<region>.citycloud.com:8080 # kna1 or fra1
    # forcePathStyle: true # set as default

## This block is set as default for using service load balancers
# ingressNginx:
#     controller:
#       useHostPort: false
#       service:
#         enabled: true
#         type: LoadBalancer
#         annotations: ""

clusterAdmin:
  users: # set to the cluster admin users
    - set-me
    - admin@example.com
# ${CK8S_CONFIG_PATH}/sc-config.yaml (in addition to the changes above)
user:
  grafana:
    oidc:
      allowedDomains: # set to your domain(s), or unset using [] to deny all
        - set-me
        - example.com

harbor:
  persistence:
    # type: swift           # set as default for prod flavor, defaults to "filesystem" for dev
    # disableRedirect: true # set as default
    swift:
      authURL: set-me           # https://<region>.citycloud.com:5000 # kna1 or fra1
      regionName: set-me        # Kna1 for Karlskrona/Sweden, Fra1 for Frankfurt/Germany
      projectDomainName: set-me
      userDomainName: set-me
      projectName: set-me
      projectID: set-me
      tenantName: set-me
  oidc:
    groupClaimName: set-me # set to group claim name used by OIDC provider
    adminGroupName: set-me # name of the group that automatically will get admin

elasticsearch:
  extraRoleMappings: # set to configure elasticsearch access, or unset using []
    - mapping_name: kibana_user
      definition:
        users:
          - set-me
    - mapping_name: kubernetes_log_reader
      definition:
        users:
          - set-me
    - mapping_name: all_access
      definition:
        users:
          - set-me

alerts:
  opsGenieHeartbeat:
    # enabled: true # set as default for prod flavour, defaults to "false" for dev
    name: set-me    # set to name the heartbeat if enabled

issuers:
  letsencrypt:
    prod:
      email: set-me # set this to an email to receive LetsEncrypt notifications
    staging:
      email: set-me # set this to an email to receive LetsEncrypt notifications
# ${CK8S_CONFIG_PATH}/wc-config.yaml (in addition to the changes above)
user:
  namespaces: # set this to create user namespaces, or unset using []
    - set-me
    - production
    - staging
  adminUsers: # set this to create admins in the user namespaces, or unset using []
    - set-me
    - admin@example.com
  adminGroups: # set this to create admin groups in the user namespaces, or unset using []
    - set-me
  # alertmanager: # add this block to enable user accessible alertmanager
  #   enabled: true
  #   namespace: alertmanager # note that this namespace must be listed above under "user.namespaces"

opa:
  imageRegistry:
    URL: # set this to the allowed image registry, or unset using [] to deny all
      - set-me
      - harbor.example.com
# ${CK8S_CONFIG_PATH}/secrets.yaml
objectStorage:
  s3:
    accessKey: set-me # set to your s3 accesskey
    secretKey: set-me # set to your s3 secretKey

# --- Another example configuration (different provider) ---

# ${CK8S_CONFIG_PATH}/sc-config.yaml and ${CK8S_CONFIG_PATH}/wc-config.yaml
global:
  baseDomain: set-me         # set to $CK8S_ENVIRONMENT_NAME.$DOMAIN
  opsDomain: set-me          # set to ops.$CK8S_ENVIRONMENT_NAME.$DOMAIN
  # issuer: letsencrypt-prod # set as default for prod flavor, defaults to "letsencrypt-staging" for dev

objectStorage:
  # type: s3 # set as default for prod flavor, defaults to "none" for dev
  s3:
    region: sto2
    regionEndpoint: https://s3.sto2.safedc.net
    # forcePathStyle: true # set as default

clusterAdmin:
  users: # set to the cluster admin users
    - set-me
    - admin@example.com
# ${CK8S_CONFIG_PATH}/sc-config.yaml (in addition to the changes above)
user:
  grafana:
    oidc:
      allowedDomains: # set to your domain(s), or unset using [] to deny all
        - set-me
        - example.com

harbor:
  # persistence:
  #   type: objectStorage   # set as default for prod flavor, defaults to "filesystem" for dev
  #   disableRedirect: true # set as default
  oidc:
    groupClaimName: set-me # set to group claim name used by OIDC provider
    adminGroupName: set-me # name of the group that automatically will get admin

elasticsearch:
  extraRoleMappings: # set to configure elasticsearch access, or unset using []
    - mapping_name: kibana_user
      definition:
        users:
          - set-me
    - mapping_name: kubernetes_log_reader
      definition:
        users:
          - set-me
    - mapping_name: all_access
      definition:
        users:
          - set-me

alerts:
  opsGenieHeartbeat:
    # enabled: true # set as default for prod flavour, defaults to "false" for dev
    name: set-me    # set to name the heartbeat if enabled

issuers:
  letsencrypt:
    prod:
      email: set-me # set this to an email to receive LetsEncrypt notifications
    staging:
      email: set-me # set this to an email to receive LetsEncrypt notifications
# ${CK8S_CONFIG_PATH}/wc-config.yaml (in addition to the changes above)
user:
  namespaces: # set this to create user namespaces, or unset using []
    - set-me
    - production
    - staging
  adminUsers: # set this to create admins in the user namespaces, or unset using []
    - set-me
    - admin@example.com
  adminGroups: # set this to create admin groups in the user namespaces, or unset using []
    - set-me
  # alertmanager: # add this block to enable user accessible alertmanager
  #   enabled: true
  #   namespace: alertmanager # note that this namespace must be listed above under "user.namespaces"

opa:
  imageRegistry:
    URL: # set this to the allowed image registry, or unset using [] to deny all
      - set-me
      - harbor.example.com
# ${CK8S_CONFIG_PATH}/secrets.yaml
objectStorage:
  s3:
    accessKey: set-me # set to your s3 accesskey
    secretKey: set-me # set to your s3 secretKey

Create S3 buckets

You can use the following script to create the required S3 buckets. The script uses s3cmd in the background and gets configuration and credentials for your S3 provider from the $CK8S_CONFIG_PATH/.state/s3cfg.ini file.

# To get your S3 access and secret keys, run:
openstack --os-interface public ec2 credentials list
# If you don't have any, create them with:
openstack --os-interface public ec2 credentials create

# The s3cmd config file "$CK8S_CONFIG_PATH/.state/s3cfg.ini" should contain:
access_key =
secret_key =
host_base = s3-<region>.citycloud.com:8080
host_bucket = s3-<region>.citycloud.com:8080
signurl_use_https = True
use_https = True

./scripts/S3/entry.sh --s3cfg "$CK8S_CONFIG_PATH/.state/s3cfg.ini" create
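
To verify that the buckets were created, you can list them with the same configuration file, for example:

s3cmd --config "$CK8S_CONFIG_PATH/.state/s3cfg.ini" ls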

DNS

If you are using service load balancers on CityCloud, you must provision them before you can set up the DNS. You can do that by running the following:

# for the service cluster
bin/ck8s bootstrap sc

bin/ck8s ops helmfile sc -l app=common-psp-rbac -l app=service-cluster-psp-rbac apply

bin/ck8s ops helmfile sc -l app=kube-prometheus-stack apply

bin/ck8s ops helmfile sc -l app=ingress-nginx apply

bin/ck8s ops kubectl sc get svc -n ingress-nginx

# for the workload clusters
for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
    ln -sf $CK8S_CONFIG_PATH/.state/kube_config_${CLUSTER}.yaml $CK8S_CONFIG_PATH/.state/kube_config_wc.yaml

    bin/ck8s bootstrap wc

    bin/ck8s ops helmfile wc -l app=common-psp-rbac -l app=workload-cluster-psp-rbac apply

    bin/ck8s ops helmfile wc -l app=kube-prometheus-stack apply

    bin/ck8s ops helmfile wc -l app=ingress-nginx apply

    bin/ck8s ops kubectl wc get svc -n ingress-nginx

done

Now that we have the load balancer public IPs, we can set up the DNS.
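
One way to capture the ingress load balancer IP of the service cluster is sketched below; the Service name ingress-nginx-controller is an assumption and may differ in your setup:

SC_LB_IP=$(bin/ck8s ops kubectl sc get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "${SC_LB_IP}"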

  1. If you are using Exoscale as your DNS provider, make sure that you have the Exoscale CLI installed; you can follow this guide for more details.
  2. If you are using AWS, make sure you have the AWS CLI installed and follow the instructions below:
    vim ${CK8S_CONFIG_PATH}/dns.json
    
    # add these lines
    {
      "Comment": "Manage test cluster DNS records",
      "Changes": [
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "*.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<wc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "*.ops.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "grafana.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "harbor.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "notary.harbor.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "kibana.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "dex.CK8S_ENVIRONMENT_NAME.DOMAIN",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{ "Value": "<sc_cluster_lb_ip>"}]
          }
        }
      ]
    }
    
    
    # set your profile credentials
    AWS_ACCESS_KEY_ID='my-access-key'
    AWS_SECRET_ACCESS_KEY='my-secret-key'

    aws configure set aws_access_key_id "${AWS_ACCESS_KEY_ID}"
    aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"
    aws configure set region <region_name>
    
    # get your hosted zone id
    aws route53 list-hosted-zones
    
    # apply the DNS changes
    aws route53 change-resource-record-sets --hosted-zone-id <hosted_zone_id> --change-batch file://${CK8S_CONFIG_PATH}/dns.json
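
    # Optionally, once the change has propagated, spot-check a record.
    # This is a sketch assuming dig is installed; substitute your real names for the placeholders.
    dig +short "grafana.CK8S_ENVIRONMENT_NAME.DOMAIN"   # should return <sc_cluster_lb_ip>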
    

Install Compliant Kubernetes apps

Start with the service cluster:

ln -sf $CK8S_CONFIG_PATH/.state/kube_config_${SERVICE_CLUSTER}.yaml $CK8S_CONFIG_PATH/.state/kube_config_sc.yaml
./bin/ck8s apply sc  # Respond "n" if you get a WARN

Then the workload clusters:

for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
    ln -sf $CK8S_CONFIG_PATH/.state/kube_config_${CLUSTER}.yaml $CK8S_CONFIG_PATH/.state/kube_config_wc.yaml
    ./bin/ck8s apply wc  # Respond "n" if you get a WARN
done

Settling

Important

Leave sufficient time for the system to settle, e.g., request TLS certificates from LetsEncrypt, perhaps as much as 20 minutes.

You can check if the system settled as follows:

for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
    sops exec-file ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml \
        'kubectl --kubeconfig {} get --all-namespaces pods'
done

Check the output of the command above. All Pods need to be Running or Completed.

for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
    sops exec-file ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml \
        'kubectl --kubeconfig {} get --all-namespaces issuers,clusterissuers,certificates'
done

Check the output of the command above. All resources need to have the Ready column True.
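
Instead of polling manually, you can also wait for all Certificates to become Ready, for example:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
    sops exec-file "${CK8S_CONFIG_PATH}/.state/kube_config_${CLUSTER}.yaml" \
        'kubectl --kubeconfig {} wait certificates --all --all-namespaces --for=condition=Ready --timeout=20m'
done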

Testing

After completing the installation step you can test if the apps are properly installed and ready using the commands below.

Start with the service cluster:

ln -sf $CK8S_CONFIG_PATH/.state/kube_config_${SERVICE_CLUSTER}.yaml $CK8S_CONFIG_PATH/.state/kube_config_sc.yaml
./bin/ck8s test sc  # Respond "n" if you get a WARN

Then the workload clusters:

for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
    ln -sf $CK8S_CONFIG_PATH/.state/kube_config_${CLUSTER}.yaml $CK8S_CONFIG_PATH/.state/kube_config_wc.yaml
    ./bin/ck8s test wc  # Respond "n" if you get a WARN
done

Done. Navigate to the endpoints, for example grafana.$BASE_DOMAIN, kibana.$BASE_DOMAIN, harbor.$BASE_DOMAIN, etc., to discover Compliant Kubernetes' features.

Teardown

Removing Compliant Kubernetes Apps from your cluster

To remove the applications added by Compliant Kubernetes, you can use the two scripts clean-sc.sh and clean-wc.sh, located in the scripts folder.

They perform the following actions:

  1. Delete the added helm charts
  2. Delete the added namespaces
  3. Delete any remaining PersistentVolumes
  4. Delete the added CustomResourceDefinitions

Note: if user namespaces are managed by Compliant Kubernetes apps then they will also be deleted if you clean up the workload cluster.
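
A sketch of how running them could look, from the root of the compliantkubernetes-apps repository (it is assumed here that the scripts read the same CK8S_CONFIG_PATH and kubeconfig symlinks as earlier steps):

ln -sf "$CK8S_CONFIG_PATH/.state/kube_config_${SERVICE_CLUSTER}.yaml" "$CK8S_CONFIG_PATH/.state/kube_config_sc.yaml"
./scripts/clean-sc.sh

for CLUSTER in "${WORKLOAD_CLUSTERS[@]}"; do
    ln -sf "$CK8S_CONFIG_PATH/.state/kube_config_${CLUSTER}.yaml" "$CK8S_CONFIG_PATH/.state/kube_config_wc.yaml"
    ./scripts/clean-wc.sh
done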

Remove infrastructure

To tear down the infrastructure, switch to the root directory of the Kubespray repo (see the Terraform section). Before destroying, make sure you remove all PersistentVolumes and Services of type LoadBalancer. These objects may create cloud resources that are not managed by Terraform and would therefore not be removed when the infrastructure is destroyed.
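
One way to spot such leftovers before destroying anything, using the kubeconfigs from earlier:

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  sops exec-file "${CK8S_CONFIG_PATH}/.state/kube_config_${CLUSTER}.yaml" \
    'kubectl --kubeconfig {} get pv,svc --all-namespaces'
done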

MODULE_PATH="$(pwd)/kubespray/contrib/terraform/openstack"

for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
  pushd "${MODULE_PATH}"
  terraform init
  terraform destroy -var-file="${CK8S_CONFIG_PATH}/${CLUSTER}-config/cluster.tfvars" -state="${CK8S_CONFIG_PATH}/${CLUSTER}-config/terraform.tfstate"
  popd
done

Remove DNS records

Don't forget to remove any DNS records and object storage buckets that you may have created.