Standard Template for on-prem Environment¶
This document contains instructions on how to set up a new Compliant Kubernetes on-prem environment.
Prerequisites¶
Decisions to be taken

Decisions regarding the following items should be made before deploying Compliant Kubernetes.
- Overall architecture, i.e., VM sizes, load-balancer configuration, storage configuration, etc.
- Identity Provider (IdP) choice and configuration. See this page.
- On-call Management Tool (OMT) choice and configuration
- Make sure you install all prerequisites on your laptop.
- Prepare Ubuntu-based VMs. If you are using public clouds, you can create VMs using the scripts included in Kubespray:
    - For Azure, use the AzureRM scripts.
    - For other clouds, use their respective Terraform scripts.
- Create a git working folder to store Compliant Kubernetes configurations in a version-controlled manner, for example as sketched below. Run the following commands from the root of the config repo.
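A minimal sketch of creating such a repo; the folder name is illustrative:

```bash
# Create a new, version-controlled configuration folder (name is a placeholder)
mkdir ck8s-config && cd ck8s-config
git init
```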
Note
The following steps are done from the root of the git repository you created for the configurations.
Note

You can choose names for your Management Cluster and Workload Cluster by changing the values for `SERVICE_CLUSTER` and `WORKLOAD_CLUSTERS` respectively.

```bash
export CK8S_CONFIG_PATH=./
export CK8S_ENVIRONMENT_NAME=<my-ck8s-cluster>
export CK8S_CLOUD_PROVIDER=[exoscale|safespring|citycloud|elastx|aws|baremetal]
export CK8S_FLAVOR=[dev|prod] # defaults to dev
export CK8S_PGP_FP=<PGP-fingerprint> # retrieve with gpg --list-secret-keys
SERVICE_CLUSTER="sc"
WORKLOAD_CLUSTERS="wc"
```
- Add the Elastisys Compliant Kubernetes Kubespray repo as a `git submodule` to the configuration repo and install prerequisites as follows:

```bash
git submodule add https://github.com/elastisys/compliantkubernetes-kubespray.git
git submodule update --init --recursive
cd compliantkubernetes-kubespray
pip3 install -r kubespray/requirements.txt # this will install ansible
ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-pass --connection local --inventory 127.0.0.1, get-requirements.yaml
cd .. # return to the root of the config repo before the next step
```
- Add the Compliant Kubernetes Apps repo as a `git submodule` to the configuration repo and install prerequisites as follows:

```bash
git submodule add https://github.com/elastisys/compliantkubernetes-apps.git
cd compliantkubernetes-apps
ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-pass --connection local --inventory 127.0.0.1, get-requirements.yaml
```
- Create the domain name. You need a domain name to access the different services in your environment. You will need to set up the following DNS entries (replace `example.com` with your domain name):
    - Point these domains to the Workload Cluster ingress controller (this step is done during the Compliant Kubernetes apps installation):
        - `*.example.com`
    - Point these domains to the Management Cluster ingress controller (this step is done during the Compliant Kubernetes apps installation):
        - `*.ops.example.com`
        - `dex.example.com`
        - `grafana.example.com`
        - `harbor.example.com`
        - `opensearch.example.com`

    If both Management and Workload Clusters are in the same subnet

    If both the Management and Workload Clusters are in the same subnet, configure the following domain names to resolve to the private IP addresses of the Management Cluster's worker nodes (replace `example.com` with your domain name):

    - `*.thanos.ops.example.com`
    - `*.opensearch.ops.example.com`
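Once the records are in place, you can verify that they resolve as expected, e.g., with `dig` (the hostnames below assume `example.com` is your domain):

```bash
dig +short grafana.example.com   # should resolve to the Management Cluster load balancer
dig +short myapp.example.com     # any name under *.example.com resolves to the Workload Cluster load balancer
```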
- Create S3 credentials and add them to `.state/s3cfg.ini`.
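A minimal sketch of what `.state/s3cfg.ini` may look like; the keys and endpoint below are placeholders for the values from your S3 provider:

```ini
[default]
# Credentials and endpoint from your S3 provider (placeholders)
access_key = <access-key>
secret_key = <secret-key>
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
use_https = True
```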
- Set up load balancers. You need to set up two load balancers, one for the Workload Cluster and one for the Management Cluster.
- Make sure you have all necessary tools.
Deploying Compliant Kubernetes using Kubespray¶
How to change the default Kubernetes subnet addresses

If the default IP block ranges used for Docker and Kubernetes are the same as the internal IP ranges used in your company, you can change the values to resolve the conflict as follows. Note that you can use any valid private IP address range; the values below are only examples.

* For the Management Cluster: Add `kube_service_addresses: 10.178.0.0/18` and `kube_pods_subnet: 10.178.120.0/18` in the `${CK8S_CONFIG_PATH}/sc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` file.
* For the Workload Cluster: Add `kube_service_addresses: 10.178.0.0/18` and `kube_pods_subnet: 10.178.120.0/18` in the `${CK8S_CONFIG_PATH}/wc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` file.
* For the Management Cluster: Add `docker_options: "--default-address-pool base=10.179.0.0/24,size=24"` in the `${CK8S_CONFIG_PATH}/sc-config/group_vars/all/docker.yml` file.
* For the Workload Cluster: Add `docker_options: "--default-address-pool base=10.179.4.0/24,size=24"` in the `${CK8S_CONFIG_PATH}/wc-config/group_vars/all/docker.yml` file.
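For example, after these changes the Management Cluster's `ck8s-k8s-cluster.yaml` would contain the following lines (the ranges are the example values from above):

```yaml
kube_service_addresses: 10.178.0.0/18
kube_pods_subnet: 10.178.120.0/18
```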
Init Kubespray config in your config path¶

```bash
for CLUSTER in ${SERVICE_CLUSTER} ${WORKLOAD_CLUSTERS}; do
  compliantkubernetes-kubespray/bin/ck8s-kubespray init $CLUSTER $CK8S_CLOUD_PROVIDER $CK8S_PGP_FP
done
```
Configure OIDC¶
To configure OpenID access for the Kubernetes API and other services, Dex should be configured with your identity provider. Check what Dex needs from your identity provider.
Configure OIDC endpoint¶
Set `kube_oidc_url` in `${CK8S_CONFIG_PATH}/sc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` and `${CK8S_CONFIG_PATH}/wc-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` based on your cluster. For example, if your domain is `example.com`, set `kube_oidc_url: https://dex.example.com` in both files.
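That is, both files would contain a line like the following (with `example.com` replaced by your domain):

```yaml
kube_oidc_url: https://dex.example.com
```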
Copy the VMs information to the inventory files¶
Add the host name, user and IP address of each VM that you prepared above in `${CK8S_CONFIG_PATH}/sc-config/inventory.ini` for the Management Cluster and `${CK8S_CONFIG_PATH}/wc-config/inventory.ini` for the Workload Cluster. Moreover, you also need to add the host names of the master nodes under `[kube_control_plane]`, etcd nodes under `[etcd]` and worker nodes under `[kube_node]`.
Note
Make sure that the user has SSH access to the VMs.
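A minimal sketch of what an `inventory.ini` may look like; the host names, users and IP addresses below are placeholders:

```ini
[all]
cp0 ansible_host=10.0.10.10 ansible_user=ubuntu
worker0 ansible_host=10.0.10.20 ansible_user=ubuntu
worker1 ansible_host=10.0.10.21 ansible_user=ubuntu

[kube_control_plane]
cp0

[etcd]
cp0

[kube_node]
worker0
worker1
```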
Run Kubespray to deploy the Kubernetes clusters¶
```bash
for CLUSTER in ${SERVICE_CLUSTER} ${WORKLOAD_CLUSTERS}; do
  compliantkubernetes-kubespray/bin/ck8s-kubespray apply $CLUSTER --flush-cache
done
```
Note
The kubeconfig for the Workload Cluster, `.state/kube_config_wc.yaml`, will not be usable until you have installed Dex in the Management Cluster (by deploying apps).
Set up Rook¶
This is only needed for infrastructure providers that do not natively support Kubernetes storage.
Run the following commands to set up Rook.

```bash
for CLUSTER in sc wc; do
  export KUBECONFIG=${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml
  compliantkubernetes-kubespray/rook/deploy-rook.sh
done
```
Note
If the kubeconfig files for the clusters are encrypted with SOPS, you need to decrypt them before using them:

```bash
sops --decrypt ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml > $CLUSTER.yaml
export KUBECONFIG=$CLUSTER.yaml
```
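After the deploy script finishes, you can check that the Rook pods come up; `rook-ceph` is the namespace shown in the example output below:

```bash
# Uses the KUBECONFIG exported above
kubectl get pods -n rook-ceph
```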
Please restart the operator pod, `rook-ceph-operator*`, if some pods stall in the initialization state as shown below:

```
rook-ceph   rook-ceph-crashcollector-minion-0-b75b9fc64-tv2vg    0/1   Init:0/2   0   24m
rook-ceph   rook-ceph-crashcollector-minion-1-5cfb88b66f-mggrh   0/1   Init:0/2   0   36m
rook-ceph   rook-ceph-crashcollector-minion-2-5c74ffffb6-jwk55   0/1   Init:0/2   0   14m
```
Important
Pods in Pending state usually indicate a resource shortage. In such cases, you need to use bigger instances.
Test Rook¶
To test Rook, proceed as follows:

```bash
for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml apply -f https://raw.githubusercontent.com/rook/rook/release-1.5/cluster/examples/kubernetes/ceph/csi/rbd/pvc.yaml
done

for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml get pvc
done
```
You should see the PVCs in `Bound` state. If you want to clean up the previously created PVCs:

```bash
for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
  kubectl --kubeconfig ${CK8S_CONFIG_PATH}/.state/kube_config_$CLUSTER.yaml delete pvc rbd-pvc
done
```
Deploying Compliant Kubernetes Apps¶
How to change the local DNS IP if you changed the default Kubernetes subnet address

You need to change the default CoreDNS IP address in the `common-config.yaml` file if you changed the default IP block used for Kubernetes services above. To get the CoreDNS IP address, run the following command:

```bash
${CK8S_CONFIG_PATH}/compliantkubernetes-apps/bin/ck8s ops kubectl sc get svc -n kube-system coredns
```

Then edit the `${CK8S_CONFIG_PATH}/common-config.yaml` file and set the value of the `global.clusterDns` field accordingly.
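For example, if the command above reports a CoreDNS Service IP of 10.178.0.3 (an illustrative value), `common-config.yaml` would contain:

```yaml
global:
  clusterDns: 10.178.0.3
```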
Configure the load balancer IP on the loopback interface of each worker node

The Kubernetes data plane nodes (i.e., worker nodes) cannot connect to themselves via the IP address of the load balancer that fronts them. The easiest solution is to configure the load balancer's IP address on the loopback interface of each node. Create the `/etc/netplan/20-eip-fix.yaml` file and add the following to it, replacing `${loadbalancer_ip_address}` with the IP address of the load balancer for each cluster.
```yaml
network:
  version: 2
  ethernets:
    lo0:
      match:
        name: lo
      dhcp4: false
      addresses:
        - ${loadbalancer_ip_address}/32
```

Then apply the configuration on each node:

```bash
sudo netplan apply
```
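You can verify that the address is now present on the loopback interface:

```bash
ip addr show lo   # the load balancer IP should appear in the output
```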
Initialize the apps configuration¶
```bash
compliantkubernetes-apps/bin/ck8s init
```

This will initialize the configuration in the `${CK8S_CONFIG_PATH}` directory, generating the configuration files `common-config.yaml`, `sc-config.yaml` and `wc-config.yaml`, as well as `secrets.yaml` with randomly generated passwords. It will also generate a read-only default configuration under the `defaults/` directory, which can be used as a guide for available and suggested options.
Configure the apps and secrets¶
The configuration files contain some predefined values that you may want to check and edit based on your environment requirements. The configuration files that require editing are `${CK8S_CONFIG_PATH}/common-config.yaml`, `${CK8S_CONFIG_PATH}/sc-config.yaml`, `${CK8S_CONFIG_PATH}/wc-config.yaml` and `${CK8S_CONFIG_PATH}/secrets.yaml`; set the appropriate values for the relevant configuration fields. Note that the latter is encrypted.

```bash
vim ${CK8S_CONFIG_PATH}/sc-config.yaml
vim ${CK8S_CONFIG_PATH}/wc-config.yaml
vim ${CK8S_CONFIG_PATH}/common-config.yaml
```

Edit the `secrets.yaml` file and add the credentials for:

- S3 -- used for backup storage
- Dex connectors -- check your identity provider
- On-call management tool -- check the supported on-call management tools

```bash
sops ${CK8S_CONFIG_PATH}/secrets.yaml
```
Tip
The default configuration for the Management Cluster and the Workload Cluster is available in the `${CK8S_CONFIG_PATH}/defaults/` directory and can be used as a reference for available options.
Warning
Do not modify the read-only default configuration files found in the `${CK8S_CONFIG_PATH}/defaults/` directory. Instead, configure the cluster by modifying the regular files `${CK8S_CONFIG_PATH}/sc-config.yaml` and `${CK8S_CONFIG_PATH}/wc-config.yaml`, as they will override the default options.
Create S3 buckets¶
You can use the following command to create the required S3 buckets. The command uses `s3cmd` in the background and reads the configuration and credentials for your S3 provider from the `~/.s3cfg` file.

```bash
compliantkubernetes-apps/bin/ck8s s3cmd create
```
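You can optionally verify that the buckets were created; this sketch assumes your S3 credentials are in `~/.s3cfg`:

```bash
s3cmd --config ~/.s3cfg ls   # lists the buckets visible with these credentials
```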
Install Compliant Kubernetes apps¶
This will set up apps, first in the Management Cluster and then in the Workload Cluster:

```bash
compliantkubernetes-apps/bin/ck8s apply sc
compliantkubernetes-apps/bin/ck8s apply wc
```
Settling¶
Info
Leave sufficient time for the system to settle, e.g., to request TLS certificates from Let's Encrypt; this can take as much as 20 minutes.

Check that all Helm charts succeeded:

```bash
compliantkubernetes-apps/bin/ck8s ops helm wc list -A --all
```
You can check whether the system settled as follows:

```bash
for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
  compliantkubernetes-apps/bin/ck8s ops kubectl $CLUSTER get --all-namespaces pods
done
```

Check the output of the command above. All pods should be in `Running` or `Completed` status.
```bash
for CLUSTER in ${SERVICE_CLUSTER} "${WORKLOAD_CLUSTERS[@]}"; do
  compliantkubernetes-apps/bin/ck8s ops kubectl $CLUSTER get --all-namespaces issuers,clusterissuers,certificates
done
```

Check the output of the command above. All resources should have `True` in the `Ready` column.
Testing¶
After completing the installation steps, you can test whether the apps are properly installed and ready using the commands below.

Start with the Management Cluster:

```bash
compliantkubernetes-apps/bin/ck8s test sc
```

Then the Workload Clusters:

```bash
compliantkubernetes-apps/bin/ck8s test wc
```