# Use HostNetwork or LoadBalancer for Ingress
- Status: accepted
- Deciders: Axel, Cristian, Fredrik, Johan, Olle, Viktor
- Date: 2021-02-09
Technical Story: Ingress configuration
## Context and Problem Statement
Many regulations require traffic to be encrypted over the public Internet. Compliant Kubernetes solves this via an Ingress Controller and cert-manager. As of February 2021, Compliant Kubernetes ships with Ingress-NGINX by default, with Ambassador planned as an alternative. The question is: how does traffic arrive at the Ingress Controller?
## Decision Drivers
- We want to obey the Principle of Least Astonishment.
- We want to cater to hybrid cloud deployments, including bare-metal ones, which might lack support for a Kubernetes-controlled load balancer.
- Some deployments, e.g., Bring-Your-Own VMs, might not allow integration with the underlying load balancer.
- We want to keep things simple.
## Considered Options
- Via the host network, i.e., some workers expose the Ingress Controller on their ports 80 and 443.
- Via a NodePort Service, i.e., `kube-proxy` exposes the Ingress Controller on a port in the range 30000-32767 on each worker.
- Via a Service of type LoadBalancer, i.e., the above, plus Kubernetes provisions a load balancer through the Service controller. (All three options are sketched below.)
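In ingress-nginx terms, the options map roughly to the following Helm chart values. This is a minimal sketch under the assumption of a recent ingress-nginx chart; key names and defaults vary between chart versions, so treat them as assumptions to verify against the chart's values.yaml.

```yaml
# Option 1: host network -- the controller binds ports 80 and 443 directly on the workers.
controller:
  kind: DaemonSet                     # run an instance on every (selected) worker
  hostNetwork: true                   # listen on the node's ports 80 and 443
  dnsPolicy: ClusterFirstWithHostNet  # keep in-cluster DNS working on the host network
  service:
    enabled: false                    # no Service in front; DNS points at the workers
---
# Option 2: NodePort -- kube-proxy exposes the controller on a high port on every worker.
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080    # any port in the 30000-32767 range
      https: 30443
---
# Option 3: LoadBalancer -- the Service controller provisions a load balancer at the provider.
controller:
  service:
    type: LoadBalancer
```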
## Decision Outcome
Chosen options:

- Use the host network if a Kubernetes-controlled load balancer is unavailable or undesired. If necessary, front the worker nodes with a manual or Terraform-controlled load balancer. This includes:
    - Deployments where load-balancing does not add value, e.g., a deployment planned to have only a single node or a single worker for the foreseeable future: point the DNS entry at the worker IP instead.
    - Exoscale currently falls in this category, since its Kubernetes integration is rather recent.
    - Safespring falls in this category, since it is missing load balancers.
    - Infrastructure Providers missing a storage controller, where it might be undesirable to perform the integration "just" for load-balancing.
- Use a Service of type LoadBalancer when available (see the sketch below). This includes AWS, Azure, GCP and CityCloud.
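With the LoadBalancer option, the Service controller requests a load balancer from the Infrastructure Provider and publishes its address on the Service once provisioning finishes. The manifest below is an illustrative sketch, not the exact Service shipped by Compliant Kubernetes; the name, namespace, selector and ports are assumptions.

```yaml
# Sketch of a Service of type LoadBalancer in front of the Ingress Controller.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name and namespace
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http             # assumes the controller pod names its ports http/https
    - name: https
      port: 443
      targetPort: https
# Once the provider has provisioned the load balancer, its IP or hostname appears under
# status.loadBalancer.ingress -- only then can the real DNS entries be created.
```

The external address is thus only known after the Service has been reconciled, which is what the considerations below work around.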
Additional considerations: This means that, generally, it will not be possible to set up the correct DNS entries until after we apply Compliant Kubernetes Apps. There is a risk that "the Internet" -- Let's Encrypt specifically -- performs DNS lookups too soon and creates negative DNS cache entries with a long lifetime. Therefore, placeholder IP addresses must be used, e.g.:
*.$BASE_DOMAIN 60s A 203.0.113.123
*.ops.$BASE_DOMAIN 60s A 203.0.113.123
203.0.113.123 is in TEST-NET-3 and is safe to use as a placeholder. This approach is inspired by kops and should not feel astonishing.
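Once Compliant Kubernetes Apps has been applied and the real address of the load balancer (or worker) is known, the placeholders are replaced in the same way; 198.51.100.7 below is only a stand-in from the TEST-NET-2 documentation range. The short 60s TTL keeps the switch-over quick.

*.$BASE_DOMAIN 60s A 198.51.100.7
*.ops.$BASE_DOMAIN 60s A 198.51.100.7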
### Positive Consequences
- We make the best of each Infrastructure Provider.
- We obey the Principle of Least Astonishment.
- We do not add a load balancer "just because".
### Negative Consequences
- Complexity increases a bit; however, this feels like essential complexity.