I’ve alluded to this point several times before, but I’m writing this post to reiterate how easy it is to manage core cluster components with Helm and Terraform.
resource "helm_release" "nginx-ingress" {
name = "wan"
namespace = "default"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
version = "4.7.5"
set {
name = "controller.service.loadBalancerIP"
value = "172.30.190.63"
}
}
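For context, the resource above assumes the Helm provider is already pointed at your cluster. Here’s a minimal provider setup, assuming a local kubeconfig and the 2.x provider (the version constraint and config path are illustrative):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.12"
    }
  }
}

provider "helm" {
  # Assumes kubeconfig-based auth; swap in your own cluster credentials.
  kubernetes {
    config_path = "~/.kube/config"
  }
}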
To upgrade, all we need to do is bump the version:
~ version = "4.7.5" -> "4.11.3"
Here’s the OpenTofu plan:
OpenTofu will perform the following actions:

  # helm_release.nginx-ingress will be updated in-place
  ~ resource "helm_release" "nginx-ingress" {
        id       = "wan"
      ~ metadata = [
          - {
              - app_version = "1.8.5"
              - chart       = "ingress-nginx"
              - name        = "wan"
              - namespace   = "default"
              - revision    = 2
              - values      = jsonencode(
                    {
                      - controller = {
                          - service = {
                              - loadBalancerIP = "172.30.190.63"
                            }
                        }
                    }
                )
              - version     = "4.7.5"
            },
        ] -> (known after apply)
        name     = "wan"
      ~ version  = "4.7.5" -> "4.11.3"

        # (25 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
If we let it run, Helm rolls out the new controller one pod at a time, with minimal downtime that mostly goes unnoticed by short-lived TCP connections.
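If you’d rather pin that rollout behavior down explicitly instead of relying on defaults, the chart exposes an update strategy you can set from the same resource. A sketch, assuming the ingress-nginx chart’s controller.updateStrategy values:

# added inside the helm_release "nginx-ingress" resource above
set {
  name  = "controller.updateStrategy.type"
  value = "RollingUpdate"
}

set {
  name  = "controller.updateStrategy.rollingUpdate.maxUnavailable"
  value = "1"
}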
If we give it a bogus value:
~ version = "4.11.3" -> "54.11.3"
Tofu just errors out:
Error: chart "ingress-nginx" version "54.11.3" not found in https://kubernetes.github.io/ingress-nginx repository
Of course, you can do all of this with the Helm command line, but the point is that Terraform or OpenTofu lets you version-control and abstract all of this critical cluster configuration within your organization.
For example, you might create a module representing ‘Kubernetes cluster base configuration for our org’, which looks like this at the top level:
module "k8s-prod" {
source = "./modules/k8s"
context = "prod"
region = "us-east"
ingress_ip = "172.10.90.1"
nginx-ingress = "4.7.1"
coredns = "1.2.1"
rook = "4.5.1"
cert-manager = "7.8.1"
}
module "k8s-dev" {
source = "./modules/k8s"
context = "dev"
region = "us-west"
ingress_ip = "44.10.32.2"
nginx-ingress = "4.7.5"
coredns = "1.2.9"
rook = "4.5.8"
cert-manager = "7.8.6"
}
This would let us rehearse critical upgrades on a per-cluster basis while fundamentally using the same module for all clusters.
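Inside the module, those inputs just thread through to the individual releases. A minimal sketch of what ./modules/k8s might contain, showing only the nginx-ingress piece (variable names match the calls above; underscores would be more conventional than hyphens, but HCL accepts both):

# modules/k8s/variables.tf
variable "ingress_ip" {
  type = string
}

variable "nginx-ingress" {
  type        = string
  description = "ingress-nginx chart version for this cluster"
}

# modules/k8s/main.tf
resource "helm_release" "nginx-ingress" {
  name       = "wan"
  namespace  = "default"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = var.nginx-ingress

  set {
    name  = "controller.service.loadBalancerIP"
    value = var.ingress_ip
  }
}

With that in place, upgrading a cluster is a one-line diff against its module call, and promoting the versions you rehearsed in dev to prod is just copying a few lines.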