Multi-Cloud Kubernetes in One Keystroke

This weekend I created mck, a tool to provision a kubernetes cluster on generic compute instances spanning multiple cloud providers. Whether such an endeavor is practical or worthwhile IRL is up for debate; I just wanted to see if it could be done.

implementation

mck wraps opentofu (a fork of terraform) and ansible in python, providing a single command-line entrypoint for the various CRUD tasks. There is zero manual work other than the initial API token setup for each provider.
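
At a high level the wrapper is nothing exotic. Here is a minimal sketch of what such an entrypoint might look like - this is not the actual mck source, just an illustration; the --cleanup flag is the one shown in the demos below:

#!/usr/bin/env python3
"""Hypothetical sketch of an mck-style entrypoint; not the actual mck source."""
import argparse
import subprocess

def tofu(*args: str) -> None:
    # shell out to the opentofu CLI in the current working directory
    subprocess.run(["tofu", *args], check=True)

def main() -> None:
    parser = argparse.ArgumentParser(prog="mck")
    parser.add_argument("--cleanup", action="store_true",
                        help="destroy all provisioned resources")
    args = parser.parse_args()
    if args.cleanup:
        tofu("destroy", "-auto-approve")
        return
    tofu("apply", "-auto-approve")
    # ...then generate the ansible inventory from the tf state and reconcile
    # the cluster, as sketched in the following sections.

if __name__ == "__main__":
    main()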

Wrapping opentofu rather than using the providers' client libraries directly gives us a well-proven means of provisioning base resources, as well as a well-understood user interface - the cluster is described in instances.tf.

Once compute resources are provisioned, the tf state is examined and an ansible inventory is generated. Base software installation is then performed with ansible_runner.
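
A hedged sketch of that handoff, assuming the state is read with tofu show -json and a flat host list is enough for the inventory. The per-provider address attributes (ipv4_address, ip_address) match the plan output shown later; main.yml and the inventory file are the ones referenced in the tool's output below:

import json
import subprocess

import ansible_runner

# which state attribute holds the public address, per resource type
ADDRESS_KEYS = {"digitalocean_droplet": "ipv4_address", "linode_instance": "ip_address"}

def build_inventory():
    # read the opentofu state as JSON and collect one address per instance
    state = json.loads(subprocess.run(["tofu", "show", "-json"],
                                      capture_output=True, check=True).stdout)
    hosts = []
    for res in state["values"]["root_module"]["resources"]:
        key = ADDRESS_KEYS.get(res["type"])
        if key:
            hosts.append(res["values"][key])
    return hosts

def configure_nodes(hosts):
    # write a flat inventory file and run the base playbook with ansible_runner
    with open("inventory", "w") as f:
        f.write("\n".join(hosts) + "\n")
    ansible_runner.run(private_data_dir=".", playbook="main.yml", inventory="inventory")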

The tool then follows a basic ‘operator’ design pattern - the status of each node is inspected and compared with the desired cluster state to determine which nodes need to join the cluster. If no instances have any peers, the cluster is bootstrapped. If individual instances are found with no peers, they join the leader with the most peers. If the leader knows of dead peers that are not found in the current infrastructure state, they must have just been removed, so they are purged from kubernetes as well.
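
A rough sketch of that reconciliation pass, with hypothetical callables standing in for the ssh and microk8s plumbing (the joins presumably go through microk8s add-node / microk8s join, judging by the output below):

def reconcile(desired_nodes, get_peers, add_node, join, purge):
    # desired_nodes comes from the tf state; get_peers asks each node (over ssh)
    # which cluster peers it currently sees, returned as a set of hostnames
    peers = {node: get_peers(node) for node in desired_nodes}

    if not any(peers.values()):
        # no instance has any peers yet: bootstrap by electing the first node
        leader = desired_nodes[0]
    else:
        # otherwise the node that sees the most peers is the leader
        leader = max(peers, key=lambda n: len(peers[n]))

    # peers the leader remembers but which no longer exist in the tf state
    # were just removed, so purge them from kubernetes too
    for dead in peers[leader] - set(desired_nodes):
        purge(leader, dead)

    # instances with no peers are orphans and join the leader
    for node in desired_nodes:
        if node != leader and not peers[node]:
            join(node, add_node(leader))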

Ultimately what this means for the user is that we can just edit instances.tf and run the tool, and it will do its best to form a cluster. Of course there are some caveats - the tool currently won’t protect you from destroying the cluster if too much changes at once, so with three nodes we should only edit one node at a time, with five nodes we can disrupt two at a time, and so on. In addition, if a stateful service within kubernetes is bound to a node that is due to be removed or replaced, it should be drained or otherwise relocated beforehand.
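
That limit is just the usual quorum arithmetic for the cluster datastore; a guard like the following (not something the tool does today) could refuse an overly disruptive plan:

def max_safe_disruptions(node_count: int) -> int:
    # a raft-style datastore needs a majority of voters, so we can lose at most
    # ceil(n/2) - 1 nodes and keep quorum: 1 of 3, 2 of 5, and so on
    return max((node_count - 1) // 2, 0)

def assert_plan_is_safe(node_count: int, nodes_replaced: int) -> None:
    # hypothetical pre-apply check
    if nodes_replaced > max_safe_disruptions(node_count):
        raise SystemExit(f"refusing to replace {nodes_replaced} of {node_count} nodes at once")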

To grow a cluster, we can simply add nodes in instances.tf and run the tool again - resulting in new instances being provisioned and added to the cluster.

Likewise, upgrading a node from debian 11 to debian 12 is as simple as incrementing the image version in instances.tf and running the tool.

limitations

Again, I’m not necessarily claiming this is a good idea.

The most obvious weak point is traffic ingress. A naive first solution might be DNS round robin - but there are two big issues with this: DNS won’t be aware of node downtime, and kube-proxy (or an ingress controller like nginx) will by default do its own load balancing once traffic makes its way into the cluster - which is quite bad, because we don’t want traffic unnecessarily bouncing through one datacenter on its way to another. We can force locality of Services with internalTrafficPolicy: Local, but we’ll still have the problem of getting traffic to the right datacenter in the first place. When you use a common managed k8s, these are the kinds of problems your cloud’s LoadBalancer solves for you.
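
For completeness, forcing that locality is a one-field change on the Service. A small illustration using the Python kubernetes client - the Service name and namespace are placeholders, and this is not part of mck:

from kubernetes import client, config

# illustration only: keep a Service's in-cluster traffic on node-local endpoints
# so requests don't hop between datacenters; "web"/"default" are placeholder names
config.load_kube_config()
client.CoreV1Api().patch_namespaced_service(
    name="web",
    namespace="default",
    body={"spec": {"internalTrafficPolicy": "Local"}},
)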

The most “correct” way to do this that I can think of would be a custom-built deployment of anycast load balancers peering via BGP to the internet - perhaps providing a formal IngressClass - which are coupled with the cluster control plane such that they can make intelligent forwarding decisions. These would naturally attract traffic closest to them (for the most part), and could forward it without being too wasteful. To my understanding this is basically what Cloudflare does (excluding the kubernetes bit). Another similar option may be to implement a LoadBalancer that provisions public /32 ipv4 addresses local to the pods of a Service and advertises them via BGP. This would likely be sensitive to routing table convergence time. Both of these are of course beyond the scope of this project.

full node OS upgrade

Let’s demonstrate an upgrade of node mck-k8s-do-0 from bullseye to bookworm with just two commands (vim ; python3 mck):

Before state:

root@mck-k8s-linode-0:~# kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
mck-k8s-linode-1   Ready    <none>   30m     v1.28.2   172.233.214.203   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64    containerd://1.6.15
mck-k8s-do-0       Ready    <none>   4m57s   v1.28.2   67.205.142.0      <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-23-amd64   containerd://1.6.15
mck-k8s-linode-0   Ready    <none>   31m     v1.28.2   172.234.19.232    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64    containerd://1.6.15

Replacing debian-11-x64 with debian-12-x64:

~/git/multi-cloud-k8s$ git diff instances.tf
diff --git a/instances.tf b/instances.tf
index 61b49bf..c744f8a 100644
--- a/instances.tf
+++ b/instances.tf
@@ -1,10 +1,10 @@
 resource "digitalocean_droplet" "instance" {
-  image    = "debian-11-x64"
+  image    = "debian-12-x64"
   name     = "mck-k8s-do-${count.index}"
   region   = "nyc1"
   size     = "s-1vcpu-2gb"
   ssh_keys = [digitalocean_ssh_key.default.fingerprint]
   count    = 1
 }
~/git/multi-cloud-k8s$ python3 mck
digitalocean_ssh_key.default: Refreshing state... [id=39828152]
digitalocean_droplet.instance[0]: Refreshing state... [id=382319321]
linode_sshkey.default: Refreshing state...
aws_key_pair.default: Refreshing state... [id=opentofu]
data.aws_ami.image: Reading...
aws_default_vpc.mainvpc: Refreshing state... [id=vpc-0371ba1712735e722]
aws_default_subnet.default_az1: Refreshing state... [id=subnet-046b3f303365c3333]
linode_instance.instance[1]: Refreshing state... [id=51412327]
linode_instance.instance[0]: Refreshing state... [id=51412329]
data.aws_ami.image: Read complete after 0s [id=ami-0c2644caf041bb6de]
aws_default_security_group.default: Refreshing state... [id=sg-06d079c63124325e2]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

OpenTofu will perform the following actions:

  # digitalocean_droplet.instance[0] must be replaced

( output truncated )

      ~ image                = "debian-11-x64" -> "debian-12-x64" # forces replacement

( output truncated )

Plan: 1 to add, 0 to change, 1 to destroy.
digitalocean_droplet.instance[0]: Destroying... [id=382319321]
digitalocean_droplet.instance[0]: Still destroying... [id=382319321, 10s elapsed]
digitalocean_droplet.instance[0]: Still destroying... [id=382319321, 20s elapsed]
digitalocean_droplet.instance[0]: Destruction complete after 21s
digitalocean_droplet.instance[0]: Creating...
digitalocean_droplet.instance[0]: Still creating... [10s elapsed]
digitalocean_droplet.instance[0]: Still creating... [20s elapsed]
digitalocean_droplet.instance[0]: Still creating... [30s elapsed]
digitalocean_droplet.instance[0]: Still creating... [40s elapsed]
digitalocean_droplet.instance[0]: Creation complete after 41s [id=382321148]

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
instance mck-k8s-do-0 is provisioned on digitalocean with address 192.241.145.176
instance mck-k8s-linode-0 is provisioned on linode with address 172.234.19.232
instance mck-k8s-linode-1 is provisioned on linode with address 172.233.214.203

( output truncated )

RUNNING HANDLER [microk8s-ansible : Reboot] ************************************
changed: [192.241.145.176]

PLAY RECAP *********************************************************************
172.233.214.203            : ok=12   changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
172.234.19.232             : ok=12   changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
192.241.145.176            : ok=14   changed=11   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
environment created.  follow-up configuration can be performed with:
ansible-playbook main.yml -i inventory
existing cluster found with candidate leader mck-k8s-linode-0
force removing missing node 67.205.142.0
new node mck-k8s-do-0 will join mck-k8s-linode-0
Contacting cluster at 172.234.19.232
Waiting for this node to finish joining the cluster. .. .. .. ..

The orphaned node was purged, the new node joined, and we’re left with a healthy cluster. The new node reuses the hostname mck-k8s-do-0, but notice it is only 6s old.

root@mck-k8s-linode-0:~# kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
mck-k8s-linode-0   Ready    <none>   38m   v1.28.2   172.234.19.232    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.15
mck-k8s-linode-1   Ready    <none>   37m   v1.28.2   172.233.214.203   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.6.15
mck-k8s-do-0       Ready    <none>   6s    v1.28.2   192.241.145.176   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-9-amd64    containerd://1.6.15

full example

Here’s a full run expanding a three-node cluster to six nodes:

~/git/multi-cloud-k8s$ git diff instances.tf
diff --git a/instances.tf b/instances.tf
index ea22d97..3bb7da1 100644
--- a/instances.tf
+++ b/instances.tf
@@ -4,7 +4,7 @@ resource "digitalocean_droplet" "instance" {
   region   = "nyc1"
   size     = "s-1vcpu-2gb"
   ssh_keys = [digitalocean_ssh_key.default.fingerprint]
-  count    = 1
+  count    = 3
 }

 resource "linode_instance" "instance" {
@@ -13,7 +13,7 @@ resource "linode_instance" "instance" {
   region          = "us-ord"
   type            = "g6-standard-1"
   authorized_keys = [linode_sshkey.default.ssh_key]
-  count           = 1
+  count           = 3
 }
~/git/multi-cloud-k8s$ python3 mck
linode_sshkey.default: Refreshing state...
digitalocean_ssh_key.default: Refreshing state... [id=39818938]
linode_instance.instance[0]: Refreshing state... [id=51380107]
linode_instance.instance[1]: Refreshing state... [id=51380422]
data.aws_ami.image: Reading...
aws_key_pair.default: Refreshing state... [id=opentofu]
aws_default_vpc.mainvpc: Refreshing state... [id=vpc-0371ba1712735e722]
aws_default_subnet.default_az1: Refreshing state... [id=subnet-046b3f303365c3333]
digitalocean_droplet.instance[0]: Refreshing state... [id=382118243]
data.aws_ami.image: Read complete after 1s [id=ami-0c2644caf041bb6de]
aws_default_security_group.default: Refreshing state... [id=sg-06d079c63124325e2]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # digitalocean_droplet.instance[1] will be created
  + resource "digitalocean_droplet" "instance" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "debian-12-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "mck-k8s-do-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-1vcpu-2gb"
      + ssh_keys             = [
          + "f9:50:61:b7:24:c2:6a:e0:01:ce:30:be:b5:3e:df:0d",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_droplet.instance[2] will be created
  + resource "digitalocean_droplet" "instance" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "debian-12-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "mck-k8s-do-2"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "nyc1"
      + resize_disk          = true
      + size                 = "s-1vcpu-2gb"
      + ssh_keys             = [
          + "f9:50:61:b7:24:c2:6a:e0:01:ce:30:be:b5:3e:df:0d",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # linode_instance.instance[2] will be created
  + resource "linode_instance" "instance" {
      + authorized_keys    = [
          + "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKL+1xp+nQIbu02D1NmU+4RTPGblUML21TSzF/Pxg5GM nhensel@61d3-ws",
        ]
      + backups            = (known after apply)
      + backups_enabled    = (known after apply)
      + boot_config_label  = (known after apply)
      + booted             = (known after apply)
      + has_user_data      = (known after apply)
      + host_uuid          = (known after apply)
      + id                 = (known after apply)
      + image              = "linode/debian12"
      + ip_address         = (known after apply)
      + ipv4               = (known after apply)
      + ipv6               = (known after apply)
      + label              = "mck-k8s-linode-2"
      + private_ip_address = (known after apply)
      + region             = "us-ord"
      + resize_disk        = false
      + shared_ipv4        = (known after apply)
      + specs              = (known after apply)
      + status             = (known after apply)
      + swap_size          = (known after apply)
      + type               = "g6-standard-1"
      + watchdog_enabled   = true
    }

Plan: 3 to add, 0 to change, 0 to destroy.
linode_instance.instance[2]: Creating...
digitalocean_droplet.instance[1]: Creating...
digitalocean_droplet.instance[2]: Creating...
linode_instance.instance[2]: Still creating... [10s elapsed]
digitalocean_droplet.instance[2]: Still creating... [10s elapsed]
digitalocean_droplet.instance[1]: Still creating... [10s elapsed]
linode_instance.instance[2]: Still creating... [20s elapsed]
digitalocean_droplet.instance[1]: Still creating... [20s elapsed]
digitalocean_droplet.instance[2]: Still creating... [20s elapsed]
linode_instance.instance[2]: Still creating... [30s elapsed]
digitalocean_droplet.instance[1]: Still creating... [30s elapsed]
digitalocean_droplet.instance[2]: Still creating... [30s elapsed]
digitalocean_droplet.instance[1]: Creation complete after 30s [id=382122787]
digitalocean_droplet.instance[2]: Creation complete after 31s [id=382122788]
linode_instance.instance[2]: Creation complete after 37s [id=51380582]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
instance mck-k8s-do-0 is provisioned on digitalocean with address 206.189.192.42
instance mck-k8s-do-1 is provisioned on digitalocean with address 192.34.61.217
instance mck-k8s-do-2 is provisioned on digitalocean with address 161.35.124.62
instance mck-k8s-linode-0 is provisioned on linode with address 172.232.11.165
instance mck-k8s-linode-1 is provisioned on linode with address 172.234.25.201
instance mck-k8s-linode-2 is provisioned on linode with address 172.233.215.206

PLAY [common] ******************************************************************

TASK [common : wait for connection] ********************************************
ok: [172.232.11.165]
ok: [172.234.25.201]
ok: [206.189.192.42]
ok: [192.34.61.217]
ok: [161.35.124.62]
ok: [172.233.215.206]

TASK [common : apt update] *****************************************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [172.233.215.206]
ok: [192.34.61.217]
ok: [161.35.124.62]

TASK [common : configure hosts] ************************************************
changed: [172.234.25.201]
changed: [172.232.11.165]
changed: [192.34.61.217]
changed: [206.189.192.42]
changed: [161.35.124.62]
changed: [172.233.215.206]

TASK [common : set hostname] ***************************************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [192.34.61.217]
changed: [172.233.215.206]
changed: [161.35.124.62]

RUNNING HANDLER [common : reboot] **********************************************
changed: [161.35.124.62]
changed: [192.34.61.217]
changed: [172.233.215.206]

PLAY [common] ******************************************************************

TASK [Gathering Facts] *********************************************************
ok: [172.232.11.165]
ok: [172.234.25.201]
ok: [192.34.61.217]
ok: [161.35.124.62]
ok: [206.189.192.42]
ok: [172.233.215.206]

TASK [microk8s-ansible : Install role packages] ********************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [172.233.215.206]
changed: [161.35.124.62]
changed: [192.34.61.217]

TASK [microk8s-ansible : Install role packages] ********************************
skipping: [206.189.192.42]
skipping: [192.34.61.217]
skipping: [161.35.124.62]
skipping: [172.232.11.165]
skipping: [172.234.25.201]
skipping: [172.233.215.206]

TASK [microk8s-ansible : Configure hosts] **************************************
skipping: [206.189.192.42]
skipping: [192.34.61.217]
skipping: [161.35.124.62]
skipping: [172.232.11.165]
skipping: [172.234.25.201]
skipping: [172.233.215.206]

TASK [microk8s-ansible : Create link required for --classic confinement] *******
skipping: [206.189.192.42]
skipping: [192.34.61.217]
skipping: [161.35.124.62]
skipping: [172.232.11.165]
skipping: [172.234.25.201]
skipping: [172.233.215.206]

TASK [microk8s-ansible : enable ipvs] ******************************************
ok: [172.234.25.201]
ok: [172.232.11.165]
changed: [192.34.61.217]
changed: [161.35.124.62]
ok: [206.189.192.42]
changed: [172.233.215.206]

TASK [microk8s-ansible : Ensure snapd is running] ******************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [192.34.61.217]
changed: [161.35.124.62]
changed: [172.233.215.206]

TASK [microk8s-ansible : Install microk8s] *************************************
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [172.233.215.206]
changed: [161.35.124.62]
changed: [192.34.61.217]

TASK [microk8s-ansible : Land registry cert] ***********************************
skipping: [206.189.192.42]
skipping: [192.34.61.217]
skipping: [161.35.124.62]
skipping: [172.232.11.165]
skipping: [172.234.25.201]
skipping: [172.233.215.206]

TASK [microk8s-ansible : Configure sysctls] ************************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [172.233.215.206]
changed: [192.34.61.217]
changed: [161.35.124.62]

TASK [microk8s-ansible : Create .kube] *****************************************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [172.233.215.206]
changed: [161.35.124.62]
changed: [192.34.61.217]

TASK [microk8s-ansible : Create symlink for standard kubectl] ******************
ok: [172.234.25.201]
ok: [172.232.11.165]
ok: [206.189.192.42]
changed: [192.34.61.217]
changed: [172.233.215.206]
changed: [161.35.124.62]

RUNNING HANDLER [microk8s-ansible : Reboot] ************************************
changed: [192.34.61.217]
changed: [161.35.124.62]
changed: [172.233.215.206]

PLAY RECAP *********************************************************************
161.35.124.62              : ok=14   changed=11   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
172.232.11.165             : ok=12   changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
172.233.215.206            : ok=14   changed=12   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
172.234.25.201             : ok=12   changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
192.34.61.217              : ok=14   changed=11   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
206.189.192.42             : ok=12   changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
environment created.  follow-up configuration can be performed with:
ansible-playbook main.yml -i inventory
existing cluster found with candidate leader mck-k8s-do-0
orphan node mck-k8s-do-1 will join mck-k8s-do-0
Contacting cluster at 206.189.192.42
Waiting for this node to finish joining the cluster. .. .. .. ..
orphan node mck-k8s-do-2 will join mck-k8s-do-0
Contacting cluster at 206.189.192.42
Waiting for this node to finish joining the cluster. .. .. .. ..
orphan node mck-k8s-linode-2 will join mck-k8s-do-0
Contacting cluster at 206.189.192.42
Waiting for this node to finish joining the cluster. .. .. .. ..
root@mck-k8s-do-0:~# kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
mck-k8s-linode-0   Ready    <none>   34m   v1.28.2
mck-k8s-linode-1   Ready    <none>   17m   v1.28.2
mck-k8s-do-1       Ready    <none>   91s   v1.28.2
mck-k8s-do-2       Ready    <none>   69s   v1.28.2
mck-k8s-do-0       Ready    <none>   38m   v1.28.2
mck-k8s-linode-2   Ready    <none>   19s   v1.28.2

destruction

Cleaning up these cloud resources is as important as creating them. Here we do a full cleanup:

~/git/multi-cloud-k8s$ python3 mck --cleanup
digitalocean_ssh_key.default: Refreshing state... [id=39818938]
linode_sshkey.default: Refreshing state...
linode_instance.instance[2]: Refreshing state... [id=51380582]
linode_instance.instance[0]: Refreshing state... [id=51380107]
linode_instance.instance[1]: Refreshing state... [id=51380422]
digitalocean_droplet.instance[0]: Refreshing state... [id=382118243]
digitalocean_droplet.instance[1]: Refreshing state... [id=382122787]
digitalocean_droplet.instance[2]: Refreshing state... [id=382122788]
aws_key_pair.default: Refreshing state... [id=opentofu]
data.aws_ami.image: Reading...
aws_default_vpc.mainvpc: Refreshing state... [id=vpc-0371ba1712735e722]
aws_default_subnet.default_az1: Refreshing state... [id=subnet-046b3f303365c3333]
data.aws_ami.image: Read complete after 1s [id=ami-0c2644caf041bb6de]
aws_default_security_group.default: Refreshing state... [id=sg-06d079c63124325e2]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

OpenTofu will perform the following actions:

( tf diff truncated )

Plan: 0 to add, 0 to change, 12 to destroy.
digitalocean_droplet.instance[1]: Destroying... [id=382122787]
digitalocean_droplet.instance[0]: Destroying... [id=382118243]
digitalocean_droplet.instance[2]: Destroying... [id=382122788]
linode_instance.instance[1]: Destroying... [id=51380422]
linode_instance.instance[0]: Destroying... [id=51380107]
linode_instance.instance[2]: Destroying... [id=51380582]
aws_key_pair.default: Destroying... [id=opentofu]
aws_default_subnet.default_az1: Destroying... [id=subnet-046b3f303365c3333]
aws_default_security_group.default: Destroying... [id=sg-06d079c63124325e2]
aws_default_subnet.default_az1: Destruction complete after 0s
aws_default_security_group.default: Destruction complete after 0s
aws_default_vpc.mainvpc: Destroying... [id=vpc-0371ba1712735e722]
aws_default_vpc.mainvpc: Destruction complete after 0s
aws_key_pair.default: Destruction complete after 1s
digitalocean_droplet.instance[0]: Still destroying... [id=382118243, 10s elapsed]
digitalocean_droplet.instance[1]: Still destroying... [id=382122787, 10s elapsed]
digitalocean_droplet.instance[2]: Still destroying... [id=382122788, 10s elapsed]
linode_instance.instance[0]: Still destroying... [id=51380107, 10s elapsed]
linode_instance.instance[2]: Still destroying... [id=51380582, 10s elapsed]
linode_instance.instance[1]: Still destroying... [id=51380422, 10s elapsed]
digitalocean_droplet.instance[2]: Still destroying... [id=382122788, 20s elapsed]
digitalocean_droplet.instance[1]: Still destroying... [id=382122787, 20s elapsed]
digitalocean_droplet.instance[0]: Still destroying... [id=382118243, 20s elapsed]
linode_instance.instance[0]: Still destroying... [id=51380107, 20s elapsed]
linode_instance.instance[1]: Still destroying... [id=51380422, 20s elapsed]
linode_instance.instance[2]: Still destroying... [id=51380582, 20s elapsed]
digitalocean_droplet.instance[1]: Destruction complete after 20s
digitalocean_droplet.instance[0]: Destruction complete after 20s
digitalocean_droplet.instance[2]: Destruction complete after 20s
digitalocean_ssh_key.default: Destroying... [id=39818938]
digitalocean_ssh_key.default: Destruction complete after 1s
linode_instance.instance[1]: Still destroying... [id=51380422, 30s elapsed]
linode_instance.instance[0]: Still destroying... [id=51380107, 30s elapsed]
linode_instance.instance[2]: Still destroying... [id=51380582, 30s elapsed]
linode_instance.instance[1]: Destruction complete after 32s
linode_instance.instance[2]: Destruction complete after 36s
linode_instance.instance[0]: Still destroying... [id=51380107, 40s elapsed]
linode_instance.instance[0]: Destruction complete after 41s
linode_sshkey.default: Destroying...
linode_sshkey.default: Destruction complete after 0s

Destroy complete! Resources: 12 destroyed.

Nathan Hensel



2023-10-27