I’m writing this in the hope that it shows up as a search result for other people wondering how to get k3s working on hosts with unusual network setups.
I’m doing this on NixOS nodes running BGP unnumbered: these nodes have no IPv4 addresses on any physical interface, only a /32 address on the loopback:
[root@b55416a9-f3fc-59d5-b5f8-d40f13e815a1:~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 10.0.0.1/32 scope global lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s20f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:60:e0:8a:2e:03 brd ff:ff:ff:ff:ff:ff
inet6 fe80::260:e0ff:fe8a:2e03/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: enp0s20f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:60:e0:8a:2e:04 brd ff:ff:ff:ff:ff:ff
4: enp0s20f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:60:e0:8a:2e:05 brd ff:ff:ff:ff:ff:ff
inet6 fe80::260:e0ff:fe8a:2e05/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: enp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:60:e0:8a:2e:06 brd ff:ff:ff:ff:ff:ff
inet6 fe80::260:e0ff:fe8a:2e06/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
[root@b55416a9-f3fc-59d5-b5f8-d40f13e815a1:~]# ip r
default nhid 8 via inet6 fe80::290:bff:fea5:e2d0 dev enp0s20f0 proto bgp metric 20
10.0.0.0 nhid 8 via inet6 fe80::290:bff:fea5:e2d0 dev enp0s20f0 proto bgp metric 20
10.0.0.2 nhid 26 proto bgp metric 20
nexthop via inet6 fe80::260:e0ff:fe8a:2ca3 dev enp0s20f2 weight 1
nexthop via inet6 fe80::260:e0ff:fe8a:2ca4 dev enp0s20f3 weight 1
10.42.0.0/24 dev cni0 proto kernel scope link src 10.42.0.1
10.42.1.0 nhid 26 proto bgp metric 20
nexthop via inet6 fe80::260:e0ff:fe8a:2ca3 dev enp0s20f2 weight 1
nexthop via inet6 fe80::260:e0ff:fe8a:2ca4 dev enp0s20f3 weight 1
10.42.1.0/24 via 10.42.1.0 dev flannel.1 onlink
172.30.190.0/24 nhid 8 via inet6 fe80::290:bff:fea5:e2d0 dev enp0s20f0 proto bgp metric 20
With default options, flannel tries to auto-detect which host interface to bind to, and fails here because no physical interface carries an IPv4 address:
flannel exited: failed to find the interface: failed to find IPv4 address for interface: No IPv4 address found for given interface
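That failure is consistent with the routing table above. A rough shell approximation of the lookup (the iproute2 invocation is my sketch, not flannel's actual detection code) shows why: on these hosts no non-loopback interface holds a global-scope IPv4 address.

```shell
#!/bin/sh
# Sketch: look for a global-scope IPv4 address on a non-loopback
# interface, roughly what flannel needs to find on its own.
# On a BGP-unnumbered node, only lo qualifies, so this comes up empty.
iface=$(ip -o -4 addr show scope global | awk '$2 != "lo" {print $2; exit}')
if [ -n "$iface" ]; then
  echo "usable IPv4 interface: $iface"
else
  echo "no non-loopback IPv4 interface; pass --flannel-iface explicitly"
fi
```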
The fix is to provide --flannel-iface:
services.k3s = {
  enable = true;
  role = "server";
  extraFlags = "--disable traefik,servicelb --flannel-iface lo";
  token = "wermsaslkdjfhkasjdfh";
  {% if hostvars["init"] %}
  clusterInit = true;
  {% else %}
  serverAddr = "https://{{hostvars["join"]}}:6443";
  {% endif %}
};
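The Jinja conditional corresponds to k3s's two bootstrap modes. At the raw CLI level the NixOS options map to flags roughly like this (the 10.0.0.1 endpoint is illustrative, taken from the loopback address above):

```shell
# First server: bootstrap a new embedded-etcd cluster (clusterInit = true)
k3s server --cluster-init \
  --disable traefik,servicelb --flannel-iface lo --token wermsaslkdjfhkasjdfh

# Subsequent servers: join via an existing server's API (serverAddr)
k3s server --server https://10.0.0.1:6443 \
  --disable traefik,servicelb --flannel-iface lo --token wermsaslkdjfhkasjdfh
```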
And now k3s runs:
[root@b55416a9-f3fc-59d5-b5f8-d40f13e815a1:~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ab984010-325a-57fa-a04c-78fea2d5fbad Ready control-plane,etcd,master 10m v1.31.1+k3s1
b55416a9-f3fc-59d5-b5f8-d40f13e815a1 Ready control-plane,etcd,master 27m v1.31.1+k3s1
The remaining piece is implementing a load balancing solution. I’m working on it.