K8s with K3s
Earlier this year I started using K3s regularly for local testing of Kubernetes workloads, APIs, tools, and more. I’ve grown quite fond of K3s for multiple reasons: it’s very easy and fast to install, very easy to use, and so far I have not found a single service designed for K8s that wouldn’t work on K3s. I’ve run it both on Linux machines and on Windows machines through WSL2. It just works. I use it in particular to test my own K8s controllers and operators. Installation typically finishes in 15-20 seconds, and the system pods are usually running fine after at most another 30 seconds.
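For a plain local setup without any special configuration, the one-liner from the K3s docs is all it takes; a minimal sketch (assuming root privileges, and making the kubeconfig world-readable only because this is a throwaway test machine):
# Install K3s with defaults; the kubeconfig becomes readable for local kubectl use
curl -sfL "https://get.k3s.io" | sh -s - --write-kubeconfig-mode 644
# Watch the bundled system pods (coredns, traefik, metrics-server, ...) come up
k3s kubectl get pods --all-namespaces --watch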
After some experimenting and debugging, I’ve also managed to get it to work with a dual-stack configuration (IPv4 and IPv6). This was apparently harder on v1.24 and v1.25 (which I was using) because of a bug in those versions of K8s. It sounds like that is no longer an issue though.
Installation
Back when I set up a dual-stack system, I used v1.25 on a VM in Azure. I used cloud-init to have K3s installed as the VM was provisioned:
# cloud-init.yaml used for the Azure VM
package_update: true
package_upgrade: true
runcmd:
  # Install k3s.
  # See https://docs.k3s.io/installation/network-options for dual-stack config
  - curl -sfL "https://get.k3s.io" | sh -s -
      --secrets-encryption
      --write-kubeconfig-mode 644
      --service-cidr 10.1.64.0/18,fd8d:5c43:4f39:1001::/112
      --cluster-cidr 10.1.128.0/17,fd8d:5c43:4f39:1002::/112
      --kube-controller-manager-arg node-cidr-mask-size-ipv6=114
      --tls-san my.public.api.name
      "--kubelet-arg=node-ip=::"
Note the last argument, --kubelet-arg=node-ip=::, which prioritizes IPv6 traffic. I have not yet tested whether that is still needed, but again, from the documentation it sounds like it isn’t.
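A quick way to check that the dual-stack configuration actually took effect is to look at the addresses assigned to the node and the pods; a rough sketch:
# The node should report both an IPv4 and an IPv6 InternalIP
kubectl get nodes -o wide
kubectl describe nodes | grep -A 4 "Addresses:"
# Pods should list addresses from both cluster CIDRs in their podIPs field
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs}{"\n"}{end}'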
Of course I used my own IPv6 tools to generate a random local / private IPv6 network, so be sure to check those out too instead of just copying the private network block definitions above 😀
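For illustration only (this is not how those tools work, just the underlying idea from RFC 4193): a ULA prefix is fd00::/8 followed by 40 random bits, so a throwaway /48 can be generated in bash like this:
# Take 5 random bytes and format them as an fdxx:xxxx:xxxx::/48 ULA prefix
rand=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
printf 'fd%s:%s:%s::/48\n' "${rand:0:2}" "${rand:2:4}" "${rand:6:4}"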
I’ve set up the VM with a public IPv6 address too, so I can use kubectl through IPv6 - it all works like a charm.
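As a sketch of what that involves (my.public.api.name is the placeholder from the install command above): K3s writes its kubeconfig on the server, and a copy of it just needs to point at the public API name covered by --tls-san:
# On the VM: the kubeconfig written by K3s (readable thanks to --write-kubeconfig-mode 644)
cat /etc/rancher/k3s/k3s.yaml
# On the client: swap the loopback address for the publicly reachable API name
sed 's/127.0.0.1/my.public.api.name/' k3s.yaml > ~/.kube/config
kubectl get nodes -o wide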
By default, K3s also installs Traefik, which has quite good K8s integration through its ingress controller and CRDs. In my case, I’m also running Traefik on the IPv6 interface, so I can expose services through IPv6 too. Again, it just works!
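As an illustration, here is a minimal Ingress that the bundled Traefik would pick up; the service name and host below are placeholders, not part of the setup described here:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: traefik   # the ingress class of the Traefik instance bundled with K3s
  rules:
    - host: whoami.my.public.api.name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami   # assumes a Service named "whoami" exposing port 80
                port:
                  number: 80
Applied with kubectl apply -f, Traefik routes traffic for that host to the service over whichever interfaces it listens on, IPv4 and IPv6 alike.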
Upgrades
K3s upgrades are well documented too, but usually I’m cautious about upgrades that seem too good to be true. But here’s the thing: even the manual upgrades of K3s are truly simple and very easy to achieve. Here’s how I upgraded to v1.26.4 (the latest stable release as of the writing of this post). Please note the similarities between this command and the one above for the initial installation:
curl -sfL "https://get.k3s.io" | INSTALL_K3S_CHANNEL=stable sh -s - \
--secrets-encryption \
--write-kubeconfig-mode 644 \
--service-cidr 10.1.64.0/18,fd8d:5c43:4f39:1001::/112 \
--cluster-cidr 10.1.128.0/17,fd8d:5c43:4f39:1002::/112 \
--kube-controller-manager-arg node-cidr-mask-size-ipv6=114 \
--tls-san my.public.api.name \
"--kubelet-arg=node-ip=::"
In fact, the only thing that’s different is the explicit selection of a release channel through the INSTALL_K3S_CHANNEL environment variable, here set to stable. I could achieve an upgrade to the same version by setting it to v1.26 instead. That’s particularly useful when upgrading the machines of a K3s cluster one by one while still respecting the K8s Version Skew Policy.
The upgrade actually leaves the pods running, so services running on the cluster remain unaffected and continue to operate.
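A quick sanity check after such an upgrade could look like this:
# The node should now report the upgraded version
kubectl get nodes
# Existing workloads should still be Running, with their ages showing they were not restarted
kubectl get pods -A -o wide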
After testing locally, I applied this to the above-mentioned dual-stack system, and it just continued to work like a charm. I really cannot stress enough how delighted I am with the ease of use of K3s. This is a job well done!
Conclusion
Use K3s!