Deploying Kubernetes to Raspberry Pi 5
Posted on Thu 12 September 2024 in k8s
I know, it's long overdue... I wanted to dig into Kubernetes.
But I wasn't prepared to buy the hardware required to run a full-fledged OpenShift cluster, so I had a look at alternatives. I'm a big fan of Raspberry Pi devices because of their small footprint; I can just tuck them away underneath my monitor. So, as a basis, I wanted to build something on the latest Raspberry Pi device, the Raspberry Pi 5.
What I want to achieve:
- Deploy a 3-node K8s cluster (with embedded etcd)
My network topology
As stated in my goals, I will have (at least) 4 systems to start with: 3 RPI5s and 1 HAProxy server. Setting up the HAProxy server will not be part of this article, as it's quite simple to get it working.
The names I will be using in the rest of the article are:
- RPI5 1: n00.yavin.home
- RPI5 2: n01.yavin.home
- RPI5 3: n02.yavin.home
- HAproxy server: rodia.home, alias yavin.home
As you can tell, I have a soft spot for Star Wars. Yavin Prime used to have 26 moons orbiting it before Yavin 4 was destroyed by the evil Empire. But I digress...
rodia.home runs an HAProxy instance, so I can put a single entry point in front of my K3S nodes and make them behave as one cluster.
Looking for clues
I discovered a couple of lightweight K8s distributions, like K3S and Microshift.
I figured Microshift would be a good choice, as it is also a derivative of OpenShift, and professionally I work with a lot of Red Hat products. Unfortunately, there is no RPI5 support in Fedora IoT (yet), and the latest available Microshift (stable) release for Fedora dates back to 2022. Bummer.
Then I discovered K3S. It was all I was looking for... It's supported on aarch64 architecture, and installs easily on Raspberry Pi OS!
Deploying the RPI5s
I bought 2 RPI5s with 8 GB RAM, M.2 NVMe and PoE+ HATs, and NVMe SSDs. These were the first RPI5s I ever bought, and I was pleasantly surprised to find that they can boot from the network. That allowed me to install Raspberry Pi OS (64-bit) Lite, as I like to keep things secure and minimal.
As mentioned, I like to keep my systems secure. Yes, and minimal, that too, but I want to focus on the secure part:
I do not like sudo not asking for passwords. So the first thing I do is add my user to the sudo group and remove /etc/sudoers.d/010_pi-nopasswd.
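On Raspberry Pi OS that boils down to something like the following sketch (the 010_pi-nopasswd file name is the one the default image ships with; check what actually exists under /etc/sudoers.d/ on your system):

# add my user to the sudo group (takes effect after logging out and back in)
sudo usermod -aG sudo "$USER"
# drop the passwordless-sudo override shipped with Raspberry Pi OS
sudo rm /etc/sudoers.d/010_pi-nopasswd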
While I'm also a proponent of the firewall doctrine, I did not install a firewall, as K3S will take care of this itself. I'll need to polish up my iptables skills.
I do apply more (security-based) configuration changes, but the above is the biggest impact on the rest of this article.
DNS
DNS is a fickle thing. If it works, it works. If it doesn't, it doesn't. And usually it will make your life miserable when it doesn't. While I will not go over setting up a decent DNS server, it's very important to have one. Obviously you can replace all hostnames with IP addresses, and that should work fine as well. It makes life harder if you ever need to change IP addresses, but your mileage may vary.
I like DNS. I run a Pi-hole container, and up until now, it's been serving me extremely well.
To avoid breakage when my robust Pi-hole is not available (sometimes I need to update the container), I'll add all critical (to the K8s cluster) information to /etc/hosts:
...
127.0.1.1 n00.yavin.home n00
w.x.y.z n01.yavin.home n01
w.x.y.z n02.yavin.home n02
w.x.y.z yavin.home yavin
Notice the 127.0.1.1 n00.yavin.home n00 line; make sure you set it correctly on each of your nodes.
cgroups
K3S requires the memory cgroup to be enabled before K3S is installed, so we need to add cgroup_memory=1 cgroup_enable=memory to the kernel command line in /boot/firmware/cmdline.txt and reboot.
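A minimal way to do that, assuming the parameters are not already present, is a sketch like this:

# append the cgroup parameters to the (single-line) kernel command line, unless already there
sudo sed -i '/cgroup_enable=memory/! s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
sudo reboot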
HAProxy
As I want to have a K8S cluster, all nodes need to know how to reach the cluster's API on the main address.
This will be my config in the end:
frontend k3s-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend k3s-backend

backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server node00 <ip of n00.yavin.home>:6443 check
    server node01 <ip of n01.yavin.home>:6443 check
    server node02 <ip of n02.yavin.home>:6443 check
And I will comment out the servers that are not yet available while installing new K3S servers.
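Whenever the backend list changes, HAProxy needs to pick up the new configuration. On a typical systemd-based setup that is roughly the following (a sketch, assuming the service is simply named haproxy and the config lives in the default location):

# validate the configuration first
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# reload without dropping existing connections
sudo systemctl reload haproxy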
Deploying the first K3s cluster node
It is worthwhile to RtFM (Read the Fine Manual). The nice people of K3S have put a lot of effort into writing good documentation, so it's only proper to read it. K3S Docs contains all you need to deploy your K3S cluster the way you want. But I will stick to the way I want...
Installing K3S according to the docs is rather simple:
curl -sfL https://get.k3s.io | sh -
But simple isn't good enough for me. As I want to install multiple servers into a cluster, I need to create a token so each node can authenticate with the cluster and let the installer know about it.
Exporting configuration items as environment variables is a good idea, but they do not persist across logouts. Once your cluster is installed, you don't really need them anymore, but what about when you need to set up a 2nd, 3rd, ... K3S server or additional K3S agents? From a configuration management perspective it's good to be able to automate this. So we could add them to some exported bash file, or, as it happens, K3S offers a way to provide this information structurally in a K3S config file. Yay!
On each node we'll create /etc/rancher/k3s/config.yaml and populate it with the (same) information. But we will start off with the first server.
According to the K3S Docs, all arguments passed to the installer can be used in the /etc/rancher/k3s/config.yaml file without the leading double dash (--). Boolean arguments take a value of true or false.
Since I'm interested to see what happens during the install, I will activate debug mode using the --debug argument.
I'll try not to stray from the defaults too much, but since I want a cluster, I need to initialize it on the first node using the --cluster-init argument.
Additionally, I will expose my K8S cluster through my HAProxy, so the HAProxy hostname needs to end up in the API server's TLS certificate; I tell the installer about that using the --tls-san argument.
I need a server token as well, to distribute among the K3S servers. According to the Token Types documentation, this token is quite important and should be treated as a secret. Use your preferred password manager to generate a strong passphrase. Additionally, we need to make sure the /etc/rancher/k3s/config.yaml file has a mode of 0600 so it cannot be read by anyone else with access to the system.
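Creating the file with the right mode up front could look like this sketch (the openssl call is just one way to generate a token if you don't want to pull one from a password manager):

# generate a strong token (or take one from your password manager)
openssl rand -hex 32
# create the config directory and a root-only, empty config file to fill in
sudo mkdir -p /etc/rancher/k3s
sudo touch /etc/rancher/k3s/config.yaml
sudo chmod 0600 /etc/rancher/k3s/config.yaml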
The command line would look like this:
curl -sfL https://get.k3s.io | sh -s - server \
    --debug \
    --token='<SECRET>' \
    --cluster-init \
    --tls-san=yavin.home
But I don't want this in the command line, I want it in a config file.
/etc/rancher/k3s/config.yaml should look like this on n00.yavin.home:
token: <SECRET>
debug: true
cluster-init: true
tls-san:
- yavin.home
Now, my command should be:
curl -sfL https://get.k3s.io | sh -
The output I'm getting is:
[INFO] Finding release for channel stable
[INFO] Using v1.30.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.4+k3s1/sha256sum-arm64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.4+k3s1/k3s-arm64
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] Host iptables-save/iptables-restore tools not found
[INFO] Host ip6tables-save/ip6tables-restore tools not found
[INFO] systemd: Starting k3s
I'm not sure, but it appears it needs iptables and netscript-ipfilter installed. I'll add those to my prerequisite list.
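On Raspberry Pi OS, installing those up front would be something along these lines (a sketch, using the package names the installer output hints at):

# install the iptables tooling the K3S installer is looking for
sudo apt install -y iptables netscript-ipfilter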
When the install is done, you can monitor the progress of the deployment by executing:
watch -n1 'kubectl get node'
and you should eventually get the following feedback:
NAME STATUS ROLES AGE VERSION
n00.yavin.home Ready control-plane,etcd,master 5m40s v1.30.4+k3s1
Deploying the next K3s cluster nodes
As with the first node, I will create a /etc/rancher/k3s/config.yaml (file mode 0600) to contain the config for each K3S server node.
First off, I need to make sure only n00.yavin.home is available in my HAProxy config, so the k3s-backend backend should look like:
backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server node00 <ip of n00.yavin.home>:6443 check
    # server node01 <ip of n01.yavin.home>:6443 check
    # server node02 <ip of n02.yavin.home>:6443 check
To test it, I ran the following command on n01.yavin.home to make sure it all works correctly:
curl -k https://yavin.home:6443
Which returns:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
I need to tell the K3S installer to point to the cluster address, using the --server argument. I also need the HAProxy hostname in this node's TLS certificate, so I will specify --tls-san again. And finally, I need to use the same token as on my first server to authenticate.
My /etc/rancher/k3s/config.yaml contains:
token: <SECRET>
server: 'https://yavin.home:6443'
debug: true
tls-san:
- yavin.home
The token in this config file must be the same one you specified for n00.yavin.home.
Since the cluster has already been initialized on n00.yavin.home, I have no need for the --cluster-init argument or cluster-init value in the config file.
Off we go:
curl -sfL https://get.k3s.io | sh -
The output I get now is:
[INFO] Finding release for channel stable
[INFO] Using v1.30.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.4+k3s1/sha256sum-arm64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.4+k3s1/k3s-arm64
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
I installed iptables and netscript-ipfilter this time, and it's no longer complaining about it... Yay!
To check the progress of the K3S server installation, I ran this on n00.yavin.home:
watch -n1 'kubectl get node'
And after a while, the output changed from:
NAME STATUS ROLES AGE VERSION
n00.yavin.home Ready control-plane,etcd,master 53m v1.30.4+k3s1
to:
NAME STATUS ROLES AGE VERSION
n00.yavin.home Ready control-plane,etcd,master 53m v1.30.4+k3s1
n01.yavin.home Ready control-plane,etcd,master 2m31s v1.30.4+k3s1
To put it in the words of John "Hannibal" Smith: I love it when a plan comes together!
So now I only have to rinse and repeat for the last node, making sure my HAProxy config reflects the addition of each node after it joins the cluster, as shown below.
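To spell out the rinse-and-repeat: n02.yavin.home gets the same /etc/rancher/k3s/config.yaml as n01.yavin.home (same token, server and tls-san values), and once all nodes have joined, the k3s-backend section ends up fully uncommented again, followed by a reload of HAProxy:

backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server node00 <ip of n00.yavin.home>:6443 check
    server node01 <ip of n01.yavin.home>:6443 check
    server node02 <ip of n02.yavin.home>:6443 check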
As a final touch, I want to enable bash completion on the nodes (and any other system from which I run kubectl), so I add the following code to ~/.bashrc or /etc/profile.d/kubectl.sh:
# only enable completion when kubectl is available and we are root
if command -v kubectl >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    source <(kubectl completion bash)
fi
And that's about it!
What's next?
I want to learn more about using K8S on this K3S cluster, but also possibly migrate my current distributed podman setup so I can perform updates more easily, and possibly make better use of my hardware...
For that I want/need to investigate deploying apps (duh), most likely using Helm charts. And to store persistent data, I will most likely need to investigate how to use centralised storage (NFS) with my apps.