Raspberry Pi k3s cluster guide
- Pre-requisites
- Step 1 - Prepare boot media for master
- Step 2 - Copy the install media to SD card
- Step 3 - Boot the pi and remotely connect
This file serves as a complete step-by-step guide for creating a bare metal Raspberry Pi Kubernetes cluster using k3s from Rancher.
My goal for this build is to replace a server I currently run at home, which hosts several workloads via Docker, with a scalable k3s cluster.
Additionally, in future I would like the cluster to be portable, operating via a 3G-5G cellular network and an array of batteries.
I chose k3s as it is incredibly lightweight yet still CNCF certified, production grade software that is optimised for the resource constraints of Raspberry Pis.
Pre-requisites
Cluster machines
For this guide I am using four Raspberry Pi 4 4GB machines.
The cluster will have two leader nodes and two worker nodes.
For resiliency purposes I will consider updating the cluster in future to run with all nodes as leader nodes.
Boot media
This guide requires each Raspberry Pi to have a removable SD card or other removable boot media. I am using four 32GB SD cards, though any USB drive or SD card at least 8GB in size should work fine.
TODO Migration to network booting
In future it would be preferable for the Raspberry Pis to network boot and set themselves up automatically without an SD card.
This is a nice-to-have that I will pursue at a later date, once I have a deployed cluster that allows me to migrate off my current server setup.
Step 1 - Prepare boot media for master
Download the latest release
Our first step is to create a bootable SD card with a minimal install of Raspbian, a free operating system based on Debian that is optimised for Raspberry Pi hardware.
Rather than installing and configuring an operating system image from scratch, I found the raspberrypi-ua-netinst project on GitHub, which automates the install and configuration process nicely.
echo Downloading latest release zip from github
curl -s https://api.github.com/repos/foodeas/raspberrypi-ua-netinst/releases/latest \
| grep "browser_download_url.*zip" \
| cut -d : -f 2,3 \
| tr -d \" \
| wget -i -
echo Checking file is now present
ls -l *.zip
echo Extracting the zip file
unzip -q -d installer *.zip
ls -l | grep installer
Downloading latest release zip from github
Checking file is now present
-rw-rw-rw- 1 james james 60299545 Aug 12 08:35 raspberrypi-ua-netinst-v2.4.0.zip
Extracting the zip file
drwxrwxrwx 1 james james 4096 Jan 20 11:12 installer
-rwxrwxrwx 1 james james 2863 Jan 10 17:04 installer-config.txt
Apply custom install configuration
Our next step after downloading the latest release is to apply our own installation configuration using a simple txt file.
There is great documentation showing what configuration options are available here.
For our purposes we just overwrite the file downloaded and extracted in the previous step with one we have prepared earlier :)
echo Display wordcount of original file for comparison
wc installer/raspberrypi-ua-netinst/config/installer-config.txt
echo Overwriting installer/raspberrypi-ua-netinst/config/installer-config.txt
cp installer-config.txt installer/raspberrypi-ua-netinst/config/
echo Display wordcount of file after copy to validate update
wc installer/raspberrypi-ua-netinst/config/installer-config.txt
Display wordcount of original file for comparison
3 23 157 installer/raspberrypi-ua-netinst/config/installer-config.txt
Overwriting installer/raspberrypi-ua-netinst/config/installer-config.txt
Display wordcount of file after copy to validate update
67 85 2863 installer/raspberrypi-ua-netinst/config/installer-config.txt
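To give a sense of what the file contains, a cut-down sketch is below. The option names are my reading of the raspberrypi-ua-netinst documentation and the values are placeholders, so treat this as illustrative rather than a copy of my real installer-config.txt; always check the documentation linked above against the release you downloaded.
# Illustrative excerpt of an installer-config.txt (placeholder values)
preset=server
hostname=k3s-node-1
username=james
user_ssh_pubkey="ssh-ed25519 AAAA... james@laptop"
packages=knockd,glusterfs-server
timezone=Etc/UTC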
Apply custom post install script
The final step in preparing the installer is to supply a post install script which completes additional security hardening and production readiness steps automatically.
To supply a script we can provide an additional post-install.txt file as
documented here.
I have a hardening script prepared in this repository that we can copy in.
echo Copying in post-install.txt
cp post-install.txt installer/raspberrypi-ua-netinst/config/
echo Display wordcount of file after copy to validate
wc installer/raspberrypi-ua-netinst/config/post-install.txt
Copying in post-install.txt
Display wordcount of file after copy to validate
98 282 3429 installer/raspberrypi-ua-netinst/config/post-install.txt
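The real script lives in this repository, but to give a feel for the kind of hardening it performs, a minimal sketch follows. This is an assumption-laden illustration, not the verbatim script: it assumes the netinst documentation's convention that the freshly installed system is mounted at /rootfs while post-install.txt runs, and it hard-codes the custom ssh port 2122 used later in this guide.
# Sketch only: harden sshd on the newly installed system.
# Assumes the target filesystem is mounted at /rootfs during install.
sed -i 's/^#\?Port .*/Port 2122/' /rootfs/etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /rootfs/etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /rootfs/etc/ssh/sshd_config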
Step 2 - Copy the install media to SD card
Our next step is to copy the contents of the installer/ folder
to a FAT32 formatted SD Card.
Unfortunately this is currently a Windows step, as my dev environment
is a Windows 10 laptop running Debian via Windows Subsystem for Linux,
which does not support lsblk or other disk management commands.
Obtain SD card partition information
Our first step is to insert the SD card and ensure it is formatted
correctly as FAT32. To do that we need to know the number of the
disk we want to format, which we can find via PowerShell.
echo Retrieving disk list via powershell
powershell.exe -nologo
get-disk | select Number, FriendlyName, Size
echo Retrieving partition list via powershell
get-disk | get-partition | select PartitionNumber, DriveLetter, Size, Type
exit
Retrieving disk list via powershell
Number FriendlyName Size
------ ------------ ----
1 Realtek PCIE Card Reader 31104958464
0 SAMSUNG MZVLB256HAHQ-000H1 256060514304
Retrieving partition list via powershell
PartitionNumber DriveLetter Size Type
--------------- ----------- ---- ----
1 D 268435456 FAT32 XINT13
2 E 30832328704 Unknown
1 272629760 System
2 16777216 Reserved
3 C 254735810560 Basic
4 1027604480 Recovery
Create and format SD card partition
Once we know the number of the disk we want to format we can proceed.
In the example above I have a 32GB SD card which shows as number 1.
Checking the disk we can see some partitions that already exist from
previous use of the card. To delete these partitions you can use the
Remove-Partition -DiskNumber X -PartitionNumber Y command, where
X and Y are the disk and partition numbers from the output above.
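For example, clearing the two leftover partitions on disk 1 from the listing above looks like this (triple-check the disk number on your machine first; pointing this at disk 0 would destroy the system drive):
echo Use powershell to remove the existing sd card partitions
powershell.exe -nologo
remove-partition -disknumber 1 -partitionnumber 1
remove-partition -disknumber 1 -partitionnumber 2
exit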
Due to the risk of data loss this step is not automated. Once existing partitions have been cleared we can use the following block to:
- Create a new partition using maximum available space
- Assign a free drive letter in windows
- Mount the disk in WSL so we can copy to it
- Copy the install media over to the partition
echo Use powershell to create new partition and format
powershell.exe -nologo
new-partition -disknumber 1 -usemaximumsize -driveletter d
format-volume -driveletter d -filesystem FAT32 -newfilesystemlabel sd
exit
echo Mount the new partition in wsl
sudo mkdir /mnt/d
sudo mount -t drvfs d: /mnt/d
echo Copy the contents of installer to sd
cp -r installer/* /mnt/d/
echo Eject the sd card ready for use
powershell.exe -nologo
(new-object -comobject shell.application).namespace(17).parsename("D:").invokeverb("eject")
exit
Step 3 - Boot the pi and remotely connect
Provided the configuration on the SD card is valid and the pi has
been able to successfully obtain an IP address via DHCP on boot,
then following a 10-20 minute net install process the pi will be
online and accessible via ssh using the private key corresponding
to the public key we supplied in our installer-config.txt file.
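If the pi's address is not obvious from your router, one quick option is a ping scan of the local subnet. The sketch below assumes a 192.168.1.0/24 network, so adjust the range to match yours.
# Ping scan the local subnet to locate the pi (assumes 192.168.1.0/24)
nmap -sn 192.168.1.0/24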
Setup ssh and connect
First step, we ensure our ssh agent is running and has our key added.
# Ensure our ssh-agent is running.
eval `ssh-agent`
# Make sure our private key is added.
ssh-add ~/.ssh/james
Next we can port knock and connect.
# Setup machine variables
export port=2122
export machineip=192.168.1.122
export knocksequence=[SEQUENCE HERE]
# Knock and enter
knock $machineip $knocksequence && ssh -p $port $machineip
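As an aside, the knock sequence is whatever the knockd daemon on the pi was configured with by the hardening script. A typical /etc/knockd.conf stanza looks like the sketch below; the sequence shown is a placeholder for illustration, not the one this cluster uses.
# Example /etc/knockd.conf stanza on the pi (placeholder sequence)
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 2122 -j ACCEPT
    tcpflags    = syn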
Setup glusterfs storage cluster
Now that our machines are online and we have connected, we can set up our storage cluster.
For a distributed storage cluster we are using GlusterFS. As part of our earlier setup gluster was automatically installed; we just need to configure it.
Our first step is to ensure the storage drives attached to our Raspberry Pis are partitioned and formatted. In our case the drives are all showing as /dev/sda with no existing partitions, so we create a single partition before formatting it. Ensure you review your own situation with lsblk first and adjust the commands below as necessary!
# Create a single partition spanning the whole drive
sudo parted -s /dev/sda mklabel gpt mkpart primary xfs 0% 100%
# Format the new /dev/sda1 partition as xfs
sudo mkfs.xfs -i size=512 /dev/sda1
# Make the mount point directory
sudo mkdir -p /data/brick1
# Update fstab to ensure the mount will resume on boot
echo '/dev/sda1 /data/brick1 xfs defaults 1 2' | sudo tee -a /etc/fstab
# Mount the new filesystem now
sudo mount -a && sudo mount
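As a quick sanity check that the brick landed where we expect:
# Confirm the brick filesystem is mounted at /data/brick1
df -h /data/brick1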
The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other nodes.
In our four node cluster this means ensuring we have rules present for all nodes. Adjust as necessary for the requirements of your cluster!
# Add the firewall rules
sudo iptables -I INPUT -p all -s 192.168.1.122 -j ACCEPT
sudo iptables -I INPUT -p all -s 192.168.1.124 -j ACCEPT
sudo iptables -I INPUT -p all -s 192.168.1.126 -j ACCEPT
sudo iptables -I INPUT -p all -s 192.168.1.128 -j ACCEPT
# Ensure these are saved permanently
sudo netfilter-persistent save
Next we need to ensure the glusterfs daemon is enabled and started.
# Ensure the gluster service starts on boot
sudo systemctl enable glusterd
# Start the gluster service now
sudo systemctl start glusterd
# Check the service status to confirm running
sudo systemctl status glusterd
Now we're ready to test connectivity between all the gluster peers.
# Complete the peer probes
sudo gluster peer probe 192.168.1.122
sudo gluster peer probe 192.168.1.124
sudo gluster peer probe 192.168.1.126
sudo gluster peer probe 192.168.1.128
# Validate the peer status
sudo gluster peer status
Provided connectivity was established successfully, with each peer reporting State: Peer in Cluster (Connected), you are now ready to set up a gluster volume.
Note: The gluster volume create command only needs to be run from any one node.
# Create the gluster volume folder (all nodes)
sudo mkdir -p /data/brick1/jammaraid
# Create the gluster volume itself (one node)
sudo gluster volume create jammaraid 192.168.1.122:/data/brick1/jammaraid 192.168.1.124:/data/brick1/jammaraid 192.168.1.126:/data/brick1/jammaraid 192.168.1.128:/data/brick1/jammaraid force
# Ensure the volume is started
sudo gluster volume start jammaraid
# Confirm the volume has been created
sudo gluster volume info
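One caveat worth noting: as written this creates a plain distributed volume, so each file lives on exactly one brick and a failed node takes its files offline. If redundancy matters more than capacity, gluster also supports replicated volumes; an equivalent four-way replica over the same bricks would look like the sketch below, at the cost of storing every file four times.
# Alternative: replicated volume where every brick holds a full copy (one node)
sudo gluster volume create jammaraid replica 4 192.168.1.122:/data/brick1/jammaraid 192.168.1.124:/data/brick1/jammaraid 192.168.1.126:/data/brick1/jammaraid 192.168.1.128:/data/brick1/jammaraid force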
Now that the gluster volume has been created and started we can mount it within each node so it is accessible for use :)
# Create the gluster volume mount point
sudo mkdir -p /media/raid
# Mount the volume
sudo mount -t glusterfs localhost:jammaraid /media/raid
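To have the mount survive reboots, it can also be added to fstab; the _netdev option is the standard way to delay a network filesystem mount until networking is up.
# Persist the gluster mount across reboots
echo 'localhost:jammaraid /media/raid glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab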