Kubernetes has always been tricky for newbies to deploy, and setting it up at home requires some beefy hardware with multiple nodes running on it. You can get around this with K3s, a lightweight version of Kubernetes meant for deployment on home PCs or low-end systems. In this blog, I will show you how to set up a Kubernetes cluster at home with one Kubernetes controller and multiple Kubernetes nodes. You can refer to my previous blogs on setting up a low-budget home lab before proceeding, as I will be using the same setup to provision my Kubernetes cluster. The same result can also be achieved with multiple VMs or multiple physical systems in your environment.
In this blog, I shall guide you through deploying a Kubernetes cluster with one controller and three nodes. We shall be using the Ubuntu templates created earlier in this blog article – Creating VM Templates in Proxmox. Along the way, once the necessary requirements for the nodes are set up, we will convert the node VM into a template. This will let us scale up the node count easily and quickly whenever required in the future. The steps mentioned below are the same for both the controller and the node VMs.
Creating VMs for Controller and Nodes
As an initial step, we will create two virtual machines, one for the controller and the other for the node, the latter of which will be converted to a template later. I am using Ubuntu as my base OS, but you can choose any other base OS; the method of installation might differ based on the option you choose. Even with Ubuntu, there are multiple options, like creating a VM from scratch or using a Proxmox template to provision a machine. I shall go with the latter option to provision the 2 machines.
- Right click on the Proxmox template for Ubuntu in the left-hand side menu in your Proxmox UI and select ‘Clone’.
- Provide an ID for your VM. For your node, you can choose a higher ID, as it will be converted to a template later in this blog and used to provision the nodes for your cluster. In my case, I used 800 for my controller and 902 for the node template, as my templates start from 900.
- Provide the name of the VM, select ‘Full Clone’ as the mode, and then click the ‘Clone’ button.
- Repeat this once for the controller and once for the node (a CLI alternative is shown after this list).
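If you prefer the Proxmox shell over the UI, the same clones can be created with the qm tool on the Proxmox host. The IDs and names below are just the ones used in this post, with 900 assumed to be the Ubuntu template's ID:

$ qm clone 900 800 --name k8s-controller --full
$ qm clone 900 902 --name k8s-node --full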
Once you have both VMs ready, follow the steps below on both of them, as the initial setup for the controller and the node is the same. Below is the configuration I set for the controller and node VMs.
Controller Configuration
- CPU – 2 Cores [host]
- RAM – 4 GB
- HDD – 32 GB
Node Configuration
- CPU – 2 Cores [host]
- RAM – 2 GB
- HDD – 32 GB
The configuration above is subjective and you can adjust it, but ensure a bare minimum of 2 CPU cores on both VMs, with at least 2 GB of RAM for the controller and 1 GB of RAM for each node.
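If you created the clones from the command line, the resources can be adjusted the same way (memory is given in MB; the IDs match the ones used above):

$ qm set 800 --cores 2 --memory 4096
$ qm set 902 --cores 2 --memory 2048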
Steps for setting up Controller and Nodes
The first step is to give your VMs static IP addresses. This can be achieved in multiple ways, such as setting a static IP address in the VM's network configuration or handling it on the DHCP server. In my case, the router serves as the DHCP server and offers MAC address binding, which assigns the same IP address to a machine as long as its MAC address does not change.
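If you go the in-VM route instead, a static address can be set through netplan on Ubuntu. Below is a minimal sketch; the interface name (eth0), the file name, and all addresses are placeholders for a hypothetical 192.168.1.0/24 network, and on cloud-init based images you may additionally need to stop cloud-init from managing the network. Edit (or create) a file such as /etc/netplan/01-static.yaml:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]

Then apply it with:

$ sudo netplan apply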
We shall be using containerd as the container runtime to run our containers. To install it, run the below commands.
$ sudo apt install containerd -y
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
For containerd to work properly in the Kubernetes cluster, we need to enable SystemdCgroup in the configuration file we just created. Run the below command to edit the file.
$ sudo nano /etc/containerd/config.toml
Now, search for the following in the open file. If you are using the nano editor like me, you can use the Ctrl + W shortcut to search for a string in the file.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
Under this section of the file, you should see the SystemdCgroup option, which needs to be set to true, as shown below.
SystemdCgroup = true
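If you prefer not to edit the file by hand, and assuming the stock config generated above (which contains a single SystemdCgroup = false entry), the same change can be made with a one-liner, followed by a restart of containerd:

$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd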
Now save the file. Once this is done, we need to disable swap on both machines. To do so, run the following command.
$ sudo swapoff -a
Also, comment out the swap line in the fstab file located at /etc/fstab to ensure that swap does not turn back on after the system restarts. In my case, no swap was enabled on the system, so there was nothing to turn off. Next, we need to edit /etc/sysctl.conf and uncomment the line shown below by removing the # in front of it.
net.ipv4.ip_forward=1
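Both of these edits can also be done non-interactively. The sed pattern below assumes a typical fstab swap entry, so double-check the file afterwards; sysctl -p reloads /etc/sysctl.conf so the setting takes effect without waiting for a reboot:

$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
$ sudo sysctl -p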
One last preparatory step is to create the file /etc/modules-load.d/k8s.conf and add br_netfilter to it, as shown below. Save the file and reboot both servers.
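For example:

$ echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
$ sudo modprobe br_netfilter

The modprobe loads the module immediately, while the k8s.conf entry ensures it is loaded again on every boot.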
Once the servers have rebooted, it's time to install Kubernetes. To do so, run the below commands on both servers.
Note that the legacy apt.kubernetes.io repository has since been deprecated and frozen, so the commands below use the community-owned pkgs.k8s.io repository instead; the v1.30 in the URLs pins a minor release, so substitute whichever version you want to install.

$ sudo apt update && sudo apt install -y ca-certificates curl apt-transport-https
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt update && sudo apt install -y kubeadm kubectl kubelet
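Optionally, you can hold the packages so that a routine apt upgrade does not move your cluster to a new Kubernetes version unexpectedly:

$ sudo apt-mark hold kubeadm kubectl kubelet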
Once the installation is finished on both systems, the steps common to both are done.
Initializing the Kubernetes Cluster
Now, the below steps need to be performed only on the controller node. To initialize the cluster, run the following command, replacing <CONTROLLER_IP> with the IP address you assigned to your controller node and <CONTROLLER_HOST> with the hostname of your controller machine. Do not change the subnet in the command: 10.244.0.0/16 is the default Pod network CIDR expected by the Flannel manifest we apply later. Once initialization completes, you should see a few commands listed at the end of the output; they set up your kubeconfig so that root privileges are not required to interact with your Kubernetes cluster, and they are repeated after the first command below. You will also see a command for adding new controller and worker nodes to your cluster. Make a note of it; it starts with kubeadm join.
$ sudo kubeadm init --control-plane-endpoint=<CONTROLLER_IP> --node-name=<CONTROLLER_HOST> --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
The final step to configure your controller is enabling the Flannel network for your Kubernetes cluster. Flannel is a software-defined network (SDN) developed by CoreOS for Kubernetes networking, though it can also be used more generally. To enable the Flannel network on your cluster, run the below command.
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Now the controller node is ready.
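Before moving on, you can verify the control plane from the controller itself; once the Flannel pods are running (in recent versions of the manifest they live in the kube-flannel namespace), the controller should report Ready:

$ kubectl get nodes
$ kubectl get pods -n kube-flannel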
Creating templates for node machines
To create a template from the node machine, we first need to ensure that cloud-init is in a clean state and the machine ID is reset. Run the following commands on the node VM.
$ sudo cloud-init clean
$ sudo rm -rf /var/lib/cloud/instances
$ sudo truncate -s 0 /etc/machine-id
$ sudo rm /var/lib/dbus/machine-id
$ sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
You can verify the reset by running the cat command shown below on the machine-id file; it should return nothing. Once done, we can power off the server and create a template out of it. If the above steps are not performed correctly, every VM created from the template will have the same machine ID and the same IP address, which, obviously, we do not want in our network.
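$ cat /etc/machine-id

Empty output means the reset worked.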
Now, once the machine is powered off, right-click the virtual machine in the list on the left-hand side menu of your Proxmox UI and select the “Convert to Template” option; within a few seconds, you will notice the VM's icon change to a template icon. Your node template is now ready.
Using node templates to add nodes to your cluster
To add nodes, we first need node virtual machines, so let's clone them from the newly created template. For my use case, I created 3 node machines, but you can add as many as you wish. Ensure that you create full clones and that each machine gets a static IP address, as before. Once all the node machines are created and booted, add them to the cluster by running the below command on each node. Replace <CONTROLLER_IP> with the IP address of your controller node, <TOKEN> with the token provided in the kubeadm init output, and <CERT_HASH> with the certificate hash provided in the same output.
$ sudo kubeadm join <CONTROLLER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<CERT_HASH>
Usually, the above command is provided when you initialize the cluster, but if your token has expired or you were unable to copy the command, you can generate a new token and join command by running the below command on the controller.
$ sudo kubeadm token create --print-join-command
It will take a moment for the nodes to become Ready. You can check their status by running the below command on your controller.
$ kubectl get nodes
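Once all the nodes have joined and Flannel is up, the output should look roughly like the below; the hostnames, ages, and versions here are only illustrative and will differ in your setup.

NAME         STATUS   ROLES           AGE   VERSION
k8s-ctrl     Ready    control-plane   25m   v1.30.2
k8s-node-1   Ready    <none>          4m    v1.30.2
k8s-node-2   Ready    <none>          3m    v1.30.2
k8s-node-3   Ready    <none>          2m    v1.30.2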
Congratulations! We now have our own Kubernetes, or K8s as everyone calls it, cluster set up in our home lab environment. I will have more blogs coming up showing how we can automate the deployment of containers and create pipelines.