Setting up a Kubernetes cluster with Kubeadm and Containerd

Published on 21 Apr 2025

In this tutorial, we'll go through the step-by-step process of installing a Kubernetes cluster using Kubeadm and Containerd.

I'm going to use a machine with Ubuntu Server 24.04 LTS, so if you're using a different OS you might have to adapt some commands.

This tutorial was written using Kubernetes version 1.32; if you're using a newer version, some steps might have changed.

Also, keep in mind that all the following steps are executed as root unless otherwise stated, so no command starts with sudo.

Set up nodes

The following steps should be run on all nodes.

Enable IPv4 packet forwarding

We need to enable IPv4 packet forwarding so that the node can route traffic between pods; Kubernetes networking relies on it.

sysctl net.ipv4.ip_forward=1

To make this change persistent between reboots, we should modify the /etc/sysctl.conf file.

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
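
To load the change from the file without rebooting, and to confirm the value stuck, you can run:

sysctl -p
sysctl net.ipv4.ip_forward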

Install dependencies

There are some common dependencies we must be sure we have installed in our system.

apt update
apt install ca-certificates curl apt-transport-https gpg

Install containerd.io

To install containerd.io, we need to add the Docker repository to our system.

First, we add Docker's official GPG key:

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
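
If you want to sanity-check the downloaded key, gpg can print its details without importing it:

gpg --show-keys /etc/apt/keyrings/docker.asc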

Next, we add the Docker repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

Now we can install containerd.io:

apt update
apt install containerd.io
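
You can confirm the installed version before moving on:

containerd --version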

Configure systemd cgroup driver for containerd

First, we need to create a containerd configuration file at /etc/containerd/config.toml.

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml > /dev/null

Now, we should enable the systemd cgroup driver for the CRI in /etc/containerd/config.toml. This matters because kubeadm configures the kubelet to use the systemd cgroup driver by default, and the kubelet and the container runtime must agree on the driver.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  BinaryName = ""
  CriuImagePath = ""
  CriuPath = ""
  CriuWorkPath = ""
  IoGid = 0
  IoUid = 0
  NoNewKeyring = false
  NoPivotRoot = false
  Root = ""
  ShimCgroup = ""
  SystemdCgroup = true

Instead of editing the file manually, you can run the following command:

sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "/etc/containerd/config.toml"
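
Either way, it's worth confirming the change landed; this should print a single line with SystemdCgroup = true:

grep SystemdCgroup /etc/containerd/config.toml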

Now we restart containerd and check its status to make sure it is working.

systemctl restart containerd
systemctl status containerd

Install kubeadm, kubelet and kubectl

First, we should download Kubernetes' official GPG key:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Next, we add the Kubernetes repository to our system:

echo \
	"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
	tee /etc/apt/sources.list.d/kubernetes.list

We can now install the Kubernetes packages and hold them at the installed version so they aren't upgraded unintentionally.

apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
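
To confirm the tools are installed and the hold is in place:

kubeadm version -o short
kubectl version --client
apt-mark showhold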

Finally, we can enable the kubelet service:

systemctl enable --now kubelet
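
One more node-level check before moving on: kubeadm's preflight checks fail if swap is enabled. If your nodes use swap (Ubuntu Server often sets up a /swap.img entry in /etc/fstab), something like the following disables it now and across reboots; the sed pattern is a rough sketch, so adapt it to your fstab layout:

swapoff -a
# comment out any swap lines in fstab so swap stays off after a reboot
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab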

Set up Kubernetes cluster

The following steps should only be run on the control plane node.

The control plane node is the one in charge of supervising and administering the k8s cluster.

Initialize the k8s control plane

To initialize the control plane, we're going to specify the network CIDR for our pods. This is the default network for Calico, and unless you have a good reason to change it, it's better to keep it as it is.

kubeadm init --pod-network-cidr=192.168.0.0/16

Once you run this command, it will output a kubeadm join command; keep it safe, as we will use it later.
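
If you lose that output, there's no need to re-initialize anything; you can print a fresh join command from the control plane node at any time:

kubeadm token create --print-join-command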

Prepare non-root user

To operate with our Kubernetes cluster, it's better to use a non-root user.

If you already have one with sudo permission, log in with it; if you don't have one, run the following commands to create it:

adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes

To manage the cluster, the user must have the k8s config file at ~/.kube/config.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
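
Alternatively, for one-off commands as root, you can point kubectl directly at the admin config instead of copying it:

export KUBECONFIG=/etc/kubernetes/admin.conf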

To check if the user can manage the cluster we can run the following command:

kubectl get nodes

It should output something like the following:

NAME                           STATUS     ROLES           AGE   VERSION
eu-central-1.binarycomet.net   NotReady   control-plane   15m   v1.32.3

The node reports NotReady because no Container Network Interface (CNI) is installed yet; we'll fix that next. The following steps should be run using this user.

Deploy a Container Network Interface (CNI)

The pods require a CNI to communicate with each other. There are a few options; we'll use the Calico operator.

First, we download the configuration files:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/custom-resources.yaml

If you're installing a newer Kubernetes version, check the Calico releases at https://github.com/projectcalico/calico/releases

If you changed the pod-network-cidr while initializing the cluster, you should update the CIDR at the custom-resources.yaml configuration file.
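
For example, if you had initialized the cluster with a hypothetical 10.244.0.0/16 pod network, a quick way to update the manifest (you can also just edit the cidr field under ipPools by hand):

# 10.244.0.0/16 is only an example; use the CIDR you passed to --pod-network-cidr
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml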

Next, we apply both manifests to our cluster.

kubectl create -f ./tigera-operator.yaml
kubectl create -f ./custom-resources.yaml

After a few seconds, check the pod status to make sure everything is working:

kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-88ff6f9d5-v4pp2   1/1     Running   0          102s
calico-node-mh8gd                         1/1     Running   0          102s
calico-typha-6fc55bd49d-s62bq             1/1     Running   0          102s
csi-node-driver-2r96g                     2/2     Running   0          102s
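
Rather than polling by hand, you can also block until the Calico pods report Ready (the timeout is an arbitrary but reasonable value):

kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s

Once these pods are running, kubectl get nodes should show the control plane node as Ready.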

Join the worker nodes to the cluster

The following steps should be run only on worker nodes.

Prepare non-root user

As we did on the control plane node, we're going to use a non-root user.

If you already have one with sudo permission, log in with it; if you don't have one, run the following commands to create it:

adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes

Join cluster

Using the non-root user, we run the kubeadm join command we kept safe previously. Note that joining a node requires root privileges, so the command must run with sudo:

sudo kubeadm join <IP>:<port> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

Check worker node status

To check the worker node status, we should run the following command on the control plane node.

kubectl get nodes
NAME                           STATUS   ROLES           AGE   VERSION
eu-central-1.binarycomet.net   Ready    control-plane   78m   v1.32.3
eu-central-2.binarycomet.net   Ready    <none>          53m   v1.32.3
eu-central-3.binarycomet.net   Ready    <none>          25m   v1.32.3

Set the worker role on the worker nodes

The worker node is already part of the cluster, but it has no role label. To give it the worker role, we should run the following command from the control plane node:

kubectl label node <node-name> node-role.kubernetes.io/worker=worker
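
The role shown by kubectl get nodes is derived from this label; if you ever need to remove it, append a dash to the label key:

kubectl label node <node-name> node-role.kubernetes.io/worker-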

We check the status again to make sure the nodes have obtained the worker role.

kubectl get nodes
NAME                           STATUS   ROLES           AGE   VERSION
eu-central-1.binarycomet.net   Ready    control-plane   78m   v1.32.3
eu-central-2.binarycomet.net   Ready    worker          53m   v1.32.3
eu-central-3.binarycomet.net   Ready    worker          25m   v1.32.3