How to use: Work through the phases in order, running each block in your terminal on the node(s) indicated.
Phase 1: Architecture & Planning
We configure a high-availability cluster fronted by a Virtual IP (VIP) managed by kube-vip. If the master currently holding the VIP fails, the API endpoint migrates to another master automatically, so clients keep a single stable address.
1.1 Topology
Masters (x3): 10.10.66.10 - 10.10.66.12
Workers (x3): 10.10.66.20 - 10.10.66.22
VIP:          10.10.66.100
Phase 2: SSH Access (From Mac)
Establish passwordless access to all nodes.
1. Generate Key
ssh-keygen -t ed25519 -C "admin@macbook"
2. Propagate Key
Repeat for all 6 IPs (.10, .11, .12, .20, .21, .22):
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@10.10.66.10
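Or loop over all six nodes at once (assuming the same username on each):
for i in 10 11 12 20 21 22; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub user@10.10.66.$i
done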
Phase 3: Networking (Netplan)
Give every node a static IP with Netplan, using VLAN tagging or a flat network to match your environment.
Debian Users: Netplan is not installed by default. Run: sudo apt install netplan.io first.
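For reference, a minimal flat-network sketch saved as /etc/netplan/01-k8s.yaml. The interface name (eth0), gateway, and DNS server are assumptions; change the address per node:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [10.10.66.10/24]
      routes:
        - to: default
          via: 10.10.66.1
      nameservers:
        addresses: [10.10.66.1]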
Apply Command (On ALL 6 Nodes)
sudo netplan apply
Phase 4: OS Prep (Run on ALL Nodes)
0. Minimal Debian Essentials
If you are on a bare minimal Debian install, you might need these:
su -
apt update && apt install -y sudo curl gnupg
/usr/sbin/usermod -aG sudo user # replace 'user' with your username; log out/in after this
1. Set Hostnames
Run the specific command for each node:
# Node 1 (10.10.66.10)
sudo hostnamectl set-hostname k8s-master-01
# Node 2 (10.10.66.11)
sudo hostnamectl set-hostname k8s-master-02
# Node 3 (10.10.66.12)
sudo hostnamectl set-hostname k8s-master-03
# Node 4 (10.10.66.20)
sudo hostnamectl set-hostname k8s-worker-01
# Node 5 (10.10.66.21)
sudo hostnamectl set-hostname k8s-worker-02
# Node 6 (10.10.66.22)
sudo hostnamectl set-hostname k8s-worker-03
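If you'd rather drive this from the Mac (assuming the Phase 2 key is in place and the same 'user' account everywhere), a loop over IP/name pairs works; -t allocates a TTY so sudo can prompt for a password:
for pair in \
  "10.10.66.10 k8s-master-01" \
  "10.10.66.11 k8s-master-02" \
  "10.10.66.12 k8s-master-03" \
  "10.10.66.20 k8s-worker-01" \
  "10.10.66.21 k8s-worker-02" \
  "10.10.66.22 k8s-worker-03"; do
  set -- $pair
  ssh -t user@"$1" "sudo hostnamectl set-hostname $2"
done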
2. Update Hosts File (On ALL Nodes)
This block covers all six nodes plus the VIP. Run it on every server.
cat <<EOF | sudo tee -a /etc/hosts
10.10.66.10 k8s-master-01
10.10.66.11 k8s-master-02
10.10.66.12 k8s-master-03
10.10.66.20 k8s-worker-01
10.10.66.21 k8s-worker-02
10.10.66.22 k8s-worker-03
10.10.66.100 k8s-cluster-endpoint
EOF
3. Disable Firewall (UFW)
For Debian, skip this if UFW isn't installed.
sudo ufw disable
4. Disable Swap (Required)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
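Verify: this should print nothing.
swapon --show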
5. Load Modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
6. Sysctl Params
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
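Sanity check: both modules should be listed and all three values should be 1.
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward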
7. Install Containerd
Note: This works on both Ubuntu and Debian; the distribution is detected automatically from /etc/os-release.
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/$(. /etc/os-release && echo "$ID")/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/$(. /etc/os-release && echo "$ID") \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io
8. Config Containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
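Optional but recommended: kubeadm v1.29 expects the pause image registry.k8s.io/pause:3.9, while containerd's default config may pin an older one (the exact default varies by containerd version, so check first):
grep sandbox_image /etc/containerd/config.toml
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
sudo systemctl restart containerd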
9. Install K8s Binaries
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repositories
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
Phase 5: Cluster Initialization
Perform these steps on Master 01 (10.10.66.10) only.
1. Generate kube-vip Manifest
export INTERFACE=eth0   # your NIC name; check with: ip -br addr
export VIP=10.10.66.100
sudo ctr image pull ghcr.io/kube-vip/kube-vip:v0.6.4   # ctr run does not auto-pull
sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.6.4 vip /kube-vip manifest pod \
--interface $INTERFACE \
--address $VIP \
--controlplane \
--arp \
--leaderElection \
| sudo tee /etc/kubernetes/manifests/kube-vip.yaml
2. Initialize Cluster
sudo kubeadm init \
--control-plane-endpoint "10.10.66.100:6443" \
--upload-certs \
--pod-network-cidr=192.168.0.0/16
Important: Save the output of this command! It contains the tokens needed for Phase 6.
3. Configure Kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Install Calico CNI
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
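Before joining nodes, wait for the operator to bring Calico up (all pods Running):
kubectl get pods -n calico-system --watch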
Phase 6: Join Nodes
Use the output from kubeadm init. If you lost it, generate new commands on Master 01:
1. On Master 01: Get Worker Join Command
sudo kubeadm token create --print-join-command
Run the output of this command (with sudo) on Worker 01, 02, and 03.
2. On Master 01: Get Master Join Command
Masters require a certificate key. Generate it:
echo "$(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs | tail -1)"
Run the full output of this command on Master 02 and 03.
3. Copy VIP Config (Critical for HA)
Once joined, run this on Master 01 to send the VIP config to the new masters:
# To Master 02
sudo scp /etc/kubernetes/manifests/kube-vip.yaml user@10.10.66.11:~/
# Login to Master 02 and move it: sudo mv ~/kube-vip.yaml /etc/kubernetes/manifests/
# To Master 03
sudo scp /etc/kubernetes/manifests/kube-vip.yaml user@10.10.66.12:~/
# Login to Master 03 and move it: sudo mv ~/kube-vip.yaml /etc/kubernetes/manifests/
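The same two steps as a loop (assuming the 'user' account can sudo on each master; expect password prompts, since these hosts trust the Mac's key rather than each other's):
for ip in 10.10.66.11 10.10.66.12; do
  sudo scp /etc/kubernetes/manifests/kube-vip.yaml user@"$ip":~/
  ssh -t user@"$ip" 'sudo mv ~/kube-vip.yaml /etc/kubernetes/manifests/'
done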
Phase 7: SMB Storage
1. Prerequisite (On Workers 01, 02, 03)
Install CIFS utilities so nodes can mount SMB shares.
sudo apt-get install -y cifs-utils
2. Install Driver (On Master 01)
# Installs the latest driver from master via the project's install script; pin a released tag for production
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/deploy/install-driver.sh | bash -s master --
3. Create Credentials (On Master 01)
kubectl create secret generic smb-creds \
  --from-literal=username="my_smb_user" \
  --from-literal=password="my_smb_password"
4. Apply PV & PVC (On Master 01)
Create smb-storage.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - vers=3.0
  csi:
    driver: smb.csi.k8s.io
    readOnly: false
    volumeHandle: unique-volumeid-1
    volumeAttributes:
      source: "//10.10.66.9/ShareName"
    nodeStageSecretRef:
      name: smb-creds
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: smb-pv
  storageClassName: ""
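Then apply it and confirm the claim binds:
kubectl apply -f smb-storage.yaml
kubectl get pv,pvc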
Phase 8: Verification
1. Check Nodes
All nodes should be 'Ready' after a few minutes.
kubectl get nodes -o wide
2. Verify HA Failover
Run a continuous ping to the VIP (ping 10.10.66.100), then reboot the master currently holding it. The ping should drop only a packet or two while kube-vip elects a new leader.
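3. Test SMB Mount (Optional)
To exercise the storage end to end, a throwaway pod (the name and busybox image are illustrative) can write a file to the share:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: smb-test
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/smb/test.txt && sleep 3600"]
      volumeMounts:
        - name: smb
          mountPath: /mnt/smb
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: smb-pvc
EOF
# test.txt should now exist on the share; clean up with: kubectl delete pod smb-test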