
Preparing Trusted Execution Domains for Kubernetes

Betteredge Trusted Execution Domains (TEDs) provide virtual machines with dedicated public IPv6 addresses, so Kubernetes clusters must be configured to support IPv6 routing. This guide outlines how to prepare Betteredge TEDs for creating Kubernetes clusters with IPv6. The reader is expected to be familiar with Kubernetes and Linux administration. For detailed installation instructions, refer to the official Kubernetes documentation.

TED booking on Betteredge cloud platform

TED booking

The recommended minimum for a Kubernetes cluster is one control plane node and two worker nodes. Resources of the worker nodes should be sized according to your application.

Networking

Once all nodes are booked, several ports must be opened to allow Kubernetes, CNI and WireGuard networking to function correctly.

On control plane:

  • Kubernetes requirements: 6443 - 2379 - 2380 - 10250 - 10257 - 10259
  • Tigera requirements: 179 - 4789 - 5473 - 51820 - 51821

On worker nodes:

  • Kubernetes requirements: 10250 - 10256
  • Tigera requirements: 179 - 4789 - 5473 - 51820 - 51821 - 2379

See the Kubernetes networking requirements and the Tigera networking requirements documentation for the full list of required ports.

For more detailed instructions on TED creation and port forwarding, please refer to the Betteredge getting started documentation.
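
Once the corresponding services are actually listening, reachability of a forwarded port can be spot-checked from another node, for example with netcat (assuming the openbsd-netcat package or an equivalent is installed; the address is a placeholder):

nc -6 -zv <control-plane-IPv6> 6443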

Requirements

The following components are required for this Kubernetes cluster setup: containerd, runc, kubelet, kubeadm, kubectl, a CNI plugin, and WireGuard.

The basic steps for a successful Kubernetes installation are detailed in the following sections.

Enable IPv4/IPv6 forwarding and bridged traffic

Load required kernel modules overlay and br_netfilter:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Apply directly using modprobe:

sudo modprobe overlay
sudo modprobe br_netfilter
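
To confirm that both modules are loaded:

lsmod | grep -e overlay -e br_netfilter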

Ensure that IP forwarding and bridge traffic filtering are enabled on each of your nodes:

cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.ipv6.conf.all.forwarding        = 1
EOF

Apply directly using sysctl:

sudo sysctl --system
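
To verify that the settings were applied:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward net.ipv6.conf.all.forwarding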

Swap memory disabling

For the kubelet to start, swap memory should be disabled. Recent versions of Kubernetes can support swap, so this step might be unnecessary; however, swap is not needed for this use case, so it is safer to simply disable it.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Container runtime installation

Kubernetes requires a container runtime to work. Any runtime will do as long as it supports the CRI (Container Runtime Interface). In this guide, containerd is used.

Download the latest containerd release and extract it to /usr/local. The latest tarball is available at: https://containerd.io/downloads/
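
For example, to download and extract a specific release from the containerd GitHub releases page (the version below is only an illustration; substitute the current one):

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.20/containerd-1.7.20-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.20-linux-amd64.tar.gz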

Create a systemd service; the official systemd unit file for containerd is available in the containerd project's GitHub repository:

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

The unit file then needs to be installed and the default configuration file generated:

sudo mkdir -p /usr/local/lib/systemd/system
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Update the generated config file to use systemd as the Cgroup driver:

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

Reload and start the containerd service:

sudo systemctl daemon-reload
sudo systemctl enable --now containerd

containerd should now be up and running as a service; the installation can be checked with:

sudo systemctl status containerd

runc installation

Download the latest runc release from https://github.com/opencontainers/runc/releases and install it under /usr/local/sbin/runc.
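
For example, assuming version v1.1.13 (shown only as an illustration; check the releases page for the current version):

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.13/runc.amd64

Then install the downloaded binary: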

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

CNI plugin installation

Download and install the CNI plugin for container networking:

curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.0.tgz

Install Kubernetes packages

Download and install the Kubernetes packages kubeadm, kubelet and kubectl. Detailed instructions are available in the official Kubernetes documentation.
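
As a sketch for Debian/Ubuntu systems, assuming the pkgs.k8s.io v1.33 repository (matching the kubernetesVersion used later in this guide):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl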

Configure crictl for containerd

The CRI CLI (crictl) needs to be configured for containerd; this can be achieved by pointing its runtime endpoint at the containerd socket:

sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
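
The connection to containerd can then be verified with:

sudo crictl version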

At this point, all requirements should be installed and the cluster configuration can start.

Dual stack setup

For dual-stack networking, it is recommended to refer to the Kubernetes IPv4/IPv6 dual-stack documentation.

Configuration templates that should work on their own are provided in this documentation, but in case of issues, or when installing in a customized environment, the official documentation should be consulted in more depth.

Wireguard

Since the TEDs only have public IPv6 connectivity, a WireGuard IPv4-over-IPv6 tunnel will be created to provide a virtual IPv4 network between the nodes. This ensures that the Kubernetes components and the CNI plugin operate in dual-stack mode.

Keypairs creation

WireGuard uses a private/public key pair to authenticate each node. For security reasons, these keys should be stored in files with appropriate permissions.

umask 077
wg genkey | tee private.key | wg pubkey > public.key

Decide on a private IPv4 subnet that will be used for the network, for example:

  • control-plane: 10.20.0.1
  • worker1: 10.20.0.2
  • worker2: 10.20.0.3

Configure WireGuard on each node

Next, configure WireGuard for each node under /etc/wireguard/wg0.conf:

Example configuration for control-plane:

[Interface]
Address = 10.20.0.1/24
ListenPort = 51820
PrivateKey = <cplane-private-key>

# worker1
[Peer]
PublicKey = <worker1-public-key>
AllowedIPs = 10.20.0.2/32
Endpoint = [<worker1-IPv6>]:51820
PersistentKeepalive = 25

# worker2
[Peer]
PublicKey = <worker2-public-key>
AllowedIPs = 10.20.0.3/32
Endpoint = [<worker2-IPv6>]:51820
PersistentKeepalive = 25

Create a similar configuration for worker1 and worker2, adapting the [Interface] section to the host on which it is deployed and the [Peer] sections to reach the other nodes.
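
For instance, the corresponding configuration for worker1, with its own key pair and the other nodes as peers (keys and IPv6 addresses are placeholders):

[Interface]
Address = 10.20.0.2/24
ListenPort = 51820
PrivateKey = <worker1-private-key>

# control-plane
[Peer]
PublicKey = <cplane-public-key>
AllowedIPs = 10.20.0.1/32
Endpoint = [<control-plane-IPv6>]:51820
PersistentKeepalive = 25

# worker2
[Peer]
PublicKey = <worker2-public-key>
AllowedIPs = 10.20.0.3/32
Endpoint = [<worker2-IPv6>]:51820
PersistentKeepalive = 25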

Enable and start WireGuard

On each node:

sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0
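
The tunnel can be checked by inspecting the interface and pinging a peer over its WireGuard address, for example from the control plane:

sudo wg show
ping -c 3 10.20.0.2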

Configure node-ip for Kubelet

Edit /etc/default/kubelet on each node to prioritize traffic on the WireGuard link, substituting that node's WireGuard IPv4 address:

KUBELET_EXTRA_ARGS=--node-ip=<wireguard-IPv4>

Kubeadm

Detailing the kubeadm command line is beyond the scope of this document. Only a template of the kubeadm-config.yaml for a dual-stack setup is given here; it can be modified and customized at one's own risk. Keep in mind that in the following config, 10.20.0.1 is the WireGuard IPv4 address of the control plane and should be updated if a different virtual private IPv4 network is used.

# kubeadm-config.yaml
# ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: 1.33.5
clusterName: kubernetes
imageRepository: registry.k8s.io
controlPlaneEndpoint: "10.20.0.1:6443"
networking:
  podSubnet: 10.244.0.0/16,fd00::/64
  serviceSubnet: 10.96.0.0/16,fd01::/108
apiServer:
  certSANs:
    - "10.20.0.1"
    - "<control-plane-IPv6>"

---

# InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.20.0.1"
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  kubeletExtraArgs:
    - name: "node-ip"
      value: "10.20.0.1,<control-plane-IPv6>"

The control plane is then initialized with this configuration file using kubeadm:

sudo kubeadm init --config kubeadm-config.yaml --node-name master
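
Once initialization completes, kubectl can be configured for the current user with the standard post-init steps printed by kubeadm:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config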

Setting up the CNI (Container Network Interface)

For a quick setup of the CNI, Calico is used here with the Tigera operator, which allows the whole CNI to be configured in a single custom-resources.yaml file. The first step is to initialize the Tigera operator:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml

Note: It is always worth verifying that the latest version of Calico is being used.

A configuration file for this type of cluster should look like this:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      firstFound: true
    nodeAddressAutodetectionV6:
      firstFound: true
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: None 
      natOutgoing: Enabled
      nodeSelector: all()
    - blockSize: 122
      cidr: fd00::/64
      encapsulation: None
      natOutgoing: Enabled
      nodeSelector: all()
  flexVolumePath: None
  kubeletVolumePluginPath: /var/lib/kubelet
  nodeMetricsPort: 9081

---

apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

If a different podSubnet is used in the kubeadm-config.yaml file, the ipPools should be updated to match.

Apply the custom resources:

kubectl apply -f custom-resources.yaml

At this point, the control plane should be up and running. This can be checked by verifying that all pods are running and that the control plane node is in the Ready state:

kubectl get pods -A -o wide
kubectl get nodes

The only remaining step is to have the worker nodes join the cluster.
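
A join command with a fresh token can be generated on the control plane and then run on each worker (the token and hash below are placeholders printed by kubeadm):

# On the control plane
kubeadm token create --print-join-command

# On each worker, run the printed command, e.g.:
sudo kubeadm join 10.20.0.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name worker1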