This guide walks through setting up a highly available Talos Linux Kubernetes cluster with 3 control plane nodes and 2 workers, featuring KubeSpan for encrypted node-to-node communication and Tailscale integration for secure remote access.
Cluster Overview
| Node | Hostname | IP Address |
|---|---|---|
| Control Plane 1 | talos-cp01 | 172.16.18.231 |
| Control Plane 2 | talos-cp02 | 172.16.18.232 |
| Control Plane 3 | talos-cp03 | 172.16.18.233 |
| Worker 1 | talos-worker01 | 172.16.18.241 |
| Worker 2 | talos-worker02 | 172.16.18.242 |
| VIP Endpoint | - | 172.16.18.222 |
Prerequisites
- Talos Linux installed on all nodes (using nocloud image)
- `talosctl` CLI installed on your workstation
- Tailscale account with an auth key
- Network connectivity to all nodes
Environment Setup
Set up the environment variables for your cluster:
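A sketch using the addresses from the table above; the variable names are my own convention, not anything `talosctl` requires:

```bash
# Illustrative variable names — adjust the values for your network.
export CLUSTER_NAME="talos-cluster"                    # hypothetical cluster name
export CLUSTER_ENDPOINT="https://172.16.18.222:6443"   # VIP from the table above
export CP1=172.16.18.231 CP2=172.16.18.232 CP3=172.16.18.233
export WORKER1=172.16.18.241 WORKER2=172.16.18.242
```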
Generate Cluster Secrets
First, generate the cluster secrets that will be used to create the configuration:
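A minimal invocation, writing the secrets bundle to `secrets.yaml`:

```bash
talosctl gen secrets -o secrets.yaml
```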
Keep secrets.yaml safe - you’ll need it to regenerate configs or add nodes later.
Generate Cluster Configuration
Generate the Talos configuration files with KubeSpan enabled:
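Assuming a `CLUSTER_NAME` variable for the cluster name and the VIP endpoint `https://172.16.18.222:6443` from the table above, a sketch (the `--with-kubespan` flag injects a patch enabling KubeSpan):

```bash
talosctl gen config \
  --with-secrets secrets.yaml \
  --with-kubespan \
  "${CLUSTER_NAME}" "https://172.16.18.222:6443"
```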
This generates:
- `controlplane.yaml` - Configuration for control plane nodes
- `worker.yaml` - Configuration for worker nodes
- `talosconfig` - Client configuration for `talosctl`
Create Patch Files
Create a patches directory for node-specific configurations:
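For example:

```bash
mkdir -p patches
```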
Tailscale Extension Patch
Create patches/tailscale.yaml to enable Tailscale on all nodes:
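A sketch, assuming the `siderolabs/tailscale` system extension is already baked into your Talos image (for example via the Image Factory); the auth key is a placeholder:

```yaml
---
apiVersion: v1alpha1
kind: ExtensionServiceConfig
name: tailscale
environment:
  - TS_AUTHKEY=tskey-auth-REPLACE_ME   # placeholder — use your own auth key
```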
Hostname Patches
Create hostname patches for each node. These also configure DNS to use Tailscale’s MagicDNS (100.100.100.100) for seamless name resolution across your tailnet.
Create one patch per node: `patches/cp01-hostname.yaml`, `patches/cp02-hostname.yaml`, `patches/cp03-hostname.yaml`, `patches/worker01-hostname.yaml`, and `patches/worker02-hostname.yaml`. They differ only in the hostname value.
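Each hostname patch has the same shape; a sketch for `talos-cp01` (the `1.1.1.1` fallback resolver and the `example.ts.net` search domain are placeholders, and `searchDomains` assumes a recent Talos release):

```yaml
# patches/cp01-hostname.yaml — the other patches substitute their own hostname.
machine:
  network:
    hostname: talos-cp01
    nameservers:
      - 100.100.100.100   # Tailscale MagicDNS resolver
      - 1.1.1.1           # fallback resolver (placeholder)
    searchDomains:
      - example.ts.net    # replace with your tailnet's domain
```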
Note: The `100.100.100.100` DNS server is Tailscale’s MagicDNS resolver, which allows nodes to resolve names within your tailnet. The search domain should match your Tailscale network’s domain.
Verify Node Connectivity
Before applying configurations, verify that you can reach all nodes and check their network links:
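One way to do this, looping over the node IPs from the table above (`--insecure` is needed because the nodes are still in maintenance mode, with no machine config applied yet):

```bash
for ip in 172.16.18.231 172.16.18.232 172.16.18.233 172.16.18.241 172.16.18.242; do
  talosctl -n "$ip" get links --insecure
done
```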
Verify the disks are available:
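For example, against the first control plane node (repeat for the others):

```bash
talosctl -n 172.16.18.231 get disks --insecure
```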
Ensure /dev/vda (or your target disk) is available on all nodes.
Apply Configuration to Nodes
Control Plane Nodes
Apply the configuration to each control plane node with their respective hostname patches:
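A sketch for the first control plane node; repeat with `172.16.18.232`/`cp02` and `172.16.18.233`/`cp03`:

```bash
talosctl apply-config --insecure -n 172.16.18.231 \
  --file controlplane.yaml \
  --config-patch @patches/tailscale.yaml \
  --config-patch @patches/cp01-hostname.yaml
```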
Worker Nodes
Apply the configuration to each worker node:
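Same pattern as the control planes, but with `worker.yaml`; repeat with `172.16.18.242`/`worker02`:

```bash
talosctl apply-config --insecure -n 172.16.18.241 \
  --file worker.yaml \
  --config-patch @patches/tailscale.yaml \
  --config-patch @patches/worker01-hostname.yaml
```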
The nodes will reboot and install Talos Linux.
Configure talosctl
Set up the talosctl client configuration:
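One approach is to point the `TALOSCONFIG` environment variable at the generated file (assuming it sits in the current directory); alternatively, merge it into the default location with `talosctl config merge ./talosconfig`:

```bash
export TALOSCONFIG=$(pwd)/talosconfig
```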
Configure the endpoints for the control plane:
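For example, listing all three control plane IPs as endpoints and defaulting commands to the first node:

```bash
talosctl config endpoint 172.16.18.231 172.16.18.232 172.16.18.233
talosctl config node 172.16.18.231
```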
Bootstrap the Cluster
Bootstrap etcd on the first control plane node. This only needs to be done once:
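Target the first control plane node:

```bash
talosctl bootstrap -n 172.16.18.231
```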
Wait for the bootstrap process to complete. You can monitor progress with:
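Two options: run the built-in health check, or follow the kernel log on the first node:

```bash
talosctl -n 172.16.18.231 health
# or watch the console output as the node comes up:
talosctl -n 172.16.18.231 dmesg -f
```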
Retrieve Kubeconfig
Once the cluster is bootstrapped, retrieve the kubeconfig:
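For example:

```bash
talosctl kubeconfig -n 172.16.18.231
```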
This merges the cluster’s kubeconfig into your default ~/.kube/config file.
Verify the Cluster
Check that all nodes have joined the cluster:
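```bash
kubectl get nodes
```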
You should see all 5 nodes in Ready state:
```
NAME             STATUS   ROLES           AGE   VERSION
talos-cp01       Ready    control-plane   5m    v1.32.x
talos-cp02       Ready    control-plane   5m    v1.32.x
talos-cp03       Ready    control-plane   5m    v1.32.x
talos-worker01   Ready    <none>          4m    v1.32.x
talos-worker02   Ready    <none>          4m    v1.32.x
```
Verify KubeSpan
Check that KubeSpan is working correctly:
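One way is to inspect the KubeSpan peer status resources on a node; each peer should show an established WireGuard connection:

```bash
talosctl -n 172.16.18.231 get kubespanpeerstatuses
```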
Verify Tailscale
Check that Tailscale is connected on your nodes:
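A sketch, assuming the extension service is registered under the usual `ext-` prefix as `ext-tailscale`:

```bash
talosctl -n 172.16.18.231 service ext-tailscale
talosctl -n 172.16.18.231 logs ext-tailscale
```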
Your nodes should now appear in your Tailscale admin console and be accessible via their Tailscale IPs or MagicDNS names.
Summary
You now have a production-ready Talos Kubernetes cluster with:
- 3 control plane nodes for high availability
- 2 worker nodes for running workloads
- KubeSpan for encrypted WireGuard-based node communication
- Tailscale for secure remote access and MagicDNS resolution
Useful Commands
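A few standard `talosctl`/`kubectl` commands worth keeping at hand (the node IP is an example):

```bash
talosctl -n 172.16.18.231 dashboard     # live node dashboard (CPU, memory, logs)
talosctl -n 172.16.18.231 services      # status of all Talos services
talosctl -n 172.16.18.231 dmesg         # kernel log
talosctl -n 172.16.18.231 get members   # cluster discovery membership
kubectl get pods -A                     # workload overview across namespaces
```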