Settle on v1 setup method. Test run completed successfully from bootstrap to service setup.

- Refactored dnsmasq configuration and scripts for improved variable handling and clarity
- Updated dnsmasq configuration files to use direct variable references instead of data source functions for better readability
- Modified setup scripts to ensure they run from the correct environment and directory, checking for the WC_HOME variable
- Updated paths in the README and scripts to reflect the new directory structure
- Enhanced error handling in setup scripts to provide clearer guidance on required configuration
- Adjusted kernel and initramfs URLs in boot.ipxe to use the updated variable references
2025-06-24 15:12:53 -07:00
parent 335cca1eba
commit f1fe4f9cc2
165 changed files with 15838 additions and 1003 deletions

View File

@@ -1,90 +1,235 @@
# Cluster Node Setup
This directory contains automation for setting up Talos Kubernetes cluster nodes with static IP configuration.
## Hardware Detection and Setup (Recommended)
The automated setup discovers hardware configuration from nodes in maintenance mode and generates machine configurations with the correct interface names and disk paths.
### Prerequisites
1. `source .env`
2. Boot nodes with Talos ISO in maintenance mode
3. Nodes must be accessible on the network
### Hardware Discovery Workflow
```bash
# ONE-TIME CLUSTER INITIALIZATION (run once per cluster)
./init-cluster.sh
# FOR EACH CONTROL PLANE NODE:
# 1. Boot node with Talos ISO (it will get a DHCP IP in maintenance mode)
# 2. Detect hardware and update config.yaml
./detect-node-hardware.sh <maintenance-ip> <node-number>
# Example: Node boots at 192.168.8.168, register as node 1
./detect-node-hardware.sh 192.168.8.168 1
# 3. Generate machine config for registered nodes
./generate-machine-configs.sh
# 4. Apply configuration - node will reboot with static IP
talosctl apply-config --insecure -n 192.168.8.168 --file final/controlplane-node-1.yaml
# 5. Wait for reboot, node should come up at its target static IP (192.168.8.31)
# Repeat steps 1-5 for additional control plane nodes
```
The `detect-node-hardware.sh` script will:
- Connect to nodes in maintenance mode via talosctl
- Discover active ethernet interfaces (e.g., `enp4s0` instead of hardcoded `eth0`)
- Discover available installation disks (>10GB)
- Update `config.yaml` with per-node hardware configuration
- Provide next steps for machine config generation
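The interface-selection logic can be illustrated offline with simulated `talosctl` output (the JSON below is hypothetical; the real script pipes `talosctl -n <ip> get routes --insecure -o json` into the same `jq` filter):

```shell
# Simulated output of: talosctl -n <ip> get routes --insecure -o json
routes='{"metadata":{"id":"inet4/192.168.8.1//1024"},"spec":{"destination":"0.0.0.0/0","gateway":"192.168.8.1","outLinkName":"enp4s0"}}'

# Same selection as detect-node-hardware.sh: pick the interface carrying
# the default route (destination 0.0.0.0/0 with a non-null gateway).
echo "$routes" | jq -s -r \
  '.[] | select(.spec.destination == "0.0.0.0/0" and .spec.gateway != null) | .spec.outLinkName' \
  | head -1
# -> enp4s0
```

If no default route exists, the script falls back to any interface whose `operationalState` is `up`.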
The `init-cluster.sh` script will:
- Generate Talos cluster secrets and base configurations (once per cluster)
- Set up talosctl context with cluster certificates
- Configure VIP endpoint for cluster communication
The `generate-machine-configs.sh` script will:
- Check which nodes have been hardware-detected
- Compile network configuration templates with discovered hardware settings
- Create final machine configurations for registered nodes only
- Include system extensions for Longhorn (iscsi-tools, util-linux-tools)
- Update talosctl context with registered node IPs
### Cluster Bootstrap
After all control plane nodes are configured with static IPs:
```bash
# Bootstrap the cluster using any control node
talosctl bootstrap --nodes 192.168.8.31 --endpoint 192.168.8.31
```
## Complete Example
Here's a complete example of setting up a 3-node control plane:
```bash
# CLUSTER INITIALIZATION (once per cluster)
./init-cluster.sh
# NODE 1
# Boot node with Talos ISO, it gets DHCP IP 192.168.8.168
./detect-node-hardware.sh 192.168.8.168 1
./generate-machine-configs.sh
talosctl apply-config --insecure -n 192.168.8.168 --file final/controlplane-node-1.yaml
# Node reboots and comes up at 192.168.8.31
# NODE 2
# Boot second node with Talos ISO, it gets DHCP IP 192.168.8.169
./detect-node-hardware.sh 192.168.8.169 2
./generate-machine-configs.sh
talosctl apply-config --insecure -n 192.168.8.169 --file final/controlplane-node-2.yaml
# Node reboots and comes up at 192.168.8.32
# NODE 3
# Boot third node with Talos ISO, it gets DHCP IP 192.168.8.170
./detect-node-hardware.sh 192.168.8.170 3
./generate-machine-configs.sh
talosctl apply-config --insecure -n 192.168.8.170 --file final/controlplane-node-3.yaml
# Node reboots and comes up at 192.168.8.33
# CLUSTER BOOTSTRAP
talosctl bootstrap -n 192.168.8.30
talosctl kubeconfig
kubectl get nodes
```
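In the example above, the sample addresses follow a simple convention (an observation about these particular IPs, not a requirement): the VIP sits at 192.168.8.30 and node N comes up at 192.168.8.(30+N).

```shell
# Derive the sample static IPs used above from the node number.
base=30
for node in 1 2 3; do
  echo "node${node}: 192.168.8.$((base + node))"
done
# node1: 192.168.8.31
# node2: 192.168.8.32
# node3: 192.168.8.33
```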
## Configuration Details
### Per-Node Configuration
Each control plane node has its own configuration block in `config.yaml`:
```yaml
cluster:
  nodes:
    control:
      vip: 192.168.8.30
      node1:
        ip: 192.168.8.31
        interface: enp4s0  # Discovered automatically
        disk: /dev/sdb     # Selected during hardware detection
      node2:
        ip: 192.168.8.32
        # interface and disk added after hardware detection
      node3:
        ip: 192.168.8.33
        # interface and disk added after hardware detection
```
Worker nodes use DHCP by default. You can use the same hardware detection process for worker nodes if static IPs are needed.
## Talosconfig Management
### Context Naming and Conflicts
When running `talosctl config merge ./generated/talosconfig`, if a context with the same name already exists, talosctl will create an enumerated version (e.g., `demo-cluster-2`).
**For a clean setup:**
- Delete existing contexts before merging: `talosctl config contexts` then `talosctl config context <name> --remove`
- Or use `--force` to overwrite: `talosctl config merge ./generated/talosconfig --force`
**Recommended approach for new clusters:**
```bash
# Remove old context if rebuilding cluster
talosctl config context demo-cluster --remove || true
# Merge new configuration
talosctl config merge ./generated/talosconfig
talosctl config endpoint 192.168.8.30
talosctl config node 192.168.8.31 # Add nodes as they are registered
```
### Context Configuration Timeline
1. **After first node hardware detection**: Merge talosconfig and set endpoint/first node
2. **After additional nodes**: Add them to the existing context with `talosctl config node <ip1> <ip2> <ip3>`
3. **Before cluster bootstrap**: Ensure all control plane nodes are in the node list
### System Extensions
All nodes include:
- `siderolabs/iscsi-tools`: Required for Longhorn storage
- `siderolabs/util-linux-tools`: Utility tools for storage operations
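In `config.yaml`, these extensions live under the Talos schematic customization that the setup scripts read via `wild-config` and `yq`. A sketch of that shape (the exact surrounding keys are an assumption based on the lookups in the scripts):

```yaml
cluster:
  nodes:
    talos:
      version: v1.10.3   # hypothetical; use your cluster's Talos version
      schematicId: ""    # populated by wild-talos-schema
      schematic:
        customization:
          systemExtensions:
            officialExtensions:
              - siderolabs/iscsi-tools
              - siderolabs/util-linux-tools
```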
### Hardware Detection
The `detect-node-hardware.sh` script automatically discovers:
- **Network interfaces**: Finds active ethernet interfaces (no more hardcoded `eth0`)
- **Installation disks**: Lists available disks >10GB for interactive selection
- **Per-node settings**: Updates `config.yaml` with hardware-specific configuration
This eliminates the need to manually configure hardware settings and handles different hardware configurations across nodes.
### Template Structure
Configuration templates are stored in `patch.templates/` and use gomplate syntax:
- `controlplane-node-1.yaml`: Template for first control plane node
- `controlplane-node-2.yaml`: Template for second control plane node
- `controlplane-node-3.yaml`: Template for third control plane node
- `worker.yaml`: Template for worker nodes
Templates use per-node variables from `config.yaml`:
- `{{ .cluster.nodes.control.node1.ip }}`
- `{{ .cluster.nodes.control.node1.interface }}`
- `{{ .cluster.nodes.control.node1.disk }}`
- `{{ .cluster.nodes.control.vip }}`
The `wild-compile-template-dir` command processes all templates and outputs compiled configurations to the `patch/` directory.
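For illustration, a per-node patch template combining these variables might look like the following (a hypothetical fragment, not the actual contents of `patch.templates/controlplane-node-1.yaml`):

```yaml
machine:
  install:
    disk: {{ .cluster.nodes.control.node1.disk }}
  network:
    interfaces:
      - interface: {{ .cluster.nodes.control.node1.interface }}
        addresses:
          - {{ .cluster.nodes.control.node1.ip }}/24
        vip:
          ip: {{ .cluster.nodes.control.vip }}
```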
## Troubleshooting
### Hardware Detection Issues
```bash
# Check if node is accessible in maintenance mode
talosctl -n <NODE_IP> version --insecure
# View available network interfaces
talosctl -n <NODE_IP> get links --insecure
# View available disks
talosctl -n <NODE_IP> get disks --insecure
```
### Manual Hardware Discovery
If the automatic detection fails, you can manually inspect hardware:
```bash
# Find active ethernet interfaces
talosctl -n <NODE_IP> get links --insecure -o json | jq -s '.[] | select(.spec.operationalState == "up" and .spec.type == "ether" and .metadata.id != "lo") | .metadata.id'
# Find suitable installation disks
talosctl -n <NODE_IP> get disks --insecure -o json | jq -s '.[] | select(.spec.size > 10000000000) | .metadata.id'
```
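The disk filter above can be sanity-checked against simulated `talosctl` JSON (hypothetical device data; sizes are in bytes):

```shell
# Simulated output of: talosctl -n <ip> get disks --insecure -o json
disks='{"metadata":{"id":"sda"},"spec":{"size":500107862016}}
{"metadata":{"id":"sdb"},"spec":{"size":8004304896}}'

# Keep only disks larger than 10GB, as the setup scripts do;
# the 8GB sdb is filtered out.
echo "$disks" | jq -s -r '.[] | select(.spec.size > 10000000000) | .metadata.id'
# -> sda
```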
### Node Status
```bash
# View machine configuration (only works after config is applied)
talosctl -n <NODE_IP> get machineconfig
```

View File

@@ -0,0 +1,53 @@
#!/bin/bash
# Talos custom installer image creation script
# This script generates installer image URLs using the centralized schematic ID
set -euo pipefail
# Check if WC_HOME is set
if [ -z "${WC_HOME:-}" ]; then
echo "Error: WC_HOME environment variable not set. Run \`source ./env.sh\`."
exit 1
fi
# Get Talos version and schematic ID from config
TALOS_VERSION=$(wild-config cluster.nodes.talos.version)
SCHEMATIC_ID=$(wild-config cluster.nodes.talos.schematicId)
echo "Creating custom Talos installer image..."
echo "Talos version: $TALOS_VERSION"
# Check if schematic ID exists
if [ -z "$SCHEMATIC_ID" ] || [ "$SCHEMATIC_ID" = "null" ]; then
echo "Error: No schematic ID found in config.yaml"
echo "Run 'wild-talos-schema' first to upload schematic and get ID"
exit 1
fi
echo "Schematic ID: $SCHEMATIC_ID"
echo ""
echo "Schematic includes:"
yq eval '.cluster.nodes.talos.schematic.customization.systemExtensions.officialExtensions[]' "${WC_HOME}/config.yaml" | sed 's/^/ - /'
echo ""
# Generate installer image URL
INSTALLER_URL="factory.talos.dev/metal-installer/$SCHEMATIC_ID:$TALOS_VERSION"
echo ""
echo "🎉 Custom installer image URL generated!"
echo ""
echo "Installer URL: $INSTALLER_URL"
echo ""
echo "Usage in machine configuration:"
echo "machine:"
echo " install:"
echo " image: $INSTALLER_URL"
echo ""
echo "Next steps:"
echo "1. Update machine config templates with this installer URL"
echo "2. Regenerate machine configurations"
echo "3. Apply to existing nodes to trigger installation with extensions"
echo ""
echo "To update templates automatically, run:"
echo " sed -i 's|image:.*|image: $INSTALLER_URL|' patch.templates/controlplane-node-*.yaml"

View File

@@ -0,0 +1,163 @@
#!/bin/bash
# Node registration script for Talos cluster setup
# This script discovers hardware configuration from a node in maintenance mode
# and updates config.yaml with per-node hardware settings
set -euo pipefail
# Check if WC_HOME is set
if [ -z "${WC_HOME:-}" ]; then
echo "Error: WC_HOME environment variable not set. Run \`source ./env.sh\`."
exit 1
fi
# Usage function
usage() {
echo "Usage: detect-node-hardware.sh <node-ip> <node-number>"
echo ""
echo "Register a Talos node by discovering its hardware configuration."
echo "The node must be booted in maintenance mode and accessible via IP."
echo ""
echo "Arguments:"
echo " node-ip Current IP of the node in maintenance mode"
echo " node-number Node number (1, 2, or 3) for control plane nodes"
echo ""
echo "Examples:"
echo " ./detect-node-hardware.sh 192.168.8.168 1"
echo " ./detect-node-hardware.sh 192.168.8.169 2"
echo ""
echo "This script will:"
echo " - Query the node for available network interfaces"
echo " - Query the node for available disks"
echo " - Update config.yaml with the per-node hardware settings"
echo " - Update patch templates to use per-node hardware"
}
# Parse arguments
if [ $# -ne 2 ]; then
usage
exit 1
fi
NODE_IP="$1"
NODE_NUMBER="$2"
# Validate node number
if [[ ! "$NODE_NUMBER" =~ ^[1-3]$ ]]; then
echo "Error: Node number must be 1, 2, or 3"
exit 1
fi
echo "Registering Talos control plane node $NODE_NUMBER at $NODE_IP..."
# Test connectivity
echo "Testing connectivity to node..."
if ! talosctl -n "$NODE_IP" get links --insecure >/dev/null 2>&1; then
echo "Error: Cannot connect to node at $NODE_IP"
echo "Make sure the node is booted in maintenance mode and accessible."
exit 1
fi
echo "✅ Node is accessible"
# Discover network interfaces
echo "Discovering network interfaces..."
# First, try to find the interface that's actually carrying traffic (has the default route)
CONNECTED_INTERFACE=$(talosctl -n "$NODE_IP" get routes --insecure -o json 2>/dev/null | \
jq -s -r '.[] | select(.spec.destination == "0.0.0.0/0" and .spec.gateway != null) | .spec.outLinkName' | \
head -1)
if [ -n "$CONNECTED_INTERFACE" ]; then
ACTIVE_INTERFACE="$CONNECTED_INTERFACE"
echo "✅ Discovered connected interface (with default route): $ACTIVE_INTERFACE"
else
# Fallback: find any active ethernet interface
echo "No default route found, checking for active ethernet interfaces..."
ACTIVE_INTERFACE=$(talosctl -n "$NODE_IP" get links --insecure -o json 2>/dev/null | \
jq -s -r '.[] | select(.spec.operationalState == "up" and .spec.type == "ether" and .metadata.id != "lo") | .metadata.id' | \
head -1)
if [ -z "$ACTIVE_INTERFACE" ]; then
echo "Error: No active ethernet interface found"
echo "Available interfaces:"
talosctl -n "$NODE_IP" get links --insecure
echo ""
echo "Available routes:"
talosctl -n "$NODE_IP" get routes --insecure
exit 1
fi
echo "✅ Discovered active interface: $ACTIVE_INTERFACE"
fi
# Discover available disks
echo "Discovering available disks..."
AVAILABLE_DISKS=$(talosctl -n "$NODE_IP" get disks --insecure -o json 2>/dev/null | \
jq -s -r '.[] | select(.spec.size > 10000000000) | .metadata.id' | \
head -5)
if [ -z "$AVAILABLE_DISKS" ]; then
echo "Error: No suitable disks found (must be >10GB)"
echo "Available disks:"
talosctl -n "$NODE_IP" get disks --insecure
exit 1
fi
echo "Available disks (>10GB):"
echo "$AVAILABLE_DISKS"
echo ""
# Let user choose disk
echo "Select installation disk for node $NODE_NUMBER:"
select INSTALL_DISK in $AVAILABLE_DISKS; do
if [ -n "${INSTALL_DISK:-}" ]; then
break
fi
echo "Invalid selection. Please try again."
done
# Add /dev/ prefix if not present
if [[ "$INSTALL_DISK" != /dev/* ]]; then
INSTALL_DISK="/dev/$INSTALL_DISK"
fi
echo "✅ Selected disk: $INSTALL_DISK"
# Update config.yaml with per-node configuration
echo "Updating config.yaml with node $NODE_NUMBER configuration..."
CONFIG_FILE="${WC_HOME}/config.yaml"
# Get the target IP for this node from the existing config
TARGET_IP=$(yq eval ".cluster.nodes.control.node${NODE_NUMBER}.ip" "$CONFIG_FILE")
# Use yq to update the per-node configuration
yq eval ".cluster.nodes.control.node${NODE_NUMBER}.ip = \"$TARGET_IP\"" -i "$CONFIG_FILE"
yq eval ".cluster.nodes.control.node${NODE_NUMBER}.interface = \"$ACTIVE_INTERFACE\"" -i "$CONFIG_FILE"
yq eval ".cluster.nodes.control.node${NODE_NUMBER}.disk = \"$INSTALL_DISK\"" -i "$CONFIG_FILE"
echo "✅ Updated config.yaml for node $NODE_NUMBER:"
echo " - Target IP: $TARGET_IP"
echo " - Network interface: $ACTIVE_INTERFACE"
echo " - Installation disk: $INSTALL_DISK"
echo ""
echo "🎉 Node $NODE_NUMBER registration complete!"
echo ""
echo "Node configuration saved:"
echo " - Target IP: $TARGET_IP"
echo " - Interface: $ACTIVE_INTERFACE"
echo " - Disk: $INSTALL_DISK"
echo ""
echo "Next steps:"
echo "1. Regenerate machine configurations:"
echo " ./generate-machine-configs.sh"
echo ""
echo "2. Apply configuration to this node:"
echo " talosctl apply-config --insecure -n $NODE_IP --file final/controlplane-node-${NODE_NUMBER}.yaml"
echo ""
echo "3. Wait for reboot and verify static IP connectivity"
echo "4. Repeat registration for additional control plane nodes"

View File

@@ -0,0 +1,115 @@
#!/bin/bash
# Talos machine configuration generation script
# This script generates machine configs for registered nodes using existing cluster secrets
set -euo pipefail
# Check if WC_HOME is set
if [ -z "${WC_HOME:-}" ]; then
echo "Error: WC_HOME environment variable not set. Run \`source ./env.sh\`."
exit 1
fi
NODE_SETUP_DIR="${WC_HOME}/setup/cluster-nodes"
# Check if cluster has been initialized
if [ ! -f "${NODE_SETUP_DIR}/generated/secrets.yaml" ]; then
echo "Error: Cluster not initialized. Run ./init-cluster.sh first."
exit 1
fi
# Get cluster configuration from config.yaml
CLUSTER_NAME=$(wild-config cluster.name)
VIP=$(wild-config cluster.nodes.control.vip)
echo "Generating machine configurations for cluster: $CLUSTER_NAME"
# Check which nodes have been registered (have hardware config)
REGISTERED_NODES=()
for i in 1 2 3; do
if yq eval ".cluster.nodes.control.node${i}.interface" "${WC_HOME}/config.yaml" | grep -v "null" >/dev/null 2>&1; then
NODE_IP=$(wild-config cluster.nodes.control.node${i}.ip)
REGISTERED_NODES+=("$NODE_IP")
echo "✅ Node $i registered: $NODE_IP"
else
echo "⏸️ Node $i not registered yet"
fi
done
if [ ${#REGISTERED_NODES[@]} -eq 0 ]; then
echo ""
echo "No nodes have been registered yet."
echo "Run ./detect-node-hardware.sh <maintenance-ip> <node-number> first."
exit 1
fi
# Create directories
mkdir -p "${NODE_SETUP_DIR}/final" "${NODE_SETUP_DIR}/patch"
# Compile patch templates for registered nodes only
echo "Compiling patch templates..."
for i in 1 2 3; do
if yq eval ".cluster.nodes.control.node${i}.interface" "${WC_HOME}/config.yaml" | grep -v "null" >/dev/null 2>&1; then
echo "Compiling template for control plane node $i..."
cat "${NODE_SETUP_DIR}/patch.templates/controlplane-node-${i}.yaml" | wild-compile-template > "${NODE_SETUP_DIR}/patch/controlplane-node-${i}.yaml"
fi
done
# Always compile worker template (doesn't require hardware detection)
if [ -f "${NODE_SETUP_DIR}/patch.templates/worker.yaml" ]; then
cat "${NODE_SETUP_DIR}/patch.templates/worker.yaml" | wild-compile-template > "${NODE_SETUP_DIR}/patch/worker.yaml"
fi
# Generate final machine configs for registered nodes only
echo "Generating final machine configurations..."
for i in 1 2 3; do
if yq eval ".cluster.nodes.control.node${i}.interface" "${WC_HOME}/config.yaml" | grep -v "null" >/dev/null 2>&1; then
echo "Generating config for control plane node $i..."
talosctl machineconfig patch "${NODE_SETUP_DIR}/generated/controlplane.yaml" --patch @"${NODE_SETUP_DIR}/patch/controlplane-node-${i}.yaml" -o "${NODE_SETUP_DIR}/final/controlplane-node-${i}.yaml"
fi
done
# Always generate worker config (doesn't require hardware detection)
if [ -f "${NODE_SETUP_DIR}/patch/worker.yaml" ]; then
echo "Generating worker config..."
talosctl machineconfig patch "${NODE_SETUP_DIR}/generated/worker.yaml" --patch @"${NODE_SETUP_DIR}/patch/worker.yaml" -o "${NODE_SETUP_DIR}/final/worker.yaml"
fi
# Update talosctl context with registered nodes
echo "Updating talosctl context..."
if [ ${#REGISTERED_NODES[@]} -gt 0 ]; then
talosctl config node "${REGISTERED_NODES[@]}"
fi
echo ""
echo "✅ Machine configurations generated successfully!"
echo ""
echo "Generated configs:"
for i in 1 2 3; do
if [ -f "${NODE_SETUP_DIR}/final/controlplane-node-${i}.yaml" ]; then
NODE_IP=$(wild-config cluster.nodes.control.node${i}.ip)
echo " - ${NODE_SETUP_DIR}/final/controlplane-node-${i}.yaml (target IP: $NODE_IP)"
fi
done
if [ -f "${NODE_SETUP_DIR}/final/worker.yaml" ]; then
echo " - ${NODE_SETUP_DIR}/final/worker.yaml"
fi
echo ""
echo "Current talosctl configuration:"
talosctl config info
echo ""
echo "Next steps:"
echo "1. Apply configurations to nodes in maintenance mode:"
for i in 1 2 3; do
if [ -f "${NODE_SETUP_DIR}/final/controlplane-node-${i}.yaml" ]; then
echo " talosctl apply-config --insecure -n <maintenance-ip> --file ${NODE_SETUP_DIR}/final/controlplane-node-${i}.yaml"
fi
done
echo ""
echo "2. Wait for nodes to reboot with static IPs, then bootstrap cluster with ANY control node:"
echo " talosctl bootstrap --nodes 192.168.8.31 --endpoint 192.168.8.31"
echo ""
echo "3. Get kubeconfig:"
echo " talosctl kubeconfig"

View File

@@ -0,0 +1,577 @@
version: v1alpha1 # Indicates the schema used to decode the contents.
debug: false # Enable verbose logging to the console.
persist: true
# Provides machine specific configuration options.
machine:
type: controlplane # Defines the role of the machine within the cluster.
token: t1yf7w.zwevymjw6v0v1q76 # The `token` is used by a machine to join the PKI of the cluster.
# The root certificate authority of the PKI.
ca:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJQekNCOHFBREFnRUNBaEVBa2JEQ2VJR09iTlBZZGQxRTBNSUozVEFGQmdNclpYQXdFREVPTUF3R0ExVUUKQ2hNRmRHRnNiM013SGhjTk1qVXdOakl6TURJek9ERXpXaGNOTXpVd05qSXhNREl6T0RFeldqQVFNUTR3REFZRApWUVFLRXdWMFlXeHZjekFxTUFVR0F5dGxjQU1oQVBhbVhHamhnN0FFUmpQZUFJL3dQK21YWVZsYm95M01TUTErCm1CTGh3NmhLbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRk12QnhpY2tXOXVaZWR0ZgppblRzK3p1U2VLK2FNQVVHQXl0bGNBTkJBSEl5Y2ttT3lGMWEvTVJROXp4a1lRcy81clptRjl0YTVsZktCamVlCmRLV0lVbFNRNkY4c1hjZ1orWlhOcXNjSHNwbzFKdStQUVVwa3VocWREdDBRblFjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0KTUM0Q0FRQXdCUVlESzJWd0JDSUVJT0hOamQ1blVzdVRGRXpsQmtFOVhkZUJ4b1AxMk9mY2R4a0tjQmZlU0xKbgotLS0tLUVORCBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0K
# Extra certificate subject alternative names for the machine's certificate.
certSANs: []
# # Uncomment this to enable SANs.
# - 10.0.0.10
# - 172.16.0.10
# - 192.168.0.10
# Used to provide additional options to the kubelet.
kubelet:
image: ghcr.io/siderolabs/kubelet:v1.33.1 # The `image` field is an optional reference to an alternative kubelet image.
defaultRuntimeSeccompProfileEnabled: true # Enable container runtime default Seccomp profile.
disableManifestsDirectory: true # The `disableManifestsDirectory` field configures the kubelet to get static pod manifests from the /etc/kubernetes/manifests directory.
# # The `ClusterDNS` field is an optional reference to an alternative kubelet clusterDNS ip list.
# clusterDNS:
# - 10.96.0.10
# - 169.254.2.53
# # The `extraArgs` field is used to provide additional flags to the kubelet.
# extraArgs:
# key: value
# # The `extraMounts` field is used to add additional mounts to the kubelet container.
# extraMounts:
# - destination: /var/lib/example # Destination is the absolute path where the mount will be placed in the container.
# type: bind # Type specifies the mount kind.
# source: /var/lib/example # Source specifies the source path of the mount.
# # Options are fstab style mount options.
# options:
# - bind
# - rshared
# - rw
# # The `extraConfig` field is used to provide kubelet configuration overrides.
# extraConfig:
# serverTLSBootstrap: true
# # The `KubeletCredentialProviderConfig` field is used to provide kubelet credential configuration.
# credentialProviderConfig:
# apiVersion: kubelet.config.k8s.io/v1
# kind: CredentialProviderConfig
# providers:
# - apiVersion: credentialprovider.kubelet.k8s.io/v1
# defaultCacheDuration: 12h
# matchImages:
# - '*.dkr.ecr.*.amazonaws.com'
# - '*.dkr.ecr.*.amazonaws.com.cn'
# - '*.dkr.ecr-fips.*.amazonaws.com'
# - '*.dkr.ecr.us-iso-east-1.c2s.ic.gov'
# - '*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov'
# name: ecr-credential-provider
# # The `nodeIP` field is used to configure `--node-ip` flag for the kubelet.
# nodeIP:
# # The `validSubnets` field configures the networks to pick kubelet node IP from.
# validSubnets:
# - 10.0.0.0/8
# - '!10.0.0.3/32'
# - fdc7::/16
# Provides machine specific network configuration options.
network: {}
# # `interfaces` is used to define the network interface configuration.
# interfaces:
# - interface: enp0s1 # The interface name.
# # Assigns static IP addresses to the interface.
# addresses:
# - 192.168.2.0/24
# # A list of routes associated with the interface.
# routes:
# - network: 0.0.0.0/0 # The route's network (destination).
# gateway: 192.168.2.1 # The route's gateway (if empty, creates link scope route).
# metric: 1024 # The optional metric for the route.
# mtu: 1500 # The interface's MTU.
#
# # # Picks a network device using the selector.
# # # select a device with bus prefix 00:*.
# # deviceSelector:
# # busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # # select a device with mac address matching `*:f0:ab` and `virtio` kernel driver.
# # deviceSelector:
# # hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # # select a device with bus prefix 00:*, a device with mac address matching `*:f0:ab` and `virtio` kernel driver.
# # deviceSelector:
# # - busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # - hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # # Bond specific options.
# # bond:
# # # The interfaces that make up the bond.
# # interfaces:
# # - enp2s0
# # - enp2s1
# # # Picks a network device using the selector.
# # deviceSelectors:
# # - busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # - hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # mode: 802.3ad # A bond option.
# # lacpRate: fast # A bond option.
# # # Bridge specific options.
# # bridge:
# # # The interfaces that make up the bridge.
# # interfaces:
# # - enxda4042ca9a51
# # - enxae2a6774c259
# # # Enable STP on this bridge.
# # stp:
# # enabled: true # Whether Spanning Tree Protocol (STP) is enabled.
# # # Configure this device as a bridge port.
# # bridgePort:
# # master: br0 # The name of the bridge master interface
# # # Indicates if DHCP should be used to configure the interface.
# # dhcp: true
# # # DHCP specific options.
# # dhcpOptions:
# # routeMetric: 1024 # The priority of all routes received via DHCP.
# # # Wireguard specific configuration.
# # # wireguard server example
# # wireguard:
# # privateKey: ABCDEF... # Specifies a private key configuration (base64 encoded).
# # listenPort: 51111 # Specifies a device's listening port.
# # # Specifies a list of peer configurations to apply to a device.
# # peers:
# # - publicKey: ABCDEF... # Specifies the public key of this peer.
# # endpoint: 192.168.1.3 # Specifies the endpoint of this peer entry.
# # # AllowedIPs specifies a list of allowed IP addresses in CIDR notation for this peer.
# # allowedIPs:
# # - 192.168.1.0/24
# # # wireguard peer example
# # wireguard:
# # privateKey: ABCDEF... # Specifies a private key configuration (base64 encoded).
# # # Specifies a list of peer configurations to apply to a device.
# # peers:
# # - publicKey: ABCDEF... # Specifies the public key of this peer.
# # endpoint: 192.168.1.2:51822 # Specifies the endpoint of this peer entry.
# # persistentKeepaliveInterval: 10s # Specifies the persistent keepalive interval for this peer.
# # # AllowedIPs specifies a list of allowed IP addresses in CIDR notation for this peer.
# # allowedIPs:
# # - 192.168.1.0/24
# # # Virtual (shared) IP address configuration.
# # # layer2 vip example
# # vip:
# # ip: 172.16.199.55 # Specifies the IP address to be used.
# # Used to statically set the nameservers for the machine.
# nameservers:
# - 8.8.8.8
# - 1.1.1.1
# # Used to statically set arbitrary search domains.
# searchDomains:
# - example.org
# - example.com
# # Allows for extra entries to be added to the `/etc/hosts` file
# extraHostEntries:
# - ip: 192.168.1.100 # The IP of the host.
# # The host alias.
# aliases:
# - example
# - example.domain.tld
# # Configures KubeSpan feature.
# kubespan:
# enabled: true # Enable the KubeSpan feature.
# Used to provide instructions for installations.
install:
disk: /dev/sda # The disk used for installations.
image: ghcr.io/siderolabs/installer:v1.10.3 # Allows for supplying the image used to perform the installation.
wipe: false # Indicates if the installation disk should be wiped at installation time.
# # Look up disk using disk attributes like model, size, serial and others.
# diskSelector:
# size: 4GB # Disk size.
# model: WDC* # Disk model `/sys/block/<dev>/device/model`.
# busPath: /pci0000:00/0000:00:17.0/ata1/host0/target0:0:0/0:0:0:0 # Disk bus path.
# # Allows for supplying extra kernel args via the bootloader.
# extraKernelArgs:
# - talos.platform=metal
# - reboot=k
# Used to configure the machine's container image registry mirrors.
registries: {}
# # Specifies mirror configuration for each registry host namespace.
# mirrors:
# ghcr.io:
# # List of endpoints (URLs) for registry mirrors to use.
# endpoints:
# - https://registry.insecure
# - https://ghcr.io/v2/
# # Specifies TLS & auth configuration for HTTPS image registries.
# config:
# registry.insecure:
# # The TLS configuration for the registry.
# tls:
# insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).
#
# # # Enable mutual TLS authentication with the registry.
# # clientIdentity:
# # crt: LS0tIEVYQU1QTEUgQ0VSVElGSUNBVEUgLS0t
# # key: LS0tIEVYQU1QTEUgS0VZIC0tLQ==
#
# # # The auth configuration for this registry.
# # auth:
# # username: username # Optional registry authentication.
# # password: password # Optional registry authentication.
# Features describe individual Talos features that can be switched on or off.
features:
rbac: true # Enable role-based access control (RBAC).
stableHostname: true # Enable stable default hostname.
apidCheckExtKeyUsage: true # Enable checks for extended key usage of client certificates in apid.
diskQuotaSupport: true # Enable XFS project quota support for EPHEMERAL partition and user disks.
    # KubePrism - local proxy/load balancer on defined port that will distribute requests to all API servers in the cluster.
kubePrism:
enabled: true # Enable KubePrism support - will start local load balancing proxy.
port: 7445 # KubePrism port.
# Configures host DNS caching resolver.
hostDNS:
enabled: true # Enable host DNS caching resolver.
forwardKubeDNSToHost: true # Use the host DNS resolver as upstream for Kubernetes CoreDNS pods.
# # Configure Talos API access from Kubernetes pods.
# kubernetesTalosAPIAccess:
# enabled: true # Enable Talos API access from Kubernetes pods.
# # The list of Talos API roles which can be granted for access from Kubernetes pods.
# allowedRoles:
# - os:reader
# # The list of Kubernetes namespaces Talos API access is available from.
# allowedKubernetesNamespaces:
# - kube-system
# Configures the node labels for the machine.
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers: ""
# # Provides machine specific control plane configuration options.
# # ControlPlane definition example.
# controlPlane:
# # Controller manager machine specific configuration options.
# controllerManager:
# disabled: false # Disable kube-controller-manager on the node.
# # Scheduler machine specific configuration options.
# scheduler:
# disabled: true # Disable kube-scheduler on the node.
# # Used to provide static pod definitions to be run by the kubelet directly bypassing the kube-apiserver.
# # nginx static pod.
# pods:
# - apiVersion: v1
    #     kind: Pod
# metadata:
# name: nginx
# spec:
# containers:
# - image: nginx
# name: nginx
# # Allows the addition of user specified files.
# # MachineFiles usage example.
# files:
# - content: '...' # The contents of the file.
# permissions: 0o666 # The file's permissions in octal.
# path: /tmp/file.txt # The path of the file.
# op: append # The operation to use
# # The `env` field allows for the addition of environment variables.
# # Environment variables definition examples.
# env:
# GRPC_GO_LOG_SEVERITY_LEVEL: info
# GRPC_GO_LOG_VERBOSITY_LEVEL: "99"
# https_proxy: http://SERVER:PORT/
# env:
# GRPC_GO_LOG_SEVERITY_LEVEL: error
# https_proxy: https://USERNAME:PASSWORD@SERVER:PORT/
# env:
# https_proxy: http://DOMAIN\USERNAME:PASSWORD@SERVER:PORT/
# # Used to configure the machine's time settings.
# # Example configuration for cloudflare ntp server.
# time:
# disabled: false # Indicates if the time service is disabled for the machine.
# # description: |
# servers:
# - time.cloudflare.com
# bootTimeout: 2m0s # Specifies the timeout when the node time is considered to be in sync unlocking the boot sequence.
# # Used to configure the machine's sysctls.
# # MachineSysctls usage example.
# sysctls:
# kernel.domainname: talos.dev
# net.ipv4.ip_forward: "0"
# net/ipv6/conf/eth0.100/disable_ipv6: "1"
# # Used to configure the machine's sysfs.
# # MachineSysfs usage example.
# sysfs:
# devices.system.cpu.cpu0.cpufreq.scaling_governor: performance
# # Machine system disk encryption configuration.
# systemDiskEncryption:
# # Ephemeral partition encryption.
# ephemeral:
# provider: luks2 # Encryption provider to use for the encryption.
# # Defines the encryption keys generation and storage method.
# keys:
# - # Deterministically generated key from the node UUID and PartitionLabel.
# nodeID: {}
# slot: 0 # Key slot number for LUKS2 encryption.
#
# # # KMS managed encryption key.
# # kms:
# # endpoint: https://192.168.88.21:4443 # KMS endpoint to Seal/Unseal the key.
#
# # # Cipher kind to use for the encryption. Depends on the encryption provider.
# # cipher: aes-xts-plain64
# # # Defines the encryption sector size.
# # blockSize: 4096
# # # Additional --perf parameters for the LUKS2 encryption.
# # options:
# # - no_read_workqueue
# # - no_write_workqueue
# # Configures the udev system.
# udev:
# # List of udev rules to apply to the udev system
# rules:
# - SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="44", MODE="0660"
# # Configures the logging system.
# logging:
# # Logging destination.
# destinations:
# - endpoint: tcp://1.2.3.4:12345 # Where to send logs. Supported protocols are "tcp" and "udp".
# format: json_lines # Logs format.
# # Configures the kernel.
# kernel:
# # Kernel modules to load.
# modules:
    #     - name: btrfs # Module name.
# # Configures the seccomp profiles for the machine.
# seccompProfiles:
# - name: audit.json # The `name` field is used to provide the file name of the seccomp profile.
# # The `value` field is used to provide the seccomp profile.
# value:
# defaultAction: SCMP_ACT_LOG
# # Override (patch) settings in the default OCI runtime spec for CRI containers.
# # override default open file limit
# baseRuntimeSpecOverrides:
# process:
# rlimits:
# - hard: 1024
# soft: 1024
# type: RLIMIT_NOFILE
# # Configures the node annotations for the machine.
# # node annotations example.
# nodeAnnotations:
# customer.io/rack: r13a25
# # Configures the node taints for the machine. Effect is optional.
# # node taints example.
# nodeTaints:
# exampleTaint: exampleTaintValue:NoSchedule
# Provides cluster specific configuration options.
cluster:
id: 1DOt3ZYTVTzEG_Q2IYnScCjz1rxZYwWRHV9hGXBu1UE= # Globally unique identifier for this cluster (base64 encoded random 32 bytes).
secret: qvOKMH5RJtMOPSLBnWCPV4apReFGTd1czZ+tfz11/jI= # Shared secret of cluster (base64 encoded random 32 bytes).
# Provides control plane specific configuration options.
controlPlane:
endpoint: https://192.168.8.30:6443 # Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname.
clusterName: demo-cluster # Configures the cluster's name.
# Provides cluster specific network configuration options.
network:
dnsDomain: cluster.local # The domain used by Kubernetes DNS.
# The pod subnet CIDR.
podSubnets:
- 10.244.0.0/16
# The service subnet CIDR.
serviceSubnets:
- 10.96.0.0/12
# # The CNI used.
# cni:
# name: custom # Name of CNI to use.
# # URLs containing manifests to apply for the CNI.
# urls:
# - https://docs.projectcalico.org/archive/v3.20/manifests/canal.yaml
token: ed454d.o4jsg75idc817ojs # The [bootstrap token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) used to join the cluster.
secretboxEncryptionSecret: e+8hExoi1Ap4IS6StTsScp72EXKAE2Xi+J7irS7UeG0= # A key used for the [encryption of secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
# The base64 encoded root certificate authority used by Kubernetes.
ca:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpakNDQVMrZ0F3SUJBZ0lRRWU5cFdPWEFzd09PNm9NYXNDaXRtakFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTFNRFl5TXpBeU16Z3hNMW9YRFRNMU1EWXlNVEF5TXpneApNMW93RlRFVE1CRUdBMVVFQ2hNS2EzVmlaWEp1WlhSbGN6QlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VICkEwSUFCQ3p0YTA1T3NWOU1NaVg4WDZEdC9xbkhWelkra2tqZ01rcjdsU1kzaERPbmVWYnBhOTJmSHlkS1QyWEgKcWN1L3FJWHpodTg0ckN0VWJuQUsyckJUekFPallUQmZNQTRHQTFVZER3RUIvd1FFQXdJQ2hEQWRCZ05WSFNVRQpGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFCkZnUVVtWEhwMmM5bGRtdFg0Y2RibDlpM0Rwd05GYzB3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQVBwVXVoNmIKYUMwaXdzNTh5WWVlYXVMU1JhbnEveVNUcGo2T0N4UGkvTXJpQWlFQW1DUVdRQ290NkM5b0c5TUlaeDFmMmMxcApBUFRFTHFNQm1vZ1NLSis5dXZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpMVWF4Z2RXR0Flb1ZNRW1CYkZHUjBjbTJMK1ZxNXFsVVZMaE1USHF1ZnVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTE8xclRrNnhYMHd5SmZ4Zm9PMytxY2RYTmo2U1NPQXlTdnVWSmplRU02ZDVWdWxyM1o4ZgpKMHBQWmNlcHk3K29oZk9HN3ppc0sxUnVjQXJhc0ZQTUF3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
# The base64 encoded aggregator certificate authority used by Kubernetes for front-proxy certificate generation.
aggregatorCA:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJZRENDQVFhZ0F3SUJBZ0lSQVB2Y2ZReS9pbWkzQUtZdm1GNnExcmd3Q2dZSUtvWkl6ajBFQXdJd0FEQWUKRncweU5UQTJNak13TWpNNE1UTmFGdzB6TlRBMk1qRXdNak00TVROYU1BQXdXVEFUQmdjcWhrak9QUUlCQmdncQpoa2pPUFFNQkJ3TkNBQVI5NjFKWXl4N2ZxSXJHaURhMTUvVFVTc2xoR2xjSWhzandvcGFpTDg0dzNiQVBaOVdQCjliRThKUnJOTUIvVGkxSUJwbm1IbitXZ3pjeFBnbmllYzZnWG8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXcKSFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4dwpIUVlEVlIwT0JCWUVGQ29XYVB4engxL01IanlqcVR1WkhXY2hOeXoxTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDCklHNWdQRHhmYVhNVlMwTEJ5bDNLOENLZVRGNHlBQnV0Zk0vT0hKRGR6ZHNsQWlFQW1pVU9tOU5ma2pQY2ducEcKZzdqd0NQbzczNW5zNXV4d2RRdEZpbjdnMEhvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUxwZzZoSlBhR3A0ZmRPdkQwVGUwZklPSWJvWUdHdUM4OXBHbThWU3NYWE1vQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFZmV0U1dNc2UzNmlLeG9nMnRlZjAxRXJKWVJwWENJYkk4S0tXb2kvT01OMndEMmZWai9XeApQQ1VhelRBZjA0dFNBYVo1aDUvbG9NM01UNEo0bm5Pb0Z3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
# The base64 encoded private key for service account token generation.
serviceAccount:
key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0NBZ0VBK21FdWh4OWxrbXJXeUdaeWIvcXZCU1hUcys0R2lPRDZ2TlJhRSsrR044WnZxWkI1CktTWFM0Q3pzZXE4V0dKOXV2Yzg0WmdaWk9wY3ZZbXM1T1BKdzE2MjJFSUlaV1FKeXdzZ0F2NWFsTWxUZ1BxLzEKejRtSjlURW5lajZVMUE4cXU1U3FYa3F5dzNZNFdsVUU1TnlhR0d3RE9yMlduTjFTMWI0YXh0V2ZLa1hxUjlFUApvOWVrK1g3UHQwdVhQV0RQQlNiSUE1V3BQRkdqN3dic1lMMTcwcW1GemwvRElUY1Q2S3ROb0lxYTVWVGpqRmRkCkRDY2VKQ3ZldFMvT1F2WG1pcXhtTnBPbW02eFhqcGxRMmNYVUg5b3NsSHREUG4vMUszNlBVWlpaZFJHU2lvQm0KM0RHZHlOU2huN0pCN0J6dmZzaGFqU3pxeExxUmNhaHhMcVdZV2hPUEJiTmVXZ0lTMm5CcDJRYXZhMUR2YkpneQpadGVVaW1EK0VUZlB1QXFZOW9OS0Z0eFVTaS9pTUpYMTM0ZFUvQVZPRXZqaXhPNnlQQjZHVUxpb0ppOHJRVG9TCmVDNStRWXFSU2RSMzhhWFo3R2VSaUlvR3BqMndRY2Y2emVoRHJTUUdabU5BZlpaenV3T1JGei9pRTJrWTBXRGwKV1p2RFlTSFNXbk5UdmZyQk0xN1pEYzNGdTRaSGNaWUpKVGdCMDJGS2kzcS9uRWdudy9zTEhHUEl3SVIvaDlidgpzcVRVMDJYaHRKQlgwYUE2RlFqaG1NTGNvLzF0ci84Y3BPVVcvdVhPM2Y5czZHMW1OY21qeDNVamJqU09xSlRnCmFYVTlGeWZJR2lYei9JcDg0Q2Jsb0wvRXJxQmVXVEQvV2twMWF1QThQcXp6emFmU3NrSzNnd2Rla2NjQ0F3RUEKQVFLQ0FnQWVQUEt4dSs5NE90eWNwWTY5aWx0dElPVTJqanU0elRSSzdpUnJmWnAxT0VzUHJKU05ZbldHRmJiVwpRWTN4YnY1MkFFV0hRTTRHNlpwVE93Vi8rZnVrZUk5aE9BSnRWd0tGYSswaDRPK0ExWEpObW56eWZHR3VaeW9sCmluMmo0ZjhKcjNSOGl4aVlJRG9YRFdjdFovSlk3N2FHSWhQRFRHYkVJZW81bllsVVFYbXFyd3RzcTA3NmJoVVMKUmNLZ0FEQ1FVVFRkQmZhWWc4MldGbEoyMlNZbFNpR1FTODFxUUt6RldwR01uc3RYMWZtSWlmRXNBTG9VVStpdQpIaUM5YlNyVFpaVzU1L2xrNzBWQWJMQ3dmdTFWanNqMzE2NXhLUTFJSEVmeWFsalJ4Q0VHc1dkNndWS1ZIZytLClAxZC9JZndra00yQUk1bG96K3ExSjNWMUtqenNxdGVyY0JKTWhuTVdEYUp5NzhZaGZSZnY0TlNieC9ObjEveW0KanpvWXVjd3pRVEhsd0dLZUhoNG12OWZxM3U5cVJoTlEzNmNjOHowcndmT01BOFpBMVJOOFhkOG82dkxkNitHSQpSbDV6eHpoZ283MXB5V0dNNlZ5L3FqK1F0aWVZVzUrMHdUNVFqbW5WL256bDZLSWZzZGU5Q0xzcG02RnhUWVJlCjE5YzAwemlOWE56V3dPMG4yeTZkaWpKamErZ0lmT0pzVFlFb2dJQ0MxczB0N0orRGU0cHV4anVyalRjMTdZYkcKK1BpejMySmFCVDByYUUxdWlQZ1lhL3Bta1plRjBZTFgzemc4NGhSNHF3WmZsaHdNNTIxKzBJRWRRb29jd2Yycgoyb25xTWlVd2NhaVZzWEVBdjJzRDlwRkd3UEg4MUplY2JBcWNmZkJTcjVPY3VvSmsyUUtDQVFFQS93Nm1EbnFUClliK3dvOEl1SUpUanFjdzYwY1RQZzRnS0tsVzJFRWNqMW5
qQitaZ2xzblZheHdMZ0QyaXd1d29BZWg1OTUzWkgKbjFoVk5Eb2VCcGJXcXBRY2VuZjlWVzNUMXpNVjVhZTNYenR1MzkrTExZUlVHakV3NitlMWNrendUSlRBYndnZAp5TnM5TjNDNno0bkhmd0NqRHc2VDhWTVFpRVB6akEveEp3L1RTQzBwRHQ5cTFQZ0hMMHBQMllkdkxvYlpEajJLCkRFb1ErcVE3Tm1XeXlLWGQxWUhZK3VaTDZ1SVlYUDNLSjVWQ0N6ZjlHVHZRUi9XL29DdTgzZzdzdWM3YndCajMKYnN5aElWQUxDTXRXSFhCVDdmNXJJVlhuZHEwdGl5cGQ2NTJDTjBya20xRHZ2L0tsTjZJV01jRkFudjRPV1M0aAphdEt0a3d6SVZCdmdQd0tDQVFFQSswNGJXaDVBVmRreXRzUy9peUlOWDQ1MXZsT0YvQVNIZW5EVjVQUDYxSWpXCll3eXFDNTlSdG0rbEI5MDNvZjVSM0RqQTZXelFvNTdUOFBwSmY2di8wN2RHSzQ2alM5ODRoNEswSzZseWllUHAKUlVlbFpEVDNIbi9WK2hhTUFscnlCUFNyRlFyRkVqdmNOMWN3SmMwTEtDSVBpNGVNeGYwMEdiTHErQ0Fic0szQQpCT3N1cDVxWlNMQWcrRGpIVDdGYnpyOTBMSlN1QnFNNXp0cnJHa1NlbmxQNEtRbGFRMTdEeWlJT2tVZUMvekhFCmg2K1NJMXNla3JHeTNEK3NrQW9HZTlOMVQyL3RPM2lsYVhtdTRIdVkwa3NCckNtZ3EzVTZROXZ0aW8yRmluL1QKQkQ2Y3Z2aUkxN1RJa3lIZkZWZktvRklyOWhIT0RYdEFad2lSQnFsc2VRS0NBUUFyOFQ4a3dYT0E1TUN2QmZaaQpnS1JVamE0WWs5cllvMmgwOEwxa1FvMW5Gdmo4WW4wa0tOblI3YW5pbmJ2TkRhVVZaUWwyQmtmQ3FUcE12REtPCkdoQ3o1TDZmVHVyamUvK0NWUGZSMERwa2V0M1lUakF4VUZvWkJSNlRsaUVKcHozRFErRi9mNXQ2RG1PV21LSm0KdlNzVXMyeGtYTE9hWVNBNUNkUDg3b1l5bjZSY0RBUEYzeklOclFtMzJRcTJ4SUdnTjNWUDRjUlY1N0RUTGRaUgp3ZVd5Y2ZrdEhxamVXU3o5TTZUVTZKaWFoem1RcXoyOHlqUlJJWUs1T3EvWVppUGN3MG5TNTdwQmFabmRIbWc0ClJLZjZmRzdKVXdyci9GdmJjMnlrVEZGUUZadm9vTXVRQXJxN2pEZHd4VWtqbTFMaDBZMXhTZVJSL2lnUGJLVmEKOEU2TEFvSUJBQ1Yxc2h3UDBGVTdxQlNZWlZqdS9ZRlY4ZlVwN0JueDd1UHdkK0hHQUlpMzBRVTR1UXc4ZG1pMApZYXczYkhpSU9WbVRXQ1l6WXpKUWxaVWhLZDJQSFBaSkpudU5xb2Uvd1dScHRrT2Y0WVB1WmpJK2lNZlVJVlg1CmhrTGVJNGFpV2RzbFFXOUVpTFc4R0lwalE3a094Ry82QzhrbnJuTkEyQWhRcERmU1NXNWZwL1RUdmNPY0J1ZFAKNGNvK1pHOWJwNnk4Mnl0ZUNrYlJBK2Z5dUFMVlliT0dIc0szTXk1QnJQdXZjZTV6ODNIbzBEdk5qd0lZTGdsOQoxWVNCTlU3UFA4SXJkaHdlT2dXWWFVZThyTFdubHRNWi9TalZsNjZYTGRVNXJrSHQ4SThCbU1uVUwzZEVBdG5zCmg4MXV5aHNiV0FmbjE4ZTVSYmE2dlpIZU5BZ0RMemtDZ2dFQkFKRXRJemdmdE85UE5RSDlQOHBBTnFaclpqWkMKZGJaaThCUkNTUy83b2tHeWtkQzZsdFJVd0c2ajIxNXZZcjlmVW94ODU5ZkI0RjhEK0FXWHNKSm44YnlQZFBuWQpwU1ZRMEV3a045aTE4WnowTitEbWVZRC9CUmdyVHA1OVU1Mmo5Q1Z
yQWNQRlphVk00b0xwaEVSeDZvSURPOEcvCk9wUEZkVnJvMFhyN1lpbENiYVJLVWVWbjZWKzFIZ25zelhiUE9sakhrcGdXSXdKb1RkNkVWVDlvbXNVeFlVejcKRUR5L2RXNmVxVFBMUHR5Q2hKZlo5WDB6M09uUWpLbzcwdHhQa1VRTmw0azhqMU9mMFhMaklacmd6MmVub0FRZgpQYXhSc1lCckhNVnI5eStXaDhZdFdESGx1NUU4NlNaMXNIaHphOHhZZWpoQXRndzdqa0FyNWcxL2dOYz0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
# API server specific configuration options.
apiServer:
image: registry.k8s.io/kube-apiserver:v1.33.1 # The container image used in the API server manifest.
# Extra certificate subject alternative names for the API server's certificate.
certSANs:
- 192.168.8.30
disablePodSecurityPolicy: true # Disable PodSecurityPolicy in the API server and default manifests.
# Configure the API server admission plugins.
admissionControl:
- name: PodSecurity # Name is the name of the admission controller.
            # Configuration is an embedded configuration object to be used as the plugin's configuration.
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1alpha1
defaults:
audit: restricted
audit-version: latest
enforce: baseline
enforce-version: latest
warn: restricted
warn-version: latest
exemptions:
namespaces:
- kube-system
runtimeClasses: []
usernames: []
kind: PodSecurityConfiguration
# Configure the API server audit policy.
auditPolicy:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
# # Configure the API server authorization config. Node and RBAC authorizers are always added irrespective of the configuration.
# authorizationConfig:
# - type: Webhook # Type is the name of the authorizer. Allowed values are `Node`, `RBAC`, and `Webhook`.
# name: webhook # Name is used to describe the authorizer.
# # webhook is the configuration for the webhook authorizer.
# webhook:
# connectionInfo:
# type: InClusterConfig
# failurePolicy: Deny
# matchConditionSubjectAccessReviewVersion: v1
# matchConditions:
# - expression: has(request.resourceAttributes)
    #             - expression: '!(''system:serviceaccounts:kube-system'' in request.groups)'
# subjectAccessReviewVersion: v1
# timeout: 3s
# - type: Webhook # Type is the name of the authorizer. Allowed values are `Node`, `RBAC`, and `Webhook`.
# name: in-cluster-authorizer # Name is used to describe the authorizer.
# # webhook is the configuration for the webhook authorizer.
# webhook:
# connectionInfo:
# type: InClusterConfig
# failurePolicy: NoOpinion
# matchConditionSubjectAccessReviewVersion: v1
# subjectAccessReviewVersion: v1
# timeout: 3s
# Controller manager server specific configuration options.
controllerManager:
image: registry.k8s.io/kube-controller-manager:v1.33.1 # The container image used in the controller manager manifest.
# Kube-proxy server-specific configuration options
proxy:
image: registry.k8s.io/kube-proxy:v1.33.1 # The container image used in the kube-proxy manifest.
# # Disable kube-proxy deployment on cluster bootstrap.
# disabled: false
# Scheduler server specific configuration options.
scheduler:
image: registry.k8s.io/kube-scheduler:v1.33.1 # The container image used in the scheduler manifest.
# Configures cluster member discovery.
discovery:
enabled: true # Enable the cluster membership discovery feature.
# Configure registries used for cluster member discovery.
registries:
# Kubernetes registry uses Kubernetes API server to discover cluster members and stores additional information
kubernetes:
disabled: true # Disable Kubernetes discovery registry.
# Service registry is using an external service to push and pull information about cluster members.
service: {}
# # External service endpoint.
# endpoint: https://discovery.talos.dev/
# Etcd specific configuration options.
etcd:
# The `ca` is the root certificate authority of the PKI.
ca:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJmakNDQVNTZ0F3SUJBZ0lSQU5pSkxOUTFZZU5ZL1c0V1pnTVR2UFF3Q2dZSUtvWkl6ajBFQXdJd0R6RU4KTUFzR0ExVUVDaE1FWlhSalpEQWVGdzB5TlRBMk1qTXdNak00TVROYUZ3MHpOVEEyTWpFd01qTTRNVE5hTUE4eApEVEFMQmdOVkJBb1RCR1YwWTJRd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DQUFUUWg0T0N3M2VVClpUajhadllHdnh6Mkd2UFdGN0NMOWFwVElxRTdzZkh5YzJ6UW1Ic1NpcGFabW1zR0kyLzZPaVJWV280V2JLeDUKSnMwRW12bkVUYmFSbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSApBd0VHQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkdGVlFqQzUxYXFtCjhyT2l6UVJXTDNkc2RvNndNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJQ2hvcm9JaVJ4b0VDZEN3dE40UFV4MGoKRUwwM1A3UGJTMDFhQWhNVHJPYkpBaUVBL09mb2RweVd0VlNIK1ZBRVBOcjZaanFOdnFEQTNWQmkraXNQY1YybwpiQUk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU8walRMSmU1TXNzUDhqK3hxbi9Dd0FxSXk1RHo2V1U4MXg2OW5sVFI4S1NvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFMEllRGdzTjNsR1U0L0diMkJyOGM5aHJ6MWhld2kvV3FVeUtoTzdIeDhuTnMwSmg3RW9xVwptWnByQmlOditqb2tWVnFPRm15c2VTYk5CSnI1eEUyMmtRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
# # The container image used to create the etcd service.
# image: gcr.io/etcd-development/etcd:v3.5.21
# # The `advertisedSubnets` field configures the networks to pick etcd advertised IP from.
# advertisedSubnets:
# - 10.0.0.0/8
# A list of urls that point to additional manifests.
extraManifests: []
# - https://www.example.com/manifest1.yaml
# - https://www.example.com/manifest2.yaml
# A list of inline Kubernetes manifests.
inlineManifests: []
# - name: namespace-ci # Name of the manifest.
# contents: |- # Manifest contents as a string.
# apiVersion: v1
# kind: Namespace
# metadata:
# name: ci
# # A key used for the [encryption of secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
# # Decryption secret example (do not use in production!).
# aescbcEncryptionSecret: z01mye6j16bspJYtTB/5SFX8j7Ph4JXxM2Xuu4vsBPM=
# # Core DNS specific configuration options.
# coreDNS:
# image: registry.k8s.io/coredns/coredns:v1.12.1 # The `image` field is an override to the default coredns image.
# # External cloud provider configuration.
# externalCloudProvider:
# enabled: true # Enable external cloud provider.
# # A list of urls that point to additional manifests for an external cloud provider.
# manifests:
# - https://raw.githubusercontent.com/kubernetes/cloud-provider-aws/v1.20.0-alpha.0/manifests/rbac.yaml
# - https://raw.githubusercontent.com/kubernetes/cloud-provider-aws/v1.20.0-alpha.0/manifests/aws-cloud-controller-manager-daemonset.yaml
# # A map of key value pairs that will be added while fetching the extraManifests.
# extraManifestHeaders:
# Token: "1234567"
# X-ExtraInfo: info
# # Settings for admin kubeconfig generation.
# adminKubeconfig:
# certLifetime: 1h0m0s # Admin kubeconfig certificate lifetime (default is 1 year).
# # Allows running workload on control-plane nodes.
# allowSchedulingOnControlPlanes: true


@@ -0,0 +1,23 @@
cluster:
id: 1DOt3ZYTVTzEG_Q2IYnScCjz1rxZYwWRHV9hGXBu1UE=
secret: qvOKMH5RJtMOPSLBnWCPV4apReFGTd1czZ+tfz11/jI=
secrets:
bootstraptoken: ed454d.o4jsg75idc817ojs
secretboxencryptionsecret: e+8hExoi1Ap4IS6StTsScp72EXKAE2Xi+J7irS7UeG0=
trustdinfo:
token: t1yf7w.zwevymjw6v0v1q76
certs:
etcd:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJmakNDQVNTZ0F3SUJBZ0lSQU5pSkxOUTFZZU5ZL1c0V1pnTVR2UFF3Q2dZSUtvWkl6ajBFQXdJd0R6RU4KTUFzR0ExVUVDaE1FWlhSalpEQWVGdzB5TlRBMk1qTXdNak00TVROYUZ3MHpOVEEyTWpFd01qTTRNVE5hTUE4eApEVEFMQmdOVkJBb1RCR1YwWTJRd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DQUFUUWg0T0N3M2VVClpUajhadllHdnh6Mkd2UFdGN0NMOWFwVElxRTdzZkh5YzJ6UW1Ic1NpcGFabW1zR0kyLzZPaVJWV280V2JLeDUKSnMwRW12bkVUYmFSbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSApBd0VHQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkdGVlFqQzUxYXFtCjhyT2l6UVJXTDNkc2RvNndNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJQ2hvcm9JaVJ4b0VDZEN3dE40UFV4MGoKRUwwM1A3UGJTMDFhQWhNVHJPYkpBaUVBL09mb2RweVd0VlNIK1ZBRVBOcjZaanFOdnFEQTNWQmkraXNQY1YybwpiQUk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU8walRMSmU1TXNzUDhqK3hxbi9Dd0FxSXk1RHo2V1U4MXg2OW5sVFI4S1NvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFMEllRGdzTjNsR1U0L0diMkJyOGM5aHJ6MWhld2kvV3FVeUtoTzdIeDhuTnMwSmg3RW9xVwptWnByQmlOditqb2tWVnFPRm15c2VTYk5CSnI1eEUyMmtRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
k8s:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpakNDQVMrZ0F3SUJBZ0lRRWU5cFdPWEFzd09PNm9NYXNDaXRtakFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTFNRFl5TXpBeU16Z3hNMW9YRFRNMU1EWXlNVEF5TXpneApNMW93RlRFVE1CRUdBMVVFQ2hNS2EzVmlaWEp1WlhSbGN6QlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VICkEwSUFCQ3p0YTA1T3NWOU1NaVg4WDZEdC9xbkhWelkra2tqZ01rcjdsU1kzaERPbmVWYnBhOTJmSHlkS1QyWEgKcWN1L3FJWHpodTg0ckN0VWJuQUsyckJUekFPallUQmZNQTRHQTFVZER3RUIvd1FFQXdJQ2hEQWRCZ05WSFNVRQpGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFCkZnUVVtWEhwMmM5bGRtdFg0Y2RibDlpM0Rwd05GYzB3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQVBwVXVoNmIKYUMwaXdzNTh5WWVlYXVMU1JhbnEveVNUcGo2T0N4UGkvTXJpQWlFQW1DUVdRQ290NkM5b0c5TUlaeDFmMmMxcApBUFRFTHFNQm1vZ1NLSis5dXZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpMVWF4Z2RXR0Flb1ZNRW1CYkZHUjBjbTJMK1ZxNXFsVVZMaE1USHF1ZnVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTE8xclRrNnhYMHd5SmZ4Zm9PMytxY2RYTmo2U1NPQXlTdnVWSmplRU02ZDVWdWxyM1o4ZgpKMHBQWmNlcHk3K29oZk9HN3ppc0sxUnVjQXJhc0ZQTUF3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
k8saggregator:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJZRENDQVFhZ0F3SUJBZ0lSQVB2Y2ZReS9pbWkzQUtZdm1GNnExcmd3Q2dZSUtvWkl6ajBFQXdJd0FEQWUKRncweU5UQTJNak13TWpNNE1UTmFGdzB6TlRBMk1qRXdNak00TVROYU1BQXdXVEFUQmdjcWhrak9QUUlCQmdncQpoa2pPUFFNQkJ3TkNBQVI5NjFKWXl4N2ZxSXJHaURhMTUvVFVTc2xoR2xjSWhzandvcGFpTDg0dzNiQVBaOVdQCjliRThKUnJOTUIvVGkxSUJwbm1IbitXZ3pjeFBnbmllYzZnWG8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXcKSFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4dwpIUVlEVlIwT0JCWUVGQ29XYVB4engxL01IanlqcVR1WkhXY2hOeXoxTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDCklHNWdQRHhmYVhNVlMwTEJ5bDNLOENLZVRGNHlBQnV0Zk0vT0hKRGR6ZHNsQWlFQW1pVU9tOU5ma2pQY2ducEcKZzdqd0NQbzczNW5zNXV4d2RRdEZpbjdnMEhvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUxwZzZoSlBhR3A0ZmRPdkQwVGUwZklPSWJvWUdHdUM4OXBHbThWU3NYWE1vQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFZmV0U1dNc2UzNmlLeG9nMnRlZjAxRXJKWVJwWENJYkk4S0tXb2kvT01OMndEMmZWai9XeApQQ1VhelRBZjA0dFNBYVo1aDUvbG9NM01UNEo0bm5Pb0Z3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
k8sserviceaccount:
key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0NBZ0VBK21FdWh4OWxrbXJXeUdaeWIvcXZCU1hUcys0R2lPRDZ2TlJhRSsrR044WnZxWkI1CktTWFM0Q3pzZXE4V0dKOXV2Yzg0WmdaWk9wY3ZZbXM1T1BKdzE2MjJFSUlaV1FKeXdzZ0F2NWFsTWxUZ1BxLzEKejRtSjlURW5lajZVMUE4cXU1U3FYa3F5dzNZNFdsVUU1TnlhR0d3RE9yMlduTjFTMWI0YXh0V2ZLa1hxUjlFUApvOWVrK1g3UHQwdVhQV0RQQlNiSUE1V3BQRkdqN3dic1lMMTcwcW1GemwvRElUY1Q2S3ROb0lxYTVWVGpqRmRkCkRDY2VKQ3ZldFMvT1F2WG1pcXhtTnBPbW02eFhqcGxRMmNYVUg5b3NsSHREUG4vMUszNlBVWlpaZFJHU2lvQm0KM0RHZHlOU2huN0pCN0J6dmZzaGFqU3pxeExxUmNhaHhMcVdZV2hPUEJiTmVXZ0lTMm5CcDJRYXZhMUR2YkpneQpadGVVaW1EK0VUZlB1QXFZOW9OS0Z0eFVTaS9pTUpYMTM0ZFUvQVZPRXZqaXhPNnlQQjZHVUxpb0ppOHJRVG9TCmVDNStRWXFSU2RSMzhhWFo3R2VSaUlvR3BqMndRY2Y2emVoRHJTUUdabU5BZlpaenV3T1JGei9pRTJrWTBXRGwKV1p2RFlTSFNXbk5UdmZyQk0xN1pEYzNGdTRaSGNaWUpKVGdCMDJGS2kzcS9uRWdudy9zTEhHUEl3SVIvaDlidgpzcVRVMDJYaHRKQlgwYUE2RlFqaG1NTGNvLzF0ci84Y3BPVVcvdVhPM2Y5czZHMW1OY21qeDNVamJqU09xSlRnCmFYVTlGeWZJR2lYei9JcDg0Q2Jsb0wvRXJxQmVXVEQvV2twMWF1QThQcXp6emFmU3NrSzNnd2Rla2NjQ0F3RUEKQVFLQ0FnQWVQUEt4dSs5NE90eWNwWTY5aWx0dElPVTJqanU0elRSSzdpUnJmWnAxT0VzUHJKU05ZbldHRmJiVwpRWTN4YnY1MkFFV0hRTTRHNlpwVE93Vi8rZnVrZUk5aE9BSnRWd0tGYSswaDRPK0ExWEpObW56eWZHR3VaeW9sCmluMmo0ZjhKcjNSOGl4aVlJRG9YRFdjdFovSlk3N2FHSWhQRFRHYkVJZW81bllsVVFYbXFyd3RzcTA3NmJoVVMKUmNLZ0FEQ1FVVFRkQmZhWWc4MldGbEoyMlNZbFNpR1FTODFxUUt6RldwR01uc3RYMWZtSWlmRXNBTG9VVStpdQpIaUM5YlNyVFpaVzU1L2xrNzBWQWJMQ3dmdTFWanNqMzE2NXhLUTFJSEVmeWFsalJ4Q0VHc1dkNndWS1ZIZytLClAxZC9JZndra00yQUk1bG96K3ExSjNWMUtqenNxdGVyY0JKTWhuTVdEYUp5NzhZaGZSZnY0TlNieC9ObjEveW0KanpvWXVjd3pRVEhsd0dLZUhoNG12OWZxM3U5cVJoTlEzNmNjOHowcndmT01BOFpBMVJOOFhkOG82dkxkNitHSQpSbDV6eHpoZ283MXB5V0dNNlZ5L3FqK1F0aWVZVzUrMHdUNVFqbW5WL256bDZLSWZzZGU5Q0xzcG02RnhUWVJlCjE5YzAwemlOWE56V3dPMG4yeTZkaWpKamErZ0lmT0pzVFlFb2dJQ0MxczB0N0orRGU0cHV4anVyalRjMTdZYkcKK1BpejMySmFCVDByYUUxdWlQZ1lhL3Bta1plRjBZTFgzemc4NGhSNHF3WmZsaHdNNTIxKzBJRWRRb29jd2Yycgoyb25xTWlVd2NhaVZzWEVBdjJzRDlwRkd3UEg4MUplY2JBcWNmZkJTcjVPY3VvSmsyUUtDQVFFQS93Nm1EbnFUClliK3dvOEl1SUpUanFjdzYwY1RQZzRnS0tsVzJFRWNqMW5
qQitaZ2xzblZheHdMZ0QyaXd1d29BZWg1OTUzWkgKbjFoVk5Eb2VCcGJXcXBRY2VuZjlWVzNUMXpNVjVhZTNYenR1MzkrTExZUlVHakV3NitlMWNrendUSlRBYndnZAp5TnM5TjNDNno0bkhmd0NqRHc2VDhWTVFpRVB6akEveEp3L1RTQzBwRHQ5cTFQZ0hMMHBQMllkdkxvYlpEajJLCkRFb1ErcVE3Tm1XeXlLWGQxWUhZK3VaTDZ1SVlYUDNLSjVWQ0N6ZjlHVHZRUi9XL29DdTgzZzdzdWM3YndCajMKYnN5aElWQUxDTXRXSFhCVDdmNXJJVlhuZHEwdGl5cGQ2NTJDTjBya20xRHZ2L0tsTjZJV01jRkFudjRPV1M0aAphdEt0a3d6SVZCdmdQd0tDQVFFQSswNGJXaDVBVmRreXRzUy9peUlOWDQ1MXZsT0YvQVNIZW5EVjVQUDYxSWpXCll3eXFDNTlSdG0rbEI5MDNvZjVSM0RqQTZXelFvNTdUOFBwSmY2di8wN2RHSzQ2alM5ODRoNEswSzZseWllUHAKUlVlbFpEVDNIbi9WK2hhTUFscnlCUFNyRlFyRkVqdmNOMWN3SmMwTEtDSVBpNGVNeGYwMEdiTHErQ0Fic0szQQpCT3N1cDVxWlNMQWcrRGpIVDdGYnpyOTBMSlN1QnFNNXp0cnJHa1NlbmxQNEtRbGFRMTdEeWlJT2tVZUMvekhFCmg2K1NJMXNla3JHeTNEK3NrQW9HZTlOMVQyL3RPM2lsYVhtdTRIdVkwa3NCckNtZ3EzVTZROXZ0aW8yRmluL1QKQkQ2Y3Z2aUkxN1RJa3lIZkZWZktvRklyOWhIT0RYdEFad2lSQnFsc2VRS0NBUUFyOFQ4a3dYT0E1TUN2QmZaaQpnS1JVamE0WWs5cllvMmgwOEwxa1FvMW5Gdmo4WW4wa0tOblI3YW5pbmJ2TkRhVVZaUWwyQmtmQ3FUcE12REtPCkdoQ3o1TDZmVHVyamUvK0NWUGZSMERwa2V0M1lUakF4VUZvWkJSNlRsaUVKcHozRFErRi9mNXQ2RG1PV21LSm0KdlNzVXMyeGtYTE9hWVNBNUNkUDg3b1l5bjZSY0RBUEYzeklOclFtMzJRcTJ4SUdnTjNWUDRjUlY1N0RUTGRaUgp3ZVd5Y2ZrdEhxamVXU3o5TTZUVTZKaWFoem1RcXoyOHlqUlJJWUs1T3EvWVppUGN3MG5TNTdwQmFabmRIbWc0ClJLZjZmRzdKVXdyci9GdmJjMnlrVEZGUUZadm9vTXVRQXJxN2pEZHd4VWtqbTFMaDBZMXhTZVJSL2lnUGJLVmEKOEU2TEFvSUJBQ1Yxc2h3UDBGVTdxQlNZWlZqdS9ZRlY4ZlVwN0JueDd1UHdkK0hHQUlpMzBRVTR1UXc4ZG1pMApZYXczYkhpSU9WbVRXQ1l6WXpKUWxaVWhLZDJQSFBaSkpudU5xb2Uvd1dScHRrT2Y0WVB1WmpJK2lNZlVJVlg1CmhrTGVJNGFpV2RzbFFXOUVpTFc4R0lwalE3a094Ry82QzhrbnJuTkEyQWhRcERmU1NXNWZwL1RUdmNPY0J1ZFAKNGNvK1pHOWJwNnk4Mnl0ZUNrYlJBK2Z5dUFMVlliT0dIc0szTXk1QnJQdXZjZTV6ODNIbzBEdk5qd0lZTGdsOQoxWVNCTlU3UFA4SXJkaHdlT2dXWWFVZThyTFdubHRNWi9TalZsNjZYTGRVNXJrSHQ4SThCbU1uVUwzZEVBdG5zCmg4MXV5aHNiV0FmbjE4ZTVSYmE2dlpIZU5BZ0RMemtDZ2dFQkFKRXRJemdmdE85UE5RSDlQOHBBTnFaclpqWkMKZGJaaThCUkNTUy83b2tHeWtkQzZsdFJVd0c2ajIxNXZZcjlmVW94ODU5ZkI0RjhEK0FXWHNKSm44YnlQZFBuWQpwU1ZRMEV3a045aTE4WnowTitEbWVZRC9CUmdyVHA1OVU1Mmo5Q1Z
yQWNQRlphVk00b0xwaEVSeDZvSURPOEcvCk9wUEZkVnJvMFhyN1lpbENiYVJLVWVWbjZWKzFIZ25zelhiUE9sakhrcGdXSXdKb1RkNkVWVDlvbXNVeFlVejcKRUR5L2RXNmVxVFBMUHR5Q2hKZlo5WDB6M09uUWpLbzcwdHhQa1VRTmw0azhqMU9mMFhMaklacmd6MmVub0FRZgpQYXhSc1lCckhNVnI5eStXaDhZdFdESGx1NUU4NlNaMXNIaHphOHhZZWpoQXRndzdqa0FyNWcxL2dOYz0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
os:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJQekNCOHFBREFnRUNBaEVBa2JEQ2VJR09iTlBZZGQxRTBNSUozVEFGQmdNclpYQXdFREVPTUF3R0ExVUUKQ2hNRmRHRnNiM013SGhjTk1qVXdOakl6TURJek9ERXpXaGNOTXpVd05qSXhNREl6T0RFeldqQVFNUTR3REFZRApWUVFLRXdWMFlXeHZjekFxTUFVR0F5dGxjQU1oQVBhbVhHamhnN0FFUmpQZUFJL3dQK21YWVZsYm95M01TUTErCm1CTGh3NmhLbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRk12QnhpY2tXOXVaZWR0ZgppblRzK3p1U2VLK2FNQVVHQXl0bGNBTkJBSEl5Y2ttT3lGMWEvTVJROXp4a1lRcy81clptRjl0YTVsZktCamVlCmRLV0lVbFNRNkY4c1hjZ1orWlhOcXNjSHNwbzFKdStQUVVwa3VocWREdDBRblFjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: LS0tLS1CRUdJTiBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0KTUM0Q0FRQXdCUVlESzJWd0JDSUVJT0hOamQ1blVzdVRGRXpsQmtFOVhkZUJ4b1AxMk9mY2R4a0tjQmZlU0xKbgotLS0tLUVORCBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0K
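The `bootstraptoken` and `trustdinfo.token` values in this secrets bundle follow the Kubernetes bootstrap-token shape: a 6-character ID, a dot, and a 16-character secret, all lowercase alphanumerics (the worker config's `machine.token` reuses the `trustdinfo` value). A small sketch of that format check:

```python
import re

# Bootstrap-token shape: 6-char ID, ".", 16-char secret (lowercase alphanumerics).
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

for token in ("ed454d.o4jsg75idc817ojs", "t1yf7w.zwevymjw6v0v1q76"):
    assert TOKEN_RE.fullmatch(token), f"unexpected token format: {token}"
```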


@@ -0,0 +1,7 @@
context: demo-cluster
contexts:
demo-cluster:
endpoints: []
ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJQekNCOHFBREFnRUNBaEVBa2JEQ2VJR09iTlBZZGQxRTBNSUozVEFGQmdNclpYQXdFREVPTUF3R0ExVUUKQ2hNRmRHRnNiM013SGhjTk1qVXdOakl6TURJek9ERXpXaGNOTXpVd05qSXhNREl6T0RFeldqQVFNUTR3REFZRApWUVFLRXdWMFlXeHZjekFxTUFVR0F5dGxjQU1oQVBhbVhHamhnN0FFUmpQZUFJL3dQK21YWVZsYm95M01TUTErCm1CTGh3NmhLbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRk12QnhpY2tXOXVaZWR0ZgppblRzK3p1U2VLK2FNQVVHQXl0bGNBTkJBSEl5Y2ttT3lGMWEvTVJROXp4a1lRcy81clptRjl0YTVsZktCamVlCmRLV0lVbFNRNkY4c1hjZ1orWlhOcXNjSHNwbzFKdStQUVVwa3VocWREdDBRblFjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJLVENCM0tBREFnRUNBaEVBc0xlWW83MXVpVUlEK3RUSEkrbFJjakFGQmdNclpYQXdFREVPTUF3R0ExVUUKQ2hNRmRHRnNiM013SGhjTk1qVXdOakl6TURJek9ERXpXaGNOTWpZd05qSXpNREl6T0RFeldqQVRNUkV3RHdZRApWUVFLRXdodmN6cGhaRzFwYmpBcU1BVUdBeXRsY0FNaEFPek5qd2FncnBYMFc0TWs1OWpoTmtRVU5UTUNobHFoCklnb1lrWnNkWUdhc28wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0I0QXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVV5OEhHSnlSYjI1bDUyMStLZE96N081SjRyNW93QlFZREsyVndBMEVBSmtOcwpGUDZMWUltTDhjR2l3TTRYc0FyZE9XVTdUMzhOaWpvTys2VE80cWRYZTdYSXZwTEZTeXFqRTBuREZtenpmdGVKCk9PM3BiMHlFUmtLcG1rNUJCUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
key: LS0tLS1CRUdJTiBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0KTUM0Q0FRQXdCUVlESzJWd0JDSUVJQmhtWXY4Wk5kaVVBMG5mbHAvT3VOdHJiM09Rc2xkZUc4cEU2YkFKTStENwotLS0tLUVORCBFRDI1NTE5IFBSSVZBVEUgS0VZLS0tLS0K


@@ -0,0 +1,606 @@
version: v1alpha1 # Indicates the schema used to decode the contents.
debug: false # Enable verbose logging to the console.
persist: true
# Provides machine specific configuration options.
machine:
type: worker # Defines the role of the machine within the cluster.
token: t1yf7w.zwevymjw6v0v1q76 # The `token` is used by a machine to join the PKI of the cluster.
# The root certificate authority of the PKI.
ca:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJQekNCOHFBREFnRUNBaEVBa2JEQ2VJR09iTlBZZGQxRTBNSUozVEFGQmdNclpYQXdFREVPTUF3R0ExVUUKQ2hNRmRHRnNiM013SGhjTk1qVXdOakl6TURJek9ERXpXaGNOTXpVd05qSXhNREl6T0RFeldqQVFNUTR3REFZRApWUVFLRXdWMFlXeHZjekFxTUFVR0F5dGxjQU1oQVBhbVhHamhnN0FFUmpQZUFJL3dQK21YWVZsYm95M01TUTErCm1CTGh3NmhLbzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRk12QnhpY2tXOXVaZWR0ZgppblRzK3p1U2VLK2FNQVVHQXl0bGNBTkJBSEl5Y2ttT3lGMWEvTVJROXp4a1lRcy81clptRjl0YTVsZktCamVlCmRLV0lVbFNRNkY4c1hjZ1orWlhOcXNjSHNwbzFKdStQUVVwa3VocWREdDBRblFjPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: ""
# Extra certificate subject alternative names for the machine's certificate.
certSANs: []
# # Uncomment this to enable SANs.
# - 10.0.0.10
# - 172.16.0.10
# - 192.168.0.10
# Used to provide additional options to the kubelet.
kubelet:
image: ghcr.io/siderolabs/kubelet:v1.33.1 # The `image` field is an optional reference to an alternative kubelet image.
defaultRuntimeSeccompProfileEnabled: true # Enable container runtime default Seccomp profile.
disableManifestsDirectory: true # The `disableManifestsDirectory` field configures the kubelet to get static pod manifests from the /etc/kubernetes/manifests directory.
# # The `ClusterDNS` field is an optional reference to an alternative kubelet clusterDNS ip list.
# clusterDNS:
# - 10.96.0.10
# - 169.254.2.53
# # The `extraArgs` field is used to provide additional flags to the kubelet.
# extraArgs:
# key: value
# # The `extraMounts` field is used to add additional mounts to the kubelet container.
# extraMounts:
# - destination: /var/lib/example # Destination is the absolute path where the mount will be placed in the container.
# type: bind # Type specifies the mount kind.
# source: /var/lib/example # Source specifies the source path of the mount.
# # Options are fstab style mount options.
# options:
# - bind
# - rshared
# - rw
# # The `extraConfig` field is used to provide kubelet configuration overrides.
# extraConfig:
# serverTLSBootstrap: true
# # The `KubeletCredentialProviderConfig` field is used to provide kubelet credential configuration.
# credentialProviderConfig:
# apiVersion: kubelet.config.k8s.io/v1
# kind: CredentialProviderConfig
# providers:
# - apiVersion: credentialprovider.kubelet.k8s.io/v1
# defaultCacheDuration: 12h
# matchImages:
# - '*.dkr.ecr.*.amazonaws.com'
# - '*.dkr.ecr.*.amazonaws.com.cn'
# - '*.dkr.ecr-fips.*.amazonaws.com'
# - '*.dkr.ecr.us-iso-east-1.c2s.ic.gov'
# - '*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov'
# name: ecr-credential-provider
# # The `nodeIP` field is used to configure `--node-ip` flag for the kubelet.
# nodeIP:
# # The `validSubnets` field configures the networks to pick kubelet node IP from.
# validSubnets:
# - 10.0.0.0/8
# - '!10.0.0.3/32'
# - fdc7::/16
# Provides machine specific network configuration options.
network: {}
# # `interfaces` is used to define the network interface configuration.
# interfaces:
# - interface: enp0s1 # The interface name.
# # Assigns static IP addresses to the interface.
# addresses:
# - 192.168.2.0/24
# # A list of routes associated with the interface.
# routes:
# - network: 0.0.0.0/0 # The route's network (destination).
# gateway: 192.168.2.1 # The route's gateway (if empty, creates link scope route).
# metric: 1024 # The optional metric for the route.
# mtu: 1500 # The interface's MTU.
#
# # # Picks a network device using the selector.
# # # select a device with bus prefix 00:*.
# # deviceSelector:
# # busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # # select a device with mac address matching `*:f0:ab` and `virtio` kernel driver.
# # deviceSelector:
# # hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # # select a device with bus prefix 00:*, a device with mac address matching `*:f0:ab` and `virtio` kernel driver.
# # deviceSelector:
# # - busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # - hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # # Bond specific options.
# # bond:
# # # The interfaces that make up the bond.
# # interfaces:
# # - enp2s0
# # - enp2s1
# # # Picks a network device using the selector.
# # deviceSelectors:
# # - busPath: 00:* # PCI, USB bus prefix, supports matching by wildcard.
# # - hardwareAddr: '*:f0:ab' # Device hardware (MAC) address, supports matching by wildcard.
# # driver: virtio_net # Kernel driver, supports matching by wildcard.
# # mode: 802.3ad # A bond option.
# # lacpRate: fast # A bond option.
# # # Bridge specific options.
# # bridge:
# # # The interfaces that make up the bridge.
# # interfaces:
# # - enxda4042ca9a51
# # - enxae2a6774c259
# # # Enable STP on this bridge.
# # stp:
# # enabled: true # Whether Spanning Tree Protocol (STP) is enabled.
# # # Configure this device as a bridge port.
# # bridgePort:
# # master: br0 # The name of the bridge master interface
# # # Indicates if DHCP should be used to configure the interface.
# # dhcp: true
# # # DHCP specific options.
# # dhcpOptions:
# # routeMetric: 1024 # The priority of all routes received via DHCP.
# # # Wireguard specific configuration.
# # # wireguard server example
# # wireguard:
# # privateKey: ABCDEF... # Specifies a private key configuration (base64 encoded).
# # listenPort: 51111 # Specifies a device's listening port.
# # # Specifies a list of peer configurations to apply to a device.
# # peers:
# # - publicKey: ABCDEF... # Specifies the public key of this peer.
# # endpoint: 192.168.1.3 # Specifies the endpoint of this peer entry.
# # # AllowedIPs specifies a list of allowed IP addresses in CIDR notation for this peer.
# # allowedIPs:
# # - 192.168.1.0/24
# # # wireguard peer example
# # wireguard:
# # privateKey: ABCDEF... # Specifies a private key configuration (base64 encoded).
# # # Specifies a list of peer configurations to apply to a device.
# # peers:
# # - publicKey: ABCDEF... # Specifies the public key of this peer.
# # endpoint: 192.168.1.2:51822 # Specifies the endpoint of this peer entry.
# # persistentKeepaliveInterval: 10s # Specifies the persistent keepalive interval for this peer.
# # # AllowedIPs specifies a list of allowed IP addresses in CIDR notation for this peer.
# # allowedIPs:
# # - 192.168.1.0/24
# # # Virtual (shared) IP address configuration.
# # # layer2 vip example
# # vip:
# # ip: 172.16.199.55 # Specifies the IP address to be used.
# # Used to statically set the nameservers for the machine.
# nameservers:
# - 8.8.8.8
# - 1.1.1.1
# # Used to statically set arbitrary search domains.
# searchDomains:
# - example.org
# - example.com
# # Allows for extra entries to be added to the `/etc/hosts` file
# extraHostEntries:
# - ip: 192.168.1.100 # The IP of the host.
# # The host alias.
# aliases:
# - example
# - example.domain.tld
# # Configures KubeSpan feature.
# kubespan:
# enabled: true # Enable the KubeSpan feature.
# Used to provide instructions for installations.
install:
disk: /dev/sda # The disk used for installations.
image: ghcr.io/siderolabs/installer:v1.10.3 # Allows for supplying the image used to perform the installation.
wipe: false # Indicates if the installation disk should be wiped at installation time.
# # Look up disk using disk attributes like model, size, serial and others.
# diskSelector:
# size: 4GB # Disk size.
# model: WDC* # Disk model `/sys/block/<dev>/device/model`.
# busPath: /pci0000:00/0000:00:17.0/ata1/host0/target0:0:0/0:0:0:0 # Disk bus path.
# # Allows for supplying extra kernel args via the bootloader.
# extraKernelArgs:
# - talos.platform=metal
# - reboot=k
# Used to configure the machine's container image registry mirrors.
registries: {}
# # Specifies mirror configuration for each registry host namespace.
# mirrors:
# ghcr.io:
# # List of endpoints (URLs) for registry mirrors to use.
# endpoints:
# - https://registry.insecure
# - https://ghcr.io/v2/
# # Specifies TLS & auth configuration for HTTPS image registries.
# config:
# registry.insecure:
# # The TLS configuration for the registry.
# tls:
# insecureSkipVerify: true # Skip TLS server certificate verification (not recommended).
#
# # # Enable mutual TLS authentication with the registry.
# # clientIdentity:
# # crt: LS0tIEVYQU1QTEUgQ0VSVElGSUNBVEUgLS0t
# # key: LS0tIEVYQU1QTEUgS0VZIC0tLQ==
#
# # # The auth configuration for this registry.
# # auth:
# # username: username # Optional registry authentication.
# # password: password # Optional registry authentication.
# Features describe individual Talos features that can be switched on or off.
features:
rbac: true # Enable role-based access control (RBAC).
stableHostname: true # Enable stable default hostname.
apidCheckExtKeyUsage: true # Enable checks for extended key usage of client certificates in apid.
diskQuotaSupport: true # Enable XFS project quota support for EPHEMERAL partition and user disks.
# KubePrism - local proxy/load balancer on defined port that will distribute requests to all API servers in the cluster.
kubePrism:
enabled: true # Enable KubePrism support - will start local load balancing proxy.
port: 7445 # KubePrism port.
# Configures host DNS caching resolver.
hostDNS:
enabled: true # Enable host DNS caching resolver.
forwardKubeDNSToHost: true # Use the host DNS resolver as upstream for Kubernetes CoreDNS pods.
# # Configure Talos API access from Kubernetes pods.
# kubernetesTalosAPIAccess:
# enabled: true # Enable Talos API access from Kubernetes pods.
# # The list of Talos API roles which can be granted for access from Kubernetes pods.
# allowedRoles:
# - os:reader
# # The list of Kubernetes namespaces Talos API access is available from.
# allowedKubernetesNamespaces:
# - kube-system
# # Provides machine specific control plane configuration options.
# # ControlPlane definition example.
# controlPlane:
# # Controller manager machine specific configuration options.
# controllerManager:
# disabled: false # Disable kube-controller-manager on the node.
# # Scheduler machine specific configuration options.
# scheduler:
# disabled: true # Disable kube-scheduler on the node.
# # Used to provide static pod definitions to be run by the kubelet directly bypassing the kube-apiserver.
# # nginx static pod.
# pods:
# - apiVersion: v1
# kind: Pod
# metadata:
# name: nginx
# spec:
# containers:
# - image: nginx
# name: nginx
# # Allows the addition of user specified files.
# # MachineFiles usage example.
# files:
# - content: '...' # The contents of the file.
# permissions: 0o666 # The file's permissions in octal.
# path: /tmp/file.txt # The path of the file.
# op: append # The operation to use (create, append, or overwrite).
# # The `env` field allows for the addition of environment variables.
# # Environment variables definition examples.
# env:
# GRPC_GO_LOG_SEVERITY_LEVEL: info
# GRPC_GO_LOG_VERBOSITY_LEVEL: "99"
# https_proxy: http://SERVER:PORT/
# env:
# GRPC_GO_LOG_SEVERITY_LEVEL: error
# https_proxy: https://USERNAME:PASSWORD@SERVER:PORT/
# env:
# https_proxy: http://DOMAIN\USERNAME:PASSWORD@SERVER:PORT/
# # Used to configure the machine's time settings.
# # Example configuration for cloudflare ntp server.
# time:
# disabled: false # Indicates if the time service is disabled for the machine.
# # description: |
# servers:
# - time.cloudflare.com
# bootTimeout: 2m0s # Specifies the timeout when the node time is considered to be in sync unlocking the boot sequence.
# # Used to configure the machine's sysctls.
# # MachineSysctls usage example.
# sysctls:
# kernel.domainname: talos.dev
# net.ipv4.ip_forward: "0"
# net/ipv6/conf/eth0.100/disable_ipv6: "1"
# # Used to configure the machine's sysfs.
# # MachineSysfs usage example.
# sysfs:
# devices.system.cpu.cpu0.cpufreq.scaling_governor: performance
# # Machine system disk encryption configuration.
# systemDiskEncryption:
# # Ephemeral partition encryption.
# ephemeral:
# provider: luks2 # Encryption provider to use for the encryption.
# # Defines the encryption keys generation and storage method.
# keys:
# - # Deterministically generated key from the node UUID and PartitionLabel.
# nodeID: {}
# slot: 0 # Key slot number for LUKS2 encryption.
#
# # # KMS managed encryption key.
# # kms:
# # endpoint: https://192.168.88.21:4443 # KMS endpoint to Seal/Unseal the key.
#
# # # Cipher kind to use for the encryption. Depends on the encryption provider.
# # cipher: aes-xts-plain64
# # # Defines the encryption sector size.
# # blockSize: 4096
# # # Additional --perf parameters for the LUKS2 encryption.
# # options:
# # - no_read_workqueue
# # - no_write_workqueue
# # Configures the udev system.
# udev:
# # List of udev rules to apply to the udev system
# rules:
# - SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="44", MODE="0660"
# # Configures the logging system.
# logging:
# # Logging destination.
# destinations:
# - endpoint: tcp://1.2.3.4:12345 # Where to send logs. Supported protocols are "tcp" and "udp".
# format: json_lines # Logs format.
# # Configures the kernel.
# kernel:
# # Kernel modules to load.
# modules:
# - name: btrfs # Module name.
# # Configures the seccomp profiles for the machine.
# seccompProfiles:
# - name: audit.json # The `name` field is used to provide the file name of the seccomp profile.
# # The `value` field is used to provide the seccomp profile.
# value:
# defaultAction: SCMP_ACT_LOG
# # Override (patch) settings in the default OCI runtime spec for CRI containers.
# # override default open file limit
# baseRuntimeSpecOverrides:
# process:
# rlimits:
# - hard: 1024
# soft: 1024
# type: RLIMIT_NOFILE
# # Configures the node labels for the machine.
# # node labels example.
# nodeLabels:
# exampleLabel: exampleLabelValue
# # Configures the node annotations for the machine.
# # node annotations example.
# nodeAnnotations:
# customer.io/rack: r13a25
# # Configures the node taints for the machine. Effect is optional.
# # node taints example.
# nodeTaints:
# exampleTaint: exampleTaintValue:NoSchedule
# Provides cluster specific configuration options.
cluster:
id: 1DOt3ZYTVTzEG_Q2IYnScCjz1rxZYwWRHV9hGXBu1UE= # Globally unique identifier for this cluster (base64 encoded random 32 bytes).
secret: qvOKMH5RJtMOPSLBnWCPV4apReFGTd1czZ+tfz11/jI= # Shared secret of cluster (base64 encoded random 32 bytes).
# Provides control plane specific configuration options.
controlPlane:
endpoint: https://192.168.8.30:6443 # Endpoint is the canonical controlplane endpoint, which can be an IP address or a DNS hostname.
clusterName: demo-cluster # Configures the cluster's name.
# Provides cluster specific network configuration options.
network:
dnsDomain: cluster.local # The domain used by Kubernetes DNS.
# The pod subnet CIDR.
podSubnets:
- 10.244.0.0/16
# The service subnet CIDR.
serviceSubnets:
- 10.96.0.0/12
# # The CNI used.
# cni:
# name: custom # Name of CNI to use.
# # URLs containing manifests to apply for the CNI.
# urls:
# - https://docs.projectcalico.org/archive/v3.20/manifests/canal.yaml
token: ed454d.o4jsg75idc817ojs # The [bootstrap token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) used to join the cluster.
# The base64 encoded root certificate authority used by Kubernetes.
ca:
crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpakNDQVMrZ0F3SUJBZ0lRRWU5cFdPWEFzd09PNm9NYXNDaXRtakFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTFNRFl5TXpBeU16Z3hNMW9YRFRNMU1EWXlNVEF5TXpneApNMW93RlRFVE1CRUdBMVVFQ2hNS2EzVmlaWEp1WlhSbGN6QlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VICkEwSUFCQ3p0YTA1T3NWOU1NaVg4WDZEdC9xbkhWelkra2tqZ01rcjdsU1kzaERPbmVWYnBhOTJmSHlkS1QyWEgKcWN1L3FJWHpodTg0ckN0VWJuQUsyckJUekFPallUQmZNQTRHQTFVZER3RUIvd1FFQXdJQ2hEQWRCZ05WSFNVRQpGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFCkZnUVVtWEhwMmM5bGRtdFg0Y2RibDlpM0Rwd05GYzB3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQVBwVXVoNmIKYUMwaXdzNTh5WWVlYXVMU1JhbnEveVNUcGo2T0N4UGkvTXJpQWlFQW1DUVdRQ290NkM5b0c5TUlaeDFmMmMxcApBUFRFTHFNQm1vZ1NLSis5dXZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
key: ""
# Configures cluster member discovery.
discovery:
enabled: true # Enable the cluster membership discovery feature.
# Configure registries used for cluster member discovery.
registries:
# Kubernetes registry uses Kubernetes API server to discover cluster members and stores additional information as annotations on the Node resources.
kubernetes:
disabled: true # Disable Kubernetes discovery registry.
# Service registry is using an external service to push and pull information about cluster members.
service: {}
# # External service endpoint.
# endpoint: https://discovery.talos.dev/
# # A key used for the [encryption of secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
# # Decryption secret example (do not use in production!).
# aescbcEncryptionSecret: z01mye6j16bspJYtTB/5SFX8j7Ph4JXxM2Xuu4vsBPM=
# # A key used for the [encryption of secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/).
# # Decryption secret example (do not use in production!).
# secretboxEncryptionSecret: z01mye6j16bspJYtTB/5SFX8j7Ph4JXxM2Xuu4vsBPM=
# # The base64 encoded aggregator certificate authority used by Kubernetes for front-proxy certificate generation.
# # AggregatorCA example.
# aggregatorCA:
# crt: LS0tIEVYQU1QTEUgQ0VSVElGSUNBVEUgLS0t
# key: LS0tIEVYQU1QTEUgS0VZIC0tLQ==
# # The base64 encoded private key for service account token generation.
# # AggregatorCA example.
# serviceAccount:
# key: LS0tIEVYQU1QTEUgS0VZIC0tLQ==
# # API server specific configuration options.
# apiServer:
# image: registry.k8s.io/kube-apiserver:v1.33.1 # The container image used in the API server manifest.
# # Extra arguments to supply to the API server.
# extraArgs:
# feature-gates: ServerSideApply=true
# http2-max-streams-per-connection: "32"
# # Extra certificate subject alternative names for the API server's certificate.
# certSANs:
# - 1.2.3.4
# - 4.5.6.7
# # Configure the API server admission plugins.
# admissionControl:
# - name: PodSecurity # Name is the name of the admission controller.
# # Configuration is an embedded configuration object to be used as the plugin's configuration.
# configuration:
# apiVersion: pod-security.admission.config.k8s.io/v1alpha1
# defaults:
# audit: restricted
# audit-version: latest
# enforce: baseline
# enforce-version: latest
# warn: restricted
# warn-version: latest
# exemptions:
# namespaces:
# - kube-system
# runtimeClasses: []
# usernames: []
# kind: PodSecurityConfiguration
# # Configure the API server audit policy.
# auditPolicy:
# apiVersion: audit.k8s.io/v1
# kind: Policy
# rules:
# - level: Metadata
# # Configure the API server authorization config. Node and RBAC authorizers are always added irrespective of the configuration.
# authorizationConfig:
# - type: Webhook # Type is the name of the authorizer. Allowed values are `Node`, `RBAC`, and `Webhook`.
# name: webhook # Name is used to describe the authorizer.
# # webhook is the configuration for the webhook authorizer.
# webhook:
# connectionInfo:
# type: InClusterConfig
# failurePolicy: Deny
# matchConditionSubjectAccessReviewVersion: v1
# matchConditions:
# - expression: has(request.resourceAttributes)
# - expression: '!(''system:serviceaccounts:kube-system'' in request.groups)'
# subjectAccessReviewVersion: v1
# timeout: 3s
# - type: Webhook # Type is the name of the authorizer. Allowed values are `Node`, `RBAC`, and `Webhook`.
# name: in-cluster-authorizer # Name is used to describe the authorizer.
# # webhook is the configuration for the webhook authorizer.
# webhook:
# connectionInfo:
# type: InClusterConfig
# failurePolicy: NoOpinion
# matchConditionSubjectAccessReviewVersion: v1
# subjectAccessReviewVersion: v1
# timeout: 3s
# # Controller manager server specific configuration options.
# controllerManager:
# image: registry.k8s.io/kube-controller-manager:v1.33.1 # The container image used in the controller manager manifest.
# # Extra arguments to supply to the controller manager.
# extraArgs:
# feature-gates: ServerSideApply=true
# # Kube-proxy server-specific configuration options
# proxy:
# disabled: false # Disable kube-proxy deployment on cluster bootstrap.
# image: registry.k8s.io/kube-proxy:v1.33.1 # The container image used in the kube-proxy manifest.
# mode: ipvs # proxy mode of kube-proxy.
# # Extra arguments to supply to kube-proxy.
# extraArgs:
# proxy-mode: iptables
# # Scheduler server specific configuration options.
# scheduler:
# image: registry.k8s.io/kube-scheduler:v1.33.1 # The container image used in the scheduler manifest.
# # Extra arguments to supply to the scheduler.
# extraArgs:
# feature-gates: AllBeta=true
# # Etcd specific configuration options.
# etcd:
# image: gcr.io/etcd-development/etcd:v3.5.21 # The container image used to create the etcd service.
# # The `ca` is the root certificate authority of the PKI.
# ca:
# crt: LS0tIEVYQU1QTEUgQ0VSVElGSUNBVEUgLS0t
# key: LS0tIEVYQU1QTEUgS0VZIC0tLQ==
# # Extra arguments to supply to etcd.
# extraArgs:
# election-timeout: "5000"
# # The `advertisedSubnets` field configures the networks to pick etcd advertised IP from.
# advertisedSubnets:
# - 10.0.0.0/8
# # Core DNS specific configuration options.
# coreDNS:
# image: registry.k8s.io/coredns/coredns:v1.12.1 # The `image` field is an override to the default coredns image.
# # External cloud provider configuration.
# externalCloudProvider:
# enabled: true # Enable external cloud provider.
# # A list of urls that point to additional manifests for an external cloud provider.
# manifests:
# - https://raw.githubusercontent.com/kubernetes/cloud-provider-aws/v1.20.0-alpha.0/manifests/rbac.yaml
# - https://raw.githubusercontent.com/kubernetes/cloud-provider-aws/v1.20.0-alpha.0/manifests/aws-cloud-controller-manager-daemonset.yaml
# # A list of urls that point to additional manifests.
# extraManifests:
# - https://www.example.com/manifest1.yaml
# - https://www.example.com/manifest2.yaml
# # A map of key value pairs that will be added while fetching the extraManifests.
# extraManifestHeaders:
# Token: "1234567"
# X-ExtraInfo: info
# # A list of inline Kubernetes manifests.
# inlineManifests:
# - name: namespace-ci # Name of the manifest.
# contents: |- # Manifest contents as a string.
# apiVersion: v1
# kind: Namespace
# metadata:
# name: ci
# # Settings for admin kubeconfig generation.
# adminKubeconfig:
# certLifetime: 1h0m0s # Admin kubeconfig certificate lifetime (default is 1 year).
# # Allows running workload on control-plane nodes.
# allowSchedulingOnControlPlanes: true

View File

@@ -0,0 +1,80 @@
#!/bin/bash
# Talos cluster initialization script
# This script performs one-time cluster setup: generates secrets, base configs, and sets up talosctl
set -euo pipefail
# Check if WC_HOME is set
if [ -z "${WC_HOME:-}" ]; then
echo "Error: WC_HOME environment variable not set. Run \`source ./env.sh\`."
exit 1
fi
NODE_SETUP_DIR="${WC_HOME}/setup/cluster-nodes"
# Run from the node setup directory so generated/, final/, and patch/ land in the right place
cd "${NODE_SETUP_DIR}"
# Get cluster configuration from config.yaml
CLUSTER_NAME=$(wild-config cluster.name)
VIP=$(wild-config cluster.nodes.control.vip)
TALOS_VERSION=$(wild-config cluster.nodes.talos.version)
echo "Initializing Talos cluster: $CLUSTER_NAME"
echo "VIP: $VIP"
echo "Talos version: $TALOS_VERSION"
# Create directories
mkdir -p generated final patch
# Check if cluster secrets already exist
if [ -f "generated/secrets.yaml" ]; then
echo ""
echo "⚠️ Cluster secrets already exist!"
echo "This will regenerate ALL cluster certificates and invalidate existing nodes."
echo ""
read -p "Do you want to continue? (y/N): " -r
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Cancelled."
exit 0
fi
echo ""
fi
# Generate fresh cluster secrets
echo "Generating cluster secrets..."
cd generated
talosctl gen secrets -o secrets.yaml --force
echo "Generating base machine configs..."
talosctl gen config --with-secrets secrets.yaml "$CLUSTER_NAME" "https://$VIP:6443" --force
cd ..
# Setup talosctl context
echo "Setting up talosctl context..."
# Remove existing context if it exists
talosctl config context "$CLUSTER_NAME" --remove 2>/dev/null || true
# Merge new configuration
talosctl config merge ./generated/talosconfig
talosctl config endpoint "$VIP"
echo ""
echo "✅ Cluster initialization complete!"
echo ""
echo "Cluster details:"
echo " - Name: $CLUSTER_NAME"
echo " - VIP: $VIP"
echo " - Secrets: generated/secrets.yaml"
echo " - Base configs: generated/controlplane.yaml, generated/worker.yaml"
echo ""
echo "Talosctl context configured:"
talosctl config info
echo ""
echo "Next steps:"
echo "1. Register nodes with hardware detection:"
echo " ./detect-node-hardware.sh <maintenance-ip> <node-number>"
echo ""
echo "2. Generate machine configurations:"
echo " ./generate-machine-configs.sh"
echo ""
echo "3. Apply configurations to nodes"
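The "next steps" the script prints can be carried out with standard `talosctl` commands once per-node patches exist. A minimal sketch, assuming the base config and a node patch live under `generated/` and `patch/` and the node's maintenance-mode IP is 192.168.8.31 (paths and the address are assumptions, not outputs of this script):

```shell
# Merge the base control-plane config with a node-specific patch
talosctl machineconfig patch generated/controlplane.yaml \
  --patch @patch/controlplane-node1.yaml \
  -o final/controlplane-node1.yaml

# Apply to a node booted in maintenance mode (no client certs yet, hence --insecure)
talosctl apply-config --insecure \
  --nodes 192.168.8.31 \
  --file final/controlplane-node1.yaml

# Bootstrap etcd on the FIRST control-plane node only, after it reboots into the installed system
talosctl bootstrap --nodes 192.168.8.31
```

In this repo the patching step is handled by `generate-machine-configs.sh`, so the first command stands in for whatever that script does internally.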

View File

@@ -1,21 +0,0 @@
#!/bin/bash
set -e
apt-get update
# Longhorn requirements
# Install iscsi on all nodes.
# apt-get install open-iscsi
# modprobe iscsi_tcp
# systemctl restart open-iscsi
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.8.1/deploy/prerequisite/longhorn-iscsi-installation.yaml
# Install NFSv4 client on all nodes.
# apt-get install nfs-common
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.8.1/deploy/prerequisite/longhorn-nfs-installation.yaml
apt-get install cryptsetup
# To check longhorn requirements:
# curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.8.1/scripts/environment_check.sh | bash

View File

@@ -0,0 +1,22 @@
machine:
install:
disk: {{ .cluster.nodes.control.node1.disk }}
image: factory.talos.dev/metal-installer/{{ .cluster.nodes.talos.schematicId}}:{{ .cluster.nodes.talos.version}}
network:
interfaces:
- interface: {{ .cluster.nodes.control.node1.interface }}
dhcp: false
addresses:
- {{ .cluster.nodes.control.node1.ip }}/24
routes:
- network: 0.0.0.0/0
gateway: {{ .cloud.router.ip }}
vip:
ip: {{ .cluster.nodes.control.vip }}
cluster:
discovery:
enabled: true
registries:
service:
disabled: true
allowSchedulingOnControlPlanes: true
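Once the templating step substitutes the `{{ .cluster.* }}` references, a patch like the one above collapses to plain Talos YAML. With hypothetical values (the disk, interface, IPs, and schematic ID below are all assumptions for illustration), the machine section would render roughly as:

```yaml
machine:
  install:
    disk: /dev/nvme0n1
    image: factory.talos.dev/metal-installer/<schematic-id>:v1.10.3
  network:
    interfaces:
      - interface: enp2s0
        dhcp: false
        addresses:
          - 192.168.8.31/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.8.1
        vip:
          ip: 192.168.8.30
```

The static address plus the shared `vip` is what lets the three control-plane nodes answer on one endpoint without an external load balancer.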

View File

@@ -0,0 +1,22 @@
machine:
install:
disk: {{ .cluster.nodes.control.node2.disk }}
image: factory.talos.dev/metal-installer/{{ .cluster.nodes.talos.schematicId}}:{{ .cluster.nodes.talos.version}}
network:
interfaces:
- interface: {{ .cluster.nodes.control.node2.interface }}
dhcp: false
addresses:
- {{ .cluster.nodes.control.node2.ip }}/24
routes:
- network: 0.0.0.0/0
gateway: {{ .cloud.router.ip }}
vip:
ip: {{ .cluster.nodes.control.vip }}
cluster:
discovery:
enabled: true
registries:
service:
disabled: true
allowSchedulingOnControlPlanes: true

View File

@@ -0,0 +1,22 @@
machine:
install:
disk: {{ .cluster.nodes.control.node3.disk }}
image: factory.talos.dev/metal-installer/{{ .cluster.nodes.talos.schematicId}}:{{ .cluster.nodes.talos.version}}
network:
interfaces:
- interface: {{ .cluster.nodes.control.node3.interface }}
dhcp: false
addresses:
- {{ .cluster.nodes.control.node3.ip }}/24
routes:
- network: 0.0.0.0/0
gateway: {{ .cloud.router.ip }}
vip:
ip: {{ .cluster.nodes.control.vip }}
cluster:
discovery:
enabled: true
registries:
service:
disabled: true
allowSchedulingOnControlPlanes: true

View File

@@ -0,0 +1,22 @@
machine:
install:
disk: /dev/sdc
network:
interfaces:
- interface: enp4s0
dhcp: true
kubelet:
extraMounts:
- destination: /var/lib/longhorn
type: bind
source: /var/lib/longhorn
options:
- bind
- rshared
- rw
# NOTE: System extensions need to be added via Talos Image Factory
# customization:
# systemExtensions:
# officialExtensions:
# - siderolabs/iscsi-tools
# - siderolabs/util-linux-tools

View File

@@ -1,17 +0,0 @@
machine:
install:
disk: /dev/sdc
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.8.20
- interface: eth1
dhcp: true
cluster:
discovery:
enabled: true
registries:
service:
disabled: true
allowSchedulingOnControlPlanes: true

View File

@@ -1,3 +0,0 @@
machine:
install:
disk: /dev/sdc