Initial commit.

ai/BUILDING_WILD_CENTRAL.md
@@ -0,0 +1,260 @@
# Building Wild Cloud Central

The first version of Wild Cloud, the Proof of Concept version (v.PoC), was built as a collection of shell scripts that users would run from their local machines. This works well for early adopters who are comfortable with the command line, Talos, and Kubernetes.

To make Wild Cloud more accessible to a broader audience, we are developing Wild Central. Central is a single-purpose machine running on a LAN that will deliver:

- Wild Daemon: A lightweight service that runs on a local machine (e.g., a Raspberry Pi) to manage Wild Cloud instances on the local network.
- Wild App: A web-based interface (to Wild Daemon) for managing Wild Cloud instances.
- Wild CLI: A command-line interface (to Wild Daemon) for advanced users who prefer to manage Wild Cloud from the terminal.

## Background info

### Info about Wild Cloud v.PoC

- @docs/agent-context/wildcloud-v.PoC/README.md
- @docs/agent-context/wildcloud-v.PoC/overview.md
- @docs/agent-context/wildcloud-v.PoC/project-architecture.md
- @docs/agent-context/wildcloud-v.PoC/bin-scripts.md
- @docs/agent-context/wildcloud-v.PoC/configuration-system.md
- @docs/agent-context/wildcloud-v.PoC/setup-process.md
- @docs/agent-context/wildcloud-v.PoC/apps-system.md

### Info about Talos

- @docs/agent-context/talos-v1.11/README.md
- @docs/agent-context/talos-v1.11/architecture-and-components.md
- @docs/agent-context/talos-v1.11/cli-essentials.md
- @docs/agent-context/talos-v1.11/cluster-operations.md
- @docs/agent-context/talos-v1.11/discovery-and-networking.md
- @docs/agent-context/talos-v1.11/etcd-management.md
- @docs/agent-context/talos-v1.11/bare-metal-administration.md
- @docs/agent-context/talos-v1.11/troubleshooting-guide.md

## Architecture

### Old v.PoC Architecture

- WC_ROOT: The scripts used to set up and manage the Wild Cloud cluster. Currently, this is a set of shell scripts in $WC_ROOT/bin.
- WC_HOME: During setup, the user creates a Wild Cloud project directory (WC_HOME) on their local machine. This directory holds all configuration, secrets, and k8s manifests for their specific Wild Cloud deployment.
- Wild Cloud Apps Directory: The Wild Cloud apps are stored in the `apps/` directory within the WC_ROOT repository. Users can deploy these apps to their cluster using the scripts in WC_ROOT/bin.
- dnsmasq server: Scripts help the operator set up a dnsmasq server on a separate machine to provide LAN DNS services during node bootstrapping.

### New Wild Central Architecture

#### wildd: The Wild Cloud Daemon

wildd is a long-running service that provides an API and web interface for managing one or more Wild Cloud clusters. It runs on a dedicated device within the user's network.

wildd replaces functionality from the v.PoC scripts and the dnsmasq server. It is a single API for managing multiple Wild Cloud instances on the LAN.

Both wild-app and wild-cli communicate with wildd to perform actions.
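To make the shape of this concrete, an illustrative slice of the wildd API surface might look like the following. Paths and verbs are placeholders, not a spec; the real surface is defined in the daemon doc referenced below.

```
GET  /api/v1/instances                     # list managed Wild Cloud instances
POST /api/v1/instances                     # register/create an instance
GET  /api/v1/instances/{name}/nodes        # node inventory for one instance
POST /api/v1/instances/{name}/apps/{app}   # deploy an app to an instance
```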
See: @daemon/BUILDING_WILD_DAEMON.md

#### wild-app

The web application that provides the user interface for Wild Cloud on Wild Central. It communicates with wildd to perform actions and display information.

See: @app/BUILDING_WILD_APP.md

#### wild-cli

A command-line interface for advanced users who prefer to manage Wild Cloud from the terminal. It communicates with wildd to perform actions.

wild-cli mirrors all of the wild-* scripts from v.PoC, adapted for the new architecture:

- One Go client (wild-cli) replaces many bash scripts (wild-*).
- Wrapper around the wildd API instead of direct file manipulation.
- Multi-cloud: v.PoC scripts set the instance context with the WC_HOME environment variable. In Central, wild-cli follows the "context" pattern used by kubectl and talosctl, using `--context` or `WILD_CONTEXT` to select which Wild Cloud instance to manage, or defaulting to the "current" context (see the sketch below).
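A sketch of what that could look like in practice; command and context names here are illustrative, not final:

```bash
wild-cli config use-context home          # set the "current" wild cloud
wild-cli apps list                        # acts on the current context
wild-cli --context lab cluster status     # one-off context override
WILD_CONTEXT=lab wild-cli cluster status  # environment-variable override
```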
See: @cli/BUILDING_WILD_CLI.md

#### Wild Central Data

Configured with the $WILD_CENTRAL_DATA environment variable (default: /var/lib/wild-central).

Replaces the multiple WC_HOMEs of v.PoC. All wild clouds managed on the LAN are configured here. Configuration remains in easy-to-read YAML and can be edited directly or through the webapp.

Wild Central data also holds the local app directory, logs, artifacts, and overall state data.
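One possible layout under $WILD_CENTRAL_DATA, purely for illustration (directory names are not settled):

```
/var/lib/wild-central/
├── instances/
│   ├── home/            # one directory per managed wild cloud
│   │   ├── config.yaml
│   │   └── secrets.yaml
│   └── lab/
├── apps/                # local app directory
├── logs/
└── state/
```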
#### Wild Cloud Apps Directory

The Wild Cloud apps are stored in the `apps/` directory within the WC_ROOT repository. Users can deploy these apps to their cluster using the webapp or wild-cli.

#### dnsmasq server

The Wild Daemon (wildd) includes functionality to manage a dnsmasq server on the same device, providing LAN DNS services during node bootstrapping.
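For a sense of scale, the kind of dnsmasq fragment wildd would manage is small. All values below are illustrative:

```
interface=eth0
domain=wildcloud.lan
dhcp-range=192.168.8.100,192.168.8.200,12h   # pool for nodes being bootstrapped
enable-tftp
tftp-root=/var/lib/wild-central/tftp
dhcp-boot=ipxe.efi
address=/cluster.wildcloud.lan/192.168.8.50  # LAN DNS for the cluster endpoint
```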
## Packaging and Installation

Ultimately, the daemon, app, and cli will be packaged together for easy installation on a Raspberry Pi or similar device.

See @ai/WILD_CENTRAL_PACKAGING.md

## Implementation Philosophy

### Core Philosophy

Wild Central embodies a Zen-like minimalism that values simplicity and clarity above all. This approach reflects:

- **Wabi-sabi philosophy**: Embracing simplicity and the essential. Each line serves a clear purpose without unnecessary embellishment.
- **KISS**: The solution should be as simple as possible, but no simpler.
- **YAGNI**: Avoid building features or abstractions that aren't immediately needed. The code handles what's needed now rather than anticipating every possible future scenario.
- **Trust in emergence**: Complex systems work best when built from simple, well-defined components that do one thing well.
- **Pragmatic trust**: The developer trusts external systems enough to interact with them directly, handling failures as they occur rather than assuming they'll happen.
- **Consistency is key**: Uniform patterns and conventions make the codebase easier to understand and maintain. If you introduce a new pattern, make sure it's consistently applied. There should be one obvious way to do things.

This development philosophy values clear, concise documentation, readable code, and the belief that good architecture emerges from simplicity rather than being imposed through complexity.

## Core Design Principles

### 1. Ruthless Simplicity

- **KISS principle taken to heart**: Keep everything as simple as possible, but no simpler
- **Minimize abstractions**: Every layer of abstraction must justify its existence
- **Start minimal, grow as needed**: Begin with the simplest implementation that meets current needs
- **Avoid future-proofing**: Don't build for hypothetical future requirements
- **Question everything**: Regularly challenge complexity in the codebase

### 2. Architectural Integrity with Minimal Implementation

- **Preserve key architectural patterns**: Maintain clear boundaries and responsibilities
- **Simplify implementations**: Maintain pattern benefits with dramatically simpler code
- **Scrappy but structured**: Lightweight implementations on solid architectural foundations
- **End-to-end thinking**: Focus on complete flows rather than perfect components

### 3. Library vs Custom Code

Choosing between custom code and external libraries is a judgment call that evolves with your requirements. There's no rigid rule - it's about understanding trade-offs and being willing to revisit decisions as needs change.

#### The Evolution Pattern

Your approach might naturally evolve:

- **Start simple**: Custom code for basic needs (20 lines handles it)
- **Growing complexity**: Switch to a library when requirements expand
- **Hitting limits**: Back to custom when you outgrow the library's capabilities

This isn't failure - it's natural evolution. Each stage was the right choice at that time.

#### When Custom Code Makes Sense

Custom code often wins when:

- The need is simple and well-understood
- You want code perfectly tuned to your exact requirements
- Libraries would require significant "hacking" or workarounds
- The problem is unique to your domain
- You need full control over the implementation

#### When Libraries Make Sense

Libraries shine when:

- They solve complex problems you'd rather not tackle (auth, crypto, video encoding)
- They align well with your needs without major modifications
- The problem is well-solved with mature, battle-tested solutions
- Configuration alone can adapt them to your requirements
- The complexity they handle far exceeds the integration cost

#### Making the Judgment Call

Ask yourself:

- How well does this library align with our actual needs?
- Are we fighting the library or working with it?
- Is the integration clean or does it require workarounds?
- Will our future requirements likely stay within this library's capabilities?
- Is the problem complex enough to justify the dependency?

#### Recognizing Misalignment

Watch for signs you're fighting your current approach:

- Spending more time working around the library than using it
- Your simple custom solution has grown complex and fragile
- You're monkey-patching or heavily wrapping a library
- The library's assumptions fundamentally conflict with your needs

#### Stay Flexible

Remember that complexity isn't destroyed, only moved. Libraries shift complexity from your code to someone else's - that's often a great trade, but recognize what you're doing.

The key is avoiding lock-in. Keep library integration points minimal and isolated so you can switch approaches when needed. There's no shame in moving from custom to library or library to custom. Requirements change, understanding deepens, and the right answer today might not be the right answer tomorrow. Make the best decision with current information, and be ready to evolve.

## Technical Implementation Guidelines

### API Layer

- Implement only essential endpoints
- Minimal middleware with focused validation
- Clear error responses with useful messages
- Consistent patterns across endpoints

### Storage

- Prefer simple file storage
- Simple schema focused on current needs

## Development Approach

### Vertical Slices

- Implement complete end-to-end functionality slices
- Start with core user journeys
- Get data flowing through all layers early
- Add features horizontally only after core flows work

### Iterative Implementation

- 80/20 principle: Focus on high-value, low-effort features first
- One working feature > multiple partial features
- Validate with real usage before enhancing
- Be willing to refactor early work as patterns emerge

### Testing Strategy

- Focus on critical path testing initially
- Add unit tests for complex logic and edge cases
- Testing pyramid: 60% unit, 30% integration, 10% end-to-end

### Error Handling

- Handle common errors robustly
- Log detailed information for debugging
- Provide clear error messages to users
- Fail fast and visibly during development

## Decision-Making Framework

When faced with implementation decisions, ask these questions:

1. **Necessity**: "Do we actually need this right now?"
2. **Simplicity**: "What's the simplest way to solve this problem?"
3. **Directness**: "Can we solve this more directly?"
4. **Value**: "Does the complexity add proportional value?"
5. **Maintenance**: "How easy will this be to understand and change later?"

## Areas to Embrace Complexity

Some areas justify additional complexity:

1. **Security**: Never compromise on security fundamentals
2. **Data integrity**: Ensure data consistency and reliability
3. **Core user experience**: Make the primary user flows smooth and reliable
4. **Error visibility**: Make problems obvious and diagnosable

## Areas to Aggressively Simplify

Push for extreme simplicity in these areas:

1. **Internal abstractions**: Minimize layers between components
2. **Generic "future-proof" code**: Resist solving non-existent problems
3. **Edge case handling**: Handle the common cases well first
4. **Framework usage**: Use only what you need from frameworks
5. **State management**: Keep state simple and explicit

## Remember

- It's easier to add complexity later than to remove it
- Code you don't write has no bugs
- Favor clarity over cleverness
- The best code is often the simplest

This philosophy document serves as the foundational guide for all implementation decisions in the project.
ai/WILD_CENTRAL_PACKAGING.md
@@ -0,0 +1,102 @@
# Packaging Wild Central

## Desired Experience

This is the desired experience for installing Wild Cloud Central on a fresh Debian/Ubuntu system:

### APT Repository (Recommended)

```bash
# Download and install GPG key
curl -fsSL https://mywildcloud.org/apt/wild-cloud-central.gpg | sudo tee /usr/share/keyrings/wild-cloud-central-archive-keyring.gpg > /dev/null

# Add repository (modern .sources format)
sudo tee /etc/apt/sources.list.d/wild-cloud-central.sources << 'EOF'
Types: deb
URIs: https://mywildcloud.org/apt
Suites: stable
Components: main
Signed-By: /usr/share/keyrings/wild-cloud-central-archive-keyring.gpg
EOF

# Update and install
sudo apt update
sudo apt install wild-cloud-central
```

### Manual Installation

Download the latest `.deb` package from the [releases page](https://github.com/wildcloud/wild-central/releases) and install:

```bash
sudo dpkg -i wild-cloud-central_*.deb
sudo apt-get install -f  # Fix any dependency issues
```

## Quick Start

1. **Configure the service** (optional):

   ```bash
   sudo cp /etc/wild-cloud-central/config.yaml.example /etc/wild-cloud-central/config.yaml
   sudo nano /etc/wild-cloud-central/config.yaml
   ```

2. **Start the service**:

   ```bash
   sudo systemctl enable wild-cloud-central
   sudo systemctl start wild-cloud-central
   ```

3. **Access the web interface**:
   Open http://your-server-ip in your browser
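After starting the service (step 2) and before opening the browser (step 3), standard systemd tooling is enough to confirm it came up. This assumes the web interface binds the default HTTP port, as the URL above implies:

```bash
systemctl status wild-cloud-central     # should report active (running)
journalctl -u wild-cloud-central -n 50  # recent service logs
curl -fsS http://localhost/ > /dev/null && echo "web UI responding"
```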
## Developer tooling

Makefile commands for packaging:

Build targets (compile binaries):

- `make build` - Build for current architecture
- `make build-arm64` - Build arm64 binary
- `make build-amd64` - Build amd64 binary
- `make build-all` - Build all architectures

Package targets (create .deb packages):

- `make package` - Create .deb package for current arch
- `make package-arm64` - Create arm64 .deb package
- `make package-amd64` - Create amd64 .deb package
- `make package-all` - Create all .deb packages

Repository targets:

- `make repo` - Build APT repository from packages
- `make deploy-repo` - Deploy repository to server

Quality assurance:

- `make check` - Run all checks (fmt + vet + test)
- `make fmt` - Format Go code
- `make vet` - Run go vet
- `make test` - Run tests

Development:

- `make run` - Run application locally
- `make clean` - Remove all build artifacts
- `make deps-check` - Verify and tidy dependencies
- `make version` - Show build information
- `make install` - Install to system

Directory structure:

- `build/` - Intermediate build artifacts
- `dist/bin/` - Final binaries for distribution
- `dist/packages/` - OS packages (.deb files)
- `dist/repositories/` - APT repository for deployment

Example workflows:

- `make check && make build` - Safe development build
- `make clean && make repo` - Full release build
ai/talos-v1.11/README.md
@@ -0,0 +1,135 @@
# Talos v1.11 Agent Context Documentation

This directory contains comprehensive documentation extracted from the official Talos v1.11 documentation, organized specifically to help AI agents become expert Talos cluster administrators.

## Documentation Structure

### Core Operations

- **[cluster-operations.md](cluster-operations.md)** - Essential cluster operations including upgrades, node management, and configuration
- **[cli-essentials.md](cli-essentials.md)** - Key talosctl commands and usage patterns for daily administration

### System Understanding

- **[architecture-and-components.md](architecture-and-components.md)** - Deep dive into Talos architecture, components, and design principles
- **[discovery-and-networking.md](discovery-and-networking.md)** - Cluster discovery mechanisms and network configuration

### Specialized Operations

- **[etcd-management.md](etcd-management.md)** - etcd operations, maintenance, backup, and disaster recovery
- **[bare-metal-administration.md](bare-metal-administration.md)** - Bare metal specific configurations, security, and hardware management
- **[troubleshooting-guide.md](troubleshooting-guide.md)** - Systematic approaches to diagnosing and resolving common issues

## Quick Reference

### Essential Commands for New Agents

```bash
# Cluster health check
talosctl -n <IP1>,<IP2>,<IP3> health --control-plane-nodes <IP1>,<IP2>,<IP3>

# Node information
talosctl get members
talosctl -n <IP> version

# Service status
talosctl -n <IP> services
talosctl -n <IP> service kubelet

# System resources
talosctl -n <IP> memory
talosctl -n <IP> disks

# Logs and events
talosctl -n <IP> dmesg | tail -50
talosctl -n <IP> logs kubelet
talosctl -n <IP> events --since=1h
```

### Critical Procedures

- **Bootstrap**: `talosctl bootstrap --nodes <first-controlplane-ip>`
- **Backup etcd**: `talosctl -n <IP> etcd snapshot db.snapshot`
- **Upgrade OS**: `talosctl upgrade --nodes <IP> --image ghcr.io/siderolabs/installer:v1.11.x`
- **Upgrade K8s**: `talosctl --nodes <controlplane> upgrade-k8s --to v1.34.1`

### Emergency Commands

- **Node reset**: `talosctl -n <IP> reset`
- **Force reset**: `talosctl -n <IP> reset --graceful=false --reboot`
- **Disaster recovery**: `talosctl -n <IP> bootstrap --recover-from=./db.snapshot`
- **Rollback**: `talosctl rollback --nodes <IP>`

### Bare Metal Specific Commands

- **Check hardware**: `talosctl -n <IP> disks`, `talosctl -n <IP> read /proc/cpuinfo`
- **Network interfaces**: `talosctl -n <IP> get addresses`, `talosctl -n <IP> get routes`
- **Extensions**: `talosctl -n <IP> get extensions`
- **Encryption status**: `talosctl -n <IP> get encryptionconfig -o yaml`
- **Hardware monitoring**: `talosctl -n <IP> dmesg | grep -i error`

## Key Concepts for Agents

### Architecture Fundamentals

- **Immutable OS**: Single image, atomic updates, A-B rollback system
- **API-driven**: All management through the gRPC API, no SSH/shell access
- **Controller pattern**: Kubernetes-style resource controllers for system management
- **Minimal attack surface**: Only services necessary for Kubernetes

### Control Plane Design

- **etcd quorum**: Requires majority for operations (3-node=2, 5-node=3)
- **Bootstrap process**: One-time initialization of the etcd cluster
- **HA considerations**: Use odd numbers of nodes; avoid even numbers
- **Upgrade strategy**: Rolling upgrades with automatic rollback on failure

### Network and Discovery

- **Service discovery**: Encrypted discovery service for cluster membership
- **KubeSpan**: Optional WireGuard mesh networking
- **mTLS everywhere**: All Talos API communication secured
- **Discovery registries**: Service (default) and Kubernetes (deprecated)

### Bare Metal Considerations

- **META configuration**: Network config embedded in disk images
- **Hardware compatibility**: Driver support and firmware requirements
- **Disk encryption**: LUKS2 with TPM, static keys, or node ID
- **SecureBoot**: UKI images with embedded signatures
- **System extensions**: Hardware-specific drivers and tools
- **Performance tuning**: CPU governors, IOMMU, memory management

## Common Administration Patterns

### Daily Operations

1. Check cluster health across all nodes
2. Monitor resource usage and capacity
3. Review system events and logs
4. Verify etcd health and backup status
5. Monitor discovery service connectivity

### Maintenance Windows

1. Plan the upgrade sequence (workers first, then control plane)
2. Create an etcd backup before major changes
3. Apply configuration changes with a dry run first (see the example below)
4. Monitor upgrade progress and be ready to roll back
5. Verify cluster functionality after changes
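For example, a conservative maintenance window built from the commands in this guide might look like:

```bash
# Back up etcd, then verify a config change before committing to it
talosctl -n <IP> etcd snapshot pre-change.snapshot
talosctl apply-config --nodes <IP> --file controlplane.yaml --dry-run
talosctl apply-config --nodes <IP> --file controlplane.yaml --mode staged
talosctl -n <IP1>,<IP2>,<IP3> health --control-plane-nodes <IP1>,<IP2>,<IP3>
```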
### Troubleshooting Workflow

1. **Gather information**: Health, version, resources, logs
2. **Check connectivity**: Network, discovery, API endpoints
3. **Examine services**: Status of critical services
4. **Review logs**: System events, service logs, kernel messages
5. **Apply fixes**: Configuration patches, service restarts, node resets

## Best Practices for Agents

### Configuration Management

- Use a reproducible configuration workflow (secrets + patches)
- Always dry-run configuration changes first
- Store machine configurations in version control
- Test configuration changes in non-production first

### Operational Safety

- Take etcd snapshots before major changes
- Upgrade one node at a time
- Monitor upgrade progress and have a rollback ready
- Test disaster recovery procedures regularly

### Performance Optimization

- Monitor etcd fragmentation and defragment when needed
- Scale vertically before horizontally for the control plane
- Use appropriate hardware for etcd (fast storage, low network latency)
- Monitor resource usage trends for capacity planning

This documentation provides the essential knowledge needed to effectively administer Talos Linux clusters, organized by operational context and complexity level.
ai/talos-v1.11/architecture-and-components.md
@@ -0,0 +1,248 @@
# Talos Architecture and Components Guide

This guide provides deep understanding of Talos Linux architecture and system components for effective cluster administration.

## Core Architecture Principles

Talos is designed to be:

- **Atomic**: Distributed as a single, versioned, signed, immutable image
- **Modular**: Composed of separate components with defined gRPC interfaces
- **Minimal**: Focused init system that runs only services necessary for Kubernetes

## File System Architecture

### Partition Layout

- **EFI**: Stores EFI boot data
- **BIOS**: Used for GRUB's second-stage boot
- **BOOT**: Contains boot loader, initramfs, and kernel data
- **META**: Stores node metadata (node IDs, etc.)
- **STATE**: Stores machine configuration, node identity, cluster discovery, KubeSpan data
- **EPHEMERAL**: Stores ephemeral state, mounted at `/var`

### Root File System Structure

Three-layer design:

1. **Base Layer**: Read-only squashfs mounted as a loop device (immutable base)
2. **Runtime Layer**: tmpfs filesystems for runtime needs (`/dev`, `/proc`, `/run`, `/sys`, `/tmp`, `/system`)
3. **Overlay Layer**: overlayfs for persistent data backed by XFS at `/var`

#### Special Directories

- `/system`: Internal files that need to be writable (recreated each boot)
  - Example: `/system/etc/hosts` bind-mounted over `/etc/hosts`
- `/var`: Owned by Kubernetes, contains persistent data:
  - etcd data (control plane nodes)
  - kubelet data
  - containerd data
  - Survives reboots and upgrades, wiped on reset
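The layering described above can be confirmed from a running node with the `read`/`list` commands used throughout this guide:

```bash
# squashfs base, tmpfs runtime mounts, and the XFS-backed /var should all be visible
talosctl -n <IP> read /proc/mounts | grep -E 'squashfs|overlay|tmpfs|xfs'
talosctl -n <IP> list /var
```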
## Core Components

### machined (PID 1)

**Role**: Talos replacement for the traditional init process

**Functions**:

- Machine configuration management
- API handling
- Resource and controller management
- Service lifecycle management

**Managed Services**:

- containerd
- etcd (control plane nodes)
- kubelet
- networkd
- trustd
- udevd

**Architecture**: Uses a controller-runtime pattern similar to Kubernetes controllers

### apid (API Gateway)

**Role**: gRPC API endpoint for all Talos interactions

**Functions**:

- Routes requests to appropriate components
- Provides proxy capabilities for multi-node operations
- Handles authentication and authorization

**Usage Patterns**:

```bash
# Direct node communication
talosctl -e <node-ip> <command>

# Proxy through endpoint to specific nodes
talosctl -e <endpoint> -n <target-nodes> <command>

# Multi-node operations
talosctl -e <endpoint> -n <node1>,<node2>,<node3> <command>
```

### trustd (Trust Management)

**Role**: Establishes and maintains trust within the system

**Functions**:

- Root of Trust implementation
- PKI data distribution for control plane bootstrap
- Certificate management
- Secure file placement operations

### containerd (Container Runtime)

**Role**: Industry-standard container runtime

**Namespaces**:

- `system`: Talos services
- `k8s.io`: Kubernetes services

### udevd (Device Management)

**Role**: Device file manager (eudev implementation)

**Functions**:

- Kernel device notification handling
- Device node management in `/dev`
- Hardware discovery and setup

## Control Plane Architecture

### etcd Cluster Design

**Critical Concepts**:

- **Quorum**: A majority of members must agree on a leader
- **Membership**: Formal etcd cluster membership required
- **Consensus**: Uses the Raft protocol for distributed consensus

**Quorum Requirements**:

- 3 nodes: Requires 2 for quorum (tolerates 1 failure)
- 5 nodes: Requires 3 for quorum (tolerates 2 failures)
- Even numbers are worse than odd (a 4-node cluster still tolerates only 1 failure)
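The arithmetic behind these numbers: quorum for n members is floor(n/2) + 1, and tolerated failures are n minus quorum. For example:

```bash
# Quorum and fault tolerance for common cluster sizes
for n in 1 3 4 5 7; do
  q=$(( n / 2 + 1 ))
  echo "members=$n quorum=$q tolerated_failures=$(( n - q ))"
done
```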
### Control Plane Components

**Running as Static Pods on Control Plane Nodes**:

#### kube-apiserver

- Kubernetes API endpoint
- Connects to the local etcd instance
- Handles all API operations

#### kube-controller-manager

- Runs control loops
- Manages cluster state reconciliation
- Handles node lifecycle, replication, etc.

#### kube-scheduler

- Pod placement decisions
- Resource-aware scheduling
- Constraint satisfaction

### Bootstrap Process

1. **etcd Bootstrap**: One node is chosen as the bootstrap node and initializes the etcd cluster
2. **Static Pods**: Control plane components start as static pods via kubelet
3. **API Availability**: The control plane endpoint becomes available
4. **Manifest Injection**: Bootstrap manifests (join tokens, RBAC, etc.) are injected
5. **Cluster Formation**: Other control plane nodes join the etcd cluster
6. **HA Control Plane**: All control plane nodes run the full component set

## Resource System Architecture

### Controller-Runtime Pattern

Talos uses a Kubernetes-style controller pattern:

- **Resources**: Typed configuration and state objects
- **Controllers**: Reconcile desired vs actual state
- **Events**: Reactive architecture for state changes

### Resource Namespaces

- `config`: Machine configuration resources
- `cluster`: Cluster membership and discovery
- `controlplane`: Control plane component configurations
- `secrets`: Certificate and key management
- `network`: Network configuration and state

### Key Resources

```bash
# Machine configuration
talosctl get machineconfig
talosctl get machinetype

# Cluster membership
talosctl get members
talosctl get affiliates
talosctl get identities

# Control plane
talosctl get apiserverconfig
talosctl get controllermanagerconfig
talosctl get schedulerconfig

# Network
talosctl get addresses
talosctl get routes
talosctl get nodeaddresses
```

## Network Architecture

### Network Stack

- **CNI**: Container Network Interface for pod networking
- **Host Networking**: Node-to-node communication
- **Service Discovery**: Built-in cluster member discovery
- **KubeSpan**: Optional WireGuard mesh networking

### Discovery Service Integration

- **Service Registry**: External discovery service (default: discovery.talos.dev)
- **Kubernetes Registry**: Deprecated; uses Kubernetes Node resources
- **Encrypted Communication**: All discovery data encrypted before transmission

## Security Architecture

### Immutable Base

- Read-only root filesystem
- Signed and verified boot process
- Atomic updates with rollback capability

### Process Isolation

- Minimal attack surface
- No shell access
- No arbitrary user services
- Container-based workload isolation

### Network Security

- Mutual TLS (mTLS) for all API communication
- Certificate-based node authentication
- Optional WireGuard mesh networking (KubeSpan)
- Encrypted service discovery

### Kernel Hardening

Configured according to Kernel Self Protection Project (KSPP) recommendations:

- Stack protection
- Control flow integrity
- Memory protection features
- Attack surface reduction

## Extension Points

### Machine Configuration

- Declarative configuration management
- Patch-based configuration updates
- Runtime configuration validation

### System Extensions

- Kernel modules
- System services (limited)
- Network configuration
- Storage configuration

### Kubernetes Integration

- Automatic kubelet configuration
- Bootstrap manifest management
- Certificate lifecycle management
- Node lifecycle automation

## Performance Characteristics

### etcd Performance

- Performance decreases as cluster size grows
- Network latency affects consensus performance
- Storage I/O directly impacts etcd performance

### Resource Requirements

- **Control Plane Nodes**: Higher memory for etcd, CPU for control plane components
- **Worker Nodes**: Resources scale with workload requirements
- **Network**: Low latency is crucial for etcd performance

### Scaling Patterns

- **Horizontal Scaling**: Add worker nodes for capacity
- **Vertical Scaling**: Increase control plane node resources for performance
- **Control Plane Scaling**: Odd numbers (3, 5) for availability

This architecture enables Talos to provide a secure, minimal, and operationally simple platform for running Kubernetes clusters while maintaining the reliability and performance characteristics needed for production workloads.
ai/talos-v1.11/bare-metal-administration.md
@@ -0,0 +1,506 @@
# Bare Metal Talos Administration Guide

This guide covers bare metal specific operations, configurations, and best practices for Talos Linux clusters.

## META-Based Network Configuration

Talos supports META-based network configuration for bare metal deployments where configuration is embedded in the disk image.

### Basic META Configuration

```yaml
# META configuration for bare metal networking
machine:
  network:
    interfaces:
      - interface: eth0
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
        mtu: 1500
    nameservers:
      - 8.8.8.8
      - 1.1.1.1
```

### Advanced Network Configurations

#### VLAN Configuration

```yaml
machine:
  network:
    interfaces:
      - interface: eth0.100 # VLAN 100
        vlan:
          parentDevice: eth0
          vid: 100
        addresses:
          - 192.168.100.10/24
        routes:
          - network: 192.168.100.0/24
```

#### Interface Bonding

```yaml
machine:
  network:
    interfaces:
      - interface: bond0
        bond:
          mode: 802.3ad
          lacpRate: fast
          xmitHashPolicy: layer3+4
          miimon: 100
          updelay: 200
          downdelay: 200
          interfaces:
            - eth0
            - eth1
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
```

#### Bridge Configuration

```yaml
machine:
  network:
    interfaces:
      - interface: br0
        bridge:
          stp:
            enabled: false
          interfaces:
            - eth0
            - eth1
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
```

### Network Troubleshooting Commands

```bash
# Check interface configuration
talosctl -n <IP> get addresses
talosctl -n <IP> get routes
talosctl -n <IP> get links

# Check network configuration
talosctl -n <IP> get networkconfig -o yaml

# Test network connectivity
talosctl -n <IP> list /sys/class/net
talosctl -n <IP> read /proc/net/dev
```

## Disk Encryption for Bare Metal

### LUKS2 Encryption Configuration

```yaml
machine:
  systemDiskEncryption:
    state:
      provider: luks2
      keys:
        - slot: 0
          static:
            passphrase: "your-secure-passphrase"
    ephemeral:
      provider: luks2
      keys:
        - slot: 0
          nodeID: {}
```

### TPM-Based Encryption

```yaml
machine:
  systemDiskEncryption:
    state:
      provider: luks2
      keys:
        - slot: 0
          tpm: {}
    ephemeral:
      provider: luks2
      keys:
        - slot: 0
          tpm: {}
```

### Key Management Operations

```bash
# Check encryption status
talosctl -n <IP> get encryptionconfig -o yaml

# Rotate encryption keys
talosctl -n <IP> apply-config --file updated-config.yaml --mode staged
```
## SecureBoot Implementation

### UKI (Unified Kernel Image) Setup

SecureBoot requires UKI format images with embedded signatures.

#### Generate SecureBoot Keys

```bash
# Generate platform key (PK)
talosctl gen secureboot uki --platform-key-path platform.key --platform-cert-path platform.crt

# Generate PCR signing key
talosctl gen secureboot pcr --pcr-key-path pcr.key --pcr-cert-path pcr.crt

# Generate database entries
talosctl gen secureboot database --enrolled-certificate platform.crt
```

#### Machine Configuration for SecureBoot

```yaml
machine:
  secureboot:
    enabled: true
    ukiPath: /boot/vmlinuz
  systemDiskEncryption:
    state:
      provider: luks2
      keys:
        - slot: 0
          tpm:
            pcrTargets:
              - 0
              - 1
              - 7
```

### UEFI Configuration

- Enable SecureBoot in UEFI firmware
- Enroll platform keys and certificates
- Configure TPM 2.0 for PCR measurements
- Set boot order for UKI images

## Hardware-Specific Configurations

### Performance Tuning for Bare Metal

#### CPU Governor Configuration

```yaml
machine:
  sysfs:
    "devices.system.cpu.cpu0.cpufreq.scaling_governor": "performance"
    "devices.system.cpu.cpu1.cpufreq.scaling_governor": "performance"
```

#### Hardware Vulnerability Mitigations

```yaml
machine:
  kernel:
    args:
      - mitigations=off # For maximum performance (less secure)
      # or
      # - mitigations=auto # Default balanced approach
```

#### IOMMU Configuration

```yaml
machine:
  kernel:
    args:
      - intel_iommu=on
      - iommu=pt
```

### Memory Management

```yaml
machine:
  kernel:
    args:
      - hugepages=1024 # Reserve 1024 hugepages (2 MB each by default)
      - transparent_hugepage=never
```
## Ingress Firewall for Bare Metal

### Basic Firewall Configuration

```yaml
machine:
  network:
    firewall:
      defaultAction: block
      rules:
        - name: allow-talos-api
          portSelector:
            ports:
              - 50000
              - 50001
          ingress:
            - subnet: 192.168.1.0/24
        - name: allow-kubernetes-api
          portSelector:
            ports:
              - 6443
          ingress:
            - subnet: 0.0.0.0/0
        - name: allow-etcd
          portSelector:
            ports:
              - 2379
              - 2380
          ingress:
            - subnet: 192.168.1.0/24
```

### Advanced Firewall Rules

```yaml
machine:
  network:
    firewall:
      defaultAction: block
      rules:
        - name: allow-ssh-management
          portSelector:
            ports:
              - 22
          ingress:
            - subnet: 10.0.1.0/24 # Management network only
        - name: allow-monitoring
          portSelector:
            ports:
              - 9100 # Node exporter
              - 10250 # kubelet metrics
          ingress:
            - subnet: 192.168.1.0/24
```

## System Extensions for Bare Metal

### Common Bare Metal Extensions

```yaml
machine:
  install:
    extensions:
      - image: ghcr.io/siderolabs/iscsi-tools:latest
      - image: ghcr.io/siderolabs/util-linux-tools:latest
      - image: ghcr.io/siderolabs/drbd:latest
```

### Storage Extensions

```yaml
machine:
  install:
    extensions:
      - image: ghcr.io/siderolabs/zfs:latest
      - image: ghcr.io/siderolabs/nut-client:latest
      - image: ghcr.io/siderolabs/smartmontools:latest
```

### Checking Extension Status

```bash
# List installed extensions
talosctl -n <IP> get extensions

# Check extension services
talosctl -n <IP> get extensionserviceconfigs
```

## Static Pod Configuration for Bare Metal

### Local Storage Static Pods

```yaml
machine:
  pods:
    - name: local-storage-provisioner
      namespace: kube-system
      image: rancher/local-path-provisioner:v0.0.24
      args:
        - --config-path=/etc/config/config.json
      env:
        - name: POD_NAMESPACE
          value: kube-system
      volumeMounts:
        - name: config
          mountPath: /etc/config
        - name: local-storage
          mountPath: /opt/local-path-provisioner
      volumes:
        - name: config
          hostPath:
            path: /etc/local-storage
        - name: local-storage
          hostPath:
            path: /var/lib/local-storage
```

### Hardware Monitoring Static Pods

```yaml
machine:
  pods:
    - name: node-exporter
      namespace: monitoring
      image: prom/node-exporter:latest
      args:
        - --path.rootfs=/host
        - --collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($|/)
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
        - name: rootfs
          mountPath: /host
          readOnly: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
```

## Bare Metal Boot Asset Management

### PXE Boot Configuration

For network booting, configure DHCP/TFTP with appropriate boot assets:

```bash
# Download kernel and initramfs for PXE
curl -LO https://github.com/siderolabs/talos/releases/download/v1.11.0/vmlinuz-amd64
curl -LO https://github.com/siderolabs/talos/releases/download/v1.11.0/initramfs-amd64.xz
```
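A minimal dnsmasq fragment serving these assets might look like the following. Interface, address range, and paths are illustrative, and a real setup also needs an iPXE script or boot menu pointing at the kernel and initramfs above:

```
# Illustrative dnsmasq DHCP/TFTP fragment for PXE-booting Talos
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,1h
enable-tftp
tftp-root=/srv/tftp        # place vmlinuz-amd64 and initramfs-amd64.xz here
dhcp-boot=ipxe.efi         # chainload iPXE, which then loads the Talos assets
```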
### USB Boot Asset Creation

```bash
# Write installer image to USB
sudo dd if=metal-amd64.iso of=/dev/sdX bs=4M status=progress
```

### Image Factory Integration

For custom bare metal images:

```bash
# Generate a schematic for bare metal with extensions
curl -X POST --data-binary @schematic.yaml \
  https://factory.talos.dev/schematics

# Download custom installer
curl -LO https://factory.talos.dev/image/<schematic-id>/v1.11.0/metal-amd64.iso
```
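A minimal `schematic.yaml` requesting official extensions might look like this; the extension names are examples, and the full schema should be verified against factory.talos.dev:

```yaml
# Example schematic.yaml for the Image Factory
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools
      - siderolabs/util-linux-tools
```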
## Hardware Compatibility and Drivers

### Check Hardware Support

```bash
# Check PCI devices
talosctl -n <IP> read /proc/bus/pci/devices

# Check USB devices
talosctl -n <IP> read /proc/bus/usb/devices

# Check loaded kernel modules
talosctl -n <IP> read /proc/modules

# Check hardware information
talosctl -n <IP> read /proc/cpuinfo
talosctl -n <IP> read /proc/meminfo
```

### Common Hardware Issues

#### Network Interface Issues

```bash
# Check interface status
talosctl -n <IP> list /sys/class/net/

# Check driver information
talosctl -n <IP> read /sys/class/net/eth0/device/driver

# Check firmware loading
talosctl -n <IP> dmesg | grep firmware
```

#### Storage Controller Issues

```bash
# Check block devices
talosctl -n <IP> disks

# Check SMART status (if smartmontools extension installed)
talosctl -n <IP> list /dev/disk/by-id/
```

## Bare Metal Monitoring and Maintenance

### Hardware Health Monitoring

```bash
# Check system temperatures (if available)
talosctl -n <IP> read /sys/class/thermal/thermal_zone0/temp

# Check power supply status
talosctl -n <IP> read /sys/class/power_supply/*/status

# Monitor system events for hardware issues
talosctl -n <IP> dmesg | grep -i error
talosctl -n <IP> dmesg | grep -i "machine check"
```

### Performance Monitoring

```bash
# Check CPU performance
talosctl -n <IP> read /proc/cpuinfo | grep MHz
talosctl -n <IP> cgroups --preset cpu

# Check memory performance
talosctl -n <IP> memory
talosctl -n <IP> cgroups --preset memory

# Check I/O performance
talosctl -n <IP> read /proc/diskstats
```

## Security Hardening for Bare Metal

### BIOS/UEFI Security

- Enable SecureBoot
- Disable unused boot devices
- Set administrator passwords
- Enable TPM 2.0
- Disable legacy boot modes

### Physical Security

- Secure physical access to servers
- Use chassis intrusion detection
- Implement network port security
- Consider hardware-based attestation

### Network Security

```yaml
machine:
  network:
    firewall:
      defaultAction: block
      rules:
        # Only allow necessary services
        - name: allow-cluster-traffic
          portSelector:
            ports:
              - 6443  # Kubernetes API
              - 2379  # etcd client
              - 2380  # etcd peer
              - 10250 # kubelet API
              - 50000 # Talos API
          ingress:
            - subnet: 192.168.1.0/24
```

This bare metal guide provides comprehensive coverage of hardware-specific configurations, performance optimization, security hardening, and operational practices for Talos Linux on physical servers.
ai/talos-v1.11/cli-essentials.md
@@ -0,0 +1,382 @@
|
||||
# Talosctl CLI Essentials
|
||||
|
||||
This guide covers essential talosctl commands and usage patterns for effective Talos cluster administration.
|
||||
|
||||
## Command Structure and Context
|
||||
|
||||
### Basic Command Pattern
|
||||
```bash
|
||||
talosctl [global-flags] <command> [command-flags] [arguments]
|
||||
|
||||
# Examples:
|
||||
talosctl -n <IP> get members
|
||||
talosctl --nodes <IP1>,<IP2> service kubelet
|
||||
talosctl -e <endpoint> -n <target-nodes> upgrade --image <image>
|
||||
```
|
||||
|
||||
### Global Flags
|
||||
- `-e, --endpoints`: API endpoints to connect to
|
||||
- `-n, --nodes`: Target nodes for commands (defaults to first endpoint if omitted)
|
||||
- `--talosconfig`: Path to Talos configuration file
|
||||
- `--context`: Configuration context to use
|
||||
|
||||
### Configuration Management
|
||||
```bash
|
||||
# Use specific config file
|
||||
export TALOSCONFIG=/path/to/talosconfig
|
||||
|
||||
# List available contexts
|
||||
talosctl config contexts
|
||||
|
||||
# Switch context
|
||||
talosctl config context <context-name>
|
||||
|
||||
# View current config
|
||||
talosctl config info
|
||||
```
|
||||
|
||||
## Cluster Management Commands
|
||||
|
||||
### Bootstrap and Node Management
|
||||
```bash
|
||||
# Bootstrap etcd cluster on first control plane node
|
||||
talosctl bootstrap --nodes <first-controlplane-ip>
|
||||
|
||||
# Apply machine configuration
|
||||
talosctl apply-config --nodes <IP> --file <config.yaml>
|
||||
talosctl apply-config --nodes <IP> --file <config.yaml> --mode reboot
|
||||
talosctl apply-config --nodes <IP> --file <config.yaml> --dry-run
|
||||
|
||||
# Reset node (wipe and reboot)
|
||||
talosctl reset --nodes <IP>
|
||||
talosctl reset --nodes <IP> --graceful=false --reboot
|
||||
|
||||
# Reboot node
|
||||
talosctl reboot --nodes <IP>
|
||||
|
||||
# Shutdown node
|
||||
talosctl shutdown --nodes <IP>
|
||||
```
|
||||
|
||||
### Configuration Patching
|
||||
```bash
|
||||
# Patch machine configuration
|
||||
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/logging/destinations/0/endpoint", "value": "tcp://new-endpoint:514"}]'
|
||||
|
||||
# Patch with file
|
||||
talosctl -n <IP> patch mc --patch @patch.yaml --mode reboot
|
||||
|
||||
# Edit machine config interactively
|
||||
talosctl -n <IP> edit mc --mode staged
|
||||
```
|
||||
|
||||
## System Information and Monitoring
|
||||
|
||||
### Node Status and Health
|
||||
```bash
|
||||
# Cluster member information
|
||||
talosctl get members
|
||||
talosctl get affiliates
|
||||
talosctl get identities
|
||||
|
||||
# Node health check
|
||||
talosctl -n <IP> health
|
||||
talosctl -n <IP1>,<IP2>,<IP3> health --control-plane-nodes <IP1>,<IP2>,<IP3>
|
||||
|
||||
# System information
|
||||
talosctl -n <IP> version
|
||||
talosctl -n <IP> get machineconfig
|
||||
talosctl -n <IP> get machinetype
|
||||
```
|
||||
|
||||
### Resource Monitoring
|
||||
```bash
|
||||
# CPU and memory usage
|
||||
talosctl -n <IP> cpu
|
||||
talosctl -n <IP> memory
|
||||
|
||||
# Disk usage and information
|
||||
talosctl -n <IP> disks
|
||||
talosctl -n <IP> df
|
||||
|
||||
# Network interfaces
|
||||
talosctl -n <IP> interfaces
|
||||
talosctl -n <IP> get addresses
|
||||
talosctl -n <IP> get routes
|
||||
|
||||
# Process information
|
||||
talosctl -n <IP> processes
|
||||
talosctl -n <IP> cgroups --preset memory
|
||||
talosctl -n <IP> cgroups --preset cpu
|
||||
```
|
||||
|
||||
### Service Management
|
||||
```bash
|
||||
# List all services
|
||||
talosctl -n <IP> services
|
||||
|
||||
# Check specific service status
|
||||
talosctl -n <IP> service kubelet
|
||||
talosctl -n <IP> service containerd
|
||||
talosctl -n <IP> service etcd
|
||||
|
||||
# Restart service
|
||||
talosctl -n <IP> service kubelet restart
|
||||
|
||||
# Start/stop service
|
||||
talosctl -n <IP> service <service-name> start
|
||||
talosctl -n <IP> service <service-name> stop
|
||||
```
|
||||
|
||||
## Logging and Diagnostics
|
||||
|
||||
### Log Retrieval
|
||||
```bash
|
||||
# Kernel logs
|
||||
talosctl -n <IP> dmesg
|
||||
talosctl -n <IP> dmesg -f # Follow mode
|
||||
talosctl -n <IP> dmesg --tail=100
|
||||
|
||||
# Service logs
|
||||
talosctl -n <IP> logs kubelet
|
||||
talosctl -n <IP> logs containerd
|
||||
talosctl -n <IP> logs etcd
|
||||
talosctl -n <IP> logs machined
|
||||
|
||||
# Follow logs
|
||||
talosctl -n <IP> logs kubelet -f
|
||||
```
|
||||
|
||||
### System Events
|
||||
```bash
|
||||
# Monitor system events
|
||||
talosctl -n <IP> events
|
||||
talosctl -n <IP> events --tail
|
||||
|
||||
# Filter events
|
||||
talosctl -n <IP> events --since=1h
|
||||
talosctl -n <IP> events --grep=error
|
||||
```
|
||||
|
||||
## File System and Container Operations
|
||||
|
||||
### File Operations
|
||||
```bash
|
||||
# List files/directories
|
||||
talosctl -n <IP> list /var/log
|
||||
talosctl -n <IP> list /etc/kubernetes
|
||||
|
||||
# Copy files to/from node
|
||||
talosctl -n <IP> copy /local/file /remote/path
|
||||
talosctl -n <IP> cp /var/log/containers/app.log ./app.log
|
||||
|
||||
# Read file contents
|
||||
talosctl -n <IP> read /etc/resolv.conf
|
||||
talosctl -n <IP> cat /var/log/audit/audit.log
|
||||
```
|
||||
|
||||
### Container Operations
|
||||
```bash
|
||||
# List containers
|
||||
talosctl -n <IP> containers
|
||||
talosctl -n <IP> containers -k # Kubernetes containers
|
||||
|
||||
# Container logs
|
||||
talosctl -n <IP> logs --kubernetes <container-name>
|
||||
|
||||
# Execute in container
|
||||
talosctl -n <IP> exec --kubernetes <pod-name> -- <command>
|
||||
```
|
||||
|
||||
## Kubernetes Integration
|
||||
|
||||
### Kubernetes Cluster Operations
|
||||
```bash
|
||||
# Get kubeconfig
|
||||
talosctl kubeconfig
|
||||
talosctl kubeconfig --nodes <controlplane-ip>
|
||||
talosctl kubeconfig --force --nodes <controlplane-ip>
|
||||
|
||||
# Bootstrap manifests
|
||||
talosctl -n <IP> get manifests
|
||||
talosctl -n <IP> get manifests -o yaml | yq eval-all '.spec | .[] | splitDoc' - > manifests.yaml
|
||||
|
||||
# Upgrade Kubernetes
|
||||
talosctl --nodes <controlplane> upgrade-k8s --to v1.34.1
|
||||
talosctl --nodes <controlplane> upgrade-k8s --to v1.34.1 --dry-run
|
||||
```

### Resource Inspection

```bash
# Control plane component configs
talosctl -n <IP> get apiserverconfig -o yaml
talosctl -n <IP> get controllermanagerconfig -o yaml
talosctl -n <IP> get schedulerconfig -o yaml

# etcd configuration
talosctl -n <IP> get etcdconfig -o yaml
```

## etcd Management

### etcd Operations

```bash
# etcd cluster status
talosctl -n <IP1>,<IP2>,<IP3> etcd status

# etcd members
talosctl -n <IP> etcd members

# etcd snapshots
talosctl -n <IP> etcd snapshot db.snapshot

# etcd maintenance
talosctl -n <IP> etcd defrag
talosctl -n <IP> etcd alarm list
talosctl -n <IP> etcd alarm disarm

# Leadership management
talosctl -n <IP> etcd forfeit-leadership
```

### Disaster Recovery

```bash
# Bootstrap from snapshot
talosctl -n <IP> bootstrap --recover-from=./db.snapshot
talosctl -n <IP> bootstrap --recover-from=./db.snapshot --recover-skip-hash-check
```

## Upgrade and Maintenance

### OS Upgrades

```bash
# Upgrade Talos OS
talosctl upgrade --nodes <IP> --image ghcr.io/siderolabs/installer:v1.11.x
talosctl upgrade --nodes <IP> --image ghcr.io/siderolabs/installer:v1.11.x --stage

# Monitor upgrade progress
talosctl upgrade --nodes <IP> --image <image> --wait
talosctl upgrade --nodes <IP> --image <image> --wait --debug

# Rollback
talosctl rollback --nodes <IP>
```

## Resource System Commands

### Resource Management

```bash
# List resource types
talosctl get rd

# Get specific resources
talosctl get <resource-type>
talosctl get <resource-type> -o yaml
talosctl get <resource-type> --namespace=<namespace>

# Watch resources
talosctl get <resource-type> --watch

# Common resource types
talosctl get machineconfig
talosctl get members
talosctl get services
talosctl get networkconfig
talosctl get secrets
```

## Local Development

### Local Cluster Management

```bash
# Create local cluster
talosctl cluster create
talosctl cluster create --controlplanes 3 --workers 2

# Destroy local cluster
talosctl cluster destroy

# Show local cluster status
talosctl cluster show
```

## Advanced Usage Patterns

### Multi-Node Operations

```bash
# Run command on multiple nodes
talosctl -e <endpoint> -n <node1>,<node2>,<node3> <command>

# Different endpoint and target nodes
talosctl -e <public-endpoint> -n <internal-node1>,<internal-node2> <command>
```

### Output Formatting

```bash
# JSON output
talosctl -n <IP> get members -o json

# YAML output
talosctl -n <IP> get machineconfig -o yaml

# Table output (default)
talosctl -n <IP> get members -o table

# Custom column output
talosctl -n <IP> get members -o columns=HOSTNAME,MACHINE\ TYPE,OS
```

### Filtering and Selection

```bash
# Filter resources
talosctl get members --search <hostname>
talosctl get services --search kubelet

# Namespace filtering
talosctl get secrets --namespace=secrets
talosctl get affiliates --namespace=cluster-raw
```

## Common Command Workflows

### Initial Cluster Setup

```bash
# 1. Generate configurations
talosctl gen config cluster-name https://cluster-endpoint:6443

# 2. Apply to nodes
talosctl apply-config --nodes <controlplane-1> --file controlplane.yaml
talosctl apply-config --nodes <worker-1> --file worker.yaml

# 3. Bootstrap cluster
talosctl bootstrap --nodes <controlplane-1>

# 4. Get kubeconfig
talosctl kubeconfig --nodes <controlplane-1>
```

### Cluster Health Check

```bash
# Check all aspects of cluster health
talosctl -n <IP1>,<IP2>,<IP3> health --control-plane-nodes <IP1>,<IP2>,<IP3>
talosctl -n <IP1>,<IP2>,<IP3> etcd status
talosctl -n <IP1>,<IP2>,<IP3> service kubelet
kubectl get nodes
kubectl get pods --all-namespaces
```

### Node Troubleshooting

```bash
# System diagnostics
talosctl -n <IP> dmesg | tail -100
talosctl -n <IP> services | grep -v Running
talosctl -n <IP> logs kubelet | tail -50
talosctl -n <IP> events --since=1h

# Resource usage
talosctl -n <IP> memory
talosctl -n <IP> df
talosctl -n <IP> processes | head -20
```

This CLI reference provides the essential commands and patterns needed for day-to-day Talos cluster administration and troubleshooting.

239
ai/talos-v1.11/cluster-operations.md
Normal file
@@ -0,0 +1,239 @@

# Talos Cluster Operations Guide

This guide covers essential cluster operations for Talos Linux v1.11 administrators.

## Upgrading Operations

### Talos OS Upgrades

Talos uses an A-B image scheme for rollbacks. Each upgrade retains the previous kernel and OS image.

#### Upgrade Process
```bash
# Upgrade a single node
talosctl upgrade --nodes <IP> --image ghcr.io/siderolabs/installer:v1.11.x

# Use --stage flag if upgrade fails due to open files
talosctl upgrade --nodes <IP> --image ghcr.io/siderolabs/installer:v1.11.x --stage

# Monitor upgrade progress
talosctl dmesg -f
talosctl upgrade --wait --debug
```

#### Upgrade Sequence
1. Node cordons itself in Kubernetes
2. Node drains existing workloads
3. Internal processes shut down
4. Filesystems unmount
5. Disk verification and image upgrade
6. Bootloader set to boot once with new image
7. Node reboots
8. Node rejoins cluster and uncordons (this cycle can be watched from outside; see the sketch below)
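
A minimal way to watch this sequence from the outside, assuming kubeconfig and talosctl access from an admin machine (`<IP>` is a placeholder):

```bash
# Watch the node cordon (SchedulingDisabled), go NotReady during the
# reboot, and return Ready/uncordoned once the upgrade completes
kubectl get nodes -w

# In a second terminal, follow the node's own view of the upgrade
talosctl -n <IP> dmesg -f
```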

#### Rollback
```bash
talosctl rollback --nodes <IP>
```

### Kubernetes Upgrades

Kubernetes upgrades are separate from OS upgrades and are non-disruptive.

#### Automated Upgrade (Recommended)
```bash
# Check what will be upgraded
talosctl --nodes <controlplane> upgrade-k8s --to v1.34.1 --dry-run

# Perform upgrade
talosctl --nodes <controlplane> upgrade-k8s --to v1.34.1
```

#### Manual Component Upgrades
For manual control, patch each component individually:

**API Server:**
```bash
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "registry.k8s.io/kube-apiserver:v1.34.1"}]'
```

**Controller Manager:**
```bash
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/controllerManager/image", "value": "registry.k8s.io/kube-controller-manager:v1.34.1"}]'
```

**Scheduler:**
```bash
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/scheduler/image", "value": "registry.k8s.io/kube-scheduler:v1.34.1"}]'
```

**Kubelet:**
```bash
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.34.1"}]'
```

## Node Management

### Adding Control Plane Nodes
1. Apply machine configuration to new node (see the sketch below)
2. Node automatically joins etcd cluster via control plane endpoint
3. Control plane components start automatically
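
A minimal sketch of this flow, assuming `controlplane.yaml` was generated for this cluster and the new node is booted into maintenance mode:

```bash
# Apply the existing control plane config to the new node
talosctl apply-config --insecure --nodes <new-IP> --file controlplane.yaml

# Verify the node joined etcd and its control plane components started
talosctl -n <existing-IP> etcd members
talosctl -n <new-IP> service etcd
```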

### Removing Control Plane Nodes
```bash
# Recommended approach - reset then delete
talosctl -n <IP.of.node.to.remove> reset
kubectl delete node <node-name>
```

### Adding Worker Nodes
1. Apply worker machine configuration (see the sketch below)
2. Node automatically joins via bootstrap token
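
A minimal sketch, assuming `worker.yaml` from the same cluster generation and a node booted into maintenance mode:

```bash
talosctl apply-config --insecure --nodes <worker-IP> --file worker.yaml

# Confirm the node registers with Kubernetes
kubectl get nodes -w
```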

### Removing Worker Nodes
```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>
talosctl -n <IP> reset
```

## Configuration Management

### Applying Configuration Changes
```bash
# Apply config with automatic mode detection
talosctl apply-config --nodes <IP> --file <config.yaml>

# Apply with specific modes
talosctl apply-config --nodes <IP> --file <config.yaml> --mode no-reboot
talosctl apply-config --nodes <IP> --file <config.yaml> --mode reboot
talosctl apply-config --nodes <IP> --file <config.yaml> --mode staged

# Dry run to preview changes
talosctl apply-config --nodes <IP> --file <config.yaml> --dry-run
```

### Configuration Patching
```bash
# Patch machine configuration
talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/logging/destinations/0/endpoint", "value": "tcp://new-endpoint:514"}]'

# Patch with file
talosctl -n <IP> patch mc --patch @patch.yaml
```

### Retrieving Current Configuration
```bash
# Get machine configuration
talosctl -n <IP> get mc v1alpha1 -o yaml

# Get effective configuration
talosctl -n <IP> get machineconfig -o yaml
```

## Cluster Health Monitoring

### Node Status
```bash
# Check node status
talosctl -n <IP> get members
talosctl -n <IP> health

# Check system services
talosctl -n <IP> services
talosctl -n <IP> service <service-name>
```

### Resource Monitoring
```bash
# System resources
talosctl -n <IP> memory
talosctl -n <IP> cpu
talosctl -n <IP> disks

# Process information
talosctl -n <IP> processes
talosctl -n <IP> cgroups --preset memory
```

### Log Monitoring
```bash
# Kernel logs
talosctl -n <IP> dmesg
talosctl -n <IP> dmesg -f # Follow mode

# Service logs
talosctl -n <IP> logs <service-name>
talosctl -n <IP> logs kubelet
```

## Control Plane Best Practices

### Cluster Sizing Recommendations
- **3 nodes**: Sufficient for most use cases, tolerates 1 node failure
- **5 nodes**: Better availability (tolerates 2 node failures), higher resource cost
- **Avoid even numbers**: 2 or 4 nodes tolerate no more failures than 1 or 3 nodes, so even counts only add failure surface

### Node Replacement Strategy
- **Failed node**: Remove first, then add replacement
- **Healthy node**: Add replacement first, then remove old node (see the sketch below)
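
A sketch of the healthy-node order, assuming a spare machine and the cluster's existing `controlplane.yaml` (IPs and names are placeholders):

```bash
# Add the replacement first, per the strategy above
talosctl apply-config --insecure --nodes <new-IP> --file controlplane.yaml
talosctl -n <existing-IP> etcd members   # wait for the new member to appear

# Then retire the old node
talosctl -n <old-IP> reset
kubectl delete node <old-node-name>
```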

### Performance Considerations
- etcd write performance decreases as more members are added
- A 5-node cluster commits roughly 5% fewer writes than a 3-node cluster
- Scale nodes vertically for performance; don't add more members

## Machine Configuration Versioning

### Reproducible Configuration Workflow
Store only:
- `secrets.yaml` (generated once at cluster creation)
- Patch files (YAML/JSON patches describing differences from defaults)

Generate configs when needed:
```bash
# Generate fresh configs with existing secrets
talosctl gen config <cluster-name> <cluster-endpoint> --with-secrets secrets.yaml

# Apply patches to generated configs
talosctl gen config <cluster-name> <cluster-endpoint> --with-secrets secrets.yaml --config-patch @patch.yaml
```

This prevents configuration drift after automated upgrades.

## Troubleshooting Common Issues

### Upgrade Failures
- **Invalid installer image**: Check image reference and network connectivity
- **Filesystem unmount failure**: Use `--stage` flag
- **Boot failure**: System automatically rolls back to previous version
- **Workload issues**: Use `talosctl rollback` to revert

### Node Join Issues
- Verify network connectivity to control plane endpoint
- Check discovery service configuration
- Validate machine configuration syntax
- Ensure bootstrap process completed on initial control plane node

### Control Plane Quorum Loss
- Identify healthy nodes with `talosctl etcd status`
- Follow disaster recovery procedures if quorum cannot be restored
- Use etcd snapshots for cluster recovery

## Security Considerations

### Certificate Rotation
Talos automatically rotates certificates, but monitor expiration:
```bash
talosctl -n <IP> get secrets
```

### Pod Security
Control plane nodes are tainted by default to prevent workload scheduling. This protects:
- Control plane from resource starvation
- Credentials from workload exposure

### Network Security
- All API communication uses mutual TLS (mTLS)
- Discovery service data is encrypted before transmission
- WireGuard (KubeSpan) provides mesh networking security

344
ai/talos-v1.11/discovery-and-networking.md
Normal file
@@ -0,0 +1,344 @@

# Discovery and Networking Guide

This guide covers Talos cluster discovery mechanisms, network configuration, and connectivity troubleshooting.

## Cluster Discovery System

Talos includes built-in node discovery that allows cluster members to find each other and maintain membership information.

### Discovery Registries

#### Service Registry (Default)
- **External Service**: Uses public discovery service at `https://discovery.talos.dev/`
- **Encryption**: All data encrypted with AES-GCM before transmission
- **Functionality**: Works without dependency on etcd/Kubernetes
- **Advantages**: Available even when control plane is down

#### Kubernetes Registry (Deprecated)
- **Data Source**: Uses Kubernetes Node resources and annotations
- **Limitation**: Incompatible with Kubernetes 1.32+ due to AuthorizeNodeWithSelectors
- **Status**: Disabled by default, deprecated

### Discovery Configuration
```yaml
cluster:
  discovery:
    enabled: true
    registries:
      service:
        disabled: false # Default
      kubernetes:
        disabled: true # Deprecated, disabled by default
```

**To disable service registry**:
```yaml
cluster:
  discovery:
    enabled: true
    registries:
      service:
        disabled: true
```

## Discovery Data Flow

### Service Registry Process
1. **Data Encryption**: Node encrypts affiliate data with cluster key
2. **Endpoint Encryption**: Endpoints separately encrypted for deduplication
3. **Data Submission**: Node submits own data + observed peer endpoints
4. **Server Processing**: Discovery service aggregates and deduplicates data
5. **Data Distribution**: Encrypted updates sent to all cluster members
6. **Local Processing**: Nodes decrypt data for cluster discovery and KubeSpan

### Data Protection
- **Cluster Isolation**: Cluster ID used as key selector
- **End-to-End Encryption**: Discovery service cannot decrypt affiliate data
- **Memory-Only Storage**: Data stored in memory with encrypted snapshots
- **No Sensitive Exposure**: Service only sees encrypted blobs and cluster metadata

## Discovery Resources

### Node Identity
```bash
# View node's unique identity
talosctl get identities -o yaml
```

**Output**:
```yaml
spec:
  nodeId: Utoh3O0ZneV0kT2IUBrh7TgdouRcUW2yzaaMl4VXnCd
```

**Identity Characteristics**:
- Base62 encoded random 32 bytes
- URL-safe encoding
- Preserved in STATE partition (`node-identity.yaml`)
- Survives reboots and upgrades
- Regenerated on reset/wipe

### Affiliates (Proposed Members)
```bash
# View discovered affiliates (proposed cluster members)
talosctl get affiliates
```

**Output**:
```
ID                                            VERSION   HOSTNAME                       MACHINE TYPE   ADDRESSES
2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF   2         talos-default-controlplane-2   controlplane   ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
```

### Members (Approved Members)
```bash
# View cluster members
talosctl get members
```

**Output**:
```
ID                             VERSION   HOSTNAME                       MACHINE TYPE   OS                ADDRESSES
talos-default-controlplane-1   2         talos-default-controlplane-1   controlplane   Talos (v1.11.0)   ["172.20.0.2","fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94"]
```

### Raw Registry Data
```bash
# View data from specific registries
talosctl get affiliates --namespace=cluster-raw
```

**Output shows registry sources**:
```
ID                                                    VERSION   HOSTNAME
k8s/2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF       3         talos-default-controlplane-2
service/2VfX3nu67ZtZPl57IdJrU87BMjVWkSBJiL9ulP9TCnF   23        talos-default-controlplane-2
```

## Network Architecture

### Network Layers

#### Host Networking
- **Node-to-Node**: Direct IP connectivity between cluster nodes
- **Control Plane**: API server communication via control plane endpoint
- **Discovery**: HTTPS connection to discovery service (port 443)

#### Container Networking
- **CNI**: Container Network Interface for pod networking
- **Service Mesh**: Optional service mesh implementations
- **Network Policies**: Kubernetes network policy enforcement

#### Optional: KubeSpan (WireGuard Mesh)
- **Mesh Networking**: Full mesh WireGuard connections
- **Discovery Integration**: Uses discovery service for peer coordination
- **Encryption**: WireGuard public keys distributed via discovery
- **Use Cases**: Multi-cloud, hybrid, NAT traversal

### Network Configuration Patterns

#### Basic Network Setup
```yaml
machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
```

#### Static IP Configuration
```yaml
machine:
  network:
    interfaces:
      - interface: eth0
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
        mtu: 1500
    nameservers:
      - 8.8.8.8
      - 1.1.1.1
```

#### Multiple Interface Configuration
```yaml
machine:
  network:
    interfaces:
      - interface: eth0 # Management interface
        dhcp: true
      - interface: eth1 # Kubernetes traffic
        addresses:
          - 10.0.1.100/24
        routes:
          - network: 10.0.0.0/16
            gateway: 10.0.1.1
```

## KubeSpan Configuration

### Basic KubeSpan Setup
```yaml
machine:
  network:
    kubespan:
      enabled: true
```

### Advanced KubeSpan Configuration
```yaml
machine:
  network:
    kubespan:
      enabled: true
      advertiseKubernetesNetworks: true
      allowDownPeerBypass: true
      mtu: 1420 # Account for WireGuard overhead
      filters:
        endpoints:
          - 0.0.0.0/0 # Allow all endpoints
```

**KubeSpan Features**:
- Automatic peer discovery via discovery service
- NAT traversal capabilities
- Encrypted mesh networking
- Kubernetes network advertisement
- Fault tolerance with peer bypass

## Network Troubleshooting

### Discovery Issues

#### Check Discovery Service Connectivity
```bash
# Test connectivity to discovery service
talosctl get affiliates

# Check discovery configuration
talosctl get discoveryconfig -o yaml

# Monitor discovery events
talosctl events --tail
```

#### Common Discovery Problems
1. **No Affiliates Discovered**:
   - Check discovery service connectivity
   - Verify cluster ID matches across nodes
   - Confirm discovery is enabled

2. **Partial Affiliate List**:
   - Network connectivity issues between nodes
   - Discovery service regional availability
   - Firewall blocking discovery traffic

3. **Discovery Service Unreachable**:
   - Network connectivity to discovery.talos.dev:443
   - Corporate firewall/proxy configuration
   - DNS resolution issues

### Network Connectivity Testing

#### Basic Network Tests
```bash
# Test network interfaces
talosctl get addresses
talosctl get routes
talosctl get nodeaddresses

# Check network configuration
talosctl get networkconfig -o yaml

# Test connectivity
talosctl -n <IP> ping <target-ip>
```

#### Inter-Node Connectivity
```bash
# Test control plane endpoint
talosctl health --control-plane-nodes <IP1>,<IP2>,<IP3>

# Check etcd connectivity
talosctl -n <IP> etcd members

# Test Kubernetes API
kubectl get nodes
```

#### KubeSpan Troubleshooting
```bash
# Check KubeSpan status
talosctl get kubespanpeerspecs
talosctl get kubespanpeerstatuses

# Monitor WireGuard connections
talosctl -n <IP> interfaces

# Check KubeSpan logs
talosctl -n <IP> logs controller-runtime | grep kubespan
```

### Network Performance Optimization

#### Network Interface Tuning
```yaml
machine:
  network:
    interfaces:
      - interface: eth0
        mtu: 9000 # Jumbo frames if supported
        dhcp: true
```

#### KubeSpan Performance
- Adjust MTU for WireGuard overhead (typically -80 bytes)
- Consider endpoint filters for large clusters
- Monitor WireGuard peer connection stability

## Security Considerations

### Discovery Security
- **Encrypted Communication**: All discovery data encrypted end-to-end
- **Cluster Isolation**: Cluster ID prevents cross-cluster data access
- **No Sensitive Data**: Only encrypted metadata transmitted
- **Network Security**: HTTPS transport with certificate validation

### Network Security
- **mTLS**: All Talos API communication uses mutual TLS
- **Certificate Rotation**: Automatic certificate lifecycle management
- **Network Policies**: Implement Kubernetes network policies for workloads
- **Firewall Rules**: Restrict network access to necessary ports only

### Required Network Ports
- **6443**: Kubernetes API server
- **2379-2380**: etcd client/peer communication
- **10250**: kubelet API
- **50000**: Talos API (apid)
- **443**: Discovery service (outbound)
- **51820**: KubeSpan WireGuard (if enabled; see the reachability sketch below)
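
A quick reachability spot-check from an admin machine, assuming `nc` (netcat) and `curl` are available; addresses are placeholders:

```bash
nc -zv <node-IP> 50000                               # Talos API (apid)
nc -zv <controlplane-IP> 6443                        # Kubernetes API server
curl -sv https://discovery.talos.dev/ -o /dev/null   # outbound discovery service
```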

## Operational Best Practices

### Monitoring
- Monitor discovery service connectivity
- Track cluster member changes
- Alert on network partitions
- Monitor KubeSpan peer status

### Backup and Recovery
- Document network configuration
- Backup discovery service configuration
- Test network recovery procedures
- Plan for discovery service outages

### Scaling Considerations
- Discovery service scales to thousands of nodes
- KubeSpan mesh scales to hundreds of nodes efficiently
- Consider network segmentation for large clusters
- Plan for multi-region deployments

This networking foundation enables Talos clusters to maintain connectivity and membership across various network topologies while providing security and performance optimization options.

287
ai/talos-v1.11/etcd-management.md
Normal file
@@ -0,0 +1,287 @@

# etcd Management and Disaster Recovery Guide

This guide covers etcd database operations, maintenance, and disaster recovery procedures for Talos Linux clusters.

## etcd Health Monitoring

### Basic Health Checks
```bash
# Check etcd status across all control plane nodes
talosctl -n <IP1>,<IP2>,<IP3> etcd status

# Check etcd alarms
talosctl -n <IP> etcd alarm list

# Check etcd members
talosctl -n <IP> etcd members

# Check service status
talosctl -n <IP> service etcd
```

### Understanding etcd Status Output
```
NODE         MEMBER             DB SIZE   IN USE            LEADER             RAFT INDEX   RAFT TERM   RAFT APPLIED INDEX   LEARNER   ERRORS
172.20.0.2   a49c021e76e707db   17 MB     4.5 MB (26.10%)   ecebb05b59a776f1   53391        4           53391                false
```

**Key Metrics**:
- **DB SIZE**: Total database size on disk
- **IN USE**: Actual data size (fragmentation = DB SIZE - IN USE)
- **LEADER**: Current etcd cluster leader
- **RAFT INDEX**: Consensus log position
- **LEARNER**: Whether node is still joining cluster

## Space Quota Management

### Default Configuration
- Default space quota: 2 GiB
- Recommended maximum: 8 GiB
- Database locks when quota exceeded

### Quota Exceeded Handling
**Symptoms**:
```bash
talosctl -n <IP> etcd alarm list
# Output: ALARM: NOSPACE
```

**Resolution**:
1. Increase quota in machine configuration:
```yaml
cluster:
  etcd:
    extraArgs:
      quota-backend-bytes: 4294967296 # 4 GiB
```

2. Apply configuration and reboot:
```bash
talosctl -n <IP> apply-config --file updated-config.yaml --mode reboot
```

3. Clear the alarm:
```bash
talosctl -n <IP> etcd alarm disarm
```

## Database Defragmentation

### When to Defragment
- In use/DB size ratio < 0.5 (heavily fragmented)
- Database size exceeds quota but actual data is small
- Performance degradation due to fragmentation

### Defragmentation Process
```bash
# Check fragmentation status
talosctl -n <IP1>,<IP2>,<IP3> etcd status

# Defragment single node (resource-intensive operation)
talosctl -n <IP1> etcd defrag

# Verify defragmentation results
talosctl -n <IP1> etcd status
```

**Important Notes**:
- Defragment one node at a time
- Operation blocks reads/writes during execution
- Can significantly improve performance if heavily fragmented

### Post-Defragmentation Verification
After successful defragmentation, DB size should closely match IN USE size:
```
NODE         MEMBER             DB SIZE   IN USE
172.20.0.2   a49c021e76e707db   4.5 MB    4.5 MB (100.00%)
```

## Backup Operations

### Regular Snapshots
```bash
# Create consistent snapshot
talosctl -n <IP> etcd snapshot db.snapshot
```

**Output Example**:
```
etcd snapshot saved to "db.snapshot" (2015264 bytes)
snapshot info: hash c25fd181, revision 4193, total keys 1287, total size 3035136
```

### Disaster Snapshots
When the etcd cluster is unhealthy and a normal snapshot fails:
```bash
# Copy database directly (may be inconsistent)
talosctl -n <IP> cp /var/lib/etcd/member/snap/db .
```

### Automated Backup Strategy
- Schedule regular snapshots (daily/hourly based on change frequency)
- Store snapshots in multiple locations
- Test restore procedures regularly
- Document recovery procedures (a minimal snapshot-rotation sketch follows)
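
A minimal cron-able sketch of such a strategy, assuming it runs on a machine with talosctl configured; the path and retention count are illustrative:

```bash
#!/usr/bin/env bash
set -euo pipefail

SNAP_DIR=/var/backups/etcd
mkdir -p "$SNAP_DIR"

# Take a timestamped snapshot
talosctl -n <IP> etcd snapshot "$SNAP_DIR/etcd-$(date +%Y%m%d-%H%M%S).snapshot"

# Keep only the 14 most recent snapshots
ls -1t "$SNAP_DIR"/etcd-*.snapshot | tail -n +15 | xargs -r rm --
```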

## Disaster Recovery

### Pre-Recovery Assessment
**Check if Recovery is Necessary**:
```bash
# Query etcd health on all control plane nodes
talosctl -n <IP1>,<IP2>,<IP3> service etcd

# Check member list consistency
talosctl -n <IP1> etcd members
talosctl -n <IP2> etcd members
talosctl -n <IP3> etcd members
```

**Recovery is needed when**:
- Quorum is lost (majority of nodes down)
- etcd data corruption
- Complete cluster failure

### Recovery Prerequisites
1. **Latest etcd snapshot** (preferably consistent)
2. **Machine configuration backup**:
```bash
talosctl -n <IP> get mc v1alpha1 -o yaml | yq eval '.spec' -
```
3. **No init-type nodes** (deprecated, incompatible with recovery)

### Recovery Procedure

#### Step 1: Prepare Control Plane Nodes
```bash
# If nodes have hardware issues, replace them with same configuration
# If nodes are running but etcd is corrupted, wipe EPHEMERAL partition:
talosctl -n <IP> reset --graceful=false --reboot --system-labels-to-wipe=EPHEMERAL
```

#### Step 2: Verify etcd State
All etcd services should be in "Preparing" state:
```bash
talosctl -n <IP> service etcd
# Expected: STATE: Preparing
```

#### Step 3: Bootstrap from Snapshot
```bash
# Bootstrap cluster from snapshot
talosctl -n <IP> bootstrap --recover-from=./db.snapshot

# For direct database copies, skip hash check:
talosctl -n <IP> bootstrap --recover-from=./db --recover-skip-hash-check
```

#### Step 4: Verify Recovery
**Monitor kernel logs** for recovery progress:
```bash
talosctl -n <IP> dmesg -f
```

**Expected log entries**:
```
recovering etcd from snapshot: hash c25fd181, revision 4193, total keys 1287, total size 3035136
{"level":"info","msg":"restored snapshot","path":"/var/lib/etcd.snapshot"}
```

**Verify cluster health**:
```bash
# etcd should become healthy on bootstrap node
talosctl -n <IP> service etcd

# Kubernetes control plane should start
kubectl get nodes

# Other control plane nodes should join automatically
talosctl -n <IP1>,<IP2>,<IP3> etcd status
```

## etcd Version Management

### Downgrade Process (v3.6 to v3.5)
**Prerequisites**:
- Healthy cluster running v3.6.x
- Recent backup snapshot
- Downgrade only one minor version at a time

#### Step 1: Validate Downgrade
```bash
talosctl -n <IP1> etcd downgrade validate 3.5
```

#### Step 2: Enable Downgrade
```bash
talosctl -n <IP1> etcd downgrade enable 3.5
```

#### Step 3: Verify Schema Migration
```bash
# Check storage version migrated to 3.5
talosctl -n <IP1>,<IP2>,<IP3> etcd status
# Verify STORAGE column shows 3.5.0
```

#### Step 4: Patch Machine Configuration
```bash
# Transfer leadership if node is leader
talosctl -n <IP1> etcd forfeit-leadership

# Create patch file
cat > etcd-patch.yaml <<EOF
cluster:
  etcd:
    image: gcr.io/etcd-development/etcd:v3.5.22
EOF

# Apply patch with reboot
talosctl -n <IP1> patch machineconfig --patch @etcd-patch.yaml --mode reboot
```

#### Step 5: Repeat for All Control Plane Nodes
Continue patching remaining control plane nodes one by one.

## Operational Best Practices

### Monitoring
- Monitor database size and fragmentation regularly
- Set up alerts for space quota approaching limits
- Track etcd performance metrics (request latency, leader changes)
- Monitor disk I/O and network latency

### Maintenance Windows
- Schedule defragmentation during low-traffic periods
- Coordinate with application teams for maintenance windows
- Test backup/restore procedures in non-production environments

### Performance Optimization
- Use fast storage (NVMe SSDs preferred)
- Minimize network latency between control plane nodes
- Monitor and tune etcd configuration based on workload

### Security
- Encrypt etcd data at rest
- Secure backup storage with appropriate access controls
- Regularly rotate certificates
- Monitor for unauthorized access attempts

## Troubleshooting Common Issues

### Split Brain Prevention
- Ensure odd number of control plane nodes
- Monitor network connectivity between nodes
- Use dedicated network for control plane communication when possible

### Performance Issues
- Check disk I/O latency
- Monitor memory usage
- Consider vertical scaling before adding nodes
- Review etcd request patterns and optimize applications

### Backup/Restore Issues
- Test restore procedures regularly
- Verify backup integrity
- Ensure consistent network and storage configuration
- Document and practice disaster recovery procedures

480
ai/talos-v1.11/troubleshooting-guide.md
Normal file
@@ -0,0 +1,480 @@

# Talos Troubleshooting Guide

This guide provides systematic approaches to diagnosing and resolving common Talos cluster issues.

## General Troubleshooting Methodology

### 1. Gather Information
```bash
# Node status and health
talosctl -n <IP> health
talosctl -n <IP> version
talosctl -n <IP> get members

# System resources
talosctl -n <IP> memory
talosctl -n <IP> disks
talosctl -n <IP> processes | head -20

# Service status
talosctl -n <IP> services
```

### 2. Check Logs
```bash
# Kernel logs (system-level issues)
talosctl -n <IP> dmesg | tail -100

# Service logs
talosctl -n <IP> logs machined
talosctl -n <IP> logs kubelet
talosctl -n <IP> logs containerd

# System events
talosctl -n <IP> events --since=1h
```

### 3. Network Connectivity
```bash
# Discovery and membership
talosctl get affiliates
talosctl get members

# Network interfaces
talosctl -n <IP> interfaces
talosctl -n <IP> get addresses

# Control plane connectivity
kubectl get nodes
talosctl -n <IP1>,<IP2>,<IP3> etcd status
```

## Bootstrap and Initial Setup Issues

### Cluster Bootstrap Failures

**Symptoms**: Bootstrap command fails or times out
**Diagnosis**:
```bash
# Check etcd service state
talosctl -n <IP> service etcd

# Check if node is trying to join instead of bootstrap
talosctl -n <IP> logs etcd | grep -i bootstrap

# Verify machine configuration
talosctl -n <IP> get machineconfig -o yaml
```

**Common Causes & Solutions**:
1. **Wrong node type**: Ensure using `controlplane`, not deprecated `init`
2. **Network issues**: Verify control plane endpoint connectivity
3. **Configuration errors**: Check machine configuration validity
4. **Previous bootstrap**: etcd data exists from previous attempts

**Resolution**:
```bash
# Reset node if previous bootstrap data exists
talosctl -n <IP> reset --graceful=false --reboot --system-labels-to-wipe=EPHEMERAL

# Re-apply configuration and bootstrap
talosctl apply-config --nodes <IP> --file controlplane.yaml
talosctl bootstrap --nodes <IP>
```

### Node Join Issues

**Symptoms**: New nodes don't join cluster
**Diagnosis**:
```bash
# Check discovery
talosctl get affiliates
talosctl get members

# Check bootstrap token
kubectl get secrets -n kube-system | grep bootstrap-token

# Check kubelet logs
talosctl -n <IP> logs kubelet | grep -i certificate
```

**Common Solutions**:
```bash
# Regenerate bootstrap token if expired
kubeadm token create --print-join-command

# Verify discovery service connectivity
talosctl -n <IP> get affiliates --namespace=cluster-raw

# Check machine configuration matches cluster
talosctl -n <IP> get machineconfig -o yaml
```

## Control Plane Issues

### etcd Problems

**etcd Won't Start**:
```bash
# Check etcd service status and logs
talosctl -n <IP> service etcd
talosctl -n <IP> logs etcd

# Check etcd data directory
talosctl -n <IP> list /var/lib/etcd

# Check disk space and permissions
talosctl -n <IP> df
```

**etcd Quorum Loss**:
```bash
# Check member status
talosctl -n <IP1>,<IP2>,<IP3> etcd status
talosctl -n <IP> etcd members

# Identify healthy members
for ip in IP1 IP2 IP3; do
  echo "=== Node $ip ==="
  talosctl -n $ip service etcd
done
```

**Solution for Quorum Loss**:
1. If majority available: Remove failed members, add replacements (see the sketch below)
2. If majority lost: Follow disaster recovery procedure
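
For the majority-available case, a hedged sketch (member and node names are placeholders; this assumes `talosctl etcd remove-member` accepts the failed member's hostname):

```bash
# Remove the failed member from etcd via a healthy node
talosctl -n <healthy-IP> etcd remove-member <failed-hostname>
kubectl delete node <failed-node-name>

# Bring up a replacement with the cluster's controlplane.yaml
talosctl apply-config --insecure --nodes <new-IP> --file controlplane.yaml
talosctl -n <healthy-IP> etcd members   # confirm membership is back to full strength
```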

### API Server Issues

**API Server Not Responding**:
```bash
# Check API server pod status
kubectl get pods -n kube-system | grep apiserver

# Check API server configuration
talosctl -n <IP> get apiserverconfig -o yaml

# Check control plane endpoint
curl -k https://<control-plane-endpoint>:6443/healthz
```

**Common Solutions**:
```bash
# Restart kubelet to reload static pods
talosctl -n <IP> service kubelet restart

# Check for configuration issues
talosctl -n <IP> logs kubelet | grep apiserver

# Verify etcd connectivity
talosctl -n <IP> etcd status
```

## Node-Level Issues

### Kubelet Problems

**Kubelet Service Issues**:
```bash
# Check kubelet status and logs
talosctl -n <IP> service kubelet
talosctl -n <IP> logs kubelet | tail -50

# Check kubelet configuration
talosctl -n <IP> get kubeletconfig -o yaml

# Check container runtime
talosctl -n <IP> service containerd
```

**Common Kubelet Issues**:
1. **Certificate problems**: Check certificate expiration and rotation
2. **Container runtime issues**: Verify containerd health
3. **Resource constraints**: Check memory and disk space
4. **Network connectivity**: Verify API server connectivity

### Container Runtime Issues

**Containerd Problems**:
```bash
# Check containerd service
talosctl -n <IP> service containerd
talosctl -n <IP> logs containerd

# List containers
talosctl -n <IP> containers
talosctl -n <IP> containers -k # Kubernetes containers

# Check containerd configuration
talosctl -n <IP> read /etc/cri/conf.d/cri.toml
```

**Common Solutions**:
```bash
# Restart containerd
talosctl -n <IP> service containerd restart

# Check disk space for container images
talosctl -n <IP> df

# Clean up unused containers/images
# (This happens automatically via kubelet GC)
```

## Network Issues

### Network Connectivity Problems

**Node-to-Node Connectivity**:
```bash
# Test basic network connectivity
talosctl -n <IP1> interfaces
talosctl -n <IP1> get routes

# Test specific connectivity
talosctl -n <IP1> read /etc/resolv.conf

# Check network configuration
talosctl -n <IP> get networkconfig -o yaml
```

**DNS Resolution Issues**:
```bash
# Check DNS configuration
talosctl -n <IP> read /etc/resolv.conf

# Test in-cluster DNS resolution from a throwaway pod
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local
```

### Discovery Service Issues

**Discovery Not Working**:
```bash
# Check discovery configuration
talosctl get discoveryconfig -o yaml

# Check affiliate discovery
talosctl get affiliates
talosctl get affiliates --namespace=cluster-raw

# Test discovery service connectivity
curl -v https://discovery.talos.dev/
```

**KubeSpan Issues** (if enabled):
```bash
# Check KubeSpan configuration
talosctl get kubespanconfig -o yaml

# Check peer status
talosctl get kubespanpeerspecs
talosctl get kubespanpeerstatuses

# Check WireGuard interface
talosctl -n <IP> interfaces | grep kubespan
```

## Upgrade Issues

### OS Upgrade Problems

**Upgrade Fails or Hangs**:
```bash
# Check upgrade status
talosctl -n <IP> dmesg | grep -i upgrade
talosctl -n <IP> events | grep -i upgrade

# Use staged upgrade for filesystem lock issues
talosctl upgrade --nodes <IP> --image <image> --stage

# Monitor upgrade progress
talosctl upgrade --nodes <IP> --image <image> --wait --debug
```

**Boot Issues After Upgrade**:
```bash
# Check boot logs
talosctl -n <IP> dmesg | head -100

# System automatically rolls back on boot failure
# Check current version
talosctl -n <IP> version

# Manual rollback if needed
talosctl rollback --nodes <IP>
```

### Kubernetes Upgrade Issues

**K8s Upgrade Failures**:
```bash
# Check upgrade status
talosctl --nodes <controlplane> upgrade-k8s --to <version> --dry-run

# Check individual component status
kubectl get pods -n kube-system
talosctl -n <IP> get apiserverconfig -o yaml
```

**Version Mismatch Issues**:
```bash
# Check version consistency
kubectl get nodes -o wide
talosctl -n <IP1>,<IP2>,<IP3> version

# Check component versions
kubectl get pods -n kube-system -o wide
```

## Resource and Performance Issues

### Memory and Storage Problems

**Out of Memory**:
```bash
# Check memory usage
talosctl -n <IP> memory
talosctl -n <IP> processes --sort-by=memory | head -20

# Check for memory pressure
kubectl describe node <node-name> | grep -A 10 Conditions

# Check OOM events
talosctl -n <IP> dmesg | grep -i "out of memory"
```

**Disk Space Issues**:
```bash
# Check disk usage
talosctl -n <IP> df
talosctl -n <IP> disks

# Check specific directories
talosctl -n <IP> list /var/lib/containerd
talosctl -n <IP> list /var/lib/etcd

# Clean up if needed (automatic GC usually handles this)
kubectl describe node <node-name> | grep -A 5 "Disk Pressure"
```

### Performance Issues

**Slow Cluster Response**:
```bash
# Check API server response time
time kubectl get nodes

# Check etcd performance
talosctl -n <IP> etcd status
# Look for high DB size vs IN USE ratio (fragmentation)

# Check system load
talosctl -n <IP> cpu
talosctl -n <IP> memory
```

**High CPU/Memory Usage**:
```bash
# Identify resource-heavy processes
talosctl -n <IP> processes --sort-by=cpu | head -10
talosctl -n <IP> processes --sort-by=memory | head -10

# Check cgroup usage
talosctl -n <IP> cgroups --preset memory
talosctl -n <IP> cgroups --preset cpu
```

## Configuration Issues

### Machine Configuration Problems

**Invalid Configuration**:
```bash
# Validate configuration before applying
talosctl validate -f machineconfig.yaml

# Check current configuration
talosctl -n <IP> get machineconfig -o yaml

# Compare with expected configuration
diff <(talosctl -n <IP> get mc v1alpha1 -o yaml) expected-config.yaml
```

**Configuration Drift**:
```bash
# Check configuration version
talosctl -n <IP> get machineconfig

# Re-apply configuration if needed
talosctl apply-config --nodes <IP> --file corrected-config.yaml --dry-run
talosctl apply-config --nodes <IP> --file corrected-config.yaml
```

## Emergency Procedures

### Node Unresponsive

**Complete Node Failure**:
1. **Physical access required**: Power cycle or hardware reset
2. **Check hardware**: Memory, disk, network interface status
3. **Boot issues**: May require bootable recovery media

**Partial Connectivity**:
```bash
# Try different network interfaces if multiple available
talosctl -e <alternate-ip> -n <IP> health

# Check if specific services are running
talosctl -n <IP> service machined
talosctl -n <IP> service apid
```

### Cluster-Wide Failures

**All Control Plane Nodes Down**:
1. **Assess scope**: Determine if data corruption or hardware failure
2. **Recovery strategy**: Use etcd backup if available
3. **Rebuild process**: May require complete cluster rebuild

**Follow disaster recovery procedures** as documented in etcd-management.md.

### Emergency Reset Procedures

**Single Node Reset**:
```bash
# Graceful reset (preserves some data)
talosctl -n <IP> reset

# Force reset (wipes all data)
talosctl -n <IP> reset --graceful=false --reboot

# Selective wipe (preserve STATE partition)
talosctl -n <IP> reset --system-labels-to-wipe=EPHEMERAL
```

**Cluster Reset** (DESTRUCTIVE):
```bash
# Reset all nodes (DANGER: DATA LOSS)
for ip in IP1 IP2 IP3; do
  talosctl -n $ip reset --graceful=false --reboot
done
```

## Monitoring and Alerting

### Key Metrics to Monitor
- Node resource usage (CPU, memory, disk)
- etcd health and performance
- Control plane component status
- Network connectivity
- Certificate expiration
- Discovery service connectivity (a minimal polling sketch follows)
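
A minimal polling sketch that strings together checks from this guide, assuming talosctl and kubectl access; IPs are placeholders:

```bash
#!/usr/bin/env bash
# Flag any node that fails its health check
for ip in <IP1> <IP2> <IP3>; do
  talosctl -n "$ip" health >/dev/null 2>&1 || echo "health check failed: $ip"
done

# One-line summaries suitable for piping into an alerting system
talosctl -n <IP1>,<IP2>,<IP3> etcd status
kubectl get nodes --no-headers | grep -v ' Ready' || true
```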

### Log Locations for External Monitoring
- Kernel logs: `talosctl dmesg`
- Service logs: `talosctl logs <service>`
- System events: `talosctl events`
- Kubernetes events: `kubectl get events`

This troubleshooting guide provides systematic approaches to identify and resolve the most common issues encountered in Talos cluster operations.

188
ai/wildcloud-v.PoC/README.md
Normal file
@@ -0,0 +1,188 @@

# Wild Cloud Agent Context Documentation

This directory contains comprehensive documentation about the Wild Cloud project, designed to provide AI agents (like Claude Code) with the context needed to effectively help users with Wild Cloud development, deployment, and operations.

## Documentation Overview

### 📚 Core Documentation Files

1. **[overview.md](./overview.md)** - Complete project introduction and getting started guide
   - What Wild Cloud is and why it exists
   - Technology stack and architecture overview
   - Quick start guide and common use cases
   - Best practices and troubleshooting

2. **[bin-scripts.md](./bin-scripts.md)** - Complete CLI reference
   - All 34+ `wild-*` commands with usage examples
   - Command categories (setup, apps, config, operations)
   - Script dependencies and execution order
   - Common usage patterns

3. **[setup-process.md](./setup-process.md)** - Infrastructure deployment deep dive
   - Complete setup phases and dependencies
   - Talos Linux and Kubernetes cluster deployment
   - Core services installation (MetalLB, Traefik, cert-manager, etc.)
   - Network configuration and DNS management

4. **[apps-system.md](./apps-system.md)** - Application management system
   - App structure and lifecycle management
   - Template system and configuration
   - Available applications and their features
   - Creating custom applications

5. **[configuration-system.md](./configuration-system.md)** - Configuration and secrets management
   - `config.yaml` and `secrets.yaml` structure
   - Template processing with gomplate
   - Environment setup and validation
   - Security best practices

6. **[project-architecture.md](./project-architecture.md)** - Project structure and organization
   - Wild Cloud repository structure
   - User cloud directory layout
   - File permissions and security model
   - Development and deployment patterns

## Quick Reference Guide

### Essential Commands
```bash
# Setup & Initialization
wild-init                   # Initialize new cloud
wild-setup                  # Complete deployment
wild-health                 # System health check

# Application Management
wild-apps-list              # List available apps
wild-app-add <app>          # Configure app
wild-app-deploy <app>       # Deploy app

# Configuration
wild-config <key>           # Read config
wild-config-set <key> <val> # Set config
wild-secret <key>           # Read secret
```

### Key File Locations

**Wild Cloud Repository** (`WC_ROOT`):
- `bin/` - All CLI commands
- `apps/` - Application templates
- `setup/` - Infrastructure templates
- `docs/` - Documentation

**User Cloud Directory** (`WC_HOME`):
- `config.yaml` - Main configuration
- `secrets.yaml` - Sensitive data
- `apps/` - Deployed app configs
- `.wildcloud/` - Project marker

### Application Categories

- **Content**: Ghost (blog), Discourse (forum)
- **Media**: Immich (photos)
- **Development**: Gitea (Git), Docker Registry
- **Databases**: PostgreSQL, MySQL, Redis
- **AI/ML**: vLLM (LLM inference)

## Technology Stack Summary

### Core Infrastructure
- **Talos Linux** - Immutable Kubernetes OS
- **Kubernetes** - Container orchestration
- **MetalLB** - Load balancing
- **Traefik** - Ingress/reverse proxy
- **Longhorn** - Distributed storage
- **cert-manager** - TLS certificates

### Management Tools
- **gomplate** - Template processing
- **Kustomize** - Configuration management
- **restic** - Backup system
- **kubectl/talosctl** - Cluster management

## Common Agent Tasks

### When Users Ask About...

**"How do I deploy X?"**
- Check apps-system.md for application management
- Look for X in available applications list
- Reference app deployment lifecycle

**"Setup isn't working"**
- Review setup-process.md for troubleshooting
- Check bin-scripts.md for command options
- Verify prerequisites and dependencies

**"How do I configure Y?"**
- Check configuration-system.md for config management
- Look at project-architecture.md for file locations
- Review template processing documentation

**"What does wild-X command do?"**
- Reference bin-scripts.md for complete command documentation
- Check command categories and usage patterns
- Look at dependencies between commands

### Development Tasks

**Creating New Apps**:
1. Review apps-system.md "Creating Custom Apps" section
2. Follow Wild Cloud app structure conventions
3. Use project-architecture.md for file organization
4. Test with standard app deployment workflow

**Modifying Infrastructure**:
1. Check setup-process.md for infrastructure components
2. Review configuration-system.md for template processing
3. Understand project-architecture.md file relationships
4. Test changes carefully in development environment

**Troubleshooting Issues**:
1. Use bin-scripts.md for diagnostic commands
2. Check setup-process.md for component validation
3. Review configuration-system.md for config problems
4. Reference apps-system.md for application issues

## Best Practices for Agents

### Understanding User Context
- Always check if they're in a Wild Cloud directory (look for `.wildcloud/`)
- Determine if they need setup help vs operational help
- Consider their experience level (beginner vs advanced)
- Check what applications they're trying to deploy

### Providing Help
- Reference specific documentation sections for detailed info
- Provide exact command syntax from bin-scripts.md
- Explain prerequisites and dependencies
- Offer validation steps to verify success

### Safety Considerations
- Always recommend testing in development first
- Warn about destructive operations (delete, reset)
- Emphasize backup importance before major changes
- Explain security implications of configuration changes

### Common Gotchas
- `secrets.yaml` has restricted permissions (600)
- Templates need processing before deployment
- Dependencies between applications must be satisfied
- Node hardware detection requires maintenance mode boot

## Documentation Maintenance

This documentation should be updated when:
- New commands are added to `bin/`
- New applications are added to `apps/`
- Infrastructure components change
- Configuration schema evolves
- Best practices are updated

Each documentation file includes:
- Complete coverage of its topic area
- Practical examples and use cases
- Troubleshooting guidance
- References to related documentation

This comprehensive context should enable AI agents to provide expert-level assistance with Wild Cloud projects across all aspects of the system.

595
ai/wildcloud-v.PoC/apps-system.md
Normal file
@@ -0,0 +1,595 @@
|
||||
# Wild Cloud Apps System
|
||||
|
||||
The Wild Cloud apps system provides a streamlined way to deploy and manage applications on your Kubernetes cluster. It uses Kustomize for configuration management and follows a standardized structure for consistent deployment patterns.
|
||||
|
||||
## App Structure and Components
|
||||
|
||||
### Directory Structure
|
||||
Each subdirectory represents a Wild Cloud app. Each app directory contains:
|
||||
|
||||
**Required Files:**
|
||||
- `manifest.yaml` - App metadata and configuration
|
||||
- `kustomization.yaml` - Kustomize configuration with Wild Cloud labels
|
||||
|
||||
**Standard Configuration Files (one or more YAML files containing Kubernetes resource definitions):**
|
||||
```
|
||||
apps/myapp/
|
||||
├── manifest.yaml # Required: App metadata and configuration
|
||||
├── kustomization.yaml # Required: Kustomize configuration with Wild Cloud labels
|
||||
├── namespace.yaml # Kubernetes namespace definition
|
||||
├── deployment.yaml # Application deployment
|
||||
├── service.yaml # Kubernetes service definition
|
||||
├── ingress.yaml # HTTPS ingress with external DNS
|
||||
├── pvc.yaml # Persistent volume claims (if needed)
|
||||
├── db-init-job.yaml # Database initialization (if needed)
|
||||
└── configmap.yaml # Configuration data (if needed)
|
||||
```
|
||||
|
||||
### App Manifest (`manifest.yaml`)
|
||||
|
||||
The required `manifest.yaml` file contains metadata about the app. Here's an example `manifest.yaml` file:
|
||||
|
||||
```yaml
|
||||
name: myapp
|
||||
description: A brief description of the application and its purpose.
|
||||
version: 1.0.0
|
||||
icon: https://example.com/icon.png
|
||||
requires:
|
||||
- name: postgres
|
||||
defaultConfig:
|
||||
image: myapp/server:1.0.0
|
||||
domain: myapp.{{ .cloud.domain }}
|
||||
timezone: UTC
|
||||
storage: 10Gi
|
||||
dbHostname: postgres.postgres.svc.cluster.local
|
||||
dbUsername: myapp
|
||||
requiredSecrets:
|
||||
- apps.myapp.dbPassword
|
||||
- apps.postgres.password
|
||||
```
|
||||
|
||||
**Manifest Fields**:
|
||||
- `name` - The name of the app, used for identification (must match directory name)
|
||||
- `description` - A brief description of the app
|
||||
- `version` - The version of the app (should generally follow the versioning scheme of the app itself)
|
||||
- `icon` - A URL to an icon representing the app
|
||||
- `requires` - A list of other apps that this app depends on (each entry should be the name of another app)
|
||||
- `defaultConfig` - A set of default configuration values for the app (when an app is added using `wild-app-add`, these values will be added to the Wild Cloud `config.yaml` file)
|
||||
- `requiredSecrets` - A list of secrets that must be set in the Wild Cloud `secrets.yaml` file for the app to function properly (these secrets are typically sensitive information like database passwords or API keys; keys with random values will be generated automatically when the app is added)
|
||||
|
||||

### Kustomization Configuration

Wild Cloud apps use standard Kustomize with required Wild Cloud labels:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp
labels:
  - includeSelectors: true
    pairs:
      app: myapp
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - pvc.yaml
  - db-init-job.yaml
```

**Kustomization Requirements**:

- Every Wild Cloud kustomization should include the Wild Cloud labels in its `kustomization.yaml` file (this allows Wild Cloud to identify and manage the app correctly)
- The `app` label and `namespace` should match the app's name/directory
- **includeSelectors: true** - Automatically applies labels to all resources AND their selectors

#### Standard Wild Cloud Labels

Wild Cloud uses a consistent labeling strategy across all apps:

```yaml
labels:
  - includeSelectors: true
    pairs:
      app: myapp            # The app name (matches directory)
      managedBy: kustomize  # Managed by Kustomize
      partOf: wild-cloud    # Part of Wild Cloud ecosystem
```

The `includeSelectors: true` setting automatically applies these labels to all resources AND their selectors, which means:

1. **Resource labels** - All resources get the standard Wild Cloud labels
2. **Selector labels** - All selectors automatically include these labels for robust selection

This allows individual resources to use simple, component-specific selectors:

```yaml
selector:
  matchLabels:
    component: web
```

Which Kustomize automatically expands to:

```yaml
selector:
  matchLabels:
    app: myapp
    component: web
    managedBy: kustomize
    partOf: wild-cloud
```

### Template System

Wild Cloud apps are actually **templates** that get compiled with your specific configuration when you run `wild-app-add`. This allows for:

- **Dynamic Configuration** - Reference user settings via `{{ .apps.appname.key }}`
- **Gomplate Processing** - Full template capabilities including conditionals and loops
- **Secret Integration** - Automatic secret generation and referencing
- **Domain Management** - Automatic subdomain assignment based on your domain

**Template Variable Examples**:

```yaml
# Configuration references
image: "{{ .apps.myapp.image }}"
domain: "{{ .apps.myapp.domain }}"
namespace: "{{ .apps.myapp.namespace }}"

# Cloud-wide settings
timezone: "{{ .cloud.timezone }}"
domain_suffix: "{{ .cloud.domain }}"

# Conditional logic
{{- if .apps.myapp.enableSSL }}
- name: ENABLE_SSL
  value: "true"
{{- end }}
```

## App Lifecycle Management

### 1. Discovery Phase

**Command**: `wild-apps-list`

Lists all available applications with metadata:

```bash
wild-apps-list --verbose   # Detailed view with descriptions
wild-apps-list --json      # JSON output for automation
```

Shows:

- App name and description
- Version and dependencies
- Installation status
- Required configuration

### 2. Configuration Phase

**Command**: `wild-app-add <app-name>`

Processes app templates and prepares for deployment:

**What it does**:

1. Reads app manifest directly from Wild Cloud repository
2. Merges default configuration with existing `config.yaml`
3. Generates required secrets automatically
4. Compiles templates with gomplate using your configuration
5. Creates ready-to-deploy Kustomize files in `apps/<app-name>/`

**Generated Files**:

- Compiled Kubernetes manifests (no more template variables)
- Standard Kustomize configuration
- App-specific configuration merged into your `config.yaml`
- Required secrets added to your `secrets.yaml`
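
For example, a first-time configuration run looks like this (`myapp` is the illustrative app name used throughout; commands and paths as documented above):

```bash
wild-app-add myapp      # compile templates into your project
ls apps/myapp/          # ready-to-deploy manifests, no template variables left
wild-config apps.myapp  # defaults now merged into config.yaml
```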

### 3. Deployment Phase

**Command**: `wild-app-deploy <app-name>`

Deploys the app to your Kubernetes cluster:

**Deployment Process**:

1. Creates namespace if it doesn't exist
2. Handles app dependencies (deploys required apps first)
3. Creates secrets from your `secrets.yaml`
4. Applies Kustomize configuration to cluster
5. Copies TLS certificates to app namespace
6. Validates deployment success

**Options**:

- `--force` - Overwrite existing resources
- `--dry-run` - Preview changes without applying
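
A cautious deploy previews before applying (flags as documented above):

```bash
wild-app-deploy myapp --dry-run   # preview the resources that would be applied
wild-app-deploy myapp             # create namespace, secrets, and resources
```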

### 4. Operations Phase

**Monitoring**: `wild-app-doctor <app-name>`

- Runs app-specific diagnostic tests
- Checks pod status, resource usage, connectivity
- Options: `--keep`, `--follow`, `--timeout`

**Updates**: Re-run `wild-app-add` then `wild-app-deploy`

- Use `--force` flag to overwrite existing configuration
- Updates configuration changes
- Handles image updates
- Preserves persistent data

**Removal**: `wild-app-delete <app-name>`

- Deletes namespace and all resources
- Removes local configuration files
- Options: `--force` for no confirmation
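
Putting these together, a routine update-and-verify cycle might look like the following sketch (documented commands and flags; `myapp` is illustrative):

```bash
wild-app-doctor myapp        # diagnose the running app
wild-app-add myapp --force   # recompile templates with updated configuration
wild-app-deploy myapp        # roll out the change; persistent data is preserved
```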

## Configuration System

### Configuration Storage

**Global Configuration** (`config.yaml`):

```yaml
cloud:
  domain: example.com
  timezone: America/New_York
apps:
  myapp:
    domain: app.example.com
    image: myapp:1.0.0
    storage: 20Gi
    timezone: UTC
```

**Secrets Management** (`secrets.yaml`):

```yaml
apps:
  myapp:
    dbPassword: "randomly-generated-password"
    adminPassword: "user-set-password"
  postgres:
    password: "randomly-generated-password"
```

### Secret Generation

When you run `wild-app-add`, required secrets are automatically generated:

- **Random Generation**: 32-character base64 strings for passwords/keys
- **User Prompts**: For secrets that need specific values
- **Preservation**: Existing secrets are never overwritten
- **Permissions**: `secrets.yaml` has 600 permissions (owner-only)

### Configuration Commands

```bash
# Read app configuration
wild-config apps.myapp.domain

# Set app configuration
wild-config-set apps.myapp.storage "50Gi"

# Read app secrets
wild-secret apps.myapp.dbPassword

# Set app secrets
wild-secret-set apps.myapp.adminPassword "my-secure-password"
```

## Networking and DNS

### External DNS Integration

Wild Cloud apps automatically manage DNS records through ingress annotations:

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
```

**How it works**:

1. App ingress created with external-dns annotations
2. ExternalDNS controller detects new ingress
3. Creates CNAME record: `app.yourdomain.com` → `yourdomain.com`
4. DNS resolves to MetalLB load balancer IP
5. Traefik routes traffic to appropriate service
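
Each hop can be verified from a workstation; a quick sketch, with `app.yourdomain.com` standing in for a real record:

```bash
kubectl get ingress -n myapp    # the ingress carrying the external-dns annotations
dig +short app.yourdomain.com   # should follow the CNAME to the load balancer IP
```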

### HTTPS Certificate Management

Automatic TLS certificates via cert-manager:

```yaml
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
spec:
  tls:
    - hosts:
        - {{ .apps.myapp.domain }}
      secretName: myapp-tls
```

**Certificate Lifecycle**:

1. Ingress created with TLS configuration
2. cert-manager detects certificate requirement
3. Let's Encrypt challenge initiated automatically
4. Certificate issued and stored in Kubernetes secret
5. Traefik uses certificate for TLS termination
6. Automatic renewal before expiration
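
cert-manager surfaces these steps as Kubernetes resources, so issuance can be checked directly; a sketch that assumes the certificate resource follows the `myapp-tls` secret name from the example above:

```bash
kubectl get certificates -n myapp                 # READY=True once issuance completes
kubectl describe certificate myapp-tls -n myapp   # events for challenges and renewals
```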

## Database Integration

### Database Initialization Jobs

Apps that require databases use initialization jobs to set up the database before the main application starts:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-db-init
spec:
  template:
    spec:
      containers:
        - name: db-init
          image: postgres:15
          command:
            - /bin/bash
            - -c
            - |
              # PostgreSQL has no CREATE DATABASE IF NOT EXISTS; guard creation with \gexec
              echo "SELECT 'CREATE DATABASE $DB_NAME' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '$DB_NAME')\gexec" \
                | PGPASSWORD=$ROOT_PASSWORD psql -h $DB_HOST -U postgres
              PGPASSWORD=$ROOT_PASSWORD psql -h $DB_HOST -U postgres -c "
                DO \$\$ BEGIN
                  CREATE USER $DB_USER WITH PASSWORD '$DB_PASSWORD';
                EXCEPTION WHEN duplicate_object THEN NULL;
                END \$\$;
                GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;"
          env:
            - name: DB_HOST
              value: {{ .apps.myapp.dbHostname }}
            # DB_NAME/DB_USER/DB_PASSWORD were referenced by the script but never
            # defined; wired here from the manifest example's config and secrets
            - name: DB_NAME
              value: myapp
            - name: DB_USER
              value: {{ .apps.myapp.dbUsername }}
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: apps.myapp.dbPassword
            - name: ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: apps.postgres.password
      restartPolicy: OnFailure
```

**Database URL Secrets**: For apps requiring database URLs with embedded credentials, always use dedicated secrets:

```yaml
# In manifest.yaml
requiredSecrets:
  - apps.myapp.dbUrl

# Generated secret (by wild-app-add)
apps:
  myapp:
    dbUrl: "postgresql://myapp:password123@postgres.postgres.svc.cluster.local/myapp"
```

### Supported Databases

Wild Cloud apps commonly integrate with:

- **PostgreSQL** - Via `postgres` app dependency
- **MySQL** - Via `mysql` app dependency
- **Redis** - Via `redis` app dependency
- **SQLite** - For apps with embedded database needs

## Storage Management

### Persistent Volume Claims

Apps requiring persistent storage define PVCs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: {{ .apps.myapp.storage }}
```

**Storage Integration**:

- **Longhorn Storage Class** - Distributed, replicated storage
- **Dynamic Provisioning** - Automatic volume creation
- **Backup Support** - Via `wild-app-backup` command
- **Expansion** - Update storage size in configuration

### Backup and Restore

**Application Backup**: `wild-app-backup <app-name>`

- Discovers databases and PVCs automatically
- Creates restic snapshots with deduplication
- Supports PostgreSQL and MySQL database backups
- Streams PVC data for efficient storage

**Application Restore**: `wild-app-restore <app-name> <snapshot-id>`

- Restores from restic snapshots
- Options: `--db-only`, `--pvc-only`, `--skip-globals`
- Creates safety snapshots before destructive operations
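
A minimal backup-then-restore round trip, with `<snapshot-id>` taken from the backup output (commands and flags as documented):

```bash
wild-app-backup myapp                            # snapshot databases and PVCs
wild-app-restore myapp <snapshot-id> --db-only   # restore just the database
```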

## Security Considerations

### Pod Security Standards

All Wild Cloud apps comply with Pod Security Standards:

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: app
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false # Set to true when possible
```

### Secret Management

- **Kubernetes Secrets** - All sensitive data stored as Kubernetes secrets
- **Secret References** - Apps reference secrets via `secretKeyRef`, never inline
- **Full Dotted Paths** - Always use complete secret paths (e.g., `apps.myapp.dbPassword`)
- **No Plaintext** - Secrets never stored in manifests or config files
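
In practice this means containers receive secrets only through references like the following (secret and key names taken from the manifest example earlier in this document):

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myapp-secrets
        key: apps.myapp.dbPassword
```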

### Network Policies

Apps can define network policies for traffic isolation:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: traefik
```

## Available Applications

Wild Cloud includes apps for common self-hosted services:

### Content Management

- **Ghost** - Publishing platform for blogs and websites
- **Discourse** - Community discussion platform

### Development & Project Management Tools

- **Gitea** - Self-hosted Git service with web interface
- **OpenProject** - Open-source project management software
- **Docker Registry** - Private container image registry

### Media & File Management

- **Immich** - Self-hosted photo and video backup solution

### Communication

- **Keila** - Newsletter and email marketing platform
- **Listmonk** - Newsletter and mailing list manager

### Databases

- **PostgreSQL** - Relational database service
- **MySQL** - Relational database service
- **Redis** - In-memory data structure store
- **Memcached** - Distributed memory caching system

### AI/ML

- **vLLM** - Fast LLM inference server with OpenAI-compatible API

### Examples & Templates

- **example-admin** - Example admin interface application
- **example-app** - Template application for development reference

## Creating Custom Apps

### App Development Process

1. **Create Directory**: `apps/myapp/`
2. **Write Manifest**: Define metadata and configuration
3. **Create Resources**: Kubernetes manifests with templates
4. **Test Locally**: Use `wild-app-add` and `wild-app-deploy`
5. **Validate**: Ensure all resources deploy correctly
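
The whole loop fits in a few commands once the files are in place; a sketch, assuming your app is named `myapp`:

```bash
mkdir -p apps/myapp
# ...write manifest.yaml, kustomization.yaml, and resource templates...
wild-app-add myapp                # compile templates into your project
wild-app-deploy myapp --dry-run   # validate the generated resources
wild-app-deploy myapp             # deploy for real
```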

### Best Practices

**Manifest Design**:

- Include comprehensive `defaultConfig` for all configurable values
- List all `requiredSecrets` the app needs
- Specify dependencies in `requires` field
- Use semantic versioning

**Template Usage**:

- Reference configuration via `{{ .apps.myapp.key }}`
- Use conditionals for optional features
- Include proper gomplate syntax for lists and objects
- Test template compilation

**Resource Configuration**:

- Always include Wild Cloud standard labels
- Use appropriate security contexts
- Define resource requests and limits
- Include health checks and probes

**Storage and Networking**:

- Use Longhorn storage class for persistence
- Include external-dns annotations for automatic DNS
- Configure TLS certificates via cert-manager annotations
- Follow database initialization patterns for data apps

### Converting from Helm Charts

Wild Cloud provides tooling to convert Helm charts to Wild Cloud apps:

```bash
# Convert Helm chart to Kustomize base
helm fetch --untar --untardir charts stable/mysql
helm template --output-dir base --namespace mysql mysql charts/mysql
cd base/mysql
kustomize create --autodetect

# Then customize for Wild Cloud:
# 1. Add manifest.yaml
# 2. Replace hardcoded values with templates
# 3. Update labels to Wild Cloud standard
# 4. Configure secrets properly
```

## Troubleshooting Applications

### Common Issues

**App Won't Start**:

- Check pod logs: `kubectl logs -n <app-namespace> deployment/<app-name>`
- Verify secrets exist: `kubectl get secrets -n <app-namespace>`
- Check resource constraints: `kubectl describe pod -n <app-namespace>`

**Database Connection Issues**:

- Verify database is running: `kubectl get pods -n <db-namespace>`
- Check database initialization job: `kubectl logs job/<app>-db-init -n <app-namespace>`
- Validate database credentials in secrets

**DNS/Certificate Issues**:

- Check ingress status: `kubectl get ingress -n <app-namespace>`
- Verify certificate creation: `kubectl get certificates -n <app-namespace>`
- Check external-dns logs: `kubectl logs -n external-dns deployment/external-dns`

**Storage Issues**:

- Check PVC status: `kubectl get pvc -n <app-namespace>`
- Verify Longhorn cluster health: Access Longhorn UI
- Check storage class availability: `kubectl get storageclass`

### Diagnostic Tools

```bash
# App-specific diagnostics
wild-app-doctor <app-name>

# Resource inspection
kubectl get all -n <app-namespace>
kubectl describe deployment/<app-name> -n <app-namespace>

# Log analysis
kubectl logs -f deployment/<app-name> -n <app-namespace>
kubectl logs job/<app>-db-init -n <app-namespace>

# Configuration verification
wild-config apps.<app-name>
wild-secret apps.<app-name>
```

The Wild Cloud apps system provides a powerful, consistent way to deploy and manage self-hosted applications with enterprise-grade features like automatic HTTPS, DNS management, backup/restore, and integrated security.

262
ai/wildcloud-v.PoC/bin-scripts.md
Normal file
@@ -0,0 +1,262 @@

# Wild Cloud CLI Scripts Reference

Wild Cloud provides 34+ command-line tools (all prefixed with `wild-`) for managing your personal Kubernetes cloud infrastructure. These scripts handle everything from initial setup to day-to-day operations.

## Script Categories

### 🚀 Initial Setup & Scaffolding

**`wild-init`** - Initialize new Wild Cloud instance

- Creates `.wildcloud` directory structure
- Copies template files from repository
- Sets up basic configuration (email, domains, cluster name)
- **Usage**: `wild-init`
- **When to use**: First command to run in a new directory

**`wild-setup`** - Master setup orchestrator

- Runs complete Wild Cloud deployment sequence
- Options: `--skip-cluster`, `--skip-services`
- Executes: cluster setup → services setup
- **Usage**: `wild-setup [options]`
- **When to use**: After `wild-init` for complete automated setup

**`wild-update-docs`** - Copy documentation to cloud directory

- Options: `--force` to overwrite existing docs
- Copies `/docs` from repository to your cloud home
- **Usage**: `wild-update-docs [--force]`

### ⚙️ Configuration Management

**`wild-config`** - Read configuration values

- Access YAML paths from `config.yaml` (e.g., `cluster.name`, `cloud.domain`)
- Option: `--check` to test key existence
- **Usage**: `wild-config <key>` or `wild-config --check <key>`

**`wild-config-set`** - Write configuration values

- Sets values using YAML paths, creates config file if needed
- **Usage**: `wild-config-set <key> <value>`

**`wild-secret`** - Read secret values

- Similar to `wild-config` but for sensitive data in `secrets.yaml`
- File has restrictive permissions (600)
- **Usage**: `wild-secret <key>` or `wild-secret --check <key>`

**`wild-secret-set`** - Write secret values

- Generates random values if none provided (32-char base64)
- **Usage**: `wild-secret-set <key> [value]`

**`wild-compile-template`** - Process gomplate templates

- Uses `config.yaml` and `secrets.yaml` as template context
- **Usage**: `wild-compile-template < template.yaml`

**`wild-compile-template-dir`** - Process template directories

- Options: `--clean` to remove destination first
- **Usage**: `wild-compile-template-dir <source> <destination>`

### 🏗️ Cluster Infrastructure Management

**`wild-setup-cluster`** - Complete cluster setup (Phases 1-3)

- Automated control plane node setup and bootstrapping
- Configures Talos control plane nodes using wild-node-setup
- Options: `--skip-hardware`
- **Usage**: `wild-setup-cluster [options]`
- **Requires**: `wild-init` completed first

**`wild-cluster-config-generate`** - Generate Talos cluster config

- Creates base `controlplane.yaml` and `worker.yaml`
- Generates cluster secrets using `talosctl gen config`
- **Usage**: `wild-cluster-config-generate`

**`wild-node-setup`** - Complete node lifecycle management

- Handles detect → configure → patch → deploy for individual nodes
- Automatically detects maintenance mode and handles IP transitions
- Options: `--reconfigure`, `--no-deploy`
- **Usage**: `wild-node-setup <node-name> [options]`
- **Examples**:
  - `wild-node-setup control-1` (complete setup)
  - `wild-node-setup worker-1 --reconfigure` (force node reconfiguration)
  - `wild-node-setup control-2 --no-deploy` (configuration only)

**`wild-node-detect`** - Hardware detection utility

- Discovers network interfaces and disks from maintenance mode
- Returns JSON with hardware specifications and maintenance mode status
- **Usage**: `wild-node-detect <node-ip>`
- **Note**: Primarily used internally by `wild-node-setup`

**`wild-cluster-node-ip`** - Get node IP addresses

- Sources: config.yaml, kubectl, or talosctl
- Options: `--from-config`, `--from-talosctl`
- **Usage**: `wild-cluster-node-ip <node-name> [options]`

### 🔧 Cluster Services Management

**`wild-setup-services`** - Set up all cluster services (Phase 4)

- Manages MetalLB, Traefik, cert-manager, etc. in dependency order
- Options: `--fetch` for fresh templates, `--no-deploy` for config-only
- **Usage**: `wild-setup-services [options]`
- **Requires**: Working Kubernetes cluster

**`wild-service-setup`** - Complete service lifecycle management

- Handles fetch → configure → deploy for individual services
- Options: `--fetch` for fresh templates, `--no-deploy` for config-only
- **Usage**: `wild-service-setup <service> [--fetch] [--no-deploy]`
- **Examples**:
  - `wild-service-setup cert-manager` (configure + deploy)
  - `wild-service-setup cert-manager --fetch` (fetch + configure + deploy)
  - `wild-service-setup cert-manager --no-deploy` (configure only)

**`wild-dashboard-token`** - Get Kubernetes dashboard token

- Extracts token for dashboard authentication
- Copies to clipboard if available
- **Usage**: `wild-dashboard-token`

**`wild-cluster-secret-copy`** - Copy secrets between namespaces

- **Usage**: `wild-cluster-secret-copy <source-ns:secret> <target-ns1> [target-ns2]`

### 📱 Application Management

**`wild-apps-list`** - List available applications

- Shows metadata, installation status, dependencies
- Options: `--verbose`, `--json`, `--yaml`
- **Usage**: `wild-apps-list [options]`

**`wild-app-add`** - Configure app from repository

- Processes manifest.yaml with configuration
- Generates required secrets automatically
- Options: `--force` to overwrite existing app files
- **Usage**: `wild-app-add <app-name> [--force]`

**`wild-app-deploy`** - Deploy application to cluster

- Creates namespaces, handles dependencies
- Options: `--force`, `--dry-run`
- **Usage**: `wild-app-deploy <app-name> [options]`

**`wild-app-delete`** - Remove application

- Deletes namespace and all resources
- Options: `--force`, `--dry-run`
- **Usage**: `wild-app-delete <app-name> [options]`

**`wild-app-doctor`** - Run application diagnostics

- Executes app-specific diagnostic tests
- Options: `--keep`, `--follow`, `--timeout`
- **Usage**: `wild-app-doctor <app-name> [options]`

### 💾 Backup & Restore

**`wild-backup`** - Comprehensive backup system

- Backs up home directory, apps, and cluster resources
- Options: `--home-only`, `--apps-only`, `--cluster-only`
- Uses restic for deduplication
- **Usage**: `wild-backup [options]`

**`wild-app-backup`** - Application-specific backups

- Discovers databases and PVCs automatically
- Supports PostgreSQL and MySQL
- Options: `--all` for all applications
- **Usage**: `wild-app-backup <app-name> [--all]`

**`wild-app-restore`** - Application restore

- Restores databases and/or PVC data
- Options: `--db-only`, `--pvc-only`, `--skip-globals`
- **Usage**: `wild-app-restore <app-name> <snapshot-id> [options]`

### 🔍 Utilities & Helpers

**`wild-health`** - Comprehensive infrastructure validation

- Validates core components (MetalLB, Traefik, CoreDNS)
- Checks installed services (cert-manager, ExternalDNS, Kubernetes Dashboard)
- Tests DNS resolution, routing, certificates, and storage systems
- **Usage**: `wild-health`

**`wild-talos-schema`** - Talos schema management

- Handles configuration schema operations
- **Usage**: `wild-talos-schema [options]`

**`wild-cluster-node-boot-assets-download`** - Download Talos assets

- Downloads installation images for nodes
- **Usage**: `wild-cluster-node-boot-assets-download`

**`wild-dnsmasq-install`** - Install dnsmasq services

- Sets up DNS and DHCP for cluster networking
- **Usage**: `wild-dnsmasq-install`

## Common Usage Patterns

### Complete Setup from Scratch

```bash
wild-init             # Initialize cloud directory
wild-setup            # Complete automated setup
# or step by step:
wild-setup-cluster    # Just cluster infrastructure
wild-setup-services   # Just cluster services
```

### Individual Service Management

```bash
# Most common - reconfigure and deploy service
wild-service-setup cert-manager

# Get fresh templates and deploy (for updates)
wild-service-setup cert-manager --fetch

# Configure only, don't deploy (for planning)
wild-service-setup cert-manager --no-deploy

# Fix failed service and resume setup
wild-service-setup cert-manager --fetch
wild-setup-services   # Resume full setup if needed
```

### Application Management

```bash
wild-apps-list          # See available apps
wild-app-add ghost      # Configure app
wild-app-deploy ghost   # Deploy to cluster
wild-app-doctor ghost   # Troubleshoot issues
```

### Configuration Management

```bash
wild-config cluster.name                               # Read values
wild-config-set apps.ghost.domain "blog.example.com"   # Write values
wild-secret apps.ghost.adminPassword                   # Read secrets
wild-secret-set apps.ghost.apiKey                      # Generate random secret
```

### Cluster Operations

```bash
wild-cluster-node-ip control-1   # Get node IP
wild-dashboard-token             # Get dashboard access
wild-health                      # Check system health
```

## Script Design Principles

1. **Consistent Interface**: All scripts use `--help` and follow common argument patterns
2. **Error Handling**: All scripts use `set -e` and `set -o pipefail` for robust error handling
3. **Idempotent**: Scripts check existing state before making changes
4. **Template-Driven**: Extensive use of gomplate for configuration flexibility
5. **Environment-Aware**: Scripts source `wild-common.sh` and initialize Wild Cloud environment
6. **Progressive Disclosure**: Complex operations broken into phases with individual controls
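
A minimal sketch of that shared skeleton (helper names as documented; the exact path of the common library is assumed):

```bash
#!/bin/bash
set -e
set -o pipefail

# Fail fast if the Wild Cloud repository location is unknown
if [ -z "${WC_ROOT}" ]; then
    echo "WC_ROOT is not set." >&2
    exit 1
fi

source "${WC_ROOT}/scripts/common.sh"   # shared helpers
init_wild_env                           # resolves WC_HOME and exports the Wild Cloud environment
```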

## Dependencies Between Scripts

### Setup Phase Dependencies

1. `wild-init` → creates basic structure
2. `wild-setup-cluster` → provisions infrastructure
3. `wild-setup-services` → installs cluster services
4. `wild-setup` → orchestrates all phases

### App Deployment Pipeline

1. `wild-apps-list` → discover applications
2. `wild-app-add` → configure and prepare application
3. `wild-app-deploy` → deploy to cluster

### Node Management Flow

1. `wild-cluster-config-generate` → base configurations
2. `wild-node-setup <node-name>` → atomic node operations (detect → patch → deploy)
   - Internally uses `wild-node-detect` for hardware discovery
   - Generates node-specific patches and final configurations
   - Deploys configuration to target node

All scripts are designed to work together as a cohesive Infrastructure as Code system for personal Kubernetes deployments.

602
ai/wildcloud-v.PoC/configuration-system.md
Normal file
@@ -0,0 +1,602 @@

# Wild Cloud Configuration System

Wild Cloud uses a comprehensive configuration management system that handles both non-sensitive configuration data and sensitive secrets through separate files and commands. The system supports YAML path-based access, template processing, and environment-specific customization.

## Configuration Architecture

### Core Components

1. **`config.yaml`** - Main configuration file for non-sensitive settings
2. **`secrets.yaml`** - Encrypted/protected storage for sensitive data
3. **`.wildcloud/`** - Project marker and cache directory
4. **`env.sh`** - Environment setup and path configuration
5. **Template System** - gomplate-based dynamic configuration processing

### File Structure of a Wild Cloud Project

```
your-cloud-directory/
├── .wildcloud/     # Project marker and cache
│   ├── cache/      # Downloaded templates and temporary files
│   └── logs/       # Operation logs
├── config.yaml     # Main configuration (tracked in git)
├── secrets.yaml    # Sensitive data (NOT tracked in git, 600 perms)
├── env.sh          # Environment setup (auto-generated)
├── apps/           # Deployed application configurations
├── setup/          # Infrastructure setup files
└── docs/           # Project documentation
```

## Configuration File (`config.yaml`)

### Structure and Organization

The configuration file uses a hierarchical YAML structure for organizing settings:

```yaml
# Cloud-wide settings
cloud:
  domain: "example.com"
  email: "admin@example.com"
  timezone: "America/New_York"

# Cluster infrastructure settings
cluster:
  name: "wild-cluster"
  nodeCount: 3
  network:
    subnet: "192.168.1.0/24"
    gateway: "192.168.1.1"
    dnsServer: "192.168.1.50"
    metallbPool: "192.168.1.80-89"
    controlPlaneVip: "192.168.1.90"
  nodes:
    control-1:
      ip: "192.168.1.91"
      mac: "00:11:22:33:44:55"
      interface: "eth0"
      disk: "/dev/sda"
    control-2:
      ip: "192.168.1.92"
      mac: "00:11:22:33:44:56"
      interface: "eth0"
      disk: "/dev/sda"

# Application-specific settings
apps:
  ghost:
    domain: "blog.example.com"
    image: "ghost:5.0.0"
    storage: "10Gi"
    timezone: "UTC"
    namespace: "ghost"
  immich:
    domain: "photos.example.com"
    serverImage: "ghcr.io/immich-app/immich-server:release"
    storage: "250Gi"
    namespace: "immich"

# Service configurations
services:
  traefik:
    replicas: 2
    dashboard: true
  longhorn:
    defaultReplicas: 3
    storageClass: "longhorn"
```

### Configuration Commands

**Reading Configuration Values**:

```bash
# Read simple values
wild-config cloud.domain                 # "example.com"
wild-config cluster.name                 # "wild-cluster"

# Read nested values
wild-config apps.ghost.domain            # "blog.example.com"
wild-config cluster.nodes.control-1.ip   # "192.168.1.91"

# Check if key exists
wild-config --check apps.newapp.domain   # Returns exit code 0/1
```

**Writing Configuration Values**:

```bash
# Set simple values
wild-config-set cloud.domain "newdomain.com"
wild-config-set cluster.nodeCount 5

# Set nested values
wild-config-set apps.ghost.storage "20Gi"
wild-config-set cluster.nodes.worker-1.ip "192.168.1.94"

# Set complex values (JSON format)
wild-config-set apps.ghost '{"domain":"blog.com","storage":"50Gi"}'
```

### Configuration Sections

#### Cloud Settings (`cloud.*`)

Global settings that affect the entire Wild Cloud deployment:

```yaml
cloud:
  domain: "example.com"           # Primary domain for services
  email: "admin@example.com"      # Contact email for certificates
  timezone: "America/New_York"    # Default timezone for services
  backupLocation: "s3://backup"   # Backup storage location
  monitoring: true                # Enable monitoring services
```

#### Cluster Settings (`cluster.*`)

Infrastructure and node configuration:

```yaml
cluster:
  name: "production-cluster"
  version: "v1.28.0"
  network:
    subnet: "10.0.0.0/16"         # Cluster network range
    serviceCIDR: "10.96.0.0/12"   # Service network range
    podCIDR: "10.244.0.0/16"      # Pod network range
  nodes:
    control-1:
      ip: "10.0.0.10"
      role: "controlplane"
      taints: []
    worker-1:
      ip: "10.0.0.20"
      role: "worker"
      labels:
        node-type: "compute"
```

#### Application Settings (`apps.*`)

Per-application configuration that overrides defaults from app manifests:

```yaml
apps:
  postgresql:
    storage: "100Gi"
    maxConnections: 200
    sharedBuffers: "256MB"
  redis:
    memory: "1Gi"
    persistence: true
  ghost:
    domain: "blog.example.com"
    theme: "casper"
    storage: "10Gi"
    replicas: 2
```

## Secrets Management (`secrets.yaml`)

### Security Model

The `secrets.yaml` file stores all sensitive data with the following security measures:

- **File Permissions**: Automatically set to 600 (owner read/write only)
- **Git Exclusion**: Included in `.gitignore` by default
- **Encryption Support**: Can be encrypted at rest using tools like `age` or `gpg`
- **Access Control**: Only Wild Cloud commands can read/write secrets

### Secret Structure

```yaml
# Generated cluster secrets
cluster:
  talos:
    secrets: "base64-encoded-cluster-secrets"
    adminKey: "talos-admin-private-key"
  kubernetes:
    adminToken: "k8s-admin-service-account-token"

# Application secrets
apps:
  postgresql:
    rootPassword: "randomly-generated-32-char-string"
    replicationPassword: "randomly-generated-32-char-string"
  ghost:
    dbPassword: "randomly-generated-password"
    adminPassword: "user-set-password"
    jwtSecret: "randomly-generated-jwt-secret"
  immich:
    dbPassword: "randomly-generated-password"
    dbUrl: "postgresql://immich:password@postgres:5432/immich"
    jwtSecret: "jwt-signing-key"

# External service credentials
external:
  cloudflare:
    apiToken: "cloudflare-dns-api-token"
  letsencrypt:
    email: "admin@example.com"
  backup:
    s3AccessKey: "backup-s3-access-key"
    s3SecretKey: "backup-s3-secret-key"
```

### Secret Commands

**Reading Secrets**:

```bash
# Read secret values
wild-secret apps.postgresql.rootPassword
wild-secret cluster.kubernetes.adminToken

# Check if secret exists
wild-secret --check apps.newapp.apiKey
```

**Writing Secrets**:

```bash
# Set specific secret value
wild-secret-set apps.ghost.adminPassword "my-secure-password"

# Generate random secret (if no value provided)
wild-secret-set apps.newapp.apiKey   # Generates 32-char base64 string

# Set complex secret (JSON format)
wild-secret-set apps.database '{"user":"admin","password":"secret"}'
```

### Automatic Secret Generation

When you run `wild-app-add`, Wild Cloud automatically generates required secrets:

1. **Reads App Manifest**: Identifies `requiredSecrets` list
2. **Checks Existing Secrets**: Never overwrites existing values
3. **Generates Missing Secrets**: Creates secure random values
4. **Updates secrets.yaml**: Adds new secrets with proper structure

**Example App Manifest**:

```yaml
name: ghost
requiredSecrets:
  - apps.ghost.dbPassword      # Auto-generated if missing
  - apps.ghost.jwtSecret       # Auto-generated if missing
  - apps.postgresql.password   # Auto-generated if missing (dependency)
```

**Resulting secrets.yaml**:

```yaml
apps:
  ghost:
    dbPassword: "aB3kL9mN2pQ7rS8tU1vW4xY5zA6bC0dE"
    jwtSecret: "jF2gH5iJ8kL1mN4oP7qR0sT3uV6wX9yZ"
  postgresql:
    password: "eE8fF1gG4hH7iI0jJ3kK6lL9mM2nN5oO"
```

## Template System

### gomplate Integration

Wild Cloud uses [gomplate](https://gomplate.ca/) for dynamic configuration processing, allowing templates to access both configuration and secrets:

```yaml
# Template example (before processing)
apiVersion: v1
kind: ConfigMap
metadata:
  name: ghost-config
  namespace: {{ .apps.ghost.namespace }}
data:
  url: "https://{{ .apps.ghost.domain }}"
  timezone: "{{ .apps.ghost.timezone | default .cloud.timezone }}"
  database_host: "{{ .apps.postgresql.hostname }}"
  # Conditionals
  {{- if .apps.ghost.enableSSL }}
  ssl_enabled: "true"
  {{- end }}
  # Loops
  allowed_domains: |
    {{- range .apps.ghost.allowedDomains }}
    - {{ . }}
    {{- end }}
```

### Template Processing Commands

**Process Single Template**:

```bash
# From stdin
cat template.yaml | wild-compile-template > output.yaml

# With custom context
echo "domain: {{ .cloud.domain }}" | wild-compile-template
```

**Process Template Directory**:

```bash
# Recursively process all templates
wild-compile-template-dir source-dir output-dir

# Clean destination first
wild-compile-template-dir --clean source-dir output-dir
```

### Template Context

Templates have access to the complete configuration and secrets context:

```go
// Available template variables
.cloud.*      // All cloud configuration
.cluster.*    // All cluster configuration
.apps.*       // All application configuration
.services.*   // All service configuration

// Special functions
.cloud.domain         // Primary domain
default "fallback"    // Default value if key missing
env "VAR_NAME"        // Environment variable
file "path/to/file"   // File contents
```

**Template Examples**:

```yaml
# Basic variable substitution
domain: {{ .apps.myapp.domain }}

# Default values
timezone: {{ .apps.myapp.timezone | default .cloud.timezone }}

# Conditionals
{{- if .apps.myapp.enableFeature }}
feature_enabled: true
{{- else }}
feature_enabled: false
{{- end }}

# Lists and iteration
allowed_hosts:
  {{- range .apps.myapp.allowedHosts }}
  - {{ . }}
  {{- end }}

# Complex expressions
replicas: {{ if eq .cluster.environment "production" }}3{{ else }}1{{ end }}
```

## Environment Setup

### Environment Detection

Wild Cloud automatically detects and configures the environment through several mechanisms:

**Project Detection**:

- Searches for `.wildcloud` directory in current or parent directories
- Sets `WC_HOME` to the directory containing `.wildcloud`
- Fails if no Wild Cloud project found

**Repository Detection**:

- Locates Wild Cloud repository (source code)
- Sets `WC_ROOT` to repository location
- Used for accessing app templates and setup scripts
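
The project-detection walk itself is simple; a minimal sketch of the assumed logic (the real helper ships with the Wild Cloud scripts):

```bash
# Walk up from the current directory until a .wildcloud marker is found
dir="$(pwd)"
while [ "$dir" != "/" ]; do
    if [ -d "$dir/.wildcloud" ]; then
        export WC_HOME="$dir"
        break
    fi
    dir="$(dirname "$dir")"
done
[ -n "${WC_HOME:-}" ] || { echo "No Wild Cloud project found" >&2; exit 1; }
```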

### Environment Variables

**Key Environment Variables**:

```bash
WC_HOME="/path/to/your-cloud"          # Your cloud directory
WC_ROOT="/path/to/wild-cloud-repo"     # Wild Cloud repository
PATH="$WC_ROOT/bin:$PATH"              # Wild Cloud commands available
KUBECONFIG="$WC_HOME/.kube/config"     # Kubernetes configuration
TALOSCONFIG="$WC_HOME/.talos/config"   # Talos configuration
```

**Environment Setup Script** (`env.sh`):

```bash
#!/bin/bash
# Auto-generated environment setup

export WC_HOME="/home/user/my-cloud"
export WC_ROOT="/opt/wild-cloud"
export PATH="$WC_ROOT/bin:$PATH"
export KUBECONFIG="$WC_HOME/.kubeconfig"
export TALOSCONFIG="$WC_HOME/setup/cluster-nodes/generated/talosconfig"

# Source this file to set up Wild Cloud environment
# source env.sh
```

### Common Script Pattern

Most Wild Cloud scripts follow this initialization pattern:

```bash
#!/bin/bash
set -e
set -o pipefail

# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
    # echo rather than the undefined `print`: common.sh is not yet sourced here
    echo "WC_ROOT is not set." >&2
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

# Script logic here...
```

## Configuration Validation

### Schema Validation

Wild Cloud validates configuration against expected schemas:

**Cluster Configuration Validation**:

- Node IP addresses are valid and unique
- Network ranges don't overlap
- Required fields are present
- Hardware specifications meet minimums

**Application Configuration Validation**:

- Domain names are valid DNS names
- Storage sizes use valid Kubernetes formats
- Image references are valid container images
- Dependencies are satisfied

### Validation Commands

```bash
# Validate current configuration
wild-config --validate

# Check specific configuration sections
wild-config --validate --section cluster
wild-config --validate --section apps.ghost

# Test template compilation
wild-compile-template --validate < template.yaml
```

## Configuration Best Practices

### Organization

**Hierarchical Structure**:

- Group related settings under common prefixes
- Use consistent naming conventions
- Keep application configs under `apps.*`
- Separate infrastructure from application settings

**Documentation**:

```yaml
# Document complex configurations
cluster:
  # Node configuration - update IPs after hardware changes
  nodes:
    control-1:
      ip: "192.168.1.91"   # Main control plane node
      interface: "eth0"    # Primary network interface
```

### Security

**Configuration Security**:

- Never store secrets in `config.yaml`
- Use `wild-secret-set` for all sensitive data
- Regularly rotate generated secrets
- Backup `secrets.yaml` securely

**Access Control**:

```bash
# Ensure proper permissions
chmod 600 secrets.yaml
chmod 644 config.yaml

# Restrict directory access
chmod 755 your-cloud-directory
chmod 700 .wildcloud/
```

### Version Control

**Git Integration**:

```gitignore
# .gitignore for Wild Cloud projects
# (gitignore comments must be on their own line)

# Never commit secrets
secrets.yaml

# Temporary files and operation logs
.wildcloud/cache/
.wildcloud/logs/

# Generated cluster configs
setup/cluster-nodes/generated/

# Kubernetes and Talos configs
.kube/
.talos/
```

**Configuration Changes**:

- Commit `config.yaml` changes with descriptive messages
- Tag major configuration changes
- Use branches for experimental configurations
- Document configuration changes in commit messages

### Backup and Recovery

**Configuration Backup**:

```bash
# Backup configuration and secrets
wild-backup --home-only

# Export configuration for disaster recovery
cp config.yaml config-backup-$(date +%Y%m%d).yaml
gpg --symmetric --output secrets-backup-$(date +%Y%m%d).yaml.gpg secrets.yaml   # encrypt, don't copy plaintext
```

**Recovery Process**:

1. Restore `config.yaml` from backup
2. Decrypt and restore `secrets.yaml`
3. Re-run `wild-setup` if needed
4. Validate configuration with `wild-config --validate`
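
As a concrete sketch of those four steps (filenames follow the backup example above; assumes the secrets backup was encrypted with `gpg`):

```bash
cp config-backup-20240101.yaml config.yaml
gpg --decrypt secrets-backup-20240101.yaml.gpg > secrets.yaml
chmod 600 secrets.yaml
wild-setup               # only if infrastructure must be rebuilt
wild-config --validate
```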

## Advanced Configuration

### Multi-Environment Setup

**Development Environment**:

```yaml
cloud:
  domain: "dev.example.com"
cluster:
  name: "dev-cluster"
  nodeCount: 1
apps:
  ghost:
    domain: "blog.dev.example.com"
    replicas: 1
```

**Production Environment**:

```yaml
cloud:
  domain: "example.com"
cluster:
  name: "prod-cluster"
  nodeCount: 5
apps:
  ghost:
    domain: "blog.example.com"
    replicas: 3
```

### Configuration Inheritance

**Base Configuration**:

```yaml
# config.base.yaml
cloud:
  timezone: "UTC"
  email: "admin@example.com"
apps:
  postgresql:
    storage: "10Gi"
```

**Environment-Specific Override**:

```yaml
# config.prod.yaml (merged with base)
apps:
  postgresql:
    storage: "100Gi"   # Override for production
    replicas: 3        # Additional production setting
```

### Dynamic Configuration

**Runtime Configuration Updates**:

```bash
# Update configuration without restart
wild-config-set apps.ghost.replicas 3
wild-app-deploy ghost   # Apply changes

# Rolling updates
wild-config-set apps.ghost.image "ghost:5.1.0"
wild-app-deploy ghost --rolling-update
```

The Wild Cloud configuration system provides a powerful, secure, and flexible foundation for managing complex infrastructure deployments while maintaining simplicity for common use cases.

443
ai/wildcloud-v.PoC/overview.md
Normal file
@@ -0,0 +1,443 @@

# Wild Cloud Overview

Wild Cloud is a complete, production-ready Kubernetes infrastructure designed for personal use. It combines enterprise-grade technologies to create a self-hosted cloud platform with automated deployment, HTTPS certificates, and web management interfaces.

## What is Wild Cloud?

### Vision

In a world where digital lives are increasingly controlled by large corporations, Wild Cloud puts you back in control by providing:

- **Privacy**: Your data stays on your hardware, under your control
- **Ownership**: No subscription fees or sudden price increases
- **Freedom**: Run the apps you want, the way you want them
- **Learning**: Gain valuable skills in modern cloud technologies
- **Resilience**: Reduce reliance on third-party services that can disappear

### Core Capabilities

**Complete Infrastructure Stack**:

- Kubernetes cluster with Talos Linux
- Automatic HTTPS certificates via Let's Encrypt
- Load balancing with MetalLB
- Ingress routing with Traefik
- Distributed storage with Longhorn
- DNS management with CoreDNS and ExternalDNS

**Application Platform**:

- One-command application deployment
- Pre-built apps for common self-hosted services
- Automatic database setup and configuration
- Integrated backup and restore system
- Web-based management interfaces

**Enterprise Features**:

- High availability and fault tolerance
- Automated certificate management
- Network policies and security contexts
- Monitoring and observability
- Infrastructure as code principles

## Technology Stack

### Core Infrastructure

- **Talos Linux** - Immutable OS designed for Kubernetes
- **Kubernetes** - Container orchestration platform
- **MetalLB** - Load balancer for bare metal deployments
- **Traefik** - HTTP reverse proxy and ingress controller
- **Longhorn** - Distributed block storage system
- **cert-manager** - Automatic TLS certificate management

### Supporting Services

- **CoreDNS** - DNS server for service discovery
- **ExternalDNS** - Automatic DNS record management
- **Kubernetes Dashboard** - Web UI for cluster management
- **restic** - Backup solution with deduplication
- **gomplate** - Template processor for configurations

### Development Tools

- **Kustomize** - Kubernetes configuration management
- **kubectl** - Kubernetes command line interface
- **talosctl** - Talos Linux management tool
- **Bats** - Testing framework for bash scripts

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                            Internet                             │
└─────────────────┬───────────────────────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────────────────────┐
│                          DNS Provider                           │
│                   (Cloudflare, Route53, etc.)                   │
└─────────────────┬───────────────────────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────────────────────┐
│                          Your Network                           │
│  ┌─────────────┐  ┌─────────────────────────────────────────┐   │
│  │   dnsmasq   │  │           Kubernetes Cluster            │   │
│  │   Server    │  │  ┌─────────────┐  ┌─────────────────┐   │   │
│  │             │  │  │   MetalLB   │  │     Traefik     │   │   │
│  │ DNS + DHCP  │  │  │ LoadBalancer│  │     Ingress     │   │   │
│  └─────────────┘  │  └─────────────┘  └─────────────────┘   │   │
│                   │  ┌───────────────────────────────────┐  │   │
│                   │  │           Applications            │  │   │
│                   │  │   Ghost, Immich, Gitea, vLLM...   │  │   │
│                   │  └───────────────────────────────────┘  │   │
│                   └─────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘
```

### Traffic Flow

1. **External Request** → DNS resolution via provider
2. **DNS Response** → Points to your cluster's external IP
3. **Network Request** → Hits MetalLB load balancer
4. **Load Balancer** → Routes to Traefik ingress controller
5. **Ingress Controller** → Terminates TLS and routes to application
6. **Application** → Serves content from Kubernetes pod
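
Each hop can be checked from any machine on the network; a quick sketch using the example blog domain:

```bash
dig +short blog.example.com        # steps 1-2: resolves to the MetalLB external IP
curl -I https://blog.example.com   # steps 3-6: TLS terminated by Traefik, served by the app pod
```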

## Getting Started

### Prerequisites

**Hardware Requirements**:

- Minimum 3 nodes for high availability
- 8GB RAM per node (16GB+ recommended)
- 100GB+ storage per node
- Gigabit network connectivity
- x86_64 architecture

**Network Requirements**:

- All nodes on same network segment
- One dedicated machine for dnsmasq (can be lightweight)
- Static IP assignments or DHCP reservations
- Internet connectivity for downloads and certificates

### Quick Start Guide

#### 1. Install Dependencies

```bash
# Clone Wild Cloud repository
git clone https://github.com/your-org/wild-cloud
cd wild-cloud

# Install required tools
scripts/setup-utils.sh
```

#### 2. Initialize Your Cloud

```bash
# Create and initialize new cloud directory
mkdir my-cloud && cd my-cloud
wild-init

# Follow interactive setup prompts for:
# - Domain name configuration
# - Email for certificates
# - Network settings
```

#### 3. Deploy Infrastructure

```bash
# Complete automated setup
wild-setup

# Or step-by-step:
wild-setup-cluster    # Deploy Kubernetes cluster
wild-setup-services   # Install core services
```

#### 4. Deploy Your First App

```bash
# List available applications
wild-apps-list

# Deploy a blog
wild-app-add ghost
wild-app-deploy ghost

# Access at https://ghost.yourdomain.com
```

#### 5. Verify Deployment

```bash
# Check system health
wild-health

# Access Kubernetes dashboard
wild-dashboard-token
# Visit https://dashboard.internal.yourdomain.com
```

## Key Concepts

### Configuration Management

Wild Cloud uses a dual-file configuration system:

**`config.yaml`** - Non-sensitive settings:

```yaml
cloud:
  domain: "example.com"
  email: "admin@example.com"
apps:
  ghost:
    domain: "blog.example.com"
    storage: "10Gi"
```

**`secrets.yaml`** - Sensitive data (auto-generated):

```yaml
apps:
  ghost:
    dbPassword: "secure-random-password"
  postgresql:
    rootPassword: "another-secure-password"
```

### Template System

All configurations are templates processed with gomplate:

**Before Processing** (in repository):

```yaml
domain: {{ .apps.ghost.domain }}
storage: {{ .apps.ghost.storage | default "5Gi" }}
```

**After Processing** (in your cloud):

```yaml
domain: blog.example.com
storage: 10Gi
```

### Application Lifecycle

1. **Discovery**: `wild-apps-list` - Browse available apps
2. **Configuration**: `wild-app-add app-name` - Configure and prepare application
3. **Deployment**: `wild-app-deploy app-name` - Deploy to cluster
4. **Operations**: `wild-app-doctor app-name` - Monitor and troubleshoot
## Available Applications

### Content Management & Publishing
- **Ghost** - Modern publishing platform for blogs and websites
- **Discourse** - Community discussion platform with modern features

### Media & File Management
- **Immich** - Self-hosted photo and video backup solution

### Development Tools
- **Gitea** - Self-hosted Git service with web interface
- **Docker Registry** - Private container image registry

### Communication & Marketing
- **Keila** - Newsletter and email marketing platform
- **Listmonk** - High-performance newsletter and mailing list manager

### Databases & Caching
- **PostgreSQL** - Advanced open-source relational database
- **MySQL** - Popular relational database management system
- **Redis** - In-memory data structure store and cache
- **Memcached** - Distributed memory caching system

### AI & Machine Learning
- **vLLM** - High-performance LLM inference server with OpenAI-compatible API

## Core Commands Reference

### Setup & Initialization
```bash
wild-init             # Initialize new cloud directory
wild-setup            # Complete infrastructure deployment
wild-setup-cluster    # Deploy Kubernetes cluster only
wild-setup-services   # Deploy cluster services only
```

### Application Management
```bash
wild-apps-list          # List available applications
wild-app-add <app>      # Configure application
wild-app-deploy <app>   # Deploy to cluster
wild-app-delete <app>   # Remove application
wild-app-doctor <app>   # Run diagnostics
```

### Configuration Management
```bash
wild-config <key>             # Read configuration value
wild-config-set <key> <val>   # Set configuration value
wild-secret <key>             # Read secret value
wild-secret-set <key> <val>   # Set secret value
```

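These commands are dotted-path lookups into the two YAML files. As an assumption about the implementation (Wild Cloud lists `yq` among its required tools), a rough hand-rolled equivalent would be:

```bash
# Roughly what wild-config and wild-secret do with a dotted key (assumption):
yq eval '.apps.ghost.domain' config.yaml        # -> blog.example.com
yq eval '.apps.ghost.dbPassword' secrets.yaml   # -> secure-random-password
```
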
### Operations & Monitoring
```bash
wild-health             # System health check
wild-dashboard-token    # Get dashboard access token
wild-backup             # Backup system and apps
wild-app-backup <app>   # Backup specific application
```

## Best Practices

### Security
- Never commit `secrets.yaml` to version control
- Use strong, unique passwords for all services
- Regularly update system and application images
- Monitor certificate expiration and renewal
- Implement network policies for production workloads (a minimal sketch follows this list)

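As a concrete starting point for the network-policy item above, a namespace-wide default-deny ingress policy is a common pattern; this sketch is illustrative and not part of the Wild Cloud templates:

```yaml
# Minimal default-deny sketch (illustrative; not shipped by Wild Cloud):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ghost            # example namespace
spec:
  podSelector: {}             # selects every pod in the namespace
  policyTypes:
    - Ingress                 # with no ingress rules, all inbound traffic is denied
```
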
### Configuration Management
- Store `config.yaml` in version control with a proper `.gitignore`
- Document configuration changes in commit messages
- Use branches for experimental configurations
- Backup configuration files before major changes
- Test configuration changes in a development environment

### Operations
- Monitor cluster health with `wild-health`
- Set up regular backup schedules with `wild-backup`
- Keep applications updated with latest security patches
- Monitor disk usage and expand storage as needed
- Document custom configurations and procedures

### Development
- Follow Wild Cloud app structure conventions
- Use proper Kubernetes security contexts
- Include comprehensive health checks and probes
- Test applications thoroughly before deployment
- Document application-specific configuration requirements

## Common Use Cases

### Personal Blog/Website
```bash
# Deploy Ghost blog with custom domain
wild-config-set apps.ghost.domain "blog.yourdomain.com"
wild-app-add ghost
wild-app-deploy ghost
```

### Photo Management
```bash
# Deploy Immich for photo backup and management
wild-app-add postgresql immich
wild-app-deploy postgresql immich
```

### Development Environment
```bash
# Set up Git hosting and container registry
wild-app-add gitea docker-registry
wild-app-deploy gitea docker-registry
```

### AI/ML Workloads
```bash
# Deploy vLLM for local AI inference
wild-config-set apps.vllm.model "Qwen/Qwen2.5-7B-Instruct"
wild-app-add vllm
wild-app-deploy vllm
```

## Troubleshooting

### Common Issues

**Cluster Not Responding**:
```bash
# Check node status
kubectl get nodes
talosctl health

# Verify network connectivity
ping <node-ip>
```

**Applications Not Starting**:
```bash
# Check pod status
kubectl get pods -n <app-namespace>

# View logs
kubectl logs deployment/<app-name> -n <app-namespace>

# Run diagnostics
wild-app-doctor <app-name>
```

**Certificate Issues**:
```bash
# Check certificate status
kubectl get certificates -A

# View cert-manager logs
kubectl logs -n cert-manager deployment/cert-manager
```

**DNS Problems**:
```bash
# Test DNS resolution
nslookup <app-domain>

# Check external-dns logs
kubectl logs -n external-dns deployment/external-dns
```

### Getting Help

**Documentation**:
- Check `docs/` directory for detailed guides
- Review application-specific README files
- Consult Kubernetes and Talos documentation

**Community Support**:
- Report issues on the GitHub repository
- Join community forums and discussions
- Share configurations and troubleshooting tips

**Professional Support**:
- Consider professional services for production deployments
- Engage with cloud infrastructure consultants
- Participate in training and certification programs

## Advanced Topics

### Custom Applications

Create your own Wild Cloud applications:

1. **Create App Directory**: `apps/myapp/`
2. **Define Manifest**: Include metadata and configuration defaults (see the sketch after this list)
3. **Create Templates**: Kubernetes resources with gomplate variables
4. **Test Deployment**: Use the standard Wild Cloud workflow
5. **Share**: Contribute back to the community

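As a sketch of step 2, a hypothetical `apps/myapp/manifest.yaml` might look like the following; the field names are assumptions based on the template system described earlier, not taken from the repository:

```yaml
# Hypothetical apps/myapp/manifest.yaml -- field names are assumptions:
name: myapp
description: "Example custom application"
defaultConfig:
  domain: "myapp.example.com"
  storage: "5Gi"
  replicas: 1
```
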
### Multi-Environment Deployments

Manage multiple Wild Cloud instances:

- **Development**: Single-node cluster for testing
- **Staging**: Multi-node cluster mirroring production
- **Production**: Full HA cluster with monitoring and backups

### Integration with External Services

Extend Wild Cloud capabilities:

- **External DNS Providers**: Cloudflare, Route53, Google DNS
- **Backup Storage**: S3, Google Cloud Storage, Azure Blob
- **Monitoring**: Prometheus, Grafana, AlertManager
- **CI/CD**: GitLab CI, GitHub Actions, Jenkins

### Performance Optimization

Optimize for your workloads:

- **Resource Allocation**: CPU and memory limits/requests
- **Storage Performance**: NVMe SSDs, storage classes
- **Network Optimization**: Network policies, service mesh
- **Scaling**: Horizontal pod autoscaling, cluster autoscaling

Wild Cloud provides a solid foundation for personal cloud infrastructure while maintaining the flexibility to grow and adapt to changing needs. Whether you're running a simple blog or a complex multi-service application, Wild Cloud's enterprise-grade technologies ensure your infrastructure is reliable, secure, and maintainable.

487
ai/wildcloud-v.PoC/project-architecture.md
Normal file
@@ -0,0 +1,487 @@

# Wild Cloud Project Architecture

Wild Cloud consists of two main directory structures: the **Wild Cloud Repository** (source code and templates) and **User Cloud Directories** (individual deployments). Understanding this architecture is essential for working with Wild Cloud effectively.

## Architecture Overview

```
Wild Cloud Repository (/path/to/wild-cloud-repo)   ← Source code, templates, scripts
        ↓
User Cloud Directory (/path/to/my-cloud)           ← Individual deployment instance
        ↓
Kubernetes Cluster                                 ← Running infrastructure
```

## Wild Cloud Repository Structure

The Wild Cloud repository (`WC_ROOT`) contains the source code, templates, and tools:

### `/bin/` - Command Line Interface
**Purpose**: All Wild Cloud CLI commands and utilities
```
bin/
├── wild-*                    # All user-facing commands (34+ scripts)
├── wild-common.sh            # Common utilities and functions
├── README.md                 # CLI documentation
└── helm-chart-to-kustomize   # Utility for converting Helm charts
```

**Key Commands**:
- **Setup**: `wild-init`, `wild-setup`, `wild-setup-cluster`, `wild-setup-services`
- **Apps**: `wild-app-*`, `wild-apps-list`
- **Config**: `wild-config*`, `wild-secret*`
- **Operations**: `wild-backup`, `wild-health`, `wild-dashboard-token`

### `/apps/` - Application Templates
**Purpose**: Pre-built applications ready for deployment
```
apps/
├── README.md                # Apps system documentation
├── ghost/                   # Blog publishing platform
│   ├── manifest.yaml        # App metadata and defaults
│   ├── kustomization.yaml   # Kustomize configuration
│   ├── deployment.yaml      # Kubernetes deployment
│   ├── service.yaml         # Service definition
│   ├── ingress.yaml         # HTTPS ingress
│   └── ...
├── immich/                  # Photo management
├── gitea/                   # Git hosting
├── postgresql/              # Database service
├── vllm/                    # AI/LLM inference
└── ...
```

**Application Categories**:
- **Content Management**: Ghost, Discourse
- **Media**: Immich
- **Development**: Gitea, Docker Registry
- **Databases**: PostgreSQL, MySQL, Redis
- **AI/ML**: vLLM
- **Infrastructure**: Memcached, NFS

### `/setup/` - Infrastructure Templates
**Purpose**: Cluster and service deployment templates
```
setup/
├── README.md
├── cluster-nodes/              # Talos node configuration
│   ├── init-cluster.sh         # Cluster initialization script
│   ├── patch.templates/        # Node-specific config templates
│   │   ├── controlplane.yaml   # Control plane template
│   │   └── worker.yaml         # Worker node template
│   └── talos-schemas.yaml      # Version mappings
├── cluster-services/           # Core Kubernetes services
│   ├── README.md
│   ├── metallb/                # Load balancer
│   ├── traefik/                # Ingress controller
│   ├── cert-manager/           # Certificate management
│   ├── longhorn/               # Distributed storage
│   ├── coredns/                # DNS resolution
│   ├── externaldns/            # DNS record management
│   ├── kubernetes-dashboard/   # Web UI
│   └── ...
├── dnsmasq/                    # DNS and PXE boot server
├── home-scaffold/              # User directory templates
└── operator/                   # Additional operator tools
```

### `/experimental/` - Development Projects
**Purpose**: Experimental features and development tools
```
experimental/
├── daemon/              # Go API daemon
│   ├── main.go          # API server
│   ├── Makefile         # Build automation
│   └── README.md
└── app/                 # React dashboard
    ├── src/             # React source code
    ├── package.json     # Dependencies
    ├── pnpm-lock.yaml   # Lock file
    └── README.md
```

### `/scripts/` - Utility Scripts
**Purpose**: Installation and utility scripts
```
scripts/
├── setup-utils.sh   # Install dependencies
└── install-wild-cloud-dependencies.sh
```

### `/docs/` - Documentation
**Purpose**: User guides and documentation
```
docs/
├── guides/          # Setup and usage guides
├── agent-context/   # Agent documentation
│   └── wildcloud/   # Context files for AI agents
└── *.md             # Various documentation files
```

### `/test/` - Test Suite
**Purpose**: Automated testing with Bats
```
test/
├── bats/               # Bats testing framework
├── fixtures/           # Test data and configurations
├── run_bats_tests.sh   # Test runner
└── *.bats              # Individual test files
```

### Root Files
```
/
├── README.md         # Project overview
├── CLAUDE.md         # AI assistant context
├── LICENSE           # GNU AGPLv3
├── CONTRIBUTING.md   # Contribution guidelines
├── env.sh            # Environment setup
├── .gitignore        # Git exclusions
└── .gitmodules       # Git submodules
```

## User Cloud Directory Structure

Each user deployment (`WC_HOME`) is an independent cloud instance:

### Directory Layout
```
my-cloud/                     # User's cloud directory
├── .wildcloud/               # Project marker and cache
│   ├── cache/                # Downloaded templates
│   │   ├── apps/             # Cached app templates
│   │   └── services/         # Cached service templates
│   └── logs/                 # Operation logs
├── config.yaml               # Main configuration
├── secrets.yaml              # Sensitive data (600 permissions)
├── env.sh                    # Environment setup (auto-generated)
├── apps/                     # Deployed application configs
│   ├── ghost/                # Compiled ghost configuration
│   ├── postgresql/           # Database configuration
│   └── ...
├── setup/                    # Infrastructure configurations
│   ├── cluster-nodes/        # Node-specific configurations
│   │   └── generated/        # Generated Talos configs
│   └── cluster-services/     # Compiled service configurations
├── docs/                     # Project-specific documentation
├── .kube/                    # Kubernetes configuration
│   └── config                # kubectl configuration
├── .talos/                   # Talos configuration
│   └── config                # talosctl configuration
└── backups/                  # Local backup staging
```

### Configuration Files

**`config.yaml`** - Main configuration (version controlled):
```yaml
cloud:
  domain: "example.com"
  email: "admin@example.com"
cluster:
  name: "my-cluster"
  nodeCount: 3
apps:
  ghost:
    domain: "blog.example.com"
```

**`secrets.yaml`** - Sensitive data (not version controlled):
```yaml
apps:
  ghost:
    dbPassword: "generated-password"
  postgresql:
    rootPassword: "generated-password"
cluster:
  talos:
    secrets: "base64-encoded-secrets"
```

**`.wildcloud/`** - Project metadata:
- Marks directory as Wild Cloud project
- Contains cached templates and temporary files
- Used for project detection by scripts

### Generated Directories

**`apps/`** - Compiled application configurations:
- Created by `wild-app-add` command
- Contains ready-to-deploy Kubernetes manifests
- Templates processed with user configuration
- Each app in separate subdirectory

**`setup/cluster-nodes/generated/`** - Talos configurations:
- Base cluster configuration (`controlplane.yaml`, `worker.yaml`)
- Node-specific patches and final configs
- Cluster secrets and certificates
- Generated by `wild-cluster-config-generate`

**`setup/cluster-services/`** - Kubernetes services:
- Compiled service configurations
- Generated by `wild-cluster-services-configure`
- Ready for deployment to cluster

## Template Processing Flow

### From Repository to Deployment

1. **Template Storage**: Templates stored in repository with placeholder variables
2. **Configuration Merge**: `wild-app-add` reads templates directly from the repository and merges app defaults with user config
3. **Template Compilation**: gomplate processes templates with user data
4. **Manifest Generation**: Final Kubernetes manifests created in user directory
5. **Deployment**: `wild-app-deploy` applies manifests to cluster

### Template Variables

**Repository Templates** (before processing):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: {{ .apps.ghost.namespace }}
spec:
  replicas: {{ .apps.ghost.replicas | default 1 }}
  template:
    spec:
      containers:
        - name: ghost
          image: "{{ .apps.ghost.image }}"
          env:
            - name: url
              value: "https://{{ .apps.ghost.domain }}"
```

**User Directory** (after processing):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: ghost
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: ghost
          image: "ghost:5.0.0"
          env:
            - name: url
              value: "https://blog.example.com"
```

## File Permissions and Security

### Security Model

**Configuration Security**:
```bash
config.yaml    # 644 (readable by group)
secrets.yaml   # 600 (owner only)
.wildcloud/    # 755 (standard directory)
apps/          # 755 (standard directory)
```

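A small sketch of how these modes could be enforced from a shell if they ever drift:

```bash
# Restore the expected modes (sketch; run from the cloud directory):
chmod 644 config.yaml
chmod 600 secrets.yaml
chmod 755 .wildcloud apps
```
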
**Git Integration**:
```gitignore
# Automatically excluded from version control
secrets.yaml                     # Never commit secrets
.wildcloud/cache/                # Temporary files
.wildcloud/logs/                 # Operation logs
setup/cluster-nodes/generated/   # Generated configs
.kube/                           # Kubernetes configs
.talos/                          # Talos configs
backups/                         # Backup files
```

### Access Patterns

**Read Operations**:
- Scripts read config and secrets via `wild-config` and `wild-secret`
- Template processor accesses both files for compilation
- Kubernetes tools read generated manifests

**Write Operations**:
- Only Wild Cloud commands modify config and secrets
- Manual editing supported but not recommended
- Backup processes create read-only copies

## Development Workflow

### Repository Development

**Setup Development Environment**:
```bash
git clone https://github.com/username/wild-cloud
cd wild-cloud
source env.sh            # Set up environment
scripts/setup-utils.sh   # Install dependencies
```

**Testing Changes**:
```bash
# Test specific functionality
test/run_bats_tests.sh

# Test with real cloud directory
cd /path/to/test-cloud
wild-app-add myapp             # Test app changes
wild-setup-cluster --dry-run   # Test cluster changes
```

### User Workflow

**Initial Setup**:
```bash
mkdir my-cloud && cd my-cloud
wild-init    # Initialize project
wild-setup   # Deploy infrastructure
```

**Daily Operations**:
```bash
wild-apps-list          # Browse available apps
wild-app-add ghost      # Configure app
wild-app-deploy ghost   # Deploy to cluster
```

**Configuration Management**:
```bash
wild-config apps.ghost.domain               # Read configuration
wild-config-set apps.ghost.storage "20Gi"   # Update configuration
wild-app-deploy ghost                       # Apply changes
```

## Integration Points

### External Systems

**DNS Providers**:
- Cloudflare API for DNS record management
- Route53 support for AWS domains
- Generic webhook support for other providers

**Certificate Authorities**:
- Let's Encrypt (primary)
- Custom CA support
- Manual certificate import

**Storage Backends**:
- Local storage via Longhorn
- NFS network storage
- Cloud storage integration (S3, etc.)

**Backup Systems** (a restic sketch follows this list):
- Restic for deduplication and encryption
- S3-compatible storage backends
- Local and remote backup targets

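To make the restic integration concrete, a manual backup against an S3-compatible backend might look like this; the repository URL and paths are assumptions, and in practice `wild-backup` wraps the real logic:

```bash
# Illustrative restic usage against an S3-compatible backend (values assumed):
export RESTIC_REPOSITORY="s3:s3.example.com/wild-cloud-backups"
export RESTIC_PASSWORD_FILE="$HOME/.wild-restic-password"   # hypothetical path
restic init                 # once, to create the repository
restic backup ./backups     # back up the local staging directory
restic snapshots            # list stored snapshots
```
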
### Kubernetes Integration

**Custom Resources**:
- Traefik IngressRoute and Middleware
- cert-manager Certificate and Issuer
- Longhorn Volume and Engine
- ExternalDNS DNSEndpoint

**Standard Resources**:
- Deployments, Services, ConfigMaps
- Ingress, PersistentVolumes, Secrets
- NetworkPolicies, ServiceAccounts
- Jobs, CronJobs, DaemonSets

## Extensibility Points

### Custom Applications

**Create New Apps**:
1. Create directory in `apps/`
2. Define `manifest.yaml` with metadata
3. Create Kubernetes resource templates
4. Test with `wild-app-add` and `wild-app-deploy`

**App Requirements**:
- Follow Wild Cloud labeling standards
- Use gomplate template syntax
- Include external-dns annotations
- Implement proper security contexts

### Custom Services

**Add Infrastructure Services**:
1. Create directory in `setup/cluster-services/`
2. Define installation and configuration scripts
3. Create Kubernetes manifests with templates
4. Integrate with the service deployment pipeline

### Script Extensions

**Extend CLI** (a skeleton sketch follows this list):
- Add scripts to `bin/` directory with `wild-` prefix
- Follow common script patterns (error handling, help text)
- Source `wild-common.sh` for utilities
- Use configuration system for customization

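A minimal skeleton for a new command under these conventions; `wild-common.sh` and its contents are assumed from the description above, so only the `source` line is shown:

```bash
#!/usr/bin/env bash
# bin/wild-hello -- hypothetical example command (sketch, not in the repo)
set -euo pipefail

# Pull in shared utilities, as the conventions above require.
source "$(dirname "$0")/wild-common.sh"

if [ $# -ne 1 ]; then
  echo "Usage: wild-hello <name>" >&2
  exit 1
fi

echo "Hello, $1, from Wild Cloud"
```
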
## Deployment Patterns

### Single-Node Development

**Configuration**:
```yaml
cluster:
  nodeCount: 1
  nodes:
    all-in-one:
      roles: ["controlplane", "worker"]
```

**Suitable For**:
- Development and testing
- Learning Kubernetes concepts
- Small personal deployments
- Resource-constrained environments

### Multi-Node Production

**Configuration**:
```yaml
cluster:
  nodeCount: 5
  nodes:
    control-1: { role: "controlplane" }
    control-2: { role: "controlplane" }
    control-3: { role: "controlplane" }
    worker-1: { role: "worker" }
    worker-2: { role: "worker" }
```

**Suitable For**:
- Production workloads
- High availability requirements
- Scalable application hosting
- Enterprise-grade deployments

### Hybrid Deployments

**Configuration**:
```yaml
cluster:
  nodes:
    control-1:
      role: "controlplane"
      taints: []               # Allow workloads on control plane
    worker-gpu:
      role: "worker"
      labels:
        nvidia.com/gpu: "true" # GPU-enabled node
```

**Use Cases**:
- Mixed workload requirements
- Specialized hardware (GPU, storage)
- Cost optimization
- Gradual scaling

The Wild Cloud architecture provides a solid foundation for personal cloud infrastructure while maintaining flexibility for customization and extension.

390
ai/wildcloud-v.PoC/setup-process.md
Normal file
@@ -0,0 +1,390 @@

# Wild Cloud Setup Process & Infrastructure

Wild Cloud provides a complete, production-ready Kubernetes infrastructure designed for personal use. It combines enterprise-grade technologies to create a self-hosted cloud platform with automated deployment, HTTPS certificates, and web management interfaces.

## Setup Phases Overview

The Wild Cloud setup follows a sequential, dependency-aware process:

1. **Environment Setup** - Install required tools and dependencies
2. **DNS/Network Foundation** - Set up dnsmasq for DNS and PXE booting
3. **Cluster Infrastructure** - Deploy Talos Linux nodes and the Kubernetes cluster
4. **Cluster Services** - Install core services (ingress, storage, certificates, etc.)

## Phase 1: Environment Setup

### Dependencies Installation
**Script**: `scripts/setup-utils.sh`

**Required Tools** (a quick check follows the list):
- `kubectl` - Kubernetes CLI
- `gomplate` - Template processor for configuration
- `kustomize` - Kubernetes configuration management
- `yq` - YAML processor
- `restic` - Backup tool
- `talosctl` - Talos Linux cluster management

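A quick way to confirm the tools landed on `PATH` after running the script:

```bash
# Verify the required tools are installed and on PATH:
for tool in kubectl gomplate kustomize yq restic talosctl; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```
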
### Project Initialization
**Command**: `wild-init`

Creates the basic Wild Cloud directory structure:
- `.wildcloud/` - Project marker and cache
- `config.yaml` - Main configuration file
- `secrets.yaml` - Sensitive data storage
- Basic project scaffolding

## Phase 2: DNS/Network Foundation

### dnsmasq Infrastructure
**Location**: `setup/dnsmasq/`
**Requirements**: Dedicated Linux machine with a static IP

**Services Provided**:
1. **LAN DNS Server**
   - Forwards internal domains (`*.internal.domain.com`) to the cluster
   - Forwards external domains (`*.domain.com`) to the cluster
   - Provides DNS resolution for the entire network

2. **PXE Boot Server**
   - Enables network booting for cluster node installation
   - DHCP/TFTP services for Talos Linux deployment
   - Automated node provisioning

**Network Configuration Example** (a matching dnsmasq sketch follows):
```yaml
network:
  subnet: 192.168.1.0/24
  gateway: 192.168.1.1
  dnsmasq_ip: 192.168.1.50
  dhcp_range: 192.168.1.100-200
  metallb_pool: 192.168.1.80-89
  control_plane_vip: 192.168.1.90
  node_ips: 192.168.1.91-93
```

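Translated into dnsmasq terms, the example above could produce options like these; this is an illustrative sketch only, not the file generated from `setup/dnsmasq/`, and the PXE boot filename is an assumption:

```
# Illustrative dnsmasq.conf fragment derived from the example values:
dhcp-range=192.168.1.100,192.168.1.200,12h   # DHCP pool
address=/internal.domain.com/192.168.1.90    # internal domains -> cluster VIP
address=/domain.com/192.168.1.90             # external domains -> cluster VIP
enable-tftp                                  # PXE boot support
dhcp-boot=ipxe.efi                           # boot file name (assumed)
```
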
## Phase 3: Cluster Infrastructure Setup

### Talos Linux Foundation
**Command**: `wild-setup-cluster`

**Talos Configuration**:
- **Version**: v1.11.0 (configurable)
- **Immutable OS**: Designed specifically for Kubernetes
- **System Extensions**:
  - Intel microcode updates
  - iSCSI tools for storage
  - gVisor container runtime
  - NVIDIA GPU support (optional)
  - Additional system utilities

### Cluster Setup Process

#### 1. Configuration Generation
**Script**: `wild-cluster-config-generate`

- Generates base Talos configurations (`controlplane.yaml`, `worker.yaml`)
- Creates cluster secrets using `talosctl gen config`
- Establishes foundation for all node configurations

#### 2. Node Setup (Atomic Operations)
**Script**: `wild-node-setup <node-name> [options]`

**Complete Node Lifecycle Management**:
- **Hardware Detection**: Discovers network interfaces and storage devices
- **Configuration Generation**: Creates node-specific patches and final configs
- **Deployment**: Applies Talos configuration to the node

**Options**:
- `--detect`: Force hardware re-detection
- `--no-deploy`: Generate configuration only, skip deployment

**Integration with Cluster Setup**:
- `wild-setup-cluster` automatically calls `wild-node-setup` for each node
- Individual node failures don't break cluster setup
- Clear retry instructions for failed nodes

### Cluster Architecture

**Control Plane**:
- 3 nodes for high availability
- Virtual IP (VIP) for load balancing
- etcd distributed across all control plane nodes

**Worker Nodes**:
- Variable count (configured during setup)
- Dedicated workload execution
- Storage participation via Longhorn

**Networking**:
- All nodes on same LAN segment
- Sequential IP assignment
- MetalLB integration for load balancing

## Phase 4: Cluster Services Installation

### Services Deployment Process
**Command**: `wild-setup-services [options]`
- **`--fetch`**: Fetch fresh templates before setup
- **`--no-deploy`**: Configure only, skip deployment

**New Architecture**: Per-service atomic operations
- Uses `wild-service-setup <service>` for each service in dependency order
- Each service handles the complete lifecycle: fetch → configure → deploy
- Dependency validation before each service deployment
- Fail-fast with clear recovery instructions

**Individual Service Management**: `wild-service-setup <service> [options]`
- **Default**: Configure and deploy using existing templates
- **`--fetch`**: Fetch fresh templates before setup
- **`--no-deploy`**: Configure only, skip deployment

### Core Services (Installed in Order)

#### 1. MetalLB Load Balancer
**Location**: `setup/cluster-services/metallb/`

- **Purpose**: Provides load balancing for bare metal clusters
- **Functionality**: Assigns external IPs to Kubernetes services
- **Configuration**: IP address pool from local network range
- **Integration**: Foundation for ingress traffic routing

#### 2. Longhorn Distributed Storage
**Location**: `setup/cluster-services/longhorn/`

- **Purpose**: Distributed block storage for persistent volumes
- **Features**:
  - Cross-node data replication
  - Snapshot and backup capabilities
  - Volume expansion and management
  - Web-based management interface
- **Storage**: Uses local disks from all cluster nodes

#### 3. Traefik Ingress Controller
**Location**: `setup/cluster-services/traefik/`

- **Purpose**: HTTP/HTTPS reverse proxy and ingress controller
- **Features**:
  - Automatic service discovery
  - TLS termination
  - Load balancing and routing
  - Gateway API support
- **Integration**: Works with MetalLB for external traffic

#### 4. CoreDNS
**Location**: `setup/cluster-services/coredns/`

- **Purpose**: DNS resolution for cluster services
- **Integration**: Connects with external DNS providers
- **Functionality**: Service discovery and DNS forwarding

#### 5. cert-manager
**Location**: `setup/cluster-services/cert-manager/`

- **Purpose**: Automatic TLS certificate management
- **Features**:
  - Let's Encrypt integration
  - Automatic certificate issuance and renewal
  - Multiple certificate authorities support
  - Certificate lifecycle management

#### 6. ExternalDNS
**Location**: `setup/cluster-services/externaldns/`

- **Purpose**: Automatic DNS record management
- **Functionality**:
  - Syncs Kubernetes services with DNS providers
  - Automatic A/CNAME record creation
  - Supports multiple DNS providers (Cloudflare, Route53, etc.)

#### 7. Kubernetes Dashboard
**Location**: `setup/cluster-services/kubernetes-dashboard/`

- **Purpose**: Web UI for cluster management
- **Access**: `https://dashboard.internal.domain.com`
- **Authentication**: Token-based access via `wild-dashboard-token`
- **Features**: Resource management, monitoring, troubleshooting

#### 8. NFS Storage (Optional)
**Location**: `setup/cluster-services/nfs/`

- **Purpose**: Network file system for shared storage
- **Use Cases**: Media storage, backups, shared data
- **Integration**: Mounted as persistent volumes in applications

#### 9. Docker Registry
**Location**: `setup/cluster-services/docker-registry/`

- **Purpose**: Private container registry
- **Features**: Store custom images locally
- **Integration**: Used by applications and CI/CD pipelines

## Infrastructure Components Deep Dive

### DNS and Domain Architecture

```
Internet → External DNS → MetalLB LoadBalancer → Traefik → Kubernetes Services
                                  ↑
                       Internal DNS (dnsmasq)
                                  ↑
                        Internal Network
```

**Domain Types**:
- **External**: `app.domain.com` - Public-facing services
- **Internal**: `app.internal.domain.com` - Admin interfaces only
- **Resolution**: dnsmasq forwards all domain traffic to cluster

### Certificate and TLS Management

**Automatic Certificate Flow**:
1. Service deployed with an ingress annotation (see the sketch after this list)
2. cert-manager detects the certificate requirement
3. Let's Encrypt challenge initiated
4. Certificate issued and stored in a Kubernetes secret
5. Traefik uses the certificate for TLS termination
6. Automatic renewal before expiration

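Step 1 usually amounts to a single annotation on the ingress resource. A minimal sketch, assuming a ClusterIssuer named `letsencrypt` (the issuer name Wild Cloud actually configures may differ):

```yaml
# Minimal ingress sketch that triggers the flow above (issuer name assumed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"
spec:
  tls:
    - hosts: ["blog.example.com"]
      secretName: ghost-tls     # cert-manager stores the certificate here
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost
                port:
                  number: 80
```
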
### Storage Architecture

**Longhorn Distributed Storage**:
- Block-level replication across nodes
- Default 3-replica policy for data durability
- Automatic failover and recovery
- Snapshot and backup capabilities
- Web UI for management and monitoring

**Storage Classes** (a claim sketch follows this list):
- `longhorn` - Default replicated storage
- `longhorn-single` - Single replica for non-critical data
- `nfs` - Shared network storage (if configured)

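Applications select a class through an ordinary PersistentVolumeClaim; a minimal sketch using the default replicated class:

```yaml
# Example claim against the replicated Longhorn class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-content
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```
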
### Network Traffic Flow

**External Request Flow**:
1. DNS resolution via dnsmasq → cluster IP
2. Traffic hits MetalLB load balancer
3. MetalLB forwards to Traefik ingress
4. Traefik terminates TLS and routes to service
5. Service forwards to appropriate pod
6. Response follows reverse path

### High Availability Features

**Control Plane HA**:
- 3 control plane nodes with leader election
- Virtual IP for API server access
- etcd cluster with automatic failover
- Distributed workload scheduling

**Storage HA**:
- Longhorn 3-way replication
- Automatic replica placement across nodes
- Node failure recovery
- Data integrity verification

**Networking HA**:
- MetalLB speaker pods on all nodes
- Automatic load balancer failover
- Multiple ingress controller replicas

## Hardware Requirements

### Minimum Specifications
- **Nodes**: 3 control plane + optional workers
- **RAM**: 8GB minimum per node (16GB+ recommended)
- **CPU**: 4 cores minimum per node
- **Storage**: 100GB+ local storage per node
- **Network**: Gigabit ethernet connectivity

### Network Requirements
- All nodes on same LAN segment
- Static IP assignments or DHCP reservations
- dnsmasq server accessible by all nodes
- Internet connectivity for image pulls and Let's Encrypt

### Recommended Hardware
- **Control Plane**: 16GB RAM, 8 cores, 200GB NVMe SSD
- **Workers**: 32GB RAM, 16 cores, 500GB NVMe SSD
- **Network**: Dedicated VLAN or network segment
- **Redundancy**: UPS protection, dual network interfaces

## Configuration Management

### Configuration Files
- `config.yaml` - Main configuration (domains, network, apps)
- `secrets.yaml` - Sensitive data (passwords, API keys, certificates)
- `.wildcloud/` - Cache and temporary files

### Template System
**gomplate Integration**:
- All configurations processed as templates
- Access to config and secrets via template variables
- Dynamic configuration generation
- Environment-specific customization

### Configuration Commands
```bash
# Read configuration values
wild-config cluster.name
wild-config apps.ghost.domain

# Set configuration values
wild-config-set cloud.domain "example.com"
wild-config-set cluster.nodeCount 5

# Secret management
wild-secret apps.database.password
wild-secret-set apps.api.key "secret-value"
```

## Setup Commands Reference

### Complete Setup
```bash
wild-init    # Initialize project
wild-setup   # Complete automated setup
```

### Phase-by-Phase Setup
```bash
wild-setup-cluster    # Cluster infrastructure only
wild-setup-services   # Cluster services only
```

### Individual Operations
```bash
wild-cluster-config-generate              # Generate base configs
wild-node-setup <node-name>               # Complete node setup (detect → configure → deploy)
wild-node-setup <node-name> --detect      # Force hardware re-detection
wild-node-setup <node-name> --no-deploy   # Configuration only
wild-dashboard-token                      # Get dashboard access
wild-health                               # System health check
```

## Troubleshooting and Validation

### Health Checks
```bash
wild-health            # Overall system status
kubectl get nodes      # Node status
kubectl get pods -A    # All pod status
talosctl health        # Talos cluster health
```

### Service Validation
```bash
kubectl get svc -n metallb-system     # MetalLB status
kubectl get pods -n longhorn-system   # Storage status
kubectl get pods -n traefik           # Ingress status
kubectl get certificates -A           # Certificate status
```

### Log Analysis
```bash
talosctl logs -f machined                              # Talos system logs
kubectl logs -n traefik deployment/traefik             # Ingress logs
kubectl logs -n cert-manager deployment/cert-manager   # Certificate logs
```

This comprehensive setup process creates a production-ready personal cloud infrastructure with enterprise-grade reliability, security, and management capabilities.
