Update content previously imported from the wild-cloud repo.

2025-08-31 14:32:02 -07:00
parent 037ca24e05
commit 03284765d2
7 changed files with 493 additions and 24 deletions


@@ -6,6 +6,9 @@ series:
series_order: 5
---
{{<go-deeper>}}
This section is mandatory and, we admit, not accessible to a non-technical audience. We plan to address this before releasing to early adopters. These instructions are a placeholder suitable for Wild Cloud devs and will be replaced before the Early-Adopter release.
{{</go-deeper>}}
## Get your hardware
@@ -13,12 +16,13 @@ Make sure you have a dedicated machine for your DNS server. It can be tiny, like
## Install your OS
- Install another Debian-based Linux machine on your LAN (e.g. Debian or Ubuntu).
- Make sure you can SSH in from your operator machine.
- Record its IP address.
## Install the DNS software
- Run `wild-dnsmasq-install` from your Wild Cloud Home on your operator machine.
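Once the installer finishes, it's worth a quick sanity check from your operator machine. This is a minimal sketch; replace `<DNS_SERVER_IP>` with the address you recorded above.
```bash
# Query the new dnsmasq server directly; an answer means it is resolving.
dig @<DNS_SERVER_IP> example.com +short
```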
## Update your LAN router


@@ -10,13 +10,17 @@ series_order: 4
## Your operator machine
Your operator machine is the computer you manage your wild cloud with. Below, we will install the Wild Cloud software on your operator machine.
## Download the Wild Cloud software onto your operator machine
Your operator machine should be a [Linux](/learning/linux/) machine on your LAN. It's helpful if it has a nice big hard drive you can use for backing up your cloud data.
## Install the Wild Cloud software
{{< gitea server="https://git.civilsociety.dev" repo="CSTF/wild-cloud" showThumbnail=true >}}
{{< go-deeper >}}
We recognize that this part of the guide requires more knowledge of `git` and `bash`. We plan to create a `wild` CLI and perhaps even a GUI setup before wider release. For now, we are prototyping our (fully functional) POC by using scripts from this Wild Cloud repo.
{{< /go-deeper >}}
Download the Wild Cloud software using git:
```bash
git clone https://git.civilsociety.dev/CSTF/wild-cloud.git
```
@@ -32,3 +36,21 @@ Install dependencies:
```bash
scripts/setup-utils.sh
```
## Create your Wild Cloud Home
Now that you have the Wild Cloud software installed, we are going to create a directory that will hold everything about your personal wild cloud. We call this your "Wild Cloud Home".
{{< definition >}}
Your **Wild Cloud Home** is the directory where all of your wild cloud data will be stored. This includes your configuration files, data files, and any other files related to your wild cloud.
{{< /definition >}}
You can put it in any directory. Here's an example that makes a directory named `my-wild-cloud` in your home directory.
```bash
mkdir -p ~/my-wild-cloud
cd ~/my-wild-cloud
wild-cloud-scaffold
```
That's it! Your wild cloud operator machine is ready to go! Most of the rest of the instructions in this guide will assume you are working within this directory.


@@ -6,43 +6,101 @@ series:
series_order: 6
---
## Set up your own Wild Cloud!
This section of the guide will walk you through setting up your wild cloud cluster. When you are done, you will have a complete wild cloud, all set up, ready for your apps!
## Prepare your USB key(s)
### Download the correct Talos ISO
You will be using a USB key to boot each of your cluster machines. Let's create the bootable USB key now.
From your wild cloud home, run:
```bash
# Upload schematic configuration to get schematic ID from the Talos ISO service.
wild-talos-schema
# Download custom ISO with system extensions.
wild-cluster-node-boot-assets-download
```
This will download all the Talos ISOs for your wild cloud configuration.
The custom ISO includes system extensions needed for a wild cloud cluster and is saved to `.wildcloud/iso/talos-v<VERSION>-metal-amd64.iso`.
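If you want to confirm the download before writing it to a USB key, listing the directory is a quick check (the exact filename depends on the Talos version in your configuration):
```bash
# The ISO filename should match the Talos version you configured.
ls -lh .wildcloud/iso/
```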
### Copy ISO to a USB drive
```bash
# Find your USB device (be careful to select the right device!)
lsblk
sudo dmesg | tail # Check for recently connected USB devices
# Create a bootable USB (replace /dev/sdX with your USB device and the version you downloaded).
sudo dd if=.wildcloud/iso/talos-v<VERSION>-metal-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
# Verify the write completed
sync
```
**⚠️ Warning**: Double-check the device path (`/dev/sdX`). Writing to the wrong device will destroy data!
TODO: Look into some utilities (Balena Etcher, Rufus, etc.) to make this simpler/safer.
## How to install Talos OS on a machine
To add a machine to your Wild Cloud, you will need to install Talos OS on it. The first time the machine boots, it will be in "maintenance mode", which makes it available to be configured as part of your wild cloud.
To install Talos OS:
Prepare the machine:
- The machine should have a ~100GB drive that Talos OS will be installed to. Talos will never use more space than this, so a larger drive is wasted space.
- If the machine is going to be a worker node, any number of additional drives can be inserted.
- Connect the machine to your wild cloud network switch.
1. Insert your Talos USB key and boot the machine.
2. Enter UEFI settings (usually F2, F12, DEL, or ESC during startup) and configure the machine to boot from your chosen boot drive.
3. For this time only, choose to boot from the USB key.
That's all you need to do. The machine should boot into maintenance mode and display an IP address that you will use during setup.
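Optionally, if you have `talosctl` on your operator machine, you can confirm a node is reachable in maintenance mode before running setup. A sketch, assuming `<NODE_IP>` is the address shown on the machine's console:
```bash
# Maintenance mode has no client certificates yet, hence --insecure.
# This also lists the disks available for the Talos install.
talosctl get disks --insecure --nodes <NODE_IP>
```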
## Setup!
Ok, with the Talos USB key and knowing how to boot a machine with it, we're ready to set up our cluster. In your Wild Cloud Home, start the setup process by running...
```bash
wild-setup
```
The setup script will walk you through installing your cluster nodes (part 1) and your cluster services (part 2).
### Part 1: Set up your cluster nodes
_To add a node, it must be in "maintenance mode", as described above._
Your cluster will be "brought up" after your first control node is added. Each additional control node will then be added to the cluster until you have all three control nodes running.
After your three control nodes are set up, you will have the opportunity to add as many worker nodes as you'd like (three minimum).
If you ever want to set up more cluster nodes, you don't need to run the full setup. To get back into this part of the setup, you can just run:
```bash
wild-setup-cluster
```
### Part 2: Install your cluster services
After your control and worker nodes are installed, `wild-setup` will automatically install all of your wild cloud's cluster services.
The first time through this part of the setup, you will be asked for your preferences as needed and they will be captured in your `config.yaml`. Once your preferences are recorded, the services will be automatically deployed to your cluster.
If you ever need to change your cluster services, you don't have to re-run the full setup. You can just run:
```bash
wild-setup-services
```
## Check your installation


@@ -0,0 +1,22 @@
---
title: "Glossary"
date: 2025-08-31
summary: "A comprehensive glossary of key terms and concepts related to Wild Cloud."
draft: true
---
## Cluster
- LAN
- cluster
### LAN
- router
### Cluster
- nameserver
- node
- master
- load balancer


@@ -0,0 +1,338 @@
---
title: "App Visibility"
date: 2025-08-31
summary: "Understanding how applications achieve visibility in a Kubernetes environment, from deployment to external access."
draft: true
---
# Understanding Network Visibility in Kubernetes
This guide explains how applications deployed on our Kubernetes cluster become accessible from both internal and external networks. Whether you're deploying a public-facing website or an internal admin panel, this document will help you understand the journey from deployment to accessibility.
## The Visibility Pipeline
When you deploy an application to the cluster, making it accessible involves several coordinated components working together:
1. **Kubernetes Services** - Direct traffic to your application pods
2. **Ingress Controllers** - Route external HTTP/HTTPS traffic to services
3. **Load Balancers** - Assign external IPs to services
4. **DNS Management** - Map domain names to IPs
5. **TLS Certificates** - Secure connections with HTTPS
Let's walk through how each part works and how they interconnect.
## From Deployment to Visibility
### 1. Application Deployment
Your journey begins with deploying your application on Kubernetes. This typically involves:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myapp:latest
          ports:
            - containerPort: 80
```
This creates pods running your application, but they're not yet accessible outside their namespace.
### 2. Kubernetes Service: Internal Connectivity
A Kubernetes Service provides a stable endpoint to access your pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
```
With this `ClusterIP` service, your application is accessible within the cluster at `my-app.my-namespace.svc.cluster.local`, but not from outside.
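A quick way to verify this internal connectivity is to run a throwaway pod and request the service by its cluster DNS name. A sketch, assuming you have `kubectl` access to the cluster:
```bash
# Launch a temporary curl pod, hit the service, and clean up afterwards.
kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s http://my-app.my-namespace.svc.cluster.local
```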
### 3. Ingress: Defining HTTP Routes
For HTTP/HTTPS traffic, an Ingress resource defines routing rules:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: "traefik"
    external-dns.alpha.kubernetes.io/target: "CLOUD_DOMAIN"
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  rules:
    - host: my-app.CLOUD_DOMAIN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.CLOUD_DOMAIN
      secretName: wildcard-wild-cloud-tls
```
This Ingress tells the cluster to route requests for `my-app.CLOUD_DOMAIN` to your service. The annotations provide hints to other systems like ExternalDNS.
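You can exercise a routing rule before DNS is in place by resolving the hostname to Traefik's external IP yourself. A sketch: `192.168.8.240` stands in for the MetalLB-assigned IP covered below, and `-k` skips certificate verification while TLS is still being set up.
```bash
# Confirm the Ingress was admitted and shows an address.
kubectl get ingress my-app -n my-namespace

# Route to Traefik directly, bypassing DNS.
curl -k --resolve my-app.CLOUD_DOMAIN:443:192.168.8.240 https://my-app.CLOUD_DOMAIN/
```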
### 4. Traefik: The Ingress Controller
Our cluster uses Traefik as the ingress controller. Traefik watches for Ingress resources and configures itself to handle the routing rules. It acts as a reverse proxy and edge router, handling:
- HTTP/HTTPS routing
- TLS termination
- Load balancing
- Path-based routing
- Host-based routing
Traefik runs as a service in the cluster with its own external IP (provided by MetalLB).
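To see which external IP Traefik received, check its LoadBalancer service. A sketch, since the namespace and service name depend on how Traefik was installed:
```bash
# The EXTERNAL-IP column shows the MetalLB-assigned address.
kubectl get svc --all-namespaces | grep -i traefik
```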
### 5. MetalLB: Assigning External IPs
Since we're running on-premises (not in a cloud that provides load balancers), we use MetalLB to assign external IPs to services. MetalLB manages a pool of IP addresses from our local network:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.8.240-192.168.8.250
```
This allows Traefik and any other LoadBalancer services to receive a real IP address from our network.
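To confirm the pool is in place and being used, you can list MetalLB's pools and the services drawing from them (assuming MetalLB's CRDs are installed in `metallb-system` as above):
```bash
# Show the configured address pool(s).
kubectl get ipaddresspools -n metallb-system

# Every LoadBalancer service should now hold an IP from the pool.
kubectl get svc --all-namespaces | grep LoadBalancer
```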
### 6. ExternalDNS: Automated DNS Management
ExternalDNS automatically creates and updates DNS records in our CloudFlare DNS zone:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: externaldns
spec:
  # ...
  template:
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns
          args:
            - --source=service
            - --source=ingress
            - --provider=cloudflare
            - --txt-owner-id=wild-cloud
```
ExternalDNS watches Kubernetes Services and Ingresses with appropriate annotations, then creates corresponding DNS records in CloudFlare, making your applications discoverable by domain name.
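Two quick checks when a record doesn't appear: watch ExternalDNS's own logs, and query the name directly (assuming the Deployment name and namespace shown above):
```bash
# ExternalDNS logs show which records it decided to create or update.
kubectl logs -n externaldns deployment/external-dns --tail=50

# Confirm the record resolves once CloudFlare has it.
dig +short my-app.CLOUD_DOMAIN
```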
### 7. Cert-Manager: TLS Certificate Automation
To secure connections with HTTPS, we use cert-manager to automatically obtain and renew TLS certificates:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-wild-cloud-io
  namespace: default
spec:
  secretName: wildcard-wild-cloud-tls
  dnsNames:
    - "*.CLOUD_DOMAIN"
    - "CLOUD_DOMAIN"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```
Cert-manager handles:
- Certificate request and issuance
- DNS validation (for wildcard certificates)
- Automatic renewal
- Secret storage of certificates
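You can watch issuance progress using the names from the manifest above; `READY` flips to `True` once the certificate is stored in its secret:
```bash
# Check issuance status.
kubectl get certificate wildcard-wild-cloud-io -n default

# The Events section explains pending DNS validation or issuance errors.
kubectl describe certificate wildcard-wild-cloud-io -n default
```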
## The Two Visibility Paths
In our infrastructure, we support two primary visibility paths:
### Public Services (External Access)
Public services are those meant to be accessible from the public internet:
1. **Service**: Kubernetes ClusterIP service (internal)
2. **Ingress**: Defines routing with hostname like `service-name.CLOUD_DOMAIN`
3. **DNS**: ExternalDNS creates a CNAME record pointing to `CLOUD_DOMAIN`
4. **TLS**: Uses wildcard certificate for `*.CLOUD_DOMAIN`
5. **IP Addressing**: Traffic reaches the MetalLB-assigned IP for Traefik
6. **Network**: Traffic flows from external internet → router → MetalLB IP → Traefik → Kubernetes Service → Application Pods
**Deploy a public service with:**
```bash
./bin/deploy-service --type public --name myservice
```
### Internal Services (Private Access)
Internal services are restricted to the internal network:
1. **Service**: Kubernetes ClusterIP service (internal)
2. **Ingress**: Defines routing with hostname like `service-name.internal.CLOUD_DOMAIN`
3. **DNS**: ExternalDNS creates an A record pointing to the internal load balancer IP
4. **TLS**: Uses wildcard certificate for `*.internal.CLOUD_DOMAIN`
5. **IP Addressing**: Traffic reaches the MetalLB-assigned IP for Traefik
6. **Network**: Traffic flows from internal network → MetalLB IP → Traefik → Service → Pods
7. **Security**: Traefik middleware restricts access to internal network IPs
**Deploy an internal service with:**
```bash
./bin/deploy-service --type internal --name adminpanel
```
## How It All Works Together
1. **You deploy** an application using our deploy-service script
2. **Kubernetes** schedules and runs your application pods
3. **Services** provide a stable endpoint for your pods
4. **Traefik** configures routing based on Ingress definitions
5. **MetalLB** assigns real network IPs to LoadBalancer services
6. **ExternalDNS** creates DNS records for your services
7. **Cert-Manager** ensures valid TLS certificates for HTTPS
### Network Flow Diagram
```mermaid
flowchart TD
    subgraph Internet["Internet"]
        User("User Browser")
        CloudDNS("CloudFlare DNS")
    end
    subgraph Cluster["Cluster"]
        Router("Router")
        MetalLB("MetalLB")
        Traefik("Traefik Ingress")
        IngSvc("Service")
        IngPods("Application Pods")
        Ingress("Ingress")
        CertManager("cert-manager")
        WildcardCert("Wildcard Certificate")
        ExtDNS("ExternalDNS")
    end
    User -- "1\. DNS Query" --> CloudDNS
    CloudDNS -- "2\. IP Address" --> User
    User -- "3\. HTTPS Request" --> Router
    Router -- "4\. Forward" --> MetalLB
    MetalLB -- "5\. Route" --> Traefik
    Traefik -- "6\. Route" --> Ingress
    Ingress -- "7\. Forward" --> IngSvc
    IngSvc -- "8\. Balance" --> IngPods
    ExtDNS -- "A. Update DNS" --> CloudDNS
    Ingress -- "B. Configure" --> ExtDNS
    CertManager -- "C. Issue Cert" --> WildcardCert
    Ingress -- "D. Use" --> WildcardCert
    User:::internet
    CloudDNS:::internet
    Router:::cluster
    MetalLB:::cluster
    Traefik:::cluster
    IngSvc:::cluster
    IngPods:::cluster
    Ingress:::cluster
    CertManager:::cluster
    WildcardCert:::cluster
    ExtDNS:::cluster
    classDef internet fill:#fcfcfc,stroke:#333
    classDef cluster fill:#a6f3ff,stroke:#333
    style User fill:#C8E6C9
    style CloudDNS fill:#C8E6C9
    style Router fill:#C8E6C9
    style MetalLB fill:#C8E6C9
    style Traefik fill:#C8E6C9
    style IngSvc fill:#C8E6C9
    style IngPods fill:#C8E6C9
    style Ingress fill:#C8E6C9
    style CertManager fill:#C8E6C9
    style WildcardCert fill:#C8E6C9
    style ExtDNS fill:#C8E6C9
```
A successful deployment creates a chain of connections:
```
Internet → DNS (domain name) → External IP → Traefik → Kubernetes Service → Application Pod
```
## Behind the Scenes: The Technical Magic
When you use our `deploy-service` script, several things happen:
1. **Template Processing**: The script processes a YAML template for your service type, using environment variables to customize it
2. **Namespace Management**: Creates or uses your service's namespace
3. **Resource Application**: Applies the generated YAML to create/update all Kubernetes resources
4. **DNS Configuration**: ExternalDNS detects the new resources and creates DNS records
5. **Certificate Management**: Cert-manager ensures TLS certificates exist or creates new ones
6. **Secret Distribution**: For internal services, certificates are copied to the appropriate namespaces
## Troubleshooting Visibility Issues
When services aren't accessible, the issue usually lies in one of these areas:
1. **DNS Resolution**: Domain not resolving to the correct IP
2. **Certificate Problems**: Invalid, expired, or missing TLS certificates
3. **Ingress Configuration**: Incorrect routing rules or annotations
4. **Network Issues**: Firewall rules or internal/external network segregation
Our [Visibility Troubleshooting Guide](/docs/troubleshooting/VISIBILITY.md) provides detailed steps for diagnosing these issues.
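As a first pass before reaching for the full guide, these checks map one-to-one onto the four areas above (a sketch; substitute your hostname, namespace, and Traefik's IP):
```bash
# 1. DNS: does the name resolve, and to the IP you expect?
dig +short my-app.CLOUD_DOMAIN

# 2. Certificates: is the wildcard certificate Ready?
kubectl get certificate --all-namespaces

# 3. Ingress: is the rule admitted with the right host and backend?
kubectl describe ingress my-app -n my-namespace

# 4. Network: can you reach Traefik's IP at all?
curl -vk --resolve my-app.CLOUD_DOMAIN:443:<TRAEFIK_IP> https://my-app.CLOUD_DOMAIN/
```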
## Conclusion
The visibility layer in our infrastructure represents a sophisticated interplay of multiple systems working together. While complex under the hood, it provides a streamlined experience for developers to deploy applications with proper networking, DNS, and security.
By understanding these components and their relationships, you'll be better equipped to deploy applications and diagnose any visibility issues that arise.
## Further Reading
- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [ExternalDNS Project](https://github.com/kubernetes-sigs/external-dns)
- [Cert-Manager Documentation](https://cert-manager.io/docs/)
- [MetalLB Project](https://metallb.universe.tf/)


@@ -0,0 +1,25 @@
---
title: Welcome!
description: Welcome to the Wild Cloud tutorial!
date: 2025-07-13
tags:
- tutorial
---
Hi! I'm Paul.
Welcome! I am SO excited you're here!
Why am I so excited?? When I was an eight-year-old kid, I had a computer called the Commodore 64. One of the coolest things about it was that it came with a User Manual that showed you not just how to use that computer, but how to actually _use computers_. It taught me how to write my own programs and run them! That experience of wonder, that I could write something and have it do something, is the single biggest reason why I have spent the last 40 years working with computers.
When I was 12, I found out I could plug a cartridge into the back of my Commodore, plug a telephone line into it (maybe some of you don't even know what that is anymore!), and _actually call_ other people's computers in my city. We developed such a sense of community, connecting our computers together and leaving each other messages about the things we were thinking. It was a tiny taste of the early Internet.
I had a similar experience when I was 19 and installed something called the "World Wide Web" on the computers I managed in a computer lab at college. My heart skipped a beat when I clicked on a few "links" and actually saw an image from a computer in Europe just magically appear on my screen! It felt like I was teleported to the other side of the world. Pretty amazing for a kid who had rarely been out of Nebraska!
Everything in those days was basically free. My Commodore cost $200, and people connected to each other out of pure curiosity. If you wanted to be a presence on the Internet, you could just connect your computer to it and people around the world could visit you! _All_ of the early websites were entirely non-commercial. No ads! No sign-ups! No monthly subscription fees! It felt like the whole world was coming together to build something amazing for everyone.
Of course, as we all know, it didn't stay that way. After college, I had to figure out how to pay for Internet connections myself. At some point search engines decided to make money by selling ads on their pages... and then providing ad services to other pages--"monetize" they called it. Then commercial companies found out about it and wanted to sell books and shoes to people, and the government decided it wanted to capture that tax money. Instead of making the free and open software better, and the open communities stronger, and encouraging people to participate by running their own computers and software, companies started inviting people to connect _inside_ their controlled computers. "Hey! You don't have to do all that stuff," they would say. "You can just jump on our servers for free!"
So people stopped being curious about what we could do with our computers together, and they got a login name, and they couldn't do their own things on their own computers anymore, and their data became the property of the company whose computer they were using, and those companies started working together to make it faster to go to their own computers, and to make it go very, very, slow if you wanted to let people come to your computer, or even to forbid having people come to your computer entirely. So now, we are _safe_ and _simple_ and _secure_ and we get whatever the companies want to give us, which seems to usually be ads (so many ads) or monthly fee increases, and they really, really, love getting our attention and putting it where they want it. Mostly, it's just all so... boring. So boring.
So, why am I excited you're here? Because with this project, this Wild Cloud project, I think I just might be able to pass on some of that sense of wonder that captured me so many years ago!