38 Commits

Author  SHA1  Message  Date
Paul Payne  61460b63a3  Adds more to wild-backup.  2025-08-31 15:02:35 -07:00
Paul Payne  9f302e0e29  Update backup/restore docs.  2025-08-31 15:00:00 -07:00
Paul Payne  f83763b070  Fix immich secret key ref.  2025-08-31 14:49:35 -07:00
Paul Payne  c9ab5a31d8  Discourse app fixes.  2025-08-31 14:46:42 -07:00
Paul Payne  1aa9f1050d  Update docs.  2025-08-31 14:30:09 -07:00
Paul Payne  3b8b6de338  Removes PXE booting from dnsmasq setup.  2025-08-31 12:44:53 -07:00
Paul Payne  fd1ba7fbe0  Provide disk size for select of disks during setup of cluster nodes. Setup NEXT worker #.  2025-08-31 11:56:26 -07:00
Paul Payne  5edf14695f  Sets password in REDIS app.  2025-08-31 11:54:18 -07:00
Paul Payne  9a7b2fec72  Merge branch 'main' of https://git.civilsociety.dev/CSTF/wild-cloud  2025-08-23 06:15:08 -07:00
Paul Payne  9321e54bce  Backup and restore.  2025-08-23 05:46:48 -07:00
Paul Payne  73c0de1f67  Cleanup.  2025-08-23 05:46:18 -07:00
Paul Payne  3a9bd7c6b3  Add link to Wild Cloud forum to README.  2025-08-21 14:04:55 +00:00
Paul Payne  476f319acc  Updates tests.  2025-08-16 08:08:35 -07:00
Paul Payne  6e3b50c217  Removes jellyfin app (to be updated in a branch).  2025-08-16 08:04:25 -07:00
Paul Payne  c2f8f7e6a0  Default value created when wild-secret-set is run.  2025-08-15 03:32:55 -07:00
Paul Payne  22d4dc15ce  Backup env config setup. Add --check flag to wild-config and wild-set.  2025-08-15 03:32:18 -07:00
Paul Payne  4966bd05f2  Add backup config to wild-setup-scaffold  2025-08-08 09:57:07 -07:00
Paul Payne  6be0260673  Remove config wrappers.  2025-08-08 09:56:36 -07:00
Paul Payne  97e25a5f08  Update wild-app-add to copy all template files. Clean up script output.  2025-08-08 09:30:31 -07:00
Paul Payne  360069a8e8  Adds redis password. Adds tlsSecretName config to manifests.  2025-08-08 09:27:48 -07:00
Paul Payne  33dce80116  Adds discourse app.  2025-08-08 09:26:02 -07:00
Paul Payne  9ae248d5f7  Updates secret handling in wild-app-deploy.  2025-08-05 17:40:56 -07:00
Paul Payne  333e215b3b  Add note about using helm charts in apps/README.md.  2025-08-05 17:39:31 -07:00
Paul Payne  b69344fae6  Updates keila app with new tls smtp config.  2025-08-05 17:39:03 -07:00
Paul Payne  03c631d67f  Adds listmonk app.  2025-08-05 17:38:28 -07:00
Paul Payne  f4f2bfae1c  Remove defaultConfig from manifest when adding an app.  2025-08-04 16:02:37 -07:00
Paul Payne  fef238db6d  Get SMTP connection security config during service setup.  2025-08-04 16:01:32 -07:00
Paul Payne  b1b14d8d80  Fix manifest templating in wild-cloud-add  2025-08-04 16:00:53 -07:00
Paul Payne  4c14744a32  Adds keila app.  2025-08-04 13:58:05 -07:00
Paul Payne  5ca8c010e5  Use full secret paths.  2025-08-04 13:57:52 -07:00
Paul Payne  22537da98e  Update spelling dictionary.  2025-08-03 12:34:27 -07:00
Paul Payne  f652a82319  Add db details to app README.  2025-08-03 12:34:08 -07:00
Paul Payne  c7c71a203f  SMTP service setup.  2025-08-03 12:27:02 -07:00
Paul Payne  7bbc4cc52d  Remove test artifact.  2025-08-03 12:26:14 -07:00
Paul Payne  d3260d352a  Fix wild-app-add secret generation.  2025-08-03 12:24:21 -07:00
Paul Payne  800f6da0d9  Updates mysql app to follow new wild-app patterns.  2025-08-03 12:23:55 -07:00
Paul Payne  39b174e857  Simplify environment in gitea app.  2025-08-03 12:23:33 -07:00
Paul Payne  23f1cb7b32  Updates ghost app to follow new wild-app patterns.  2025-08-03 12:23:04 -07:00
143 changed files with 4217 additions and 2577 deletions


@@ -10,6 +10,7 @@ dnsmasq
envsubst
externaldns
ftpd
gitea
glddns
gomplate
IMMICH
@@ -34,6 +35,7 @@ OVERWRITEWEBROOT
PGDATA
pgvector
rcode
restic
SAMEORIGIN
traefik
USEPATH

.gitignore

@@ -7,9 +7,9 @@ secrets.yaml
CLAUDE.md
**/.claude/settings.local.json
# Wild Cloud
**/config/secrets.env
**/config/config.env
# Test directory - ignore temporary files
# Test directory
test/tmp
test/test-cloud/*
!test/test-cloud/README.md


@@ -2,7 +2,7 @@
Welcome! So excited you're here!
_This project is massively in progress. It's not ready to be used yet (even though I am using it as I develop it). This is published publicly for transparency. If you want to help out, please get in touch._
_This project is massively in progress. It's not ready to be used yet (even though I am using it as I develop it). This is published publicly for transparency. If you want to help out, please [get in touch](https://forum.civilsociety.dev/c/wild-cloud/5)._
## Why Build Your Own Cloud?


@@ -130,9 +130,77 @@ To reference operator configuration in the configuration files, use gomplate var
When `wild-app-add` is run, the app's Kustomize files will be compiled with the operator's Wild Cloud configuration and secrets resulting in standard Kustomize files being placed in the Wild Cloud home directory.
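As a rough sketch of that compilation step, here is the ghost app's `GHOST_HOST` variable before and after templating (the compiled value is illustrative and assumes `apps.ghost.domain` is set to `ghost.cloud.example.com`):
```yaml
# In the app's template files, gomplate variables reference operator config:
env:
  - name: GHOST_HOST
    value: "{{ .apps.ghost.domain }}"

# After `wild-app-add`, the compiled Kustomize file in the Wild Cloud home
# directory contains the resolved value (illustrative):
env:
  - name: GHOST_HOST
    value: "ghost.cloud.example.com"
```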
#### External DNS Configuration
Wild Cloud apps use external-dns annotations in their ingress resources to automatically manage DNS records (combined in the sketch after this list):
- `external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}` - Creates a CNAME record pointing the app subdomain to the main cluster domain (e.g., `ghost.cloud.payne.io` → `cloud.payne.io`)
- `external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"` - Disables Cloudflare proxy for direct DNS resolution
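A minimal ingress sketch using both annotations together (`myapp` is a placeholder; real apps template the host from their config, as the discourse and ghost ingresses later in this changeset do):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # CNAME the app's subdomain to the main cluster domain
    external-dns.alpha.kubernetes.io/target: "{{ .cloud.domain }}"
    # Resolve directly to the cluster rather than through the Cloudflare proxy
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  rules:
    - host: "{{ .apps.myapp.domain }}"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  name: http
  tls:
    - hosts:
        - "{{ .apps.myapp.domain }}"
      secretName: "{{ .apps.myapp.tlsSecretName }}"
```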
#### Database Initialization Jobs
Apps that rely on PostgreSQL or MySQL databases typically need a database initialization job to create the required database and user before the main application starts. These jobs:
- Run as Kubernetes Jobs that execute once and complete
- Create the application database if it doesn't exist
- Create the application user with appropriate permissions
- Should be included in the app's `kustomization.yaml` resources list
- Use the same database connection settings as the main application
Examples of apps with db-init jobs: `gitea`, `codimd`, `immich`, `openproject`. A minimal sketch of the pattern follows.
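This PostgreSQL-flavored sketch shows the shape of such a job (`myapp` is a placeholder; the discourse, keila, and listmonk jobs in this changeset are complete versions that also create the app user and grant privileges):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-db-init
spec:
  template:
    metadata:
      labels:
        component: db-init
    spec:
      restartPolicy: OnFailure        # the Job retries until it completes once
      # securityContext omitted for brevity; see Security Context Requirements below
      containers:
        - name: db-init
          image: postgres:16-alpine
          env:
            - name: PGHOST            # same connection settings as the app itself
              value: "{{ .apps.myapp.dbHostname }}"
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: apps.postgres.password
            - name: APP_DB
              value: "{{ .apps.myapp.dbName }}"
          command: ["/bin/sh", "-c"]
          args:
            - |
              # create the application database only if it does not already exist
              psql -lqt | cut -d \| -f 1 | grep -qw "$APP_DB" || createdb "$APP_DB"
```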
##### Database URL Configuration
**Important:** When apps require database URLs with embedded credentials, always use a separate `dbUrl` secret instead of trying to construct the URL with environment variable substitution in Kustomize templates.
**Wrong** (Kustomize cannot process runtime env var substitution):
```yaml
- name: DB_URL
value: "postgresql://user:$(DB_PASSWORD)@host/db"
```
**Correct** (Use a dedicated secret):
```yaml
- name: DB_URL
valueFrom:
secretKeyRef:
name: app-secrets
key: apps.appname.dbUrl
```
Add `apps.appname.dbUrl` to the manifest's `requiredSecrets` and the `wild-app-add` script will generate the complete URL with embedded credentials.
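For illustration, the manifest entry and the kind of value stored for it could look like the following (the exact URL format produced by `wild-app-add` is an assumption here; all names and values are placeholders):
```yaml
# manifest.yaml (excerpt)
requiredSecrets:
  - apps.myapp.dbPassword
  - apps.myapp.dbUrl

# secrets.yaml (excerpt) -- placeholder values
apps:
  myapp:
    dbPassword: "s3cr3tpassword"
    dbUrl: "postgresql://myapp:s3cr3tpassword@postgres.postgres.svc.cluster.local:5432/myapp"
```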
##### Security Context Requirements
Pods must comply with Pod Security Standards. All pods should include proper security contexts to avoid deployment warnings:
```yaml
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 999 # Use appropriate non-root user ID
runAsGroup: 999 # Use appropriate group ID
seccompProfile:
type: RuntimeDefault
containers:
- name: container-name
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false # Set to true when possible
```
For PostgreSQL init jobs, use `runAsUser: 999` (postgres user). For other database types, use the appropriate non-root user ID for that database container.
#### Secrets
Secrets are managed in the `secrets.yaml` file in the Wild Cloud home directory. The app's `manifest.yaml` should list any required secrets under `requiredSecrets`. When the app is added, default secret values will be generated and stored in the `secrets.yaml` file. Secrets are always stored and referenced in the `apps.<app-name>.<secret-name>` yaml path. When `wild-app-deploy` is run, a Secret resource will be created in the Kubernetes cluster with the name `<app-name>-secrets`, containing all secrets defined in the manifest's `requiredSecrets` key. These secrets can then be referenced in the app's Kustomize files using a `secretKeyRef`. For example, to mount a secret in an environment variable, you would use:
Secrets are managed in the `secrets.yaml` file in the Wild Cloud home directory. The app's `manifest.yaml` should list any required secrets under `requiredSecrets`. When the app is added, default secret values will be generated and stored in the `secrets.yaml` file. Secrets are always stored and referenced in the `apps.<app-name>.<secret-name>` yaml path. When `wild-app-deploy` is run, a Secret resource will be created in the Kubernetes cluster with the name `<app-name>-secrets`, containing all secrets defined in the manifest's `requiredSecrets` key. These secrets can then be referenced in the app's Kustomize files using a `secretKeyRef`.
**Important:** Always use the full dotted path from the manifest as the secret key, not just the last segment. For example, to mount a secret in an environment variable, you would use:
```yaml
env:
@@ -140,9 +208,11 @@ env:
valueFrom:
secretKeyRef:
name: immich-secrets
key: dbPassword
key: apps.immich.dbPassword # Use full dotted path, not just "dbPassword"
```
This approach prevents naming conflicts between apps and makes secret keys more descriptive and consistent with the `secrets.yaml` structure.
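As a sketch of what `wild-app-deploy` produces for the immich example above (the namespace and values are illustrative, not taken from this changeset):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: immich-secrets
  namespace: immich
type: Opaque
stringData:
  # keys are the full dotted paths listed under requiredSecrets
  apps.immich.dbPassword: "<generated value>"
  apps.postgres.password: "<generated value>"
  apps.redis.password: "<generated value>"
```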
`secrets.yaml` files should not be checked into a git repository and are ignored by default in Wild Cloud home directories. Checked-in Kustomize files should only reference secrets, never contain compiled secret values.
## App Lifecycle
@@ -162,7 +232,11 @@ If you would like to contribute an app to the Wild Cloud, issue a pull request w
### Converting from Helm Charts
Wild Cloud apps use Kustomize as kustomize files are simpler, more transparent, and easier to manage in a Git repository than Helm charts. If you have a Helm chart that you want to convert to a Wild Cloud app, the following example steps can simplify the process for you:
Wild Cloud apps use Kustomize because Kustomize files are simpler, more transparent, and easier to manage in a Git repository than Helm charts.
IMPORTANT! If an official Helm chart is available for an app, it is recommended to convert that chart to a Wild Cloud app rather than creating a new app from scratch.
If you have a Helm chart that you want to convert to a Wild Cloud app, the following example steps can simplify the process for you:
```bash
helm fetch --untar --untardir charts nginx-stable/nginx-ingress
@@ -187,3 +261,15 @@ After running these commands against your own Helm chart, you will have a Kustom
- Use `component: web`, `component: worker`, etc. in selectors and pod template labels
- Let Kustomize handle the common labels (`app`, `managedBy`, `partOf`) automatically
- Remove any Helm-specific labels from the Kustomize files, as Wild Cloud apps do not use Helm labels (a sketch of the resulting setup follows)
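A sketch of the resulting label setup (`myapp` is a placeholder; this mirrors the kustomization files and `component:` selectors used throughout this changeset):
```yaml
# kustomization.yaml -- Kustomize adds the common labels to resources and selectors
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp
labels:
  - includeSelectors: true
    pairs:
      app: myapp
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - deployment.yaml
---
# deployment.yaml (excerpt) -- only the component label is written by hand
spec:
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
```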
## Notice: Third-Party Software
The Kubernetes manifests and Kustomize files in this directory are designed to deploy **third-party software**.
Unless otherwise stated, the software deployed by these manifests **is not authored or maintained** by this project. All copyrights, licenses, and responsibilities for that software remain with the respective upstream authors.
These files are provided solely for convenience and automation. Users are responsible for reviewing and complying with the licenses of the software they deploy.
This project is licensed under the GNU AGPLv3 or later, but this license does **not apply** to the third-party software being deployed.
See individual deployment directories for upstream project links and container sources.


@@ -0,0 +1,31 @@
---
# Source: discourse/templates/configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: discourse
namespace: discourse
data:
DISCOURSE_HOSTNAME: "{{ .apps.discourse.domain }}"
DISCOURSE_SKIP_INSTALL: "no"
DISCOURSE_SITE_NAME: "{{ .apps.discourse.siteName }}"
DISCOURSE_USERNAME: "{{ .apps.discourse.adminUsername }}"
DISCOURSE_EMAIL: "{{ .apps.discourse.adminEmail }}"
DISCOURSE_REDIS_HOST: "{{ .apps.discourse.redisHostname }}"
DISCOURSE_REDIS_PORT_NUMBER: "6379"
DISCOURSE_DATABASE_HOST: "{{ .apps.discourse.dbHostname }}"
DISCOURSE_DATABASE_PORT_NUMBER: "5432"
DISCOURSE_DATABASE_NAME: "{{ .apps.discourse.dbName }}"
DISCOURSE_DATABASE_USER: "{{ .apps.discourse.dbUsername }}"
DISCOURSE_SMTP_HOST: "{{ .apps.discourse.smtp.host }}"
DISCOURSE_SMTP_PORT: "{{ .apps.discourse.smtp.port }}"
DISCOURSE_SMTP_USER: "{{ .apps.discourse.smtp.user }}"
DISCOURSE_SMTP_PROTOCOL: "tls"
DISCOURSE_SMTP_AUTH: "login"
# DISCOURSE_PRECOMPILE_ASSETS: "false"
# DISCOURSE_SKIP_INSTALL: "no"
# DISCOURSE_SKIP_BOOTSTRAP: "yes"


@@ -0,0 +1,77 @@
apiVersion: batch/v1
kind: Job
metadata:
name: discourse-db-init
namespace: discourse
spec:
template:
metadata:
labels:
component: db-init
spec:
securityContext:
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
seccompProfile:
type: RuntimeDefault
restartPolicy: OnFailure
containers:
- name: db-init
image: postgres:16-alpine
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
env:
- name: PGHOST
value: "{{ .apps.discourse.dbHostname }}"
- name: PGPORT
value: "5432"
- name: PGUSER
value: postgres
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.postgres.password
- name: DISCOURSE_DB_USER
value: "{{ .apps.discourse.dbUsername }}"
- name: DISCOURSE_DB_NAME
value: "{{ .apps.discourse.dbName }}"
- name: DISCOURSE_DB_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.dbPassword
command:
- /bin/sh
- -c
- |
echo "Initializing Discourse database..."
# Create database if it doesn't exist
if ! psql -lqt | cut -d \| -f 1 | grep -qw "$DISCOURSE_DB_NAME"; then
echo "Creating database $DISCOURSE_DB_NAME..."
createdb "$DISCOURSE_DB_NAME"
else
echo "Database $DISCOURSE_DB_NAME already exists."
fi
# Create user if it doesn't exist and grant permissions
psql -d "$DISCOURSE_DB_NAME" -c "
DO \$\$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$DISCOURSE_DB_USER') THEN
CREATE USER $DISCOURSE_DB_USER WITH PASSWORD '$DISCOURSE_DB_PASSWORD';
END IF;
END
\$\$;
GRANT ALL PRIVILEGES ON DATABASE $DISCOURSE_DB_NAME TO $DISCOURSE_DB_USER;
GRANT ALL ON SCHEMA public TO $DISCOURSE_DB_USER;
GRANT USAGE ON SCHEMA public TO $DISCOURSE_DB_USER;
"
echo "Database initialization completed."


@@ -0,0 +1,236 @@
---
# Source: discourse/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: discourse
namespace: discourse
spec:
replicas: 1
selector:
matchLabels:
component: web
strategy:
type: Recreate
template:
metadata:
labels:
component: web
spec:
automountServiceAccountToken: false
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
component: web
topologyKey: kubernetes.io/hostname
weight: 1
serviceAccountName: discourse
securityContext:
fsGroup: 0
fsGroupChangePolicy: Always
supplementalGroups: []
sysctls: []
initContainers:
containers:
- name: discourse
image: docker.io/bitnami/discourse:3.4.7-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "false"
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.adminPassword
- name: DISCOURSE_PORT_NUMBER
value: "8080"
- name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
value: "80"
- name: DISCOURSE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.dbPassword
- name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.dbPassword
- name: DISCOURSE_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.redis.password
- name: DISCOURSE_SECRET_KEY_BASE
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.secretKeyBase
- name: DISCOURSE_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.smtpPassword
- name: SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.smtpPassword
envFrom:
- configMapRef:
name: discourse
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /srv/status
port: http
initialDelaySeconds: 180
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 1
ephemeral-storage: 2Gi
memory: 8Gi # for precompiling assets!
requests:
cpu: 750m
ephemeral-storage: 50Mi
memory: 1Gi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
- name: sidekiq
image: docker.io/bitnami/discourse:3.4.7-debian-12-r0
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- CHOWN
- SYS_CHROOT
- FOWNER
- SETGID
- SETUID
- DAC_OVERRIDE
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
command:
- /opt/bitnami/scripts/discourse/entrypoint.sh
args:
- /opt/bitnami/scripts/discourse-sidekiq/run.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: DISCOURSE_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.adminPassword
- name: DISCOURSE_POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.dbPassword
- name: DISCOURSE_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.redis.password
- name: DISCOURSE_SECRET_KEY_BASE
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.secretKeyBase
- name: DISCOURSE_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.smtpPassword
- name: SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: discourse-secrets
key: apps.discourse.smtpPassword
envFrom:
- configMapRef:
name: discourse
livenessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 500
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
resources:
limits:
cpu: 500m
ephemeral-storage: 2Gi
memory: 768Mi
requests:
cpu: 375m
ephemeral-storage: 50Mi
memory: 512Mi
volumeMounts:
- name: discourse-data
mountPath: /bitnami/discourse
subPath: discourse
volumes:
- name: discourse-data
persistentVolumeClaim:
claimName: discourse


@@ -0,0 +1,26 @@
---
# Source: discourse/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: discourse
namespace: "discourse"
annotations:
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
external-dns.alpha.kubernetes.io/target: "{{ .cloud.domain }}"
spec:
rules:
- host: "{{ .apps.discourse.domain }}"
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: discourse
port:
name: http
tls:
- hosts:
- "{{ .apps.discourse.domain }}"
secretName: wildcard-external-wild-cloud-tls


@@ -0,0 +1,18 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: discourse
labels:
- includeSelectors: true
pairs:
app: discourse
managedBy: kustomize
partOf: wild-cloud
resources:
- namespace.yaml
- serviceaccount.yaml
- configmap.yaml
- pvc.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
- db-init-job.yaml


@@ -0,0 +1,36 @@
name: discourse
description: Discourse is a modern, open-source discussion platform designed for online communities and forums.
version: 3.4.7
icon: https://www.discourse.org/img/icon.png
requires:
- name: postgres
- name: redis
defaultConfig:
timezone: UTC
port: 8080
storage: 10Gi
adminEmail: admin@{{ .cloud.domain }}
adminUsername: admin
siteName: "Community"
domain: discourse.{{ .cloud.domain }}
dbHostname: postgres.postgres.svc.cluster.local
dbUsername: discourse
dbName: discourse
redisHostname: redis.redis.svc.cluster.local
tlsSecretName: wildcard-wild-cloud-tls
smtp:
enabled: false
host: "{{ .cloud.smtp.host }}"
port: "{{ .cloud.smtp.port }}"
user: "{{ .cloud.smtp.user }}"
from: "{{ .cloud.smtp.from }}"
tls: {{ .cloud.smtp.tls }}
startTls: {{ .cloud.smtp.startTls }}
requiredSecrets:
- apps.discourse.adminPassword
- apps.discourse.dbPassword
- apps.discourse.dbUrl
- apps.redis.password
- apps.discourse.secretKeyBase
- apps.discourse.smtpPassword
- apps.postgres.password


@@ -1,5 +1,4 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: jellyfin
name: discourse

apps/discourse/pvc.yaml

@@ -0,0 +1,13 @@
---
# Source: discourse/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: discourse
namespace: discourse
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "{{ .apps.discourse.storage }}"


@@ -0,0 +1,18 @@
---
# Source: discourse/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: discourse
namespace: discourse
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
nodePort: null
selector:
component: web


@@ -0,0 +1,8 @@
---
# Source: discourse/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: discourse
namespace: discourse
automountServiceAccountToken: false


@@ -2,3 +2,5 @@ name: example-admin
install: true
description: An example application that is deployed with internal-only access.
version: 1.0.0
defaultConfig:
tlsSecretName: wildcard-internal-wild-cloud-tls


@@ -2,3 +2,5 @@ name: example-app
install: true
description: An example application that is deployed with public access.
version: 1.0.0
defaultConfig:
tlsSecretName: wildcard-wild-cloud-tls


@@ -1,36 +0,0 @@
# Future Apps to be added to Wild Cloud
## Productivity
- [Affine](https://docs.affine.pro/self-host-affine): A collaborative document editor with a focus on real-time collaboration and rich media support.
- [Vaultwarden](https://github.com/dani-garcia/vaultwarden): A lightweight, self-hosted password manager that is compatible with Bitwarden clients.
## Automation
- [Home Assistant](https://www.home-assistant.io/installation/linux): A powerful home automation platform that focuses on privacy and local control.
## Social
- [Mastodon](https://docs.joinmastodon.org/admin/install/): A decentralized social network server that allows users to create their own instances.
## Development
- [Gitea](https://docs.gitea.io/en-us/install-from-binary/): A self-hosted Git service that is lightweight and easy to set up.
## Media
- [Jellyfin](https://jellyfin.org/downloads/server): A free software media system that allows you to organize, manage, and share your media files.
- [Glance](https://github.com/glanceapp/glance): RSS aggregator.
## Collaboration
- [Discourse](https://github.com/discourse/discourse): A modern forum software that is designed for community engagement and discussion.
- [Mattermost](https://docs.mattermost.com/guides/install.html): An open-source messaging platform that provides team collaboration features similar to Slack.
- [Outline](https://docs.getoutline.com/install): A collaborative knowledge base and wiki platform that allows teams to create and share documentation.
- [Rocket.Chat](https://rocket.chat/docs/installation/manual-installation/): An open-source team communication platform that provides real-time messaging, video conferencing, and file sharing.
## Infrastructure
- [Umami](https://umami.is/docs/installation): A self-hosted web analytics solution that provides insights into website traffic and user behavior.
- [Authelia](https://authelia.com/docs/): A self-hosted authentication and authorization server that provides two-factor authentication and single sign-on capabilities.


@@ -1,13 +0,0 @@
GHOST_NAMESPACE=ghost
GHOST_HOST=blog.${DOMAIN}
GHOST_TITLE="My Blog"
GHOST_EMAIL=
GHOST_STORAGE_SIZE=10Gi
GHOST_MARIADB_STORAGE_SIZE=8Gi
GHOST_DATABASE_HOST=mariadb.mariadb.svc.cluster.local
GHOST_DATABASE_USER=ghost
GHOST_DATABASE_NAME=ghost
# Secrets
GHOST_PASSWORD=
GHOST_DATABASE_PASSWORD=


@@ -0,0 +1,44 @@
apiVersion: batch/v1
kind: Job
metadata:
name: ghost-db-init
labels:
component: db-init
spec:
template:
metadata:
labels:
component: db-init
spec:
containers:
- name: db-init
image: {{ .apps.mysql.image }}
command: ["/bin/bash", "-c"]
args:
- |
mysql -h ${DB_HOSTNAME} -P ${DB_PORT} -u root -p${MYSQL_ROOT_PASSWORD} <<EOF
CREATE DATABASE IF NOT EXISTS ${DB_DATABASE_NAME};
CREATE USER IF NOT EXISTS '${DB_USERNAME}'@'%' IDENTIFIED BY '${DB_PASSWORD}';
GRANT ALL PRIVILEGES ON ${DB_DATABASE_NAME}.* TO '${DB_USERNAME}'@'%';
FLUSH PRIVILEGES;
EOF
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: apps.mysql.rootPassword
- name: DB_HOSTNAME
value: "{{ .apps.ghost.dbHost }}"
- name: DB_PORT
value: "{{ .apps.ghost.dbPort }}"
- name: DB_DATABASE_NAME
value: "{{ .apps.ghost.dbName }}"
- name: DB_USERNAME
value: "{{ .apps.ghost.dbUser }}"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: ghost-secrets
key: apps.ghost.dbPassword
restartPolicy: OnFailure


@@ -1,111 +1,26 @@
kind: Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: ghost
namespace: ghost
uid: d01c62ff-68a6-456a-a630-77c730bffc9b
resourceVersion: "2772014"
generation: 1
creationTimestamp: "2025-04-27T02:01:30Z"
labels:
app.kubernetes.io/component: ghost
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
annotations:
deployment.kubernetes.io/revision: "1"
meta.helm.sh/release-name: ghost
meta.helm.sh/release-namespace: ghost
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: ghost
component: web
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: ghost
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
annotations:
checksum/secrets: b1cef92e7f73650dddfb455a7519d7b2bcf051c9cb9136b34f504ee120c63ae6
component: web
spec:
volumes:
- name: empty-dir
emptyDir: {}
- name: ghost-secrets
projected:
sources:
- secret:
name: ghost-mysql
- secret:
name: ghost
defaultMode: 420
- name: ghost-data
persistentVolumeClaim:
claimName: ghost
initContainers:
- name: prepare-base-dir
image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
command:
- /bin/bash
args:
- "-ec"
- >
#!/bin/bash
. /opt/bitnami/scripts/liblog.sh
info "Copying base dir to empty dir"
# In order to not break the application functionality (such as
upgrades or plugins) we need
# to make the base directory writable, so we need to copy it to an
empty dir volume
cp -r --preserve=mode /opt/bitnami/ghost /emptydir/app-base-dir
resources:
limits:
cpu: 375m
ephemeral-storage: 2Gi
memory: 384Mi
requests:
cpu: 250m
ephemeral-storage: 50Mi
memory: 256Mi
volumeMounts:
- name: empty-dir
mountPath: /emptydir
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
drop:
- ALL
privileged: false
seLinuxOptions: {}
runAsUser: 1001
runAsGroup: 1001
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
containers:
- name: ghost
image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
image: {{ .apps.ghost.image }}
ports:
- name: https
containerPort: 2368
- name: http
containerPort: {{ .apps.ghost.port }}
protocol: TCP
env:
- name: BITNAMI_DEBUG
@@ -113,27 +28,33 @@ spec:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: GHOST_DATABASE_HOST
value: ghost-mysql
value: {{ .apps.ghost.dbHost }}
- name: GHOST_DATABASE_PORT_NUMBER
value: "3306"
value: "{{ .apps.ghost.dbPort }}"
- name: GHOST_DATABASE_NAME
value: ghost
value: {{ .apps.ghost.dbName }}
- name: GHOST_DATABASE_USER
value: ghost
- name: GHOST_DATABASE_PASSWORD_FILE
value: /opt/bitnami/ghost/secrets/mysql-password
value: {{ .apps.ghost.dbUser }}
- name: GHOST_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: ghost-secrets
key: apps.ghost.dbPassword
- name: GHOST_HOST
value: blog.cloud.payne.io/
value: {{ .apps.ghost.domain }}
- name: GHOST_PORT_NUMBER
value: "2368"
value: "{{ .apps.ghost.port }}"
- name: GHOST_USERNAME
value: admin
- name: GHOST_PASSWORD_FILE
value: /opt/bitnami/ghost/secrets/ghost-password
value: {{ .apps.ghost.adminUser }}
- name: GHOST_PASSWORD
valueFrom:
secretKeyRef:
name: ghost-secrets
key: apps.ghost.adminPassword
- name: GHOST_EMAIL
value: paul@payne.io
value: {{ .apps.ghost.adminEmail }}
- name: GHOST_BLOG_TITLE
value: User's Blog
value: {{ .apps.ghost.blogTitle }}
- name: GHOST_ENABLE_HTTPS
value: "yes"
- name: GHOST_EXTERNAL_HTTP_PORT_NUMBER
@@ -142,6 +63,21 @@ spec:
value: "443"
- name: GHOST_SKIP_BOOTSTRAP
value: "no"
- name: GHOST_SMTP_SERVICE
value: SMTP
- name: GHOST_SMTP_HOST
value: {{ .apps.ghost.smtp.host }}
- name: GHOST_SMTP_PORT
value: "{{ .apps.ghost.smtp.port }}"
- name: GHOST_SMTP_USER
value: {{ .apps.ghost.smtp.user }}
- name: GHOST_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: ghost-secrets
key: apps.ghost.smtpPassword
- name: GHOST_SMTP_FROM_ADDRESS
value: {{ .apps.ghost.smtp.from }}
resources:
limits:
cpu: 375m
@@ -152,22 +88,11 @@ spec:
ephemeral-storage: 50Mi
memory: 256Mi
volumeMounts:
- name: empty-dir
mountPath: /opt/bitnami/ghost
subPath: app-base-dir
- name: empty-dir
mountPath: /.ghost
subPath: app-tmp-dir
- name: empty-dir
mountPath: /tmp
subPath: tmp-dir
- name: ghost-data
mountPath: /bitnami/ghost
- name: ghost-secrets
mountPath: /opt/bitnami/ghost/secrets
livenessProbe:
tcpSocket:
port: 2368
port: {{ .apps.ghost.port }}
initialDelaySeconds: 120
timeoutSeconds: 5
periodSeconds: 10
@@ -176,7 +101,7 @@ spec:
readinessProbe:
httpGet:
path: /
port: https
port: http
scheme: HTTP
httpHeaders:
- name: x-forwarded-proto
@@ -186,64 +111,22 @@ spec:
periodSeconds: 5
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
drop:
- ALL
privileged: false
seLinuxOptions: {}
runAsUser: 1001
runAsGroup: 1001
runAsNonRoot: true
readOnlyRootFilesystem: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
volumes:
- name: ghost-data
persistentVolumeClaim:
claimName: ghost-data
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: ghost
serviceAccount: ghost
automountServiceAccountToken: false
securityContext:
fsGroup: 1001
fsGroupChangePolicy: Always
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: ghost
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 1
replicas: 1
updatedReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: "False"
lastUpdateTime: "2025-04-27T02:01:30Z"
lastTransitionTime: "2025-04-27T02:01:30Z"
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: "False"
lastUpdateTime: "2025-04-27T02:11:32Z"
lastTransitionTime: "2025-04-27T02:11:32Z"
reason: ProgressDeadlineExceeded
message: ReplicaSet "ghost-586bbc6ddd" has timed out progressing.


@@ -1,19 +1,18 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ghost
namespace: {{ .Values.namespace }}
namespace: ghost
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
external-dns.alpha.kubernetes.io/target: "cloud.payne.io"
external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
external-dns.alpha.kubernetes.io/ttl: "60"
traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
rules:
- host: {{ .Values.ghost.host }}
- host: {{ .apps.ghost.domain }}
http:
paths:
- path: /
@@ -25,5 +24,5 @@ spec:
number: 80
tls:
- hosts:
- {{ .Values.ghost.host }}
secretName: ghost-tls
- {{ .apps.ghost.domain }}
secretName: {{ .apps.ghost.tlsSecretName }}


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ghost
labels:
- includeSelectors: true
pairs:
app: ghost
managedBy: kustomize
partOf: wild-cloud
resources:
- namespace.yaml
- db-init-job.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
- pvc.yaml

apps/ghost/manifest.yaml

@@ -0,0 +1,30 @@
name: ghost
description: Ghost is a powerful app for new-media creators to publish, share, and grow a business around their content.
version: 5.118.1
icon: https://ghost.org/images/logos/ghost-logo-orb.png
requires:
- name: mysql
defaultConfig:
image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
domain: ghost.{{ .cloud.domain }}
tlsSecretName: wildcard-wild-cloud-tls
port: 2368
storage: 10Gi
dbHost: mysql.mysql.svc.cluster.local
dbPort: 3306
dbName: ghost
dbUser: ghost
adminUser: admin
adminEmail: "admin@{{ .cloud.domain }}"
blogTitle: "My Blog"
timezone: UTC
smtp:
host: "{{ .cloud.smtp.host }}"
port: "{{ .cloud.smtp.port }}"
from: "{{ .cloud.smtp.from }}"
user: "{{ .cloud.smtp.user }}"
requiredSecrets:
- apps.ghost.adminPassword
- apps.ghost.dbPassword
- apps.ghost.smtpPassword


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: ghost


@@ -1,26 +0,0 @@
---
# Source: ghost/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: ghost
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
spec:
podSelector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: ghost
policyTypes:
- Ingress
- Egress
egress:
- {}
ingress:
- ports:
- port: 2368
- port: 2368


@@ -1,20 +0,0 @@
---
# Source: ghost/templates/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: ghost
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
app.kubernetes.io/component: ghost
spec:
maxUnavailable: 1
selector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: ghost
app.kubernetes.io/component: ghost

apps/ghost/pvc.yaml

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ghost-data
namespace: ghost
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .apps.ghost.storage }}


@@ -1,14 +0,0 @@
---
# Source: ghost/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ghost
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
app.kubernetes.io/component: ghost
automountServiceAccountToken: false

apps/ghost/service.yaml

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: ghost
namespace: ghost
spec:
type: ClusterIP
ports:
- name: http
port: 80
protocol: TCP
targetPort: {{ .apps.ghost.port }}
selector:
component: web


@@ -1,26 +0,0 @@
---
# Source: ghost/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: ghost
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Wild
app.kubernetes.io/name: ghost
app.kubernetes.io/version: 5.118.1
app.kubernetes.io/component: ghost
spec:
type: LoadBalancer
externalTrafficPolicy: "Cluster"
sessionAffinity: None
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: ghost
app.kubernetes.io/component: ghost


@@ -36,7 +36,7 @@ spec:
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
key: apps.postgres.password
- name: DB_HOSTNAME
value: "{{ .apps.gitea.dbHost }}"
- name: DB_DATABASE_NAME
@@ -47,5 +47,5 @@ spec:
valueFrom:
secretKeyRef:
name: gitea-secrets
key: dbPassword
key: apps.gitea.dbPassword
restartPolicy: OnFailure


@@ -33,27 +33,27 @@ spec:
valueFrom:
secretKeyRef:
name: gitea-secrets
key: adminPassword
key: apps.gitea.adminPassword
- name: GITEA__security__SECRET_KEY
valueFrom:
secretKeyRef:
name: gitea-secrets
key: secretKey
key: apps.gitea.secretKey
- name: GITEA__security__INTERNAL_TOKEN
valueFrom:
secretKeyRef:
name: gitea-secrets
key: jwtSecret
key: apps.gitea.jwtSecret
- name: GITEA__database__PASSWD
valueFrom:
secretKeyRef:
name: gitea-secrets
key: dbPassword
key: apps.gitea.dbPassword
- name: GITEA__mailer__PASSWD
valueFrom:
secretKeyRef:
name: gitea-secrets
key: smtpPassword
key: apps.gitea.smtpPassword
ports:
- name: ssh
containerPort: 2222


@@ -1,19 +1,15 @@
SSH_LISTEN_PORT=2222
SSH_PORT=22
GITEA_APP_INI=/data/gitea/conf/app.ini
GITEA_CUSTOM=/data/gitea
GITEA_WORK_DIR=/data
GITEA_TEMP=/tmp/gitea
TMPDIR=/tmp/gitea
HOME=/data/gitea/git
GITEA_ADMIN_USERNAME={{ .apps.gitea.adminUser }}
GITEA_ADMIN_PASSWORD_MODE=keepUpdated
# Core app settings
APP_NAME={{ .apps.gitea.appName }}
RUN_MODE={{ .apps.gitea.runMode }}
RUN_USER=git
WORK_PATH=/data
GITEA____APP_NAME={{ .apps.gitea.appName }}
GITEA____RUN_MODE={{ .apps.gitea.runMode }}
GITEA____RUN_USER=git
# Security settings
GITEA__security__INSTALL_LOCK=true


@@ -52,8 +52,8 @@ spec:
- name: POSTGRES_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
name: immich-secrets
key: apps.postgres.password
- name: DB_HOSTNAME
value: "{{ .apps.immich.dbHostname }}"
- name: DB_DATABASE_NAME
@@ -64,5 +64,5 @@ spec:
valueFrom:
secretKeyRef:
name: immich-secrets
key: dbPassword
key: apps.immich.dbPassword
restartPolicy: OnFailure


@@ -25,6 +25,11 @@ spec:
env:
- name: REDIS_HOSTNAME
value: "{{ .apps.immich.redisHostname }}"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: immich-secrets
key: apps.redis.password
- name: DB_HOSTNAME
value: "{{ .apps.immich.dbHostname }}"
- name: DB_USERNAME
@@ -33,7 +38,7 @@ spec:
valueFrom:
secretKeyRef:
name: immich-secrets
key: dbPassword
key: apps.immich.dbPassword
- name: TZ
value: "{{ .apps.immich.timezone }}"
- name: IMMICH_WORKERS_EXCLUDE
@@ -46,3 +51,11 @@ spec:
- name: immich-storage
persistentVolumeClaim:
claimName: immich-pvc
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: immich
component: server
topologyKey: kubernetes.io/hostname


@@ -28,6 +28,11 @@ spec:
env:
- name: REDIS_HOSTNAME
value: "{{ .apps.immich.redisHostname }}"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: immich-secrets
key: apps.redis.password
- name: DB_HOSTNAME
value: "{{ .apps.immich.dbHostname }}"
- name: DB_USERNAME
@@ -36,7 +41,7 @@ spec:
valueFrom:
secretKeyRef:
name: immich-secrets
key: dbPassword
key: apps.immich.dbPassword
- name: TZ
value: "{{ .apps.immich.timezone }}"
- name: IMMICH_WORKERS_EXCLUDE


@@ -18,6 +18,8 @@ defaultConfig:
dbHostname: postgres.postgres.svc.cluster.local
dbUsername: immich
domain: immich.{{ .cloud.domain }}
tlsSecretName: wildcard-wild-cloud-tls
requiredSecrets:
- apps.immich.dbPassword
- apps.postgres.password
- apps.redis.password


@@ -1,12 +0,0 @@
# Config
JELLYFIN_DOMAIN=jellyfin.$DOMAIN
JELLYFIN_CONFIG_STORAGE=1Gi
JELLYFIN_CACHE_STORAGE=10Gi
JELLYFIN_MEDIA_STORAGE=100Gi
TZ=UTC
# Docker Images
JELLYFIN_IMAGE=jellyfin/jellyfin:latest
# Jellyfin Configuration
JELLYFIN_PublishedServerUrl=https://jellyfin.$DOMAIN


@@ -1,49 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jellyfin
spec:
replicas: 1
selector:
matchLabels:
app: jellyfin
strategy:
type: Recreate
template:
metadata:
labels:
app: jellyfin
spec:
containers:
- image: jellyfin/jellyfin:latest
name: jellyfin
ports:
- containerPort: 8096
protocol: TCP
envFrom:
- configMapRef:
name: config
env:
- name: TZ
valueFrom:
configMapKeyRef:
key: TZ
name: config
volumeMounts:
- mountPath: /config
name: jellyfin-config
- mountPath: /cache
name: jellyfin-cache
- mountPath: /media
name: jellyfin-media
volumes:
- name: jellyfin-config
persistentVolumeClaim:
claimName: jellyfin-config-pvc
- name: jellyfin-cache
persistentVolumeClaim:
claimName: jellyfin-cache-pvc
- name: jellyfin-media
persistentVolumeClaim:
claimName: jellyfin-media-pvc


@@ -1,24 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jellyfin-public
annotations:
external-dns.alpha.kubernetes.io/target: your.jellyfin.domain
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
rules:
- host: your.jellyfin.domain
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: jellyfin
port:
number: 8096
tls:
- secretName: wildcard-internal-wild-cloud-tls
hosts:
- your.jellyfin.domain


@@ -1,82 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jellyfin
labels:
- includeSelectors: true
pairs:
app: jellyfin
managedBy: kustomize
partOf: wild-cloud
resources:
- deployment.yaml
- ingress.yaml
- namespace.yaml
- pvc.yaml
- service.yaml
configMapGenerator:
- name: config
envs:
- config/config.env
replacements:
- source:
kind: ConfigMap
name: config
fieldPath: data.DOMAIN
targets:
- select:
kind: Ingress
name: jellyfin-public
fieldPaths:
- metadata.annotations.[external-dns.alpha.kubernetes.io/target]
- source:
kind: ConfigMap
name: config
fieldPath: data.JELLYFIN_DOMAIN
targets:
- select:
kind: Ingress
name: jellyfin-public
fieldPaths:
- spec.rules.0.host
- spec.tls.0.hosts.0
- source:
kind: ConfigMap
name: config
fieldPath: data.JELLYFIN_CONFIG_STORAGE
targets:
- select:
kind: PersistentVolumeClaim
name: jellyfin-config-pvc
fieldPaths:
- spec.resources.requests.storage
- source:
kind: ConfigMap
name: config
fieldPath: data.JELLYFIN_CACHE_STORAGE
targets:
- select:
kind: PersistentVolumeClaim
name: jellyfin-cache-pvc
fieldPaths:
- spec.resources.requests.storage
- source:
kind: ConfigMap
name: config
fieldPath: data.JELLYFIN_MEDIA_STORAGE
targets:
- select:
kind: PersistentVolumeClaim
name: jellyfin-media-pvc
fieldPaths:
- spec.resources.requests.storage
- source:
kind: ConfigMap
name: config
fieldPath: data.JELLYFIN_IMAGE
targets:
- select:
kind: Deployment
name: jellyfin
fieldPaths:
- spec.template.spec.containers.0.image


@@ -1,37 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jellyfin-config-pvc
namespace: jellyfin
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jellyfin-cache-pvc
namespace: jellyfin
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jellyfin-media-pvc
namespace: jellyfin
spec:
accessModes:
- ReadWriteMany
storageClassName: nfs
resources:
requests:
storage: 100Gi


@@ -1,15 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: jellyfin
namespace: jellyfin
labels:
app: jellyfin
spec:
ports:
- port: 8096
targetPort: 8096
protocol: TCP
selector:
app: jellyfin


@@ -0,0 +1,63 @@
apiVersion: batch/v1
kind: Job
metadata:
name: keila-db-init
spec:
template:
metadata:
labels:
component: db-init
spec:
restartPolicy: OnFailure
securityContext:
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
seccompProfile:
type: RuntimeDefault
containers:
- name: postgres-init
image: postgres:15
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
env:
- name: PGHOST
value: {{ .apps.keila.dbHostname }}
- name: PGUSER
value: postgres
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.postgres.password
- name: DB_NAME
value: {{ .apps.keila.dbName }}
- name: DB_USER
value: {{ .apps.keila.dbUsername }}
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.keila.dbPassword
command:
- /bin/bash
- -c
- |
set -e
echo "Waiting for PostgreSQL to be ready..."
until pg_isready; do
echo "PostgreSQL is not ready - sleeping"
sleep 2
done
echo "PostgreSQL is ready"
echo "Creating database and user for Keila..."
psql -c "CREATE DATABASE ${DB_NAME};" || echo "Database ${DB_NAME} already exists"
psql -c "CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASSWORD}';" || echo "User ${DB_USER} already exists"
psql -c "GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME} TO ${DB_USER};"
psql -d ${DB_NAME} -c "GRANT ALL ON SCHEMA public TO ${DB_USER};"
echo "Database initialization complete"


@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: keila
spec:
replicas: 1
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: keila
image: {{ .apps.keila.image }}
ports:
- containerPort: {{ .apps.keila.port }}
env:
- name: DB_URL
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.keila.dbUrl
- name: URL_HOST
value: {{ .apps.keila.domain }}
- name: URL_SCHEMA
value: https
- name: URL_PORT
value: "443"
- name: PORT
value: "{{ .apps.keila.port }}"
- name: SECRET_KEY_BASE
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.keila.secretKeyBase
- name: MAILER_SMTP_HOST
value: {{ .apps.keila.smtp.host }}
- name: MAILER_SMTP_PORT
value: "{{ .apps.keila.smtp.port }}"
- name: MAILER_ENABLE_SSL
value: "{{ .apps.keila.smtp.tls }}"
- name: MAILER_ENABLE_STARTTLS
value: "{{ .apps.keila.smtp.startTls }}"
- name: MAILER_SMTP_USER
value: {{ .apps.keila.smtp.user }}
- name: MAILER_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.keila.smtpPassword
- name: MAILER_SMTP_FROM_EMAIL
value: {{ .apps.keila.smtp.from }}
- name: DISABLE_REGISTRATION
value: "{{ .apps.keila.disableRegistration }}"
- name: KEILA_USER
value: "{{ .apps.keila.adminUser }}"
- name: KEILA_PASSWORD
valueFrom:
secretKeyRef:
name: keila-secrets
key: apps.keila.adminPassword
- name: USER_CONTENT_DIR
value: /var/lib/keila/uploads
volumeMounts:
- name: uploads
mountPath: /var/lib/keila/uploads
livenessProbe:
httpGet:
path: /
port: {{ .apps.keila.port }}
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: {{ .apps.keila.port }}
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: uploads
persistentVolumeClaim:
claimName: keila-uploads

apps/keila/ingress.yaml

@@ -0,0 +1,26 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: keila
annotations:
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
traefik.ingress.kubernetes.io/router.middlewares: keila-cors@kubernetescrd
spec:
rules:
- host: {{ .apps.keila.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: keila
port:
number: 80
tls:
- secretName: "wildcard-wild-cloud-tls"
hosts:
- "{{ .apps.keila.domain }}"


@@ -0,0 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: keila
labels:
- includeSelectors: true
pairs:
app: keila
managedBy: kustomize
partOf: wild-cloud
resources:
- namespace.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
- pvc.yaml
- db-init-job.yaml
- middleware-cors.yaml

apps/keila/manifest.yaml

@@ -0,0 +1,31 @@
name: keila
description: Keila is an open-source email marketing platform that allows you to send newsletters and manage mailing lists with privacy and control.
version: 1.0.0
icon: https://www.keila.io/images/logo.svg
requires:
- name: postgres
defaultConfig:
image: pentacent/keila:latest
port: 4000
storage: 1Gi
domain: keila.{{ .cloud.domain }}
dbHostname: postgres.postgres.svc.cluster.local
dbName: keila
dbUsername: keila
disableRegistration: "true"
adminUser: admin@{{ .cloud.domain }}
tlsSecretName: wildcard-wild-cloud-tls
smtp:
host: "{{ .cloud.smtp.host }}"
port: "{{ .cloud.smtp.port }}"
from: "{{ .cloud.smtp.from }}"
user: "{{ .cloud.smtp.user }}"
tls: {{ .cloud.smtp.tls }}
startTls: {{ .cloud.smtp.startTls }}
requiredSecrets:
- apps.keila.secretKeyBase
- apps.keila.dbPassword
- apps.keila.dbUrl
- apps.keila.adminPassword
- apps.keila.smtpPassword
- apps.postgres.password


@@ -0,0 +1,28 @@
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: cors
spec:
headers:
accessControlAllowCredentials: true
accessControlAllowHeaders:
- "Content-Type"
- "Authorization"
- "X-Requested-With"
- "Accept"
- "Origin"
- "Cache-Control"
- "X-File-Name"
accessControlAllowMethods:
- "GET"
- "POST"
- "PUT"
- "DELETE"
- "OPTIONS"
accessControlAllowOriginList:
- "http://localhost:1313"
- "https://*.{{ .cloud.domain }}"
- "https://{{ .cloud.domain }}"
accessControlExposeHeaders:
- "*"
accessControlMaxAge: 86400


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: keila

apps/keila/pvc.yaml

@@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: keila-uploads
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .apps.keila.storage }}

apps/keila/service.yaml

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
name: keila
spec:
selector:
component: web
ports:
- port: 80
targetPort: {{ .apps.keila.port }}
protocol: TCP


@@ -0,0 +1,65 @@
apiVersion: batch/v1
kind: Job
metadata:
name: listmonk-db-init
labels:
component: db-init
spec:
template:
metadata:
labels:
component: db-init
spec:
restartPolicy: OnFailure
securityContext:
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
seccompProfile:
type: RuntimeDefault
containers:
- name: postgres-init
image: postgres:15
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
env:
- name: PGHOST
value: {{ .apps.listmonk.dbHost }}
- name: PGUSER
value: postgres
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: listmonk-secrets
key: apps.postgres.password
- name: DB_NAME
value: {{ .apps.listmonk.dbName }}
- name: DB_USER
value: {{ .apps.listmonk.dbUser }}
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: listmonk-secrets
key: apps.listmonk.dbPassword
command:
- /bin/bash
- -c
- |
set -e
echo "Waiting for PostgreSQL to be ready..."
until pg_isready; do
echo "PostgreSQL is not ready - sleeping"
sleep 2
done
echo "PostgreSQL is ready"
echo "Creating database and user for Listmonk..."
psql -c "CREATE DATABASE ${DB_NAME};" || echo "Database ${DB_NAME} already exists"
psql -c "CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASSWORD}';" || echo "User ${DB_USER} already exists"
psql -c "GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME} TO ${DB_USER};"
psql -d ${DB_NAME} -c "GRANT ALL ON SCHEMA public TO ${DB_USER};"
echo "Database initialization complete"


@@ -0,0 +1,86 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: listmonk
namespace: listmonk
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: listmonk
image: listmonk/listmonk:v5.0.3
command: ["/bin/sh"]
args: ["-c", "./listmonk --install --idempotent --yes && ./listmonk --upgrade --yes && ./listmonk"]
ports:
- name: http
containerPort: 9000
protocol: TCP
env:
- name: LISTMONK_app__address
value: "0.0.0.0:9000"
- name: LISTMONK_db__host
value: {{ .apps.listmonk.dbHost }}
- name: LISTMONK_db__port
value: "{{ .apps.listmonk.dbPort }}"
- name: LISTMONK_db__user
value: {{ .apps.listmonk.dbUser }}
- name: LISTMONK_db__database
value: {{ .apps.listmonk.dbName }}
- name: LISTMONK_db__ssl_mode
value: {{ .apps.listmonk.dbSSLMode }}
- name: LISTMONK_db__password
valueFrom:
secretKeyRef:
name: listmonk-secrets
key: apps.listmonk.dbPassword
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 512Mi
requests:
cpu: 100m
ephemeral-storage: 50Mi
memory: 128Mi
volumeMounts:
- name: listmonk-data
mountPath: /listmonk/data
livenessProbe:
tcpSocket:
port: 9000
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 9000
initialDelaySeconds: 10
timeoutSeconds: 3
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
volumes:
- name: listmonk-data
persistentVolumeClaim:
claimName: listmonk-data
restartPolicy: Always


@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: listmonk
namespace: listmonk
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
ingressClassName: traefik
tls:
- hosts:
- {{ .apps.listmonk.domain }}
secretName: {{ .apps.listmonk.tlsSecretName }}
rules:
- host: {{ .apps.listmonk.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: listmonk
port:
number: 80


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: listmonk
labels:
- includeSelectors: true
pairs:
app: listmonk
managedBy: kustomize
partOf: wild-cloud
resources:
- namespace.yaml
- db-init-job.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
- pvc.yaml


@@ -0,0 +1,20 @@
name: listmonk
description: Listmonk is a standalone, self-hosted, newsletter and mailing list manager. It is fast, feature-rich, and packed into a single binary.
version: 5.0.3
icon: https://listmonk.app/static/images/logo.svg
requires:
- name: postgres
defaultConfig:
domain: listmonk.{{ .cloud.domain }}
tlsSecretName: wildcard-wild-cloud-tls
storage: 1Gi
dbHost: postgres.postgres.svc.cluster.local
dbPort: 5432
dbName: listmonk
dbUser: listmonk
dbSSLMode: disable
timezone: UTC
requiredSecrets:
- apps.listmonk.dbPassword
- apps.listmonk.dbUrl
- apps.postgres.password


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: listmonk

apps/listmonk/pvc.yaml

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: listmonk-data
namespace: listmonk
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .apps.listmonk.storage }}


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: listmonk
namespace: listmonk
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
component: web


@@ -1,11 +0,0 @@
MARIADB_NAMESPACE=mariadb
MARIADB_RELEASE_NAME=mariadb
MARIADB_USER=app
MARIADB_DATABASE=app_database
MARIADB_STORAGE=8Gi
MARIADB_TAG=11.4.5
MARIADB_PORT=3306
# Secrets
MARIADB_PASSWORD=
MARIADB_ROOT_PASSWORD=


@@ -1,27 +1,17 @@
---
# Source: ghost/charts/mysql/templates/primary/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
name: mysql
namespace: mysql
data:
my.cnf: |-
my.cnf: |
[mysqld]
authentication_policy='* ,,'
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
plugin_dir=/opt/bitnami/mysql/lib/plugin
port=3306
port={{ .apps.mysql.port }}
mysqlx=0
mysqlx_port=33060
socket=/opt/bitnami/mysql/tmp/mysql.sock
@@ -36,12 +26,12 @@ data:
long_query_time=10.0
[client]
port=3306
port={{ .apps.mysql.port }}
socket=/opt/bitnami/mysql/tmp/mysql.sock
default-character-set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin
[manager]
port=3306
port={{ .apps.mysql.port }}
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid


@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mysql
labels:
- includeSelectors: true
pairs:
app: mysql
managedBy: kustomize
partOf: wild-cloud
resources:
- namespace.yaml
- statefulset.yaml
- service.yaml
- service-headless.yaml
- configmap.yaml

apps/mysql/manifest.yaml

@@ -0,0 +1,17 @@
name: mysql
description: MySQL is an open-source relational database management system
version: 8.4.5
icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
requires: []
defaultConfig:
image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
port: 3306
storage: 20Gi
dbName: mysql
rootUser: root
user: mysql
timezone: UTC
enableSSL: false
requiredSecrets:
- apps.mysql.rootPassword
- apps.mysql.password


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: mysql


@@ -1,31 +0,0 @@
---
# Source: ghost/charts/mysql/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
spec:
podSelector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
policyTypes:
- Ingress
- Egress
egress:
- {}
ingress:
# Allow connection from other cluster pods
- ports:
- port: 3306


@@ -1,23 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
spec:
maxUnavailable: 1
selector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary


@@ -1,27 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: ghost-mysql-headless
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
spec:
type: ClusterIP
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: mysql
port: 3306
targetPort: mysql
selector:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
app.kubernetes.io/component: primary


@@ -1,29 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: mysql
port: 3306
protocol: TCP
targetPort: mysql
nodePort: null
selector:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
namespace: mysql
spec:
type: ClusterIP
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: mysql
port: {{ .apps.mysql.port }}
protocol: TCP
targetPort: mysql
selector:
component: primary

apps/mysql/service.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: mysql
spec:
type: ClusterIP
ports:
- name: mysql
port: {{ .apps.mysql.port }}
protocol: TCP
targetPort: mysql
selector:
component: primary


@@ -1,17 +0,0 @@
---
# Source: ghost/charts/mysql/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
automountServiceAccountToken: false
secrets:
- name: ghost-mysql


@@ -1,97 +1,57 @@
---
# Source: ghost/charts/mysql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ghost-mysql
namespace: "default"
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
name: mysql
namespace: mysql
spec:
replicas: 1
podManagementPolicy: ""
selector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
serviceName: ghost-mysql-headless
podManagementPolicy: Parallel
serviceName: mysql-headless
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
component: primary
template:
metadata:
annotations:
checksum/configuration: 959b0f76ba7e6be0aaaabf97932398c31b17bc9f86d3839a26a3bbbc48673cd9
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mysql
app.kubernetes.io/version: 8.4.5
helm.sh/chart: mysql-12.3.4
app.kubernetes.io/part-of: mysql
app.kubernetes.io/component: primary
component: primary
spec:
serviceAccountName: ghost-mysql
serviceAccountName: default
automountServiceAccountToken: false
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
fsGroupChangePolicy: Always
supplementalGroups: []
sysctls: []
initContainers:
- name: preserve-logs-symlinks
image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
imagePullPolicy: "IfNotPresent"
image: {{ .apps.mysql.image }}
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
- ALL
readOnlyRootFilesystem: true
runAsGroup: 1001
runAsNonRoot: true
runAsUser: 1001
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
resources:
limits:
cpu: 750m
ephemeral-storage: 2Gi
memory: 768Mi
cpu: 250m
ephemeral-storage: 1Gi
memory: 256Mi
requests:
cpu: 500m
cpu: 100m
ephemeral-storage: 50Mi
memory: 512Mi
memory: 128Mi
command:
- /bin/bash
args:
- -ec
- |
#!/bin/bash
. /opt/bitnami/scripts/libfs.sh
# We copy the logs folder because it has symlinks to stdout and stderr
if ! is_dir_empty /opt/bitnami/mysql/logs; then
@@ -102,39 +62,41 @@ spec:
mountPath: /emptydir
containers:
- name: mysql
image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
imagePullPolicy: "IfNotPresent"
image: {{ .apps.mysql.image }}
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
- ALL
readOnlyRootFilesystem: true
runAsGroup: 1001
runAsNonRoot: true
runAsUser: 1001
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MYSQL_ROOT_PASSWORD_FILE
value: /opt/bitnami/mysql/secrets/mysql-root-password
- name: MYSQL_ENABLE_SSL
value: "no"
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: apps.mysql.rootPassword
- name: MYSQL_USER
value: "bn_ghost"
- name: MYSQL_PASSWORD_FILE
value: /opt/bitnami/mysql/secrets/mysql-password
- name: MYSQL_PORT
value: "3306"
value: {{ .apps.mysql.user }}
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secrets
key: apps.mysql.password
- name: MYSQL_DATABASE
value: "bitnami_ghost"
envFrom:
value: {{ .apps.mysql.dbName }}
- name: MYSQL_PORT
value: "{{ .apps.mysql.port }}"
ports:
- name: mysql
containerPort: 3306
containerPort: {{ .apps.mysql.port }}
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 5
@@ -147,9 +109,6 @@ spec:
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
readinessProbe:
failureThreshold: 3
@@ -163,9 +122,6 @@ spec:
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
startupProbe:
failureThreshold: 10
@@ -179,9 +135,6 @@ spec:
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
resources:
limits:
@@ -210,32 +163,18 @@ spec:
- name: config
mountPath: /opt/bitnami/mysql/conf/my.cnf
subPath: my.cnf
- name: mysql-credentials
mountPath: /opt/bitnami/mysql/secrets/
volumes:
- name: config
configMap:
name: ghost-mysql
- name: mysql-credentials
secret:
secretName: ghost-mysql
items:
- key: mysql-root-password
path: mysql-root-password
- key: mysql-password
path: mysql-password
name: mysql
- name: empty-dir
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
labels:
app.kubernetes.io/instance: ghost
app.kubernetes.io/name: mysql
app.kubernetes.io/component: primary
spec:
accessModes:
- "ReadWriteOnce"
- ReadWriteOnce
resources:
requests:
storage: "8Gi"
storage: {{ .apps.mysql.storage }}
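One templating detail worth noting in the hunk above: Kubernetes requires `env[].value` to be a string, so the templated port is quoted there, while `containerPort` is an integer field and takes the unquoted template. With port 3306 the rendered output looks like:

- name: MYSQL_PORT
  value: "3306"
ports:
  - name: mysql
    containerPort: 3306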


@@ -36,7 +36,7 @@ spec:
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
key: apps.postgres.password
- name: DB_HOSTNAME
value: "{{ .apps.openproject.dbHostname }}"
- name: DB_DATABASE_NAME
@@ -47,5 +47,5 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
restartPolicy: OnFailure


@@ -20,10 +20,11 @@ defaultConfig:
hsts: true
seedLocale: en
adminUserName: OpenProject Admin
adminUserEmail: '{{ .operator.email }}'
adminUserEmail: "{{ .operator.email }}"
adminPasswordReset: true
postgresStatementTimeout: 120s
tmpVolumesStorage: 2Gi
tlsSecretName: wildcard-wild-cloud-tls
cacheStore: memcache
railsRelativeUrlRoot: ""
requiredSecrets:


@@ -62,12 +62,12 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
valueFrom:
secretKeyRef:
name: openproject-secrets
key: adminPassword
key: apps.openproject.adminPassword
resources:
limits:
memory: 200Mi
@@ -106,12 +106,12 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
valueFrom:
secretKeyRef:
name: openproject-secrets
key: adminPassword
key: apps.openproject.adminPassword
resources:
limits:
memory: 512Mi


@@ -84,12 +84,12 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
valueFrom:
secretKeyRef:
name: openproject-secrets
key: adminPassword
key: apps.openproject.adminPassword
args:
- /app/docker/prod/wait-for-db
resources:
@@ -127,12 +127,12 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
valueFrom:
secretKeyRef:
name: openproject-secrets
key: adminPassword
key: apps.openproject.adminPassword
args:
- /app/docker/prod/web
volumeMounts:


@@ -84,12 +84,12 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
valueFrom:
secretKeyRef:
name: openproject-secrets
key: adminPassword
key: apps.openproject.adminPassword
args:
- bash
- /app/docker/prod/wait-for-db
@@ -132,7 +132,7 @@ spec:
valueFrom:
secretKeyRef:
name: openproject-secrets
key: dbPassword
key: apps.openproject.dbPassword
- name: "OPENPROJECT_GOOD_JOB_QUEUES"
value: ""
volumeMounts:


@@ -44,7 +44,7 @@ spec:
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
key: apps.postgres.password
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data


@@ -73,5 +73,5 @@ spec:
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
key: apps.postgres.password
restartPolicy: Never


@@ -21,4 +21,13 @@ spec:
env:
- name: TZ
value: "{{ .apps.redis.timezone }}"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: apps.redis.password
command:
- redis-server
- --requirepass
- $(REDIS_PASSWORD)
restartPolicy: Always
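The `$(REDIS_PASSWORD)` in the args list is expanded by Kubernetes from the environment variable defined above (dependent environment variable expansion), so the password itself never appears in the rendered manifest. A quick sanity check, assuming the deployment and namespace are both named `redis`:

kubectl exec -n redis deploy/redis -- sh -c 'redis-cli -a "$REDIS_PASSWORD" ping'   # expect: PONG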


@@ -7,3 +7,5 @@ defaultConfig:
image: redis:alpine
timezone: UTC
port: 6379
requiredSecrets:
- apps.redis.password


@@ -1,23 +0,0 @@
#!/bin/bash
# Simple backup script for your personal cloud
# This is a placeholder for future implementation
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"
if [[ -f "../load-env.sh" ]]; then
source ../load-env.sh
fi
BACKUP_DIR="${PROJECT_DIR}/backups/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
# Back up Kubernetes resources
kubectl get all -A -o yaml > "$BACKUP_DIR/all-resources.yaml"
kubectl get secrets -A -o yaml > "$BACKUP_DIR/secrets.yaml"
kubectl get configmaps -A -o yaml > "$BACKUP_DIR/configmaps.yaml"
# Back up persistent volumes
# TODO: Add logic to back up persistent volume data
echo "Backup completed: $BACKUP_DIR"


@@ -1,85 +0,0 @@
#!/usr/bin/env bash
# This script generates config.env and secrets.env files for an app
# by evaluating variables in the app's .env file and splitting them
# into regular config and secret variables based on the "# Secrets" marker
#
# Usage: bin/generate-config [app-name]
set -e
# Source environment variables from load-env.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_DIR="$(dirname "$SCRIPT_DIR")"
if [ -f "$REPO_DIR/load-env.sh" ]; then
source "$REPO_DIR/load-env.sh"
fi
# Function to process a single app
process_app() {
local APP_NAME="$1"
local APP_DIR="$APPS_DIR/$APP_NAME"
local ENV_FILE="$APP_DIR/config/.env"
local CONFIG_FILE="$APP_DIR/config/config.env"
local SECRETS_FILE="$APP_DIR/config/secrets.env"
# Check if the app exists
if [ ! -d "$APP_DIR" ]; then
echo "Error: App '$APP_NAME' not found"
return 1
fi
# Check if the .env file exists
if [ ! -f "$ENV_FILE" ]; then
echo "Warning: Environment file not found: $ENV_FILE"
return 0
fi
# Process the .env file
echo "Generating config files for $APP_NAME..."
# Create temporary files for processed content
local TMP_FILE="$APP_DIR/config/processed.env"
# Process the file with envsubst to expand variables
envsubst < "$ENV_FILE" > $TMP_FILE
# Initialize header for output files
echo "# Generated by \`generate-config\` on $(date)" > "$CONFIG_FILE"
echo "# Generated by \`generate-config\` on $(date)" > "$SECRETS_FILE"
# Find the line number of the "# Secrets" marker
local SECRETS_LINE=$(grep -n "^# Secrets" $TMP_FILE | cut -d':' -f1)
if [ -n "$SECRETS_LINE" ]; then
# Extract non-comment lines with "=" before the "# Secrets" marker
head -n $((SECRETS_LINE - 1)) $TMP_FILE | grep -v "^#" | grep "=" >> "$CONFIG_FILE"
# Extract non-comment lines with "=" after the "# Secrets" marker
tail -n +$((SECRETS_LINE + 1)) $TMP_FILE | grep -v "^#" | grep "=" >> "$SECRETS_FILE"
else
# No secrets marker found, put everything in config
grep -v "^#" $TMP_FILE | grep "=" >> "$CONFIG_FILE"
fi
# Clean up
rm -f "$TMP_FILE"
echo "Generated:"
echo " - $CONFIG_FILE"
echo " - $SECRETS_FILE"
}
# Process all apps or specific app
if [ $# -lt 1 ]; then
# No app name provided - process all apps
for app_dir in "$APPS_DIR"/*; do
if [ -d "$app_dir" ]; then
APP_NAME="$(basename "$app_dir")"
process_app "$APP_NAME"
fi
done
exit 0
fi
APP_NAME="$1"
process_app "$APP_NAME"


@@ -1,67 +0,0 @@
#!/bin/bash
# This script installs the local CA certificate on Ubuntu systems to avoid
# certificate warnings in browsers when accessing internal cloud services.
# Set up error handling
set -e
# Define colors for better readability
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color
CA_DIR="/home/payne/repos/cloud.payne.io-setup/ca"
CA_FILE="$CA_DIR/ca.crt"
TARGET_DIR="/usr/local/share/ca-certificates"
TARGET_FILE="cloud-payne-local-ca.crt"
echo -e "${BLUE}=== Installing Local CA Certificate on Ubuntu ===${NC}"
echo
# Check if CA file exists
if [ ! -f "$CA_FILE" ]; then
echo -e "${RED}CA certificate not found at $CA_FILE${NC}"
echo -e "${YELLOW}Please run the create-local-ca script first:${NC}"
echo -e "${BLUE}./bin/create-local-ca${NC}"
exit 1
fi
# Copy to the system certificate directory
echo -e "${YELLOW}Copying CA certificate to $TARGET_DIR/$TARGET_FILE...${NC}"
sudo cp "$CA_FILE" "$TARGET_DIR/$TARGET_FILE"
# Update the CA certificates
echo -e "${YELLOW}Updating system CA certificates...${NC}"
sudo update-ca-certificates
# Update browsers' CA store (optional, for Firefox)
if [ -d "$HOME/.mozilla" ]; then
echo -e "${YELLOW}You may need to manually import the certificate in Firefox:${NC}"
echo -e "1. Open Firefox"
echo -e "2. Go to Preferences > Privacy & Security > Certificates"
echo -e "3. Click 'View Certificates' > 'Authorities' tab"
echo -e "4. Click 'Import' and select $CA_FILE"
echo -e "5. Check 'Trust this CA to identify websites' and click OK"
fi
# Check popular browsers
if command -v google-chrome &> /dev/null; then
echo -e "${YELLOW}For Chrome, the system-wide certificate should now be recognized${NC}"
echo -e "${YELLOW}You may need to restart the browser${NC}"
fi
echo
echo -e "${GREEN}=== CA Certificate Installation Complete ===${NC}"
echo
echo -e "${YELLOW}System-wide CA certificate has been installed.${NC}"
echo -e "${YELLOW}You should now be able to access the Kubernetes Dashboard without certificate warnings:${NC}"
echo -e "${BLUE}https://kubernetes-dashboard.in.cloud.payne.io${NC}"
echo
echo -e "${YELLOW}If you still see certificate warnings, try:${NC}"
echo "1. Restart your browser"
echo "2. Clear your browser's cache and cookies"
echo "3. If using a non-standard browser, you may need to import the certificate manually"
echo


@@ -56,7 +56,7 @@ fi
CONFIG_FILE="${WC_HOME}/config.yaml"
if [ ! -f "${CONFIG_FILE}" ]; then
echo "Creating config file at ${CONFIG_FILE}"
echo "Creating config file at '${CONFIG_FILE}'."
echo "# Wild Cloud Configuration" > "${CONFIG_FILE}"
echo "# This file contains app configurations and should be committed to git" >> "${CONFIG_FILE}"
echo "" >> "${CONFIG_FILE}"
@@ -64,7 +64,7 @@ fi
SECRETS_FILE="${WC_HOME}/secrets.yaml"
if [ ! -f "${SECRETS_FILE}" ]; then
echo "Creating secrets file at ${SECRETS_FILE}"
echo "Creating secrets file at '${SECRETS_FILE}'."
echo "# Wild Cloud Secrets Configuration" > "${SECRETS_FILE}"
echo "# This file contains sensitive data and should NOT be committed to git" >> "${SECRETS_FILE}"
echo "# Add this file to your .gitignore" >> "${SECRETS_FILE}"
@@ -74,8 +74,8 @@ fi
# Check if app is cached, if not fetch it first
CACHE_APP_DIR="${WC_HOME}/.wildcloud/cache/apps/${APP_NAME}"
if [ ! -d "${CACHE_APP_DIR}" ]; then
echo "Cache directory for app '${APP_NAME}' not found at ${CACHE_APP_DIR}"
echo "Please fetch the app first using 'wild-app-fetch ${APP_NAME}'"
echo "Cache directory for app '${APP_NAME}' not found at '${CACHE_APP_DIR}'."
echo "Please fetch the app first using 'wild-app-fetch ${APP_NAME}'."
exit 1
fi
if [ ! -d "${CACHE_APP_DIR}" ]; then
@@ -89,166 +89,116 @@ fi
APPS_DIR="${WC_HOME}/apps"
if [ ! -d "${APPS_DIR}" ]; then
echo "Creating apps directory at ${APPS_DIR}"
echo "Creating apps directory at '${APPS_DIR}'."
mkdir -p "${APPS_DIR}"
fi
DEST_APP_DIR="${WC_HOME}/apps/${APP_NAME}"
if [ -d "${DEST_APP_DIR}" ]; then
if [ "${UPDATE}" = true ]; then
echo "Updating app '${APP_NAME}'"
echo "Updating app '${APP_NAME}'."
rm -rf "${DEST_APP_DIR}"
else
echo "Warning: Destination directory ${DEST_APP_DIR} already exists"
echo "Warning: Destination directory ${DEST_APP_DIR} already exists."
read -p "Do you want to overwrite it? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Configuration cancelled"
echo "Configuration cancelled."
exit 1
fi
rm -rf "${DEST_APP_DIR}"
fi
else
echo "Adding app '${APP_NAME}' to '${DEST_APP_DIR}'."
fi
mkdir -p "${DEST_APP_DIR}"
echo "Adding app '${APP_NAME}' from cache to ${DEST_APP_DIR}"
# Step 1: Copy only manifest.yaml from cache first
MANIFEST_FILE="${CACHE_APP_DIR}/manifest.yaml"
if [ -f "${MANIFEST_FILE}" ]; then
echo "Copying manifest.yaml from cache"
cp "${MANIFEST_FILE}" "${DEST_APP_DIR}/manifest.yaml"
# manifest.yaml is allowed to have gomplate variables in the defaultConfig and requiredSecrets sections.
# We need to use gomplate to process these variables before using yq.
echo "Copying app manifest from cache."
DEST_MANIFEST="${DEST_APP_DIR}/manifest.yaml"
if [ -f "${SECRETS_FILE}" ]; then
gomplate_cmd="gomplate -c .=${CONFIG_FILE} -c secrets=${SECRETS_FILE} -f ${MANIFEST_FILE} -o ${DEST_MANIFEST}"
else
gomplate_cmd="gomplate -c .=${CONFIG_FILE} -f ${MANIFEST_FILE} -o ${DEST_MANIFEST}"
fi
if ! eval "${gomplate_cmd}"; then
echo "Error processing manifest.yaml with gomplate"
exit 1
fi
else
echo "Warning: manifest.yaml not found in cache for app '${APP_NAME}'"
echo "Warning: App manifest not found in cache."
exit 1
fi
# Step 2: Add missing config and secret values based on manifest
echo "Processing configuration and secrets from manifest.yaml"
# Check if the app section exists in config.yaml, if not create it
if ! yq eval ".apps.${APP_NAME}" "${CONFIG_FILE}" >/dev/null 2>&1; then
yq eval ".apps.${APP_NAME} = {}" -i "${CONFIG_FILE}"
fi
# Extract defaultConfig from manifest.yaml and merge into config.yaml
if yq eval '.defaultConfig' "${DEST_APP_DIR}/manifest.yaml" | grep -q -v '^null$'; then
echo "Merging defaultConfig from manifest.yaml into .wildcloud/config.yaml"
# Check if apps section exists in the secrets.yaml, if not create it
if ! yq eval ".apps" "${SECRETS_FILE}" >/dev/null 2>&1; then
yq eval ".apps = {}" -i "${SECRETS_FILE}"
fi
# Check if the app config already exists
if yq eval ".apps.${APP_NAME}" "${CONFIG_FILE}" | grep -q '^null$'; then
yq eval ".apps.${APP_NAME} = {}" -i "${CONFIG_FILE}"
fi
# Merge processed defaultConfig into the app config, preserving existing values
# This preserves existing configuration values while adding missing defaults
if yq eval '.defaultConfig' "${DEST_MANIFEST}" | grep -q -v '^null$'; then
# Merge defaultConfig into the app config, preserving nested structure
# This preserves the nested structure for objects like resources.requests.memory
temp_manifest=$(mktemp)
yq eval '.defaultConfig' "${DEST_APP_DIR}/manifest.yaml" > "$temp_manifest"
yq eval ".apps.${APP_NAME} = (.apps.${APP_NAME} // {}) * load(\"$temp_manifest\")" -i "${CONFIG_FILE}"
rm "$temp_manifest"
# Extract defaultConfig from the processed manifest and merge with existing app config
# The * operator merges objects, with the right side taking precedence for conflicting keys
# So (.apps.${APP_NAME} // {}) preserves existing values, defaultConfig adds missing ones
temp_default_config=$(mktemp)
yq eval '.defaultConfig' "${DEST_MANIFEST}" > "$temp_default_config"
yq eval ".apps.${APP_NAME} = load(\"$temp_default_config\") * (.apps.${APP_NAME} // {})" -i "${CONFIG_FILE}"
rm "$temp_default_config"
# Process template variables in the merged config
echo "Processing template variables in app config"
temp_config=$(mktemp)
# Build gomplate command with config context
gomplate_cmd="gomplate -c .=${CONFIG_FILE}"
# Add secrets context if secrets.yaml exists
if [ -f "${SECRETS_FILE}" ]; then
gomplate_cmd="${gomplate_cmd} -c secrets=${SECRETS_FILE}"
fi
# Process the entire config file through gomplate to resolve template variables
${gomplate_cmd} -f "${CONFIG_FILE}" > "$temp_config"
mv "$temp_config" "${CONFIG_FILE}"
echo "Merged defaultConfig for app '${APP_NAME}'"
echo "Merged default configuration from app manifest into '${CONFIG_FILE}'."
# Remove defaultConfig from the copied manifest since it's now in config.yaml.
yq eval 'del(.defaultConfig)' -i "${DEST_MANIFEST}"
fi
# Scaffold required secrets into .wildcloud/secrets.yaml if they don't exist
if yq eval '.requiredSecrets' "${DEST_APP_DIR}/manifest.yaml" | grep -q -v '^null$'; then
echo "Scaffolding required secrets for app '${APP_NAME}'"
if yq eval '.requiredSecrets' "${DEST_MANIFEST}" | grep -q -v '^null$'; then
# Ensure .wildcloud/secrets.yaml exists
if [ ! -f "${SECRETS_FILE}" ]; then
echo "Creating secrets file at '${SECRETS_FILE}'"
echo "# Wild Cloud Secrets Configuration" > "${SECRETS_FILE}"
echo "# This file contains sensitive data and should NOT be committed to git" >> "${SECRETS_FILE}"
echo "# Add this file to your .gitignore" >> "${SECRETS_FILE}"
echo "" >> "${SECRETS_FILE}"
fi
# Check if apps section exists, if not create it
if ! yq eval ".apps" "${SECRETS_FILE}" >/dev/null 2>&1; then
yq eval ".apps = {}" -i "${SECRETS_FILE}"
fi
# Check if app section exists, if not create it
if ! yq eval ".apps.${APP_NAME}" "${SECRETS_FILE}" >/dev/null 2>&1; then
yq eval ".apps.${APP_NAME} = {}" -i "${SECRETS_FILE}"
fi
# Add dummy values for each required secret if not already present
yq eval '.requiredSecrets[]' "${DEST_APP_DIR}/manifest.yaml" | while read -r secret_path; do
# Add random values for each required secret if not already present
while read -r secret_path; do
current_value=$(yq eval ".${secret_path} // \"null\"" "${SECRETS_FILE}")
if [ "${current_value}" = "null" ]; then
echo "Adding random secret: ${secret_path}"
random_secret=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 6)
random_secret=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-32)
yq eval ".${secret_path} = \"${random_secret}\"" -i "${SECRETS_FILE}"
fi
done
echo "Required secrets scaffolded for app '${APP_NAME}'"
done < <(yq eval '.requiredSecrets[]' "${DEST_MANIFEST}")
echo "Required secrets declared in app manifest added to '${SECRETS_FILE}'."
fi
# Step 3: Copy and compile all other files from cache to app directory
echo "Copying and compiling remaining files from cache"
echo "Copying and compiling remaining files from cache."
# Function to process a file with gomplate if it's a YAML file
process_file() {
local src_file="$1"
local dest_file="$2"
cp -r "${CACHE_APP_DIR}/." "${DEST_APP_DIR}/"
find "${DEST_APP_DIR}" -type f | while read -r dest_file; do
rel_path="${dest_file#${DEST_APP_DIR}/}"
echo "Processing file: ${dest_file}"
# Build gomplate command with config context (enables .config shorthand)
gomplate_cmd="gomplate -c .=${CONFIG_FILE}"
# Add secrets context if secrets.yaml exists (enables .secrets shorthand)
if [ -f "${SECRETS_FILE}" ]; then
gomplate_cmd="${gomplate_cmd} -c secrets=${SECRETS_FILE}"
fi
# Execute gomplate with the file
${gomplate_cmd} -f "${src_file}" > "${dest_file}"
}
# Copy directory structure and process files (excluding manifest.yaml which was already copied)
find "${CACHE_APP_DIR}" -type d | while read -r src_dir; do
rel_path="${src_dir#${CACHE_APP_DIR}}"
rel_path="${rel_path#/}" # Remove leading slash if present
if [ -n "${rel_path}" ]; then
mkdir -p "${DEST_APP_DIR}/${rel_path}"
fi
done
find "${CACHE_APP_DIR}" -type f | while read -r src_file; do
rel_path="${src_file#${CACHE_APP_DIR}}"
rel_path="${rel_path#/}" # Remove leading slash if present
# Skip manifest.yaml since it was already copied in step 1
if [ "${rel_path}" = "manifest.yaml" ]; then
continue
fi
dest_file="${DEST_APP_DIR}/${rel_path}"
# Ensure destination directory exists
dest_dir=$(dirname "${dest_file}")
mkdir -p "${dest_dir}"
process_file "${src_file}" "${dest_file}"
temp_file=$(mktemp)
gomplate -c .=${CONFIG_FILE} -c secrets=${SECRETS_FILE} -f "${dest_file}" > "${temp_file}"
mv "${temp_file}" "${dest_file}"
done
echo "Successfully added app '${APP_NAME}' with template processing"
echo "Added '${APP_NAME}'."

bin/wild-app-backup Executable file

@@ -0,0 +1,379 @@
#!/usr/bin/env bash
set -Eeuo pipefail
# wild-app-backup - Generic backup script for wild-cloud apps
# Usage: wild-app-backup <app-name> [--all]
# --- Initialize Wild Cloud environment ---------------------------------------
if [ -z "${WC_ROOT:-}" ]; then
echo "WC_ROOT is not set." >&2
exit 1
else
source "${WC_ROOT}/scripts/common.sh"
init_wild_env
fi
# --- Configuration ------------------------------------------------------------
get_staging_dir() {
if wild-config cloud.backup.staging --check; then
wild-config cloud.backup.staging
else
echo "Staging directory is not set. Configure 'cloud.backup.staging' in config.yaml." >&2
exit 1
fi
}
# --- Helpers ------------------------------------------------------------------
require_k8s() {
if ! command -v kubectl >/dev/null 2>&1; then
echo "kubectl not found." >&2
exit 1
fi
}
require_yq() {
if ! command -v yq >/dev/null 2>&1; then
echo "yq not found. Required for parsing manifest.yaml files." >&2
exit 1
fi
}
get_timestamp() {
date -u +'%Y%m%dT%H%M%SZ'
}
# --- App Discovery ------------------------------------------------------------
discover_database_deps() {
local app_name="$1"
local manifest_file="${WC_HOME}/apps/${app_name}/manifest.yaml"
if [[ -f "$manifest_file" ]]; then
yq eval '.requires[].name' "$manifest_file" 2>/dev/null | grep -E '^(postgres|mysql|redis)$' || true
fi
}
discover_app_pvcs() {
local app_name="$1"
kubectl get pvc -n "$app_name" -l "app=$app_name" --no-headers -o custom-columns=":metadata.name" 2>/dev/null || true
}
get_app_pods() {
local app_name="$1"
kubectl get pods -n "$app_name" -l "app=$app_name" \
-o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' 2>/dev/null | \
tr ' ' '\n' | head -1 || true
}
discover_pvc_mount_paths() {
local app_name="$1" pvc_name="$2"
# Find the volume name that uses this PVC
local volume_name
volume_name=$(kubectl get deploy -n "$app_name" -l "app=$app_name" \
-o jsonpath='{.items[*].spec.template.spec.volumes[?(@.persistentVolumeClaim.claimName=="'$pvc_name'")].name}' 2>/dev/null | awk 'NR==1{print; exit}')
if [[ -n "$volume_name" ]]; then
# Find the mount path for this volume (get first mount path)
local mount_path
mount_path=$(kubectl get deploy -n "$app_name" -l "app=$app_name" \
-o jsonpath='{.items[*].spec.template.spec.containers[*].volumeMounts[?(@.name=="'$volume_name'")].mountPath}' 2>/dev/null | \
tr ' ' '\n' | head -1)
if [[ -n "$mount_path" ]]; then
echo "$mount_path"
return 0
fi
fi
# No mount path found
return 1
}
# --- Database Backup Functions -----------------------------------------------
backup_postgres_database() {
local app_name="$1"
local backup_dir="$2"
local timestamp="$3"
local db_name="${app_name}"
local pg_ns="postgres"
local pg_deploy="postgres-deployment"
local db_superuser="postgres"
echo "Backing up PostgreSQL database '$db_name'..." >&2
# Check if postgres is available
if ! kubectl get pods -n "$pg_ns" >/dev/null 2>&1; then
echo "PostgreSQL namespace '$pg_ns' not accessible. Skipping database backup." >&2
return 1
fi
local db_dump="${backup_dir}/database_${timestamp}.dump"
local db_globals="${backup_dir}/globals_${timestamp}.sql"
# Database dump (custom format, compressed)
if ! kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc \
"pg_dump -U ${db_superuser} -Fc -Z 9 ${db_name}" > "$db_dump"
then
echo "Database dump failed for '$app_name'." >&2
return 1
fi
# Verify dump integrity
# if ! kubectl exec -i -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "pg_restore -l >/dev/null" < "$db_dump"; then
# echo "Database dump integrity check failed for '$app_name'." >&2
# return 1
# fi
# Dump globals (roles, permissions)
if ! kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc \
"pg_dumpall -U ${db_superuser} -g" > "$db_globals"
then
echo "Globals dump failed for '$app_name'." >&2
return 1
fi
echo " Database dump: $db_dump" >&2
echo " Globals dump: $db_globals" >&2
# Return paths for manifest generation
echo "$db_dump $db_globals"
}
backup_mysql_database() {
local app_name="$1"
local backup_dir="$2"
local timestamp="$3"
local db_name="${app_name}"
local mysql_ns="mysql"
local mysql_deploy="mysql-deployment"
local mysql_user="root"
echo "Backing up MySQL database '$db_name'..." >&2
if ! kubectl get pods -n "$mysql_ns" >/dev/null 2>&1; then
echo "MySQL namespace '$mysql_ns' not accessible. Skipping database backup." >&2
return 1
fi
local db_dump="${backup_dir}/database_${timestamp}.sql"
# Get MySQL root password from secret
local mysql_password
if mysql_password=$(kubectl get secret -n "$mysql_ns" mysql-secret -o jsonpath='{.data.password}' 2>/dev/null | base64 -d); then
# MySQL dump with password
if ! kubectl exec -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
"mysqldump -u${mysql_user} -p'${mysql_password}' --single-transaction --routines --triggers ${db_name}" > "$db_dump"
then
echo "MySQL dump failed for '$app_name'." >&2
return 1
fi
else
echo "Could not retrieve MySQL password. Skipping database backup." >&2
return 1
fi
echo " Database dump: $db_dump" >&2
echo "$db_dump"
}
# --- PVC Backup Functions ----------------------------------------------------
backup_pvc() {
local app_name="$1"
local pvc_name="$2"
local backup_dir="$3"
local timestamp="$4"
echo "Backing up PVC '$pvc_name' from namespace '$app_name'..." >&2
# Get a running pod that actually uses this specific PVC
local app_pod
# First try to find a pod that has this exact PVC volume mounted
local pvc_volume_id=$(kubectl get pvc -n "$app_name" "$pvc_name" -o jsonpath='{.spec.volumeName}' 2>/dev/null)
if [[ -n "$pvc_volume_id" ]]; then
# Look for a pod that has a mount from this specific volume
app_pod=$(kubectl get pods -n "$app_name" -l "app=$app_name" -o json 2>/dev/null | \
jq -r '.items[] | select(.status.phase=="Running") | select(.spec.volumes[]?.persistentVolumeClaim.claimName=="'$pvc_name'") | .metadata.name' | head -1)
fi
# Fallback to any running pod
if [[ -z "$app_pod" ]]; then
app_pod=$(get_app_pods "$app_name")
fi
if [[ -z "$app_pod" ]]; then
echo "No running pods found for app '$app_name'. Skipping PVC backup." >&2
return 1
fi
echo "Using pod '$app_pod' for PVC backup" >&2
# Discover mount path for this PVC
local mount_path
mount_path=$(discover_pvc_mount_paths "$app_name" "$pvc_name" | awk 'NR==1{print; exit}')
if [[ -z "$mount_path" ]]; then
echo "Could not determine mount path for PVC '$pvc_name'. Trying to detect..." >&2
# Try to find any volume mount that might be the PVC by looking at df output
mount_path=$(kubectl exec -n "$app_name" "$app_pod" -- sh -c "df | grep longhorn | awk '{print \$6}' | head -1" 2>/dev/null)
if [[ -z "$mount_path" ]]; then
mount_path="/data" # Final fallback
fi
echo "Using detected/fallback mount path: $mount_path" >&2
fi
local pvc_backup_dir="${backup_dir}/${pvc_name}"
mkdir -p "$pvc_backup_dir"
# Stream tar directly from pod to staging directory for restic deduplication
local parent_dir=$(dirname "$mount_path")
local dir_name=$(basename "$mount_path")
echo " Streaming PVC data directly to staging..." >&2
if kubectl exec -n "$app_name" "$app_pod" -- tar -C "$parent_dir" -cf - "$dir_name" | tar -xf - -C "$pvc_backup_dir" 2>/dev/null; then
echo " PVC data streamed successfully" >&2
else
echo "PVC backup failed for '$pvc_name' in '$app_name'." >&2
return 1
fi
echo " PVC backup directory: $pvc_backup_dir" >&2
echo "$pvc_backup_dir"
}
# --- Main Backup Function ----------------------------------------------------
backup_app() {
local app_name="$1"
local staging_dir="$2"
echo "=========================================="
echo "Starting backup of app: $app_name"
echo "=========================================="
local timestamp
timestamp=$(get_timestamp)
local backup_dir="${staging_dir}/apps/${app_name}"
# Clean up any existing backup files for this app
if [[ -d "$backup_dir" ]]; then
echo "Cleaning up existing backup files for '$app_name'..." >&2
rm -rf "$backup_dir"
fi
mkdir -p "$backup_dir"
local backup_files=()
# Check if app has custom backup script first
local custom_backup_script="${WC_HOME}/apps/${app_name}/backup.sh"
if [[ -x "$custom_backup_script" ]]; then
echo "Found custom backup script for '$app_name'. Running..."
"$custom_backup_script"
echo "Custom backup completed for '$app_name'."
return 0
fi
# Generic backup based on manifest discovery
local database_deps
database_deps=$(discover_database_deps "$app_name")
local pvcs
pvcs=$(discover_app_pvcs "$app_name")
if [[ -z "$database_deps" && -z "$pvcs" ]]; then
echo "No databases or PVCs found for app '$app_name'. Nothing to backup." >&2
return 0
fi
# Backup databases
for db_type in $database_deps; do
case "$db_type" in
postgres)
if db_files=$(backup_postgres_database "$app_name" "$backup_dir" "$timestamp"); then
read -ra db_file_array <<< "$db_files"
backup_files+=("${db_file_array[@]}")
fi
;;
mysql)
if db_files=$(backup_mysql_database "$app_name" "$backup_dir" "$timestamp"); then
backup_files+=("$db_files")
fi
;;
redis)
echo "Redis backup not implemented yet. Skipping."
;;
esac
done
# Backup PVCs
for pvc in $pvcs; do
if pvc_file=$(backup_pvc "$app_name" "$pvc" "$backup_dir" "$timestamp"); then
backup_files+=("$pvc_file")
fi
done
# Summary
if [[ ${#backup_files[@]} -gt 0 ]]; then
echo "----------------------------------------"
echo "Backup completed for '$app_name'"
echo "Files backed up:"
printf ' - %s\n' "${backup_files[@]}"
echo "----------------------------------------"
else
echo "No files were successfully backed up for '$app_name'." >&2
return 1
fi
}
# --- Main Script Logic -------------------------------------------------------
main() {
if [[ $# -eq 0 || "$1" == "--help" || "$1" == "-h" ]]; then
echo "Usage: $0 <app-name> [app-name2...] | --all"
echo " $0 --list # List available apps"
exit 1
fi
require_k8s
require_yq
local staging_dir
staging_dir=$(get_staging_dir)
mkdir -p "$staging_dir"
echo "Staging backups at: $staging_dir"
if [[ "$1" == "--list" ]]; then
echo "Available apps:"
find "${WC_HOME}/apps" -maxdepth 1 -type d -not -path "${WC_HOME}/apps" -exec basename {} \; | sort
exit 0
fi
if [[ "$1" == "--all" ]]; then
echo "Backing up all apps..."
local apps
mapfile -t apps < <(find "${WC_HOME}/apps" -maxdepth 1 -type d -not -path "${WC_HOME}/apps" -exec basename {} \;)
for app in "${apps[@]}"; do
if ! backup_app "$app" "$staging_dir"; then
echo "Backup failed for '$app', continuing with next app..." >&2
fi
done
else
# Backup specific apps
local failed_apps=()
for app in "$@"; do
if ! backup_app "$app" "$staging_dir"; then
failed_apps+=("$app")
fi
done
if [[ ${#failed_apps[@]} -gt 0 ]]; then
echo "The following app backups failed: ${failed_apps[*]}" >&2
exit 1
fi
fi
echo "All backups completed successfully."
}
main "$@"
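Typical usage of this script, with staging paths shown for illustration (the actual prefix comes from `cloud.backup.staging`):

wild-app-backup --list        # enumerate apps found under $WC_HOME/apps
wild-app-backup listmonk      # stages apps/listmonk/database_<timestamp>.dump, globals_<timestamp>.sql,
                              # and one directory per discovered PVC under the staging directory
wild-app-backup --all         # back up every app, continuing past individual failures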


@@ -79,42 +79,35 @@ deploy_secrets() {
echo "Deploying secrets for app '${app_name}' in namespace '${namespace}'"
# Create secret data
# Gather data for app secret
local secret_data=""
while IFS= read -r secret_path; do
# Get the secret value using full path
secret_value=$(yq eval ".${secret_path} // \"\"" "${SECRETS_FILE}")
# Extract just the key name for the Kubernetes secret (handle dotted paths)
secret_key="${secret_path##*.}"
if [ -n "${secret_value}" ] && [ "${secret_value}" != "null" ]; then
if [[ "${secret_value}" == CHANGE_ME_* ]]; then
echo "Warning: Secret '${secret_path}' for app '${app_name}' still has dummy value: ${secret_value}"
fi
secret_data="${secret_data} --from-literal=${secret_key}=${secret_value}"
secret_data="${secret_data} --from-literal=${secret_path}=${secret_value}"
else
echo "Error: Required secret '${secret_path}' not found in ${SECRETS_FILE} for app '${app_name}'"
exit 1
fi
done < <(yq eval '.requiredSecrets[]' "${manifest_file}")
# Create the secret if we have data
# Create/update app secret in cluster
if [ -n "${secret_data}" ]; then
echo "Creating/updating secret '${app_name}-secrets' in namespace '${namespace}'"
if [ "${DRY_RUN:-}" = "--dry-run=client" ]; then
echo "DRY RUN: kubectl create secret generic ${app_name}-secrets ${secret_data} --namespace=${namespace} --dry-run=client -o yaml"
else
# Delete existing secret if it exists, then create new one
kubectl delete secret "${app_name}-secrets" --namespace="${namespace}" --ignore-not-found=true
kubectl create secret generic "${app_name}-secrets" ${secret_data} --namespace="${namespace}"
fi
fi
}
# Step 1: Create namespaces first (dependencies and main app)
# Step 1: Create namespaces first
echo "Creating namespaces..."
MANIFEST_FILE="apps/${APP_NAME}/manifest.yaml"
# Create dependency namespaces.
if [ -f "${MANIFEST_FILE}" ]; then
if yq eval '.requires' "${MANIFEST_FILE}" | grep -q -v '^null$'; then
yq eval '.requires[].name' "${MANIFEST_FILE}" | while read -r required_app; do
@@ -152,32 +145,6 @@ if [ -f "apps/${APP_NAME}/namespace.yaml" ]; then
wild-cluster-secret-copy cert-manager:wildcard-wild-cloud-tls "$NAMESPACE" || echo "Warning: Failed to copy external wildcard certificate"
fi
# Step 2: Deploy secrets (dependencies and main app)
echo "Deploying secrets..."
if [ -f "${MANIFEST_FILE}" ]; then
if yq eval '.requires' "${MANIFEST_FILE}" | grep -q -v '^null$'; then
echo "Deploying secrets for required dependencies..."
yq eval '.requires[].name' "${MANIFEST_FILE}" | while read -r required_app; do
if [ -z "${required_app}" ] || [ "${required_app}" = "null" ]; then
echo "Warning: Empty or null dependency found, skipping"
continue
fi
if [ ! -d "apps/${required_app}" ]; then
echo "Error: Required dependency '${required_app}' not found in apps/ directory"
exit 1
fi
echo "Deploying secrets for dependency: ${required_app}"
# Deploy secrets in dependency's own namespace
deploy_secrets "${required_app}"
# Also deploy dependency secrets in consuming app's namespace
echo "Copying dependency secrets to app namespace: ${APP_NAME}"
deploy_secrets "${required_app}" "${APP_NAME}"
done
fi
fi
# Deploy secrets for this app
deploy_secrets "${APP_NAME}"
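With `--from-literal=${secret_path}=...`, the keys inside each `<app>-secrets` Secret are now the full dotted paths from secrets.yaml, which is why manifests in this changeset use `secretKeyRef` keys such as `apps.mysql.rootPassword` (dots are legal in Secret data keys). Roughly what the loop assembles for the mysql app, with values elided:

kubectl delete secret mysql-secrets --namespace=mysql --ignore-not-found=true
kubectl create secret generic mysql-secrets \
  --from-literal=apps.mysql.rootPassword=<value from secrets.yaml> \
  --from-literal=apps.mysql.password=<value from secrets.yaml> \
  --namespace=mysql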

bin/wild-app-restore Executable file

@@ -0,0 +1,602 @@
#!/usr/bin/env bash
set -Eeuo pipefail
# wild-app-restore - Generic restore script for wild-cloud apps
# Usage: wild-app-restore <app-name> [snapshot-id] [--db-only|--pvc-only] [--skip-globals]
# --- Initialize Wild Cloud environment ---------------------------------------
if [ -z "${WC_ROOT:-}" ]; then
echo "WC_ROOT is not set." >&2
exit 1
else
source "${WC_ROOT}/scripts/common.sh"
init_wild_env
fi
# --- Configuration ------------------------------------------------------------
get_staging_dir() {
if wild-config cloud.backup.staging --check; then
wild-config cloud.backup.staging
else
echo "Staging directory is not set. Configure 'cloud.backup.staging' in config.yaml." >&2
exit 1
fi
}
get_restic_config() {
if wild-config cloud.backup.root --check; then
export RESTIC_REPOSITORY="$(wild-config cloud.backup.root)"
else
echo "WARNING: Could not get cloud backup root." >&2
exit 1
fi
if wild-secret cloud.backupPassword --check; then
export RESTIC_PASSWORD="$(wild-secret cloud.backupPassword)"
else
echo "WARNING: Could not get cloud backup secret." >&2
exit 1
fi
}
# --- Helpers ------------------------------------------------------------------
require_k8s() {
if ! command -v kubectl >/dev/null 2>&1; then
echo "kubectl not found." >&2
exit 1
fi
}
require_yq() {
if ! command -v yq >/dev/null 2>&1; then
echo "yq not found. Required for parsing manifest.yaml files." >&2
exit 1
fi
}
require_restic() {
if ! command -v restic >/dev/null 2>&1; then
echo "restic not found. Required for snapshot operations." >&2
exit 1
fi
}
show_help() {
echo "Usage: $0 <app-name> [snapshot-id] [OPTIONS]"
echo "Restore application data from restic snapshots"
echo ""
echo "Arguments:"
echo " app-name Name of the application to restore"
echo " snapshot-id Specific snapshot ID to restore (optional, uses latest if not provided)"
echo ""
echo "Options:"
echo " --db-only Restore only database data"
echo " --pvc-only Restore only PVC data"
echo " --skip-globals Skip restoring database globals (roles, permissions)"
echo " --list List available snapshots for the app"
echo " -h, --help Show this help message"
echo ""
echo "Examples:"
echo " $0 discourse # Restore latest discourse snapshot (all data)"
echo " $0 discourse abc123 --db-only # Restore specific snapshot, database only"
echo " $0 discourse --list # List available discourse snapshots"
}
# --- App Discovery Functions (from wild-app-backup) --------------------------
discover_database_deps() {
local app_name="$1"
local manifest_file="${WC_HOME}/apps/${app_name}/manifest.yaml"
if [[ -f "$manifest_file" ]]; then
yq eval '.requires[].name' "$manifest_file" 2>/dev/null | grep -E '^(postgres|mysql|redis)$' || true
fi
}
discover_app_pvcs() {
local app_name="$1"
kubectl get pvc -n "$app_name" -l "app=$app_name" --no-headers -o custom-columns=":metadata.name" 2>/dev/null || true
}
get_app_pods() {
local app_name="$1"
kubectl get pods -n "$app_name" -l "app=$app_name" \
-o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' 2>/dev/null | \
tr ' ' '\n' | head -1 || true
}
# --- Restic Snapshot Functions -----------------------------------------------
list_app_snapshots() {
local app_name="$1"
echo "Available snapshots for app '$app_name':"
restic snapshots --tag "$app_name" --json | jq -r '.[] | "\(.short_id) \(.time) \(.hostname) \(.paths | join(" "))"' | \
sort -k2 -r | head -20
}
get_latest_snapshot() {
local app_name="$1"
restic snapshots --tag "$app_name" --json | jq -r '.[0].short_id' 2>/dev/null || echo ""
}
restore_from_snapshot() {
local app_name="$1"
local snapshot_id="$2"
local staging_dir="$3"
local restore_dir="$staging_dir/restore/$app_name"
mkdir -p "$restore_dir"
echo "Restoring snapshot $snapshot_id to $restore_dir..."
if ! restic restore "$snapshot_id" --target "$restore_dir"; then
echo "Failed to restore snapshot $snapshot_id" >&2
return 1
fi
echo "$restore_dir"
}
# --- Database Restore Functions ----------------------------------------------
restore_postgres_database() {
local app_name="$1"
local restore_dir="$2"
local skip_globals="$3"
local pg_ns="postgres"
local pg_deploy="postgres-deployment"
local db_superuser="postgres"
local db_name="$app_name"
local db_role="$app_name"
echo "Restoring PostgreSQL database '$db_name'..."
# Check if postgres is available
if ! kubectl get pods -n "$pg_ns" >/dev/null 2>&1; then
echo "PostgreSQL namespace '$pg_ns' not accessible. Cannot restore database." >&2
return 1
fi
# Find database dump file
local db_dump
db_dump=$(find "$restore_dir" -name "database_*.dump" -o -name "*_db_*.dump" | head -1)
if [[ -z "$db_dump" ]]; then
echo "No database dump found for '$app_name'" >&2
return 1
fi
# Find globals file
local globals_file
globals_file=$(find "$restore_dir" -name "globals_*.sql" | head -1)
# Helper functions for postgres operations
pg_exec() {
kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "$*"
}
pg_exec_i() {
kubectl exec -i -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "$*"
}
# Restore globals first if available and not skipped
if [[ "$skip_globals" != "true" && -n "$globals_file" && -f "$globals_file" ]]; then
echo "Restoring database globals..."
pg_exec_i "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres" < "$globals_file"
fi
# Ensure role exists
pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
DO \$\$
BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname='${db_role}') THEN
CREATE ROLE ${db_role} LOGIN;
END IF;
END
\$\$;\""
# Terminate existing connections
pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname='${db_name}' AND pid <> pg_backend_pid();\""
# Drop and recreate database
pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
DROP DATABASE IF EXISTS ${db_name};
CREATE DATABASE ${db_name} OWNER ${db_role};\""
# Restore database from dump
echo "Restoring database from $db_dump..."
if ! pg_exec_i "pg_restore -v -j 4 -U ${db_superuser} --clean --if-exists --no-owner --role=${db_role} -d ${db_name}" < "$db_dump"; then
echo "Database restore failed for '$app_name'" >&2
return 1
fi
# Ensure proper ownership
pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"ALTER DATABASE ${db_name} OWNER TO ${db_role};\""
echo "Database restore completed for '$app_name'"
}
restore_mysql_database() {
local app_name="$1"
local restore_dir="$2"
local mysql_ns="mysql"
local mysql_deploy="mysql-deployment"
local mysql_user="root"
local db_name="$app_name"
echo "Restoring MySQL database '$db_name'..."
if ! kubectl get pods -n "$mysql_ns" >/dev/null 2>&1; then
echo "MySQL namespace '$mysql_ns' not accessible. Cannot restore database." >&2
return 1
fi
# Find database dump file
local db_dump
db_dump=$(find "$restore_dir" -name "database_*.sql" -o -name "*_db_*.sql" | head -1)
if [[ -z "$db_dump" ]]; then
echo "No database dump found for '$app_name'" >&2
return 1
fi
# Get MySQL root password from secret
local mysql_password
if ! mysql_password=$(kubectl get secret -n "$mysql_ns" mysql-secret -o jsonpath='{.data.password}' 2>/dev/null | base64 -d); then
echo "Could not retrieve MySQL password. Cannot restore database." >&2
return 1
fi
# Drop and recreate database
kubectl exec -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
"mysql -u${mysql_user} -p'${mysql_password}' -e 'DROP DATABASE IF EXISTS ${db_name}; CREATE DATABASE ${db_name};'"
# Restore database from dump
echo "Restoring database from $db_dump..."
if ! kubectl exec -i -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
"mysql -u${mysql_user} -p'${mysql_password}' ${db_name}" < "$db_dump"; then
echo "Database restore failed for '$app_name'" >&2
return 1
fi
echo "Database restore completed for '$app_name'"
}
# --- PVC Restore Functions ---------------------------------------------------
scale_app() {
local app_name="$1"
local replicas="$2"
echo "Scaling app '$app_name' to $replicas replicas..."
# Find deployments for this app and scale them
local deployments
deployments=$(kubectl get deploy -n "$app_name" -l "app=$app_name" -o name 2>/dev/null || true)
if [[ -z "$deployments" ]]; then
echo "No deployments found for app '$app_name'" >&2
return 1
fi
for deploy in $deployments; do
kubectl scale "$deploy" -n "$app_name" --replicas="$replicas"
if [[ "$replicas" -gt 0 ]]; then
kubectl rollout status "$deploy" -n "$app_name"
fi
done
}
restore_app_pvc() {
local app_name="$1"
local pvc_name="$2"
local restore_dir="$3"
echo "Restoring PVC '$pvc_name' for app '$app_name'..."
# Find the PVC backup directory in the restore directory
local pvc_backup_dir
pvc_backup_dir=$(find "$restore_dir" -type d -name "$pvc_name" | head -1)
if [[ -z "$pvc_backup_dir" || ! -d "$pvc_backup_dir" ]]; then
echo "No backup directory found for PVC '$pvc_name'" >&2
return 1
fi
# Get the Longhorn volume name for this PVC
local pv_name
pv_name=$(kubectl get pvc -n "$app_name" "$pvc_name" -o jsonpath='{.spec.volumeName}')
if [[ -z "$pv_name" ]]; then
echo "Could not find PersistentVolume for PVC '$pvc_name'" >&2
return 1
fi
local longhorn_volume
longhorn_volume=$(kubectl get pv "$pv_name" -o jsonpath='{.spec.csi.volumeHandle}' 2>/dev/null)
if [[ -z "$longhorn_volume" ]]; then
echo "Could not find Longhorn volume for PV '$pv_name'" >&2
return 1
fi
# Create safety snapshot before destructive restore
local safety_snapshot="restore-safety-$(date +%s)"
echo "Creating safety snapshot '$safety_snapshot' for volume '$longhorn_volume'..."
kubectl apply -f - <<EOF
apiVersion: longhorn.io/v1beta2
kind: Snapshot
metadata:
name: $safety_snapshot
namespace: longhorn-system
labels:
app: wild-app-restore
volume: $longhorn_volume
pvc: $pvc_name
original-app: $app_name
spec:
volume: $longhorn_volume
EOF
# Wait for snapshot to be ready
echo "Waiting for safety snapshot to be ready..."
local snapshot_timeout=60
local elapsed=0
while [[ $elapsed -lt $snapshot_timeout ]]; do
local snapshot_ready
snapshot_ready=$(kubectl get snapshot.longhorn.io -n longhorn-system "$safety_snapshot" -o jsonpath='{.status.readyToUse}' 2>/dev/null || echo "false")
if [[ "$snapshot_ready" == "true" ]]; then
echo "Safety snapshot created successfully"
break
fi
sleep 2
elapsed=$((elapsed + 2))
done
if [[ $elapsed -ge $snapshot_timeout ]]; then
echo "Warning: Safety snapshot may not be ready, but proceeding with restore..."
fi
# Scale app down to avoid conflicts during restore
scale_app "$app_name" 0
# Wait for pods to terminate and PVC to be unmounted
echo "Waiting for pods to terminate and PVC to be released..."
sleep 10
# Get PVC details for node affinity
local pv_name
pv_name=$(kubectl get pvc -n "$app_name" "$pvc_name" -o jsonpath='{.spec.volumeName}')
if [[ -z "$pv_name" ]]; then
echo "Could not find PersistentVolume for PVC '$pvc_name'" >&2
return 1
fi
# Get the node where this Longhorn volume is available
local target_node
target_node=$(kubectl get pv "$pv_name" -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}' 2>/dev/null || \
kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name | head -1)
echo "Creating restore utility pod on node: $target_node"
# Create temporary pod with node affinity and PVC mounted
local temp_pod="restore-util-$(date +%s)"
kubectl apply -n "$app_name" -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: $temp_pod
labels:
app: restore-utility
spec:
nodeSelector:
kubernetes.io/hostname: $target_node
containers:
- name: restore-util
image: alpine:latest
command: ["/bin/sh", "-c", "sleep 3600"]
volumeMounts:
- name: data
mountPath: /restore-target
securityContext:
runAsUser: 0
fsGroup: 0
volumes:
- name: data
persistentVolumeClaim:
claimName: $pvc_name
restartPolicy: Never
tolerations:
- operator: Exists
EOF
# Wait for pod to be ready with longer timeout
echo "Waiting for restore utility pod to be ready..."
if ! kubectl wait --for=condition=Ready pod/"$temp_pod" -n "$app_name" --timeout=120s; then
echo "Restore utility pod failed to start. Checking status..."
kubectl describe pod -n "$app_name" "$temp_pod"
kubectl delete pod -n "$app_name" "$temp_pod" --force --grace-period=0 || true
echo "ERROR: Restore failed. Safety snapshot '$safety_snapshot' has been preserved for manual recovery." >&2
echo "To recover from safety snapshot, use: kubectl get snapshot.longhorn.io -n longhorn-system $safety_snapshot" >&2
return 1
fi
echo "Clearing existing PVC data..."
kubectl exec -n "$app_name" "$temp_pod" -- sh -c "rm -rf /restore-target/* /restore-target/.*" 2>/dev/null || true
echo "Copying backup data to PVC..."
# Use tar to stream data into the pod, preserving permissions
if ! tar -C "$pvc_backup_dir" -cf - . | kubectl exec -i -n "$app_name" "$temp_pod" -- tar -C /restore-target -xf -; then
echo "Failed to copy data to PVC. Cleaning up..." >&2
kubectl delete pod -n "$app_name" "$temp_pod" --force --grace-period=0 || true
echo "ERROR: Restore failed. Safety snapshot '$safety_snapshot' has been preserved for manual recovery." >&2
echo "To recover from safety snapshot, use: kubectl get snapshot.longhorn.io -n longhorn-system $safety_snapshot" >&2
return 1
fi
echo "Verifying restored data..."
kubectl exec -n "$app_name" "$temp_pod" -- sh -c "ls -la /restore-target | head -10"
# Clean up temporary pod
kubectl delete pod -n "$app_name" "$temp_pod"
# Scale app back up
scale_app "$app_name" 1
# Clean up safety snapshot if restore was successful
echo "Cleaning up safety snapshot '$safety_snapshot'..."
if kubectl delete snapshot.longhorn.io -n longhorn-system "$safety_snapshot" 2>/dev/null; then
echo "Safety snapshot cleaned up successfully"
else
echo "Warning: Could not clean up safety snapshot '$safety_snapshot'. You may need to delete it manually."
fi
echo "PVC '$pvc_name' restore completed successfully"
}
# --- Main Restore Function ---------------------------------------------------
restore_app() {
local app_name="$1"
local snapshot_id="$2"
local mode="$3"
local skip_globals="$4"
local staging_dir="$5"
echo "=========================================="
echo "Starting restore of app: $app_name"
echo "Snapshot: $snapshot_id"
echo "Mode: $mode"
echo "=========================================="
# Restore snapshot to staging directory
local restore_dir
restore_dir=$(restore_from_snapshot "$app_name" "$snapshot_id" "$staging_dir")
if [[ ! -d "$restore_dir" ]]; then
echo "Failed to restore snapshot for '$app_name'" >&2
return 1
fi
# Discover what components this app has
local database_deps
database_deps=$(discover_database_deps "$app_name")
local pvcs
pvcs=$(discover_app_pvcs "$app_name")
# Restore database components
if [[ "$mode" == "all" || "$mode" == "db" ]]; then
for db_type in $database_deps; do
case "$db_type" in
postgres)
restore_postgres_database "$app_name" "$restore_dir" "$skip_globals"
;;
mysql)
restore_mysql_database "$app_name" "$restore_dir"
;;
redis)
echo "Redis restore not implemented yet. Skipping."
;;
esac
done
fi
# Restore PVC components
if [[ "$mode" == "all" || "$mode" == "pvc" ]]; then
for pvc in $pvcs; do
restore_app_pvc "$app_name" "$pvc" "$restore_dir"
done
fi
# Clean up restore directory
rm -rf "$restore_dir"
echo "=========================================="
echo "Restore completed for app: $app_name"
echo "=========================================="
}
# --- Main Script Logic -------------------------------------------------------
main() {
require_k8s
require_yq
require_restic
get_restic_config
local staging_dir
staging_dir=$(get_staging_dir)
mkdir -p "$staging_dir/restore"
# Parse arguments
if [[ $# -eq 0 || "$1" == "--help" || "$1" == "-h" ]]; then
show_help
exit 0
fi
local app_name="$1"
shift
local snapshot_id=""
local mode="all"
local skip_globals="false"
local list_snapshots="false"
# Parse remaining arguments
while [[ $# -gt 0 ]]; do
case "$1" in
--db-only)
mode="db"
shift
;;
--pvc-only)
mode="pvc"
shift
;;
--skip-globals)
skip_globals="true"
shift
;;
--list)
list_snapshots="true"
shift
;;
-h|--help)
show_help
exit 0
;;
*)
if [[ -z "$snapshot_id" ]]; then
snapshot_id="$1"
else
echo "Unknown option: $1" >&2
show_help
exit 1
fi
shift
;;
esac
done
# List snapshots if requested
if [[ "$list_snapshots" == "true" ]]; then
list_app_snapshots "$app_name"
exit 0
fi
# Get latest snapshot if none specified
if [[ -z "$snapshot_id" ]]; then
snapshot_id=$(get_latest_snapshot "$app_name")
if [[ -z "$snapshot_id" ]]; then
echo "No snapshots found for app '$app_name'" >&2
exit 1
fi
echo "Using latest snapshot: $snapshot_id"
fi
# Perform the restore
restore_app "$app_name" "$snapshot_id" "$mode" "$skip_globals" "$staging_dir"
echo "Restore operation completed successfully."
}
main "$@"

245
bin/wild-backup Executable file
View File

@@ -0,0 +1,245 @@
#!/bin/bash
# Simple backup script for your personal cloud
set -e
set -o pipefail
# Parse command line flags
BACKUP_HOME=true
BACKUP_APPS=true
BACKUP_CLUSTER=true
show_help() {
echo "Usage: $0 [OPTIONS]"
echo "Backup components of your wild-cloud infrastructure"
echo ""
echo "Options:"
echo " --home-only Backup only WC_HOME (wild-cloud configuration)"
echo " --apps-only Backup only applications (databases and PVCs)"
echo " --cluster-only Backup only Kubernetes cluster resources"
echo " --no-home Skip WC_HOME backup"
echo " --no-apps Skip application backups"
echo " --no-cluster Skip cluster resource backup"
echo " -h, --help Show this help message"
echo ""
echo "Default: Backup all components (home, apps, cluster)"
}
# Process command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--home-only)
BACKUP_HOME=true
BACKUP_APPS=false
BACKUP_CLUSTER=false
shift
;;
--apps-only)
BACKUP_HOME=false
BACKUP_APPS=true
BACKUP_CLUSTER=false
shift
;;
--cluster-only)
BACKUP_HOME=false
BACKUP_APPS=false
BACKUP_CLUSTER=true
shift
;;
--no-home)
BACKUP_HOME=false
shift
;;
--no-apps)
BACKUP_APPS=false
shift
;;
--no-cluster)
BACKUP_CLUSTER=false
shift
;;
-h|--help)
show_help
exit 0
;;
*)
echo "Unknown option: $1"
show_help
exit 1
;;
esac
done
# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
echo "WC_ROOT is not set."
exit 1
else
source "${WC_ROOT}/scripts/common.sh"
init_wild_env
fi
if wild-config cloud.backup.root --check; then
export RESTIC_REPOSITORY="$(wild-config cloud.backup.root)"
else
echo "ERROR: Could not get cloud backup root (cloud.backup.root is not set)."
exit 1
fi
if wild-secret cloud.backupPassword --check; then
export RESTIC_PASSWORD="$(wild-secret cloud.backupPassword)"
else
echo "ERROR: Could not get cloud backup secret (cloud.backupPassword is not set)."
exit 1
fi
if wild-config cloud.backup.staging --check; then
STAGING_DIR="$(wild-config cloud.backup.staging)"
else
echo "ERROR: Could not get cloud backup staging directory (cloud.backup.staging is not set)."
exit 1
fi
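# The three values above come from the Wild Cloud config and secrets files. A setup sketch
# (key paths as used in this script; the repository can be any valid restic target such as
# a local path or an s3: URL; the staging path is an assumed example):
#   wild-config-set cloud.backup.root "/mnt/backups/wild-cloud"
#   wild-config-set cloud.backup.staging "${WC_HOME}/.backup-staging"
#   wild-secret-set cloud.backupPassword    # generates a random value when none is supplied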
echo "Backup at '$RESTIC_REPOSITORY'."
# Initialize the repository if needed.
echo "Checking if restic repository exists..."
if restic cat config >/dev/null 2>&1; then
echo "Using existing backup repository."
else
echo "No existing backup repository found. Initializing restic repository..."
restic init
echo "Repository initialized successfully."
fi
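# Once the repository exists, standard restic commands work against it for inspection
# (RESTIC_REPOSITORY and RESTIC_PASSWORD are already exported above):
#   restic snapshots --tag wild-cloud    # list snapshots created by this script
#   restic check                         # verify repository integrity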
# Backup entire WC_HOME
if [ "$BACKUP_HOME" = true ]; then
echo "Backing up WC_HOME..."
restic --verbose --tag wild-cloud --tag wc-home --tag "$(date +%Y-%m-%d)" backup "$WC_HOME"
echo "WC_HOME backup completed."
# TODO: Ignore wild cloud cache?
else
echo "Skipping WC_HOME backup."
fi
mkdir -p "$STAGING_DIR"
# Run backup for all apps at once
if [ "$BACKUP_APPS" = true ]; then
echo "Running backup for all apps..."
wild-app-backup --all
# Upload each app's backup to restic individually
for app_dir in "$STAGING_DIR"/apps/*; do
if [ ! -d "$app_dir" ]; then
continue
fi
app="$(basename "$app_dir")"
echo "Uploading backup for app: $app"
restic --verbose --tag wild-cloud --tag "$app" --tag "$(date +%Y-%m-%d)" backup "$app_dir"
echo "Backup for app '$app' completed."
done
else
echo "Skipping application backups."
fi
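# Each app's snapshots carry the app name as a tag, so a single app can be pulled back out
# of restic directly (a sketch; "immich" stands in for any app name):
#   restic snapshots --tag immich
#   restic restore latest --tag immich --target /tmp/immich-restore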
# --- etcd Backup Function ----------------------------------------------------
backup_etcd() {
local cluster_backup_dir="$1"
local etcd_backup_file="$cluster_backup_dir/etcd-snapshot.db"
echo "Creating etcd snapshot..."
# For Talos, we use talosctl to create etcd snapshots
if command -v talosctl >/dev/null 2>&1; then
# Try to get etcd snapshot via talosctl (works for Talos clusters)
local control_plane_nodes
control_plane_nodes=$(kubectl get nodes -l node-role.kubernetes.io/control-plane -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' | tr ' ' '\n' | head -1)
if [[ -n "$control_plane_nodes" ]]; then
echo "Using talosctl to backup etcd from control plane node: $control_plane_nodes"
if talosctl --nodes "$control_plane_nodes" etcd snapshot "$etcd_backup_file"; then
echo " etcd backup created: $etcd_backup_file"
return 0
else
echo " talosctl etcd snapshot failed, trying alternative method..."
fi
else
echo " No control plane nodes found for talosctl method"
fi
fi
# Alternative: Try to backup via etcd pod if available
local etcd_pod
etcd_pod=$(kubectl get pods -n kube-system -l component=etcd -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
if [[ -n "$etcd_pod" ]]; then
echo "Using etcd pod: $etcd_pod"
# Create snapshot using etcdctl inside the etcd pod
if kubectl exec -n kube-system "$etcd_pod" -- etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-snapshot.db; then
# Copy snapshot out of pod
kubectl cp -n kube-system "$etcd_pod:/tmp/etcd-snapshot.db" "$etcd_backup_file"
# Clean up temporary file in pod
kubectl exec -n kube-system "$etcd_pod" -- rm -f /tmp/etcd-snapshot.db
echo " etcd backup created: $etcd_backup_file"
return 0
else
echo " etcd pod snapshot failed"
fi
else
echo " No etcd pod found in kube-system namespace"
fi
# Final fallback: Try direct etcdctl if available on local system
if command -v etcdctl >/dev/null 2>&1; then
echo "Attempting local etcdctl backup..."
# This would need proper certificates and endpoints configured
echo " Local etcdctl backup not implemented (requires certificate configuration)"
fi
echo " Warning: Could not create etcd backup - no working method found"
echo " Consider installing talosctl or ensuring etcd pods are accessible"
return 1
}
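# Note: on Talos clusters the snapshot created above is the input to etcd disaster recovery,
# typically restored with something like:
#   talosctl -n <control-plane-ip> bootstrap --recover-from=./etcd-snapshot.db
# Verify the exact procedure against the Talos version in use before relying on it.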
# Back up Kubernetes cluster resources
if [ "$BACKUP_CLUSTER" = true ]; then
echo "Backing up Kubernetes cluster resources..."
CLUSTER_BACKUP_DIR="$STAGING_DIR/cluster"
# Clean up any existing cluster backup files
if [[ -d "$CLUSTER_BACKUP_DIR" ]]; then
echo "Cleaning up existing cluster backup files..."
rm -rf "$CLUSTER_BACKUP_DIR"
fi
mkdir -p "$CLUSTER_BACKUP_DIR"
kubectl get all -A -o yaml > "$CLUSTER_BACKUP_DIR/all-resources.yaml"
kubectl get secrets -A -o yaml > "$CLUSTER_BACKUP_DIR/secrets.yaml"
kubectl get configmaps -A -o yaml > "$CLUSTER_BACKUP_DIR/configmaps.yaml"
kubectl get persistentvolumes -o yaml > "$CLUSTER_BACKUP_DIR/persistentvolumes.yaml"
kubectl get persistentvolumeclaims -A -o yaml > "$CLUSTER_BACKUP_DIR/persistentvolumeclaims.yaml"
kubectl get storageclasses -o yaml > "$CLUSTER_BACKUP_DIR/storageclasses.yaml"
echo "Backing up etcd..."
backup_etcd "$CLUSTER_BACKUP_DIR"
echo "Cluster resources backed up to $CLUSTER_BACKUP_DIR"
# Upload cluster backup to restic
echo "Uploading cluster backup to restic..."
restic --verbose --tag wild-cloud --tag cluster --tag "$(date +%Y-%m-%d)" backup "$CLUSTER_BACKUP_DIR"
echo "Cluster backup completed."
else
echo "Skipping cluster backup."
fi
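# The YAML dumps above are point-in-time exports rather than a full restore mechanism; during
# recovery, resources can be re-applied selectively, for example:
#   kubectl apply -f "$STAGING_DIR/cluster/persistentvolumes.yaml"
# Review the files first, since they include Secrets and cluster-scoped objects.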
echo "Backup completed: $BACKUP_DIR"

View File

@@ -52,7 +52,7 @@ else
fi
# Check for required configuration
if [ -z "$(get_current_config "cluster.nodes.talos.version")" ] || [ -z "$(get_current_config "cluster.nodes.talos.schematicId")" ]; then
if [ -z "$(wild-config "cluster.nodes.talos.version")" ] || [ -z "$(wild-config "cluster.nodes.talos.schematicId")" ]; then
print_header "Talos Configuration Required"
print_error "Missing required Talos configuration"
print_info "Please run 'wild-setup' first to configure your cluster"
@@ -69,8 +69,8 @@ fi
print_header "Talos Installer Image Generation and Asset Download"
# Get Talos version and schematic ID from config
TALOS_VERSION=$(get_current_config cluster.nodes.talos.version)
SCHEMATIC_ID=$(get_current_config cluster.nodes.talos.schematicId)
TALOS_VERSION=$(wild-config cluster.nodes.talos.version)
SCHEMATIC_ID=$(wild-config cluster.nodes.talos.schematicId)
print_info "Creating custom Talos installer image..."
print_info "Talos version: $TALOS_VERSION"

View File

@@ -5,7 +5,7 @@ set -o pipefail
# Usage function
usage() {
echo "Usage: wild-config <yaml_key_path>"
echo "Usage: wild-config [--check] <yaml_key_path>"
echo ""
echo "Read a value from \$WC_HOME/config.yaml using a YAML key path."
echo ""
@@ -13,18 +13,25 @@ usage() {
echo " wild-config 'cluster.name' # Get cluster name"
echo " wild-config 'apps.myapp.replicas' # Get app replicas count"
echo " wild-config 'services[0].name' # Get first service name"
echo " wild-config --check 'cluster.name' # Exit 1 if key doesn't exist"
echo ""
echo "Options:"
echo " --check Exit 1 if key doesn't exist (no output)"
echo " -h, --help Show this help message"
}
# Parse arguments
CHECK_MODE=false
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
exit 0
;;
--check)
CHECK_MODE=true
shift
;;
-*)
echo "Unknown option $1"
usage
@@ -61,17 +68,30 @@ fi
CONFIG_FILE="${WC_HOME}/config.yaml"
if [ ! -f "${CONFIG_FILE}" ]; then
if [ "${CHECK_MODE}" = true ]; then
exit 1
fi
echo "Error: config file not found at ${CONFIG_FILE}" >&2
exit 1
echo ""
exit 0
fi
# Use yq to extract the value from the YAML file
result=$(yq eval ".${KEY_PATH}" "${CONFIG_FILE}") 2>/dev/null
result=$(yq eval ".${KEY_PATH}" "${CONFIG_FILE}" 2>/dev/null || echo "null")
# Check if result is null (key not found)
if [ "${result}" = "null" ]; then
if [ "${CHECK_MODE}" = true ]; then
exit 1
fi
echo "Error: Key path '${KEY_PATH}' not found in ${CONFIG_FILE}" >&2
exit 1
echo ""
exit 0
fi
# In check mode, exit 0 if key exists (don't output value)
if [ "${CHECK_MODE}" = true ]; then
exit 0
fi
echo "${result}"

View File

@@ -68,85 +68,92 @@ fi
# Create setup bundle.
# Copy iPXE bootloader to ipxe-web from cached assets.
echo "Copying Talos PXE assets from cache..."
PXE_WEB_ROOT="${BUNDLE_DIR}/ipxe-web"
mkdir -p "${PXE_WEB_ROOT}/amd64"
cp "${DNSMASQ_SETUP_DIR}/boot.ipxe" "${PXE_WEB_ROOT}/boot.ipxe"
# The following was a completely fine process for making your dnsmasq server
# also serve PXE boot assets for the cluster. However, after using it for a bit,
# it seems to be more complexity for no additional benefit when the operators
# can just use USB keys.
# Get schematic ID from override or config
if [ -n "$SCHEMATIC_ID_OVERRIDE" ]; then
SCHEMATIC_ID="$SCHEMATIC_ID_OVERRIDE"
echo "Using schematic ID from command line: $SCHEMATIC_ID"
else
SCHEMATIC_ID=$(wild-config cluster.nodes.talos.schematicId)
if [ -z "$SCHEMATIC_ID" ] || [ "$SCHEMATIC_ID" = "null" ]; then
echo "Error: No schematic ID found in config"
echo "Please run 'wild-setup' first to configure your cluster"
echo "Or specify one with --schematic-id option"
exit 1
fi
echo "Using schematic ID from config: $SCHEMATIC_ID"
fi
## Setup PXE boot assets
# Define cache directories using new structure
CACHE_DIR="${WC_HOME}/.wildcloud"
SCHEMATIC_CACHE_DIR="${CACHE_DIR}/node-boot-assets/${SCHEMATIC_ID}"
PXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/pxe"
IPXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/ipxe"
# # Copy iPXE bootloader to ipxe-web from cached assets.
# echo "Copying Talos PXE assets from cache..."
# PXE_WEB_ROOT="${BUNDLE_DIR}/ipxe-web"
# mkdir -p "${PXE_WEB_ROOT}/amd64"
# cp "${DNSMASQ_SETUP_DIR}/boot.ipxe" "${PXE_WEB_ROOT}/boot.ipxe"
# Check if cached assets exist
KERNEL_CACHE_PATH="${PXE_CACHE_DIR}/amd64/vmlinuz"
INITRAMFS_CACHE_PATH="${PXE_CACHE_DIR}/amd64/initramfs.xz"
# # Get schematic ID from override or config
# if [ -n "$SCHEMATIC_ID_OVERRIDE" ]; then
# SCHEMATIC_ID="$SCHEMATIC_ID_OVERRIDE"
# echo "Using schematic ID from command line: $SCHEMATIC_ID"
# else
# SCHEMATIC_ID=$(wild-config cluster.nodes.talos.schematicId)
# if [ -z "$SCHEMATIC_ID" ] || [ "$SCHEMATIC_ID" = "null" ]; then
# echo "Error: No schematic ID found in config"
# echo "Please run 'wild-setup' first to configure your cluster"
# echo "Or specify one with --schematic-id option"
# exit 1
# fi
# echo "Using schematic ID from config: $SCHEMATIC_ID"
# fi
if [ ! -f "${KERNEL_CACHE_PATH}" ] || [ ! -f "${INITRAMFS_CACHE_PATH}" ]; then
echo "Error: Talos PXE assets not found in cache for schematic ID: ${SCHEMATIC_ID}"
echo "Expected locations:"
echo " Kernel: ${KERNEL_CACHE_PATH}"
echo " Initramfs: ${INITRAMFS_CACHE_PATH}"
echo ""
echo "Please run 'wild-cluster-node-boot-assets-download' first to download and cache the assets."
exit 1
fi
# # Define cache directories using new structure
# CACHE_DIR="${WC_HOME}/.wildcloud"
# SCHEMATIC_CACHE_DIR="${CACHE_DIR}/node-boot-assets/${SCHEMATIC_ID}"
# PXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/pxe"
# IPXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/ipxe"
# Copy Talos PXE assets from cache
echo "Copying Talos kernel from cache..."
cp "${KERNEL_CACHE_PATH}" "${PXE_WEB_ROOT}/amd64/vmlinuz"
echo "✅ Talos kernel copied from cache"
# # Check if cached assets exist
# KERNEL_CACHE_PATH="${PXE_CACHE_DIR}/amd64/vmlinuz"
# INITRAMFS_CACHE_PATH="${PXE_CACHE_DIR}/amd64/initramfs.xz"
echo "Copying Talos initramfs from cache..."
cp "${INITRAMFS_CACHE_PATH}" "${PXE_WEB_ROOT}/amd64/initramfs.xz"
echo "✅ Talos initramfs copied from cache"
# if [ ! -f "${KERNEL_CACHE_PATH}" ] || [ ! -f "${INITRAMFS_CACHE_PATH}" ]; then
# echo "Error: Talos PXE assets not found in cache for schematic ID: ${SCHEMATIC_ID}"
# echo "Expected locations:"
# echo " Kernel: ${KERNEL_CACHE_PATH}"
# echo " Initramfs: ${INITRAMFS_CACHE_PATH}"
# echo ""
# echo "Please run 'wild-cluster-node-boot-assets-download' first to download and cache the assets."
# exit 1
# fi
# Copy iPXE bootloader files from cache
echo "Copying iPXE bootloader files from cache..."
FTPD_DIR="${BUNDLE_DIR}/pxe-ftpd"
mkdir -p "${FTPD_DIR}"
# # Copy Talos PXE assets from cache
# echo "Copying Talos kernel from cache..."
# cp "${KERNEL_CACHE_PATH}" "${PXE_WEB_ROOT}/amd64/vmlinuz"
# echo "✅ Talos kernel copied from cache"
# Check if iPXE assets exist in cache
IPXE_EFI_CACHE="${IPXE_CACHE_DIR}/ipxe.efi"
IPXE_BIOS_CACHE="${IPXE_CACHE_DIR}/undionly.kpxe"
IPXE_ARM64_CACHE="${IPXE_CACHE_DIR}/ipxe-arm64.efi"
# echo "Copying Talos initramfs from cache..."
# cp "${INITRAMFS_CACHE_PATH}" "${PXE_WEB_ROOT}/amd64/initramfs.xz"
# echo "✅ Talos initramfs copied from cache"
if [ ! -f "${IPXE_EFI_CACHE}" ] || [ ! -f "${IPXE_BIOS_CACHE}" ] || [ ! -f "${IPXE_ARM64_CACHE}" ]; then
echo "Error: iPXE bootloader assets not found in cache for schematic ID: ${SCHEMATIC_ID}"
echo "Expected locations:"
echo " iPXE EFI: ${IPXE_EFI_CACHE}"
echo " iPXE BIOS: ${IPXE_BIOS_CACHE}"
echo " iPXE ARM64: ${IPXE_ARM64_CACHE}"
echo ""
echo "Please run 'wild-cluster-node-boot-assets-download' first to download and cache the assets."
exit 1
fi
# # Copy iPXE bootloader files from cache
# echo "Copying iPXE bootloader files from cache..."
# FTPD_DIR="${BUNDLE_DIR}/pxe-ftpd"
# mkdir -p "${FTPD_DIR}"
# Copy iPXE assets from cache
cp "${IPXE_EFI_CACHE}" "${FTPD_DIR}/ipxe.efi"
cp "${IPXE_BIOS_CACHE}" "${FTPD_DIR}/undionly.kpxe"
cp "${IPXE_ARM64_CACHE}" "${FTPD_DIR}/ipxe-arm64.efi"
echo "✅ iPXE bootloader files copied from cache"
# # Check if iPXE assets exist in cache
# IPXE_EFI_CACHE="${IPXE_CACHE_DIR}/ipxe.efi"
# IPXE_BIOS_CACHE="${IPXE_CACHE_DIR}/undionly.kpxe"
# IPXE_ARM64_CACHE="${IPXE_CACHE_DIR}/ipxe-arm64.efi"
# if [ ! -f "${IPXE_EFI_CACHE}" ] || [ ! -f "${IPXE_BIOS_CACHE}" ] || [ ! -f "${IPXE_ARM64_CACHE}" ]; then
# echo "Error: iPXE bootloader assets not found in cache for schematic ID: ${SCHEMATIC_ID}"
# echo "Expected locations:"
# echo " iPXE EFI: ${IPXE_EFI_CACHE}"
# echo " iPXE BIOS: ${IPXE_BIOS_CACHE}"
# echo " iPXE ARM64: ${IPXE_ARM64_CACHE}"
# echo ""
# echo "Please run 'wild-cluster-node-boot-assets-download' first to download and cache the assets."
# exit 1
# fi
# # Copy iPXE assets from cache
# cp "${IPXE_EFI_CACHE}" "${FTPD_DIR}/ipxe.efi"
# cp "${IPXE_BIOS_CACHE}" "${FTPD_DIR}/undionly.kpxe"
# cp "${IPXE_ARM64_CACHE}" "${FTPD_DIR}/ipxe-arm64.efi"
# echo "✅ iPXE bootloader files copied from cache"
cp "${DNSMASQ_SETUP_DIR}/nginx.conf" "${BUNDLE_DIR}/nginx.conf"
# cp "${DNSMASQ_SETUP_DIR}/nginx.conf" "${BUNDLE_DIR}/nginx.conf"
cp "${DNSMASQ_SETUP_DIR}/dnsmasq.conf" "${BUNDLE_DIR}/dnsmasq.conf"
cp "${DNSMASQ_SETUP_DIR}/setup.sh" "${BUNDLE_DIR}/setup.sh"

View File

@@ -124,14 +124,14 @@ fi
# Discover available disks
echo "Discovering available disks..." >&2
if [ "$TALOS_MODE" = "insecure" ]; then
AVAILABLE_DISKS_RAW=$(talosctl -n "$NODE_IP" get disks --insecure -o json 2>/dev/null | \
jq -s -r '.[] | select(.spec.size > 10000000000) | .metadata.id')
DISKS_JSON=$(talosctl -n "$NODE_IP" get disks --insecure -o json 2>/dev/null | \
jq -s '[.[] | select(.spec.size > 10000000000) | {path: ("/dev/" + .metadata.id), size: .spec.size}]')
else
AVAILABLE_DISKS_RAW=$(talosctl -n "$NODE_IP" get disks -o json 2>/dev/null | \
jq -s -r '.[] | select(.spec.size > 10000000000) | .metadata.id')
DISKS_JSON=$(talosctl -n "$NODE_IP" get disks -o json 2>/dev/null | \
jq -s '[.[] | select(.spec.size > 10000000000) | {path: ("/dev/" + .metadata.id), size: .spec.size}]')
fi
if [ -z "$AVAILABLE_DISKS_RAW" ]; then
if [ "$(echo "$DISKS_JSON" | jq 'length')" -eq 0 ]; then
echo "Error: No suitable disks found (must be >10GB)" >&2
echo "Available disks:" >&2
if [ "$TALOS_MODE" = "insecure" ]; then
@@ -142,11 +142,11 @@ if [ -z "$AVAILABLE_DISKS_RAW" ]; then
exit 1
fi
# Convert to JSON array
AVAILABLE_DISKS=$(echo "$AVAILABLE_DISKS_RAW" | jq -R -s 'split("\n") | map(select(length > 0)) | map("/dev/" + .)')
# Use the disks with size info directly
AVAILABLE_DISKS="$DISKS_JSON"
# Select the first disk as default (largest first)
SELECTED_DISK=$(echo "$AVAILABLE_DISKS" | jq -r '.[0]')
# Select the first disk as default
SELECTED_DISK=$(echo "$AVAILABLE_DISKS" | jq -r '.[0].path')
echo "✅ Discovered $(echo "$AVAILABLE_DISKS" | jq -r 'length') suitable disks" >&2
echo "✅ Selected disk: $SELECTED_DISK" >&2

View File

@@ -5,7 +5,7 @@ set -o pipefail
# Usage function
usage() {
echo "Usage: wild-secret <yaml_key_path>"
echo "Usage: wild-secret [--check] <yaml_key_path>"
echo ""
echo "Read a value from ./secrets.yaml using a YAML key path."
echo ""
@@ -13,18 +13,25 @@ usage() {
echo " wild-secret 'database.password' # Get database password"
echo " wild-secret 'api.keys.github' # Get GitHub API key"
echo " wild-secret 'credentials[0].token' # Get first credential token"
echo " wild-secret --check 'api.keys.github' # Exit 1 if key doesn't exist"
echo ""
echo "Options:"
echo " --check Exit 1 if key doesn't exist (no output)"
echo " -h, --help Show this help message"
}
# Parse arguments
CHECK_MODE=false
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
exit 0
;;
--check)
CHECK_MODE=true
shift
;;
-*)
echo "Unknown option $1"
usage
@@ -61,17 +68,30 @@ fi
SECRETS_FILE="${WC_HOME}/secrets.yaml"
if [ ! -f "${SECRETS_FILE}" ]; then
if [ "${CHECK_MODE}" = true ]; then
exit 1
fi
echo "Error: secrets file not found at ${SECRETS_FILE}" >&2
exit 1
echo ""
exit 0
fi
# Use yq to extract the value from the YAML file
result=$(yq eval ".${KEY_PATH}" "${SECRETS_FILE}" 2>/dev/null)
result=$(yq eval ".${KEY_PATH}" "${SECRETS_FILE}" 2>/dev/null || echo "null")
# Check if result is null (key not found)
if [ "${result}" = "null" ]; then
if [ "${CHECK_MODE}" = true ]; then
exit 1
fi
echo "Error: Key path '${KEY_PATH}' not found in ${SECRETS_FILE}" >&2
exit 1
echo ""
exit 0
fi
# In check mode, exit 0 if key exists (don't output value)
if [ "${CHECK_MODE}" = true ]; then
exit 0
fi
echo "${result}"

View File

@@ -52,9 +52,8 @@ if [ -z "${KEY_PATH}" ]; then
fi
if [ -z "${VALUE}" ]; then
echo "Error: Value is required"
usage
exit 1
VALUE=$(openssl rand -base64 32)
echo "No value provided. Generated random value: ${VALUE}"
fi
# Initialize Wild Cloud environment

View File

@@ -123,7 +123,7 @@ prompt_if_unset_config "cluster.ipAddressPool" "MetalLB IP address pool" "${SUBN
ip_pool=$(wild-config "cluster.ipAddressPool")
# Load balancer IP (automatically set to first address in the pool if not set)
current_lb_ip=$(get_current_config "cluster.loadBalancerIp")
current_lb_ip=$(wild-config "cluster.loadBalancerIp")
if [ -z "$current_lb_ip" ] || [ "$current_lb_ip" = "null" ]; then
lb_ip=$(echo "${ip_pool}" | cut -d'-' -f1)
wild-config-set "cluster.loadBalancerIp" "${lb_ip}"
@@ -135,14 +135,14 @@ prompt_if_unset_config "cluster.nodes.talos.version" "Talos version" "v1.10.4"
talos_version=$(wild-config "cluster.nodes.talos.version")
# Talos schematic ID
current_schematic_id=$(get_current_config "cluster.nodes.talos.schematicId")
current_schematic_id=$(wild-config "cluster.nodes.talos.schematicId")
if [ -z "$current_schematic_id" ] || [ "$current_schematic_id" = "null" ]; then
echo ""
print_info "Get your Talos schematic ID from: https://factory.talos.dev/"
print_info "This customizes Talos with the drivers needed for your hardware."
# Use current schematic ID from config as default
default_schematic_id=$(get_current_config "cluster.nodes.talos.schematicId")
default_schematic_id=$(wild-config "cluster.nodes.talos.schematicId")
if [ -n "$default_schematic_id" ] && [ "$default_schematic_id" != "null" ]; then
print_info "Using schematic ID from config for Talos $talos_version"
else
@@ -154,7 +154,7 @@ if [ -z "$current_schematic_id" ] || [ "$current_schematic_id" = "null" ]; then
fi
# External DNS
cluster_name=$(get_current_config "cluster.name")
cluster_name=$(wild-config "cluster.name")
prompt_if_unset_config "cluster.externalDns.ownerId" "External DNS owner ID" "external-dns-${cluster_name}"
@@ -188,18 +188,18 @@ if [ "${SKIP_HARDWARE}" = false ]; then
print_info "Registering control plane node: $NODE_NAME (IP: $TARGET_IP)"
# Initialize the node in cluster.nodes.active if not already present
if [ -z "$(get_current_config "cluster.nodes.active.\"${NODE_NAME}\".role")" ]; then
if [ -z "$(wild-config "cluster.nodes.active.\"${NODE_NAME}\".role")" ]; then
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".role" "controlplane"
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".targetIp" "$TARGET_IP"
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".currentIp" "$TARGET_IP"
fi
# Check if node is already configured
existing_interface=$(get_current_config "cluster.nodes.active.\"${NODE_NAME}\".interface")
existing_interface=$(wild-config "cluster.nodes.active.\"${NODE_NAME}\".interface")
if [ -n "$existing_interface" ] && [ "$existing_interface" != "null" ]; then
print_success "Node $NODE_NAME already configured"
print_info " - Interface: $existing_interface"
print_info " - Disk: $(get_current_config "cluster.nodes.active.\"${NODE_NAME}\".disk")"
print_info " - Disk: $(wild-config "cluster.nodes.active.\"${NODE_NAME}\".disk")"
# Generate machine config patch for this node if necessary.
NODE_SETUP_DIR="${WC_HOME}/setup/cluster-nodes"
@@ -260,7 +260,7 @@ if [ "${SKIP_HARDWARE}" = false ]; then
# Parse JSON response
INTERFACE=$(echo "$NODE_INFO" | jq -r '.interface')
SELECTED_DISK=$(echo "$NODE_INFO" | jq -r '.selected_disk')
AVAILABLE_DISKS=$(echo "$NODE_INFO" | jq -r '.disks | join(", ")')
AVAILABLE_DISKS=$(echo "$NODE_INFO" | jq -r '.disks[] | "\(.path) (\((.size / 1000000000) | floor)GB)"' | paste -sd, -)
print_success "Hardware detected:"
print_info " - Interface: $INTERFACE"
@@ -272,9 +272,9 @@ if [ "${SKIP_HARDWARE}" = false ]; then
read -p "Use selected disk '$SELECTED_DISK'? (Y/n): " -r use_disk
if [[ $use_disk =~ ^[Nn]$ ]]; then
echo "Available disks:"
echo "$NODE_INFO" | jq -r '.disks[]' | nl -w2 -s') '
echo "$NODE_INFO" | jq -r '.disks[] | "\(.path) (\((.size / 1000000000) | floor)GB)"' | nl -w2 -s') '
read -p "Enter disk number: " -r disk_num
SELECTED_DISK=$(echo "$NODE_INFO" | jq -r ".disks[$((disk_num-1))]")
SELECTED_DISK=$(echo "$NODE_INFO" | jq -r ".disks[$((disk_num-1))].path")
if [ "$SELECTED_DISK" = "null" ] || [ -z "$SELECTED_DISK" ]; then
print_error "Invalid disk selection"
continue
@@ -288,8 +288,8 @@ if [ "${SKIP_HARDWARE}" = false ]; then
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".disk" "$SELECTED_DISK"
# Copy current Talos version and schematic ID to this node
current_talos_version=$(get_current_config "cluster.nodes.talos.version")
current_schematic_id=$(get_current_config "cluster.nodes.talos.schematicId")
current_talos_version=$(wild-config "cluster.nodes.talos.version")
current_schematic_id=$(wild-config "cluster.nodes.talos.schematicId")
if [ -n "$current_talos_version" ] && [ "$current_talos_version" != "null" ]; then
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".version" "$current_talos_version"
fi
@@ -359,6 +359,11 @@ if [ "${SKIP_HARDWARE}" = false ]; then
read -p "Do you want to register a worker node? (y/N): " -r register_worker
if [[ $register_worker =~ ^[Yy]$ ]]; then
# Find first available worker number
while [ -n "$(wild-config "cluster.nodes.active.\"${HOSTNAME_PREFIX}worker-${WORKER_COUNT}\".role" 2>/dev/null)" ] && [ "$(wild-config "cluster.nodes.active.\"${HOSTNAME_PREFIX}worker-${WORKER_COUNT}\".role" 2>/dev/null)" != "null" ]; do
WORKER_COUNT=$((WORKER_COUNT + 1))
done
NODE_NAME="${HOSTNAME_PREFIX}worker-${WORKER_COUNT}"
read -p "Enter current IP for worker node $NODE_NAME: " -r WORKER_IP
@@ -388,7 +393,7 @@ if [ "${SKIP_HARDWARE}" = false ]; then
# Parse JSON response
INTERFACE=$(echo "$WORKER_INFO" | jq -r '.interface')
SELECTED_DISK=$(echo "$WORKER_INFO" | jq -r '.selected_disk')
AVAILABLE_DISKS=$(echo "$WORKER_INFO" | jq -r '.disks | join(", ")')
AVAILABLE_DISKS=$(echo "$WORKER_INFO" | jq -r '.disks[] | "\(.path) (\((.size / 1000000000) | floor)GB)"' | paste -sd, -)
print_success "Hardware detected for worker node $NODE_NAME:"
print_info " - Interface: $INTERFACE"
@@ -400,9 +405,9 @@ if [ "${SKIP_HARDWARE}" = false ]; then
read -p "Use selected disk '$SELECTED_DISK'? (Y/n): " -r use_disk
if [[ $use_disk =~ ^[Nn]$ ]]; then
echo "Available disks:"
echo "$WORKER_INFO" | jq -r '.disks[]' | nl -w2 -s') '
echo "$WORKER_INFO" | jq -r '.disks[] | "\(.path) (\((.size / 1000000000) | floor)GB)"' | nl -w2 -s') '
read -p "Enter disk number: " -r disk_num
SELECTED_DISK=$(echo "$WORKER_INFO" | jq -r ".disks[$((disk_num-1))]")
SELECTED_DISK=$(echo "$WORKER_INFO" | jq -r ".disks[$((disk_num-1))].path")
if [ "$SELECTED_DISK" = "null" ] || [ -z "$SELECTED_DISK" ]; then
print_error "Invalid disk selection"
continue
@@ -420,8 +425,8 @@ if [ "${SKIP_HARDWARE}" = false ]; then
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".disk" "$SELECTED_DISK"
# Copy current Talos version and schematic ID to this node
current_talos_version=$(get_current_config "cluster.nodes.talos.version")
current_schematic_id=$(get_current_config "cluster.nodes.talos.schematicId")
current_talos_version=$(wild-config "cluster.nodes.talos.version")
current_schematic_id=$(wild-config "cluster.nodes.talos.schematicId")
if [ -n "$current_talos_version" ] && [ "$current_talos_version" != "null" ]; then
wild-config-set "cluster.nodes.active.\"${NODE_NAME}\".version" "$current_talos_version"
fi

View File

@@ -49,7 +49,6 @@ while [[ $# -gt 0 ]]; do
done
# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
echo "WC_ROOT is not set."
exit 1
@@ -57,130 +56,68 @@ else
source "${WC_ROOT}/scripts/common.sh"
fi
TEMPLATE_DIR="${WC_ROOT}/setup/home-scaffold"
# Check if cloud already exists
if [ -d ".wildcloud" ]; then
echo "Wild Cloud already exists in this directory."
echo ""
read -p "Do you want to update cloud files? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
UPDATE=true
echo "Updating cloud files..."
else
echo "Skipping cloud update."
echo ""
fi
else
# Check if current directory is empty for new cloud
if [ "${UPDATE}" = false ]; then
# Check if directory has any files (including hidden files, excluding . and .. and .git)
if [ -n "$(find . -maxdepth 1 -name ".*" -o -name "*" | grep -v "^\.$" | grep -v "^\.\.$" | grep -v "^\./\.git$" | head -1)" ]; then
echo "Error: Current directory is not empty"
echo "Use --update flag to overwrite existing cloud files while preserving other files"
exit 1
fi
fi
echo "Initializing Wild Cloud in $(pwd)"
UPDATE=false
fi
# Initialize cloud files if needed
if [ ! -d ".wildcloud" ] || [ "${UPDATE}" = true ]; then
if [ "${UPDATE}" = true ]; then
echo "Updating cloud files (preserving existing custom files)"
else
echo "Creating cloud files"
fi
# Function to copy files and directories
copy_cloud_files() {
local src_dir="$1"
local dest_dir="$2"
# Create destination directory if it doesn't exist
mkdir -p "${dest_dir}"
# Copy directory structure
find "${src_dir}" -type d | while read -r src_subdir; do
rel_path="${src_subdir#${src_dir}}"
rel_path="${rel_path#/}" # Remove leading slash if present
if [ -n "${rel_path}" ]; then
mkdir -p "${dest_dir}/${rel_path}"
fi
done
# Copy files
find "${src_dir}" -type f | while read -r src_file; do
rel_path="${src_file#${src_dir}}"
rel_path="${rel_path#/}" # Remove leading slash if present
dest_file="${dest_dir}/${rel_path}"
# Ensure destination directory exists
dest_file_dir=$(dirname "${dest_file}")
mkdir -p "${dest_file_dir}"
if [ "${UPDATE}" = true ] && [ -f "${dest_file}" ]; then
echo "Updating: ${rel_path}"
else
echo "Creating: ${rel_path}"
fi
cp "${src_file}" "${dest_file}"
done
}
# Copy cloud files to current directory
copy_cloud_files "${TEMPLATE_DIR}" "."
echo ""
echo "Wild Cloud initialized successfully!"
echo ""
# Initialize .wildcloud directory if it doesn't exist.
if [ ! -d ".wildcloud" ]; then
mkdir -p ".wildcloud"
UPDATE=true
echo "Created '.wildcloud' directory."
fi
# =============================================================================
# BASIC CONFIGURATION
# =============================================================================
# Basic Information
prompt_if_unset_config "operator.email" "Your email address (for Let's Encrypt certificates)" ""
# Domain Configuration
prompt_if_unset_config "operator.email" "Your email address" ""
prompt_if_unset_config "cloud.baseDomain" "Your base domain name (e.g., example.com)" ""
# Get base domain to use as default for cloud domain
base_domain=$(wild-config "cloud.baseDomain")
prompt_if_unset_config "cloud.domain" "Your public cloud domain" "cloud.${base_domain}"
# Get cloud domain to use as default for internal domain
domain=$(wild-config "cloud.domain")
prompt_if_unset_config "cloud.internalDomain" "Your internal cloud domain" "internal.${domain}"
prompt_if_unset_config "cloud.backup.root" "Existing path to save backups to" ""
# Derive cluster name from domain if not already set
current_cluster_name=$(get_current_config "cluster.name")
current_cluster_name=$(wild-config "cluster.name")
if [ -z "$current_cluster_name" ] || [ "$current_cluster_name" = "null" ]; then
cluster_name=$(echo "${domain}" | tr '.' '-' | tr '[:upper:]' '[:lower:]')
wild-config-set "cluster.name" "${cluster_name}"
print_info "Set cluster name to: ${cluster_name}"
fi
# Check if current directory is empty for new cloud
if [ "${UPDATE}" = false ]; then
# Check if directory has any files (including hidden files, excluding . and .. and .git)
if [ -n "$(find . -maxdepth 1 -name ".*" -o -name "*" | grep -v "^\.$" | grep -v "^\.\.$" | grep -v "^\./\.git$" | grep -v "^\./\.wildcloud$"| head -1)" ]; then
echo "Warning: Current directory is not empty."
read -p "Do you want to overwrite existing files? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
confirm="yes"
else
confirm="no"
fi
if [ "$confirm" != "yes" ]; then
echo "Aborting setup. Please run this script in an empty directory."
exit 1
fi
fi
fi
# Copy cloud files to current directory only if they do not exist.
# Ignore files that already exist.
SRC_DIR="${WC_ROOT}/setup/home-scaffold"
rsync -av --ignore-existing --exclude=".git" "${SRC_DIR}/" ./ > /dev/null
print_success "Ready for cluster setup!"
# =============================================================================
# COMPLETION
# =============================================================================
print_header "Wild Cloud Scaffold Setup Complete!"
print_success "Cloud scaffold initialized successfully!"
echo ""
print_info "Configuration files:"
echo " - ${WC_HOME}/config.yaml"
echo " - ${WC_HOME}/secrets.yaml"
print_header "Wild Cloud Scaffold Setup Complete! Welcome to Wild Cloud!"
echo ""
print_info "Next steps:"
echo "Next steps:"
echo " 1. Set up your Kubernetes cluster:"
echo " wild-setup-cluster"
echo ""
@@ -190,4 +127,3 @@ echo ""
echo "Or run the complete setup:"
echo " wild-setup"
print_success "Ready for cluster setup!"

View File

@@ -59,7 +59,7 @@ else
fi
# Check cluster configuration
if [ -z "$(get_current_config "cluster.name")" ]; then
if [ -z "$(wild-config "cluster.name")" ]; then
print_error "Cluster configuration is missing"
print_info "Run 'wild-setup-cluster' first to configure cluster settings"
exit 1

Some files were not shown because too many files have changed in this diff Show More