Compare commits

40 commits:

8a569a1720
af60d0c744
61460b63a3
9f302e0e29
f83763b070
c9ab5a31d8
1aa9f1050d
3b8b6de338
fd1ba7fbe0
5edf14695f
9a7b2fec72
9321e54bce
73c0de1f67
3a9bd7c6b3
476f319acc
6e3b50c217
c2f8f7e6a0
22d4dc15ce
4966bd05f2
6be0260673
97e25a5f08
360069a8e8
33dce80116
9ae248d5f7
333e215b3b
b69344fae6
03c631d67f
f4f2bfae1c
fef238db6d
b1b14d8d80
4c14744a32
5ca8c010e5
22537da98e
f652a82319
c7c71a203f
7bbc4cc52d
d3260d352a
800f6da0d9
39b174e857
23f1cb7b32
```
@@ -10,6 +10,7 @@ dnsmasq
envsubst
externaldns
ftpd
gitea
glddns
gomplate
IMMICH
@@ -34,6 +35,7 @@ OVERWRITEWEBROOT
PGDATA
pgvector
rcode
restic
SAMEORIGIN
traefik
USEPATH
```
**.gitignore** (vendored, 8 changes):

```
@@ -7,9 +7,9 @@ secrets.yaml
CLAUDE.md
**/.claude/settings.local.json

# Wild Cloud
**/config/secrets.env
**/config/config.env

# Test directory - ignore temporary files

# Test directory
test/tmp
test/test-cloud/*
!test/test-cloud/README.md
```
@@ -2,7 +2,7 @@

Welcome! So excited you're here!

_This project is massively in progress. It's not ready to be used yet (even though I am using it as I develop it). This is published publicly for transparency. If you want to help out, please get in touch._
_This project is massively in progress. It's not ready to be used yet (even though I am using it as I develop it). This is published publicly for transparency. If you want to help out, please [get in touch](https://forum.civilsociety.dev/c/wild-cloud/5)._

## Why Build Your Own Cloud?
@@ -130,9 +130,77 @@ To reference operator configuration in the configuration files, use gomplate variables

When `wild-app-add` is run, the app's Kustomize files will be compiled with the operator's Wild Cloud configuration and secrets, resulting in standard Kustomize files being placed in the Wild Cloud home directory.
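For example, an app template might interpolate the operator's domain wherever it is needed — a minimal sketch using the same `{{ .cloud.domain }}` path that appears throughout the manifests below:

```yaml
# manifest.yaml fragment: gomplate fills in the operator's configured domain
domain: myapp.{{ .cloud.domain }} # "myapp" is a hypothetical app name
```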
#### External DNS Configuration

Wild Cloud apps use external-dns annotations in their ingress resources to automatically manage DNS records:

- `external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}` - Creates a CNAME record pointing the app subdomain to the main cluster domain (e.g., `ghost.cloud.payne.io` → `cloud.payne.io`)
- `external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"` - Disables Cloudflare proxy for direct DNS resolution
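In an app's ingress these annotations sit under `metadata.annotations` — a minimal sketch mirroring the Discourse and Ghost ingress manifests later in this changeset:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp # hypothetical app name
  annotations:
    # CNAME the app subdomain to the cluster's main domain
    external-dns.alpha.kubernetes.io/target: "{{ .cloud.domain }}"
    # resolve directly to the cluster instead of through Cloudflare's proxy
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  rules:
    - host: "{{ .apps.myapp.domain }}"
```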
#### Database Initialization Jobs

Apps that rely on PostgreSQL or MySQL databases typically need a database initialization job to create the required database and user before the main application starts. These jobs:

- Run as Kubernetes Jobs that execute once and complete
- Create the application database if it doesn't exist
- Create the application user with appropriate permissions
- Should be included in the app's `kustomization.yaml` resources list
- Use the same database connection settings as the main application

Examples of apps with db-init jobs: `gitea`, `codimd`, `immich`, `openproject`
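The shape of such a job, reduced to its essentials — a sketch only; the complete versions ship in the `db-init-job.yaml` files added in this changeset:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-db-init # hypothetical app name
spec:
  template:
    spec:
      restartPolicy: OnFailure # retry until the database becomes reachable
      containers:
        - name: db-init
          image: postgres:16-alpine
          # PGHOST/PGUSER/PGPASSWORD and DB_NAME would come from app config and secrets
          command: ["/bin/sh", "-c"]
          args:
            - |
              # create the database only if it does not already exist
              psql -lqt | cut -d \| -f 1 | grep -qw "$DB_NAME" || createdb "$DB_NAME"
```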
##### Database URL Configuration

**Important:** When apps require database URLs with embedded credentials, always use a separate `dbUrl` secret instead of trying to construct the URL with environment variable substitution in Kustomize templates.

❌ **Wrong** (Kustomize cannot process runtime env var substitution):

```yaml
- name: DB_URL
  value: "postgresql://user:$(DB_PASSWORD)@host/db"
```

✅ **Correct** (Use a dedicated secret):

```yaml
- name: DB_URL
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: apps.appname.dbUrl
```

Add `apps.appname.dbUrl` to the manifest's `requiredSecrets` and the `wild-app-add` script will generate the complete URL with embedded credentials.
##### Security Context Requirements

Pods must comply with Pod Security Standards. All pods should include proper security contexts to avoid deployment warnings:

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999 # Use appropriate non-root user ID
        runAsGroup: 999 # Use appropriate group ID
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: container-name
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false # Set to true when possible
```

For PostgreSQL init jobs, use `runAsUser: 999` (postgres user). For other database types, use the appropriate non-root user ID for that database container.
#### Secrets

Secrets are managed in the `secrets.yaml` file in the Wild Cloud home directory. The app's `manifest.yaml` should list any required secrets under `requiredSecrets`. When the app is added, default secret values will be generated and stored in the `secrets.yaml` file. Secrets are always stored and referenced in the `apps.<app-name>.<secret-name>` yaml path. When `wild-app-deploy` is run, a Secret resource will be created in the Kubernetes cluster with the name `<app-name>-secrets`, containing all secrets defined in the manifest's `requiredSecrets` key. These secrets can then be referenced in the app's Kustomize files using a `secretKeyRef`. For example, to mount a secret in an environment variable, you would use:
Secrets are managed in the `secrets.yaml` file in the Wild Cloud home directory. The app's `manifest.yaml` should list any required secrets under `requiredSecrets`. When the app is added, default secret values will be generated and stored in the `secrets.yaml` file. Secrets are always stored and referenced in the `apps.<app-name>.<secret-name>` yaml path. When `wild-app-deploy` is run, a Secret resource will be created in the Kubernetes cluster with the name `<app-name>-secrets`, containing all secrets defined in the manifest's `requiredSecrets` key. These secrets can then be referenced in the app's Kustomize files using a `secretKeyRef`.

**Important:** Always use the full dotted path from the manifest as the secret key, not just the last segment. For example, to mount a secret in an environment variable, you would use:

@@ -140,9 +208,11 @@ env:

```yaml
env:
  valueFrom:
    secretKeyRef:
      name: immich-secrets
      key: dbPassword
      key: apps.immich.dbPassword # Use full dotted path, not just "dbPassword"
```

This approach prevents naming conflicts between apps and makes secret keys more descriptive and consistent with the `secrets.yaml` structure.

`secrets.yaml` files should not be checked in to a git repository and are ignored by default in Wild Cloud home directories. Checked-in Kustomize files should only reference secrets, not compile them.
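For reference, the entries in `secrets.yaml` nest under the same dotted paths — a sketch with hypothetical values:

```yaml
apps:
  immich:
    dbPassword: "generated-default-value" # hypothetical value
  postgres:
    password: "generated-default-value"
```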
## App Lifecycle

@@ -162,7 +232,11 @@ If you would like to contribute an app to the Wild Cloud, issue a pull request w

### Converting from Helm Charts

Wild Cloud apps use Kustomize as kustomize files are simpler, more transparent, and easier to manage in a Git repository than Helm charts. If you have a Helm chart that you want to convert to a Wild Cloud app, the following example steps can simplify the process for you:
Wild Cloud apps use Kustomize because Kustomize files are simpler, more transparent, and easier to manage in a Git repository than Helm charts.

IMPORTANT! If an official Helm chart is available for an app, it is recommended to convert that chart to a Wild Cloud app rather than creating a new app from scratch.

If you have a Helm chart that you want to convert to a Wild Cloud app, the following example steps can simplify the process for you:

```bash
helm fetch --untar --untardir charts nginx-stable/nginx-ingress
```

@@ -187,3 +261,15 @@ After running these commands against your own Helm chart, you will have a Kustom

- Use `component: web`, `component: worker`, etc. in selectors and pod template labels
- Let Kustomize handle the common labels (`app`, `managedBy`, `partOf`) automatically
- Remove any Helm-specific labels from the Kustomize files, as Wild Cloud apps do not use Helm labels
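The common labels come from each app's `kustomization.yaml` — a sketch matching the kustomizations added in this changeset:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: myapp # hypothetical app name
labels:
  - includeSelectors: true
    pairs:
      app: myapp
      managedBy: kustomize
      partOf: wild-cloud
```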
## Notice: Third-Party Software

The Kubernetes manifests and Kustomize files in this directory are designed to deploy **third-party software**.

Unless otherwise stated, the software deployed by these manifests **is not authored or maintained** by this project. All copyrights, licenses, and responsibilities for that software remain with the respective upstream authors.

These files are provided solely for convenience and automation. Users are responsible for reviewing and complying with the licenses of the software they deploy.

This project is licensed under the GNU AGPLv3 or later, but this license does **not apply** to the third-party software being deployed.

See individual deployment directories for upstream project links and container sources.
**apps/discourse/configmap.yaml** (new file, +31 lines):

```yaml
---
# Source: discourse/templates/configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: discourse
  namespace: discourse
data:
  DISCOURSE_HOSTNAME: "{{ .apps.discourse.domain }}"
  DISCOURSE_SKIP_INSTALL: "no"
  DISCOURSE_SITE_NAME: "{{ .apps.discourse.siteName }}"
  DISCOURSE_USERNAME: "{{ .apps.discourse.adminUsername }}"
  DISCOURSE_EMAIL: "{{ .apps.discourse.adminEmail }}"

  DISCOURSE_REDIS_HOST: "{{ .apps.discourse.redisHostname }}"
  DISCOURSE_REDIS_PORT_NUMBER: "6379"

  DISCOURSE_DATABASE_HOST: "{{ .apps.discourse.dbHostname }}"
  DISCOURSE_DATABASE_PORT_NUMBER: "5432"
  DISCOURSE_DATABASE_NAME: "{{ .apps.discourse.dbName }}"
  DISCOURSE_DATABASE_USER: "{{ .apps.discourse.dbUsername }}"

  DISCOURSE_SMTP_HOST: "{{ .apps.discourse.smtp.host }}"
  DISCOURSE_SMTP_PORT: "{{ .apps.discourse.smtp.port }}"
  DISCOURSE_SMTP_USER: "{{ .apps.discourse.smtp.user }}"
  DISCOURSE_SMTP_PROTOCOL: "tls"
  DISCOURSE_SMTP_AUTH: "login"

  # DISCOURSE_PRECOMPILE_ASSETS: "false"
  # DISCOURSE_SKIP_INSTALL: "no"
  # DISCOURSE_SKIP_BOOTSTRAP: "yes"
```
**apps/discourse/db-init-job.yaml** (new file, +77 lines):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: discourse-db-init
  namespace: discourse
spec:
  template:
    metadata:
      labels:
        component: db-init
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      restartPolicy: OnFailure
      containers:
        - name: db-init
          image: postgres:16-alpine
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
          env:
            - name: PGHOST
              value: "{{ .apps.discourse.dbHostname }}"
            - name: PGPORT
              value: "5432"
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.postgres.password
            - name: DISCOURSE_DB_USER
              value: "{{ .apps.discourse.dbUsername }}"
            - name: DISCOURSE_DB_NAME
              value: "{{ .apps.discourse.dbName }}"
            - name: DISCOURSE_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.dbPassword
          command:
            - /bin/sh
            - -c
            - |
              echo "Initializing Discourse database..."

              # Create database if it doesn't exist
              if ! psql -lqt | cut -d \| -f 1 | grep -qw "$DISCOURSE_DB_NAME"; then
                echo "Creating database $DISCOURSE_DB_NAME..."
                createdb "$DISCOURSE_DB_NAME"
              else
                echo "Database $DISCOURSE_DB_NAME already exists."
              fi

              # Create user if it doesn't exist and grant permissions
              psql -d "$DISCOURSE_DB_NAME" -c "
              DO \$\$
              BEGIN
                IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$DISCOURSE_DB_USER') THEN
                  CREATE USER $DISCOURSE_DB_USER WITH PASSWORD '$DISCOURSE_DB_PASSWORD';
                END IF;
              END
              \$\$;
              GRANT ALL PRIVILEGES ON DATABASE $DISCOURSE_DB_NAME TO $DISCOURSE_DB_USER;
              GRANT ALL ON SCHEMA public TO $DISCOURSE_DB_USER;
              GRANT USAGE ON SCHEMA public TO $DISCOURSE_DB_USER;
              "

              echo "Database initialization completed."
```
**apps/discourse/deployment.yaml** (new file, +236 lines):

```yaml
---
# Source: discourse/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discourse
  namespace: discourse
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        component: web
    spec:
      automountServiceAccountToken: false
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    component: web
                topologyKey: kubernetes.io/hostname
              weight: 1

      serviceAccountName: discourse
      securityContext:
        fsGroup: 0
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      initContainers:
      containers:
        - name: discourse
          image: docker.io/bitnami/discourse:3.4.7-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - CHOWN
                - SYS_CHROOT
                - FOWNER
                - SETGID
                - SETUID
                - DAC_OVERRIDE
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: false
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: DISCOURSE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.adminPassword
            - name: DISCOURSE_PORT_NUMBER
              value: "8080"
            - name: DISCOURSE_EXTERNAL_HTTP_PORT_NUMBER
              value: "80"
            - name: DISCOURSE_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.dbPassword
            - name: POSTGRESQL_CLIENT_CREATE_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.dbPassword
            - name: DISCOURSE_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.redis.password
            - name: DISCOURSE_SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.secretKeyBase
            - name: DISCOURSE_SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.smtpPassword
            - name: SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.smtpPassword
          envFrom:
            - configMapRef:
                name: discourse
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 500
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /srv/status
              port: http
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          resources:
            limits:
              cpu: 1
              ephemeral-storage: 2Gi
              memory: 8Gi # for precompiling assets!
            requests:
              cpu: 750m
              ephemeral-storage: 50Mi
              memory: 1Gi
          volumeMounts:
            - name: discourse-data
              mountPath: /bitnami/discourse
              subPath: discourse
        - name: sidekiq
          image: docker.io/bitnami/discourse:3.4.7-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - CHOWN
                - SYS_CHROOT
                - FOWNER
                - SETGID
                - SETUID
                - DAC_OVERRIDE
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: false
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          command:
            - /opt/bitnami/scripts/discourse/entrypoint.sh
          args:
            - /opt/bitnami/scripts/discourse-sidekiq/run.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: DISCOURSE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.adminPassword
            - name: DISCOURSE_POSTGRESQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.dbPassword
            - name: DISCOURSE_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.redis.password
            - name: DISCOURSE_SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.secretKeyBase
            - name: DISCOURSE_SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.smtpPassword
            - name: SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: discourse-secrets
                  key: apps.discourse.smtpPassword
          envFrom:
            - configMapRef:
                name: discourse
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
            initialDelaySeconds: 500
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "pgrep -f ^sidekiq"]
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          resources:
            limits:
              cpu: 500m
              ephemeral-storage: 2Gi
              memory: 768Mi
            requests:
              cpu: 375m
              ephemeral-storage: 50Mi
              memory: 512Mi
          volumeMounts:
            - name: discourse-data
              mountPath: /bitnami/discourse
              subPath: discourse
      volumes:
        - name: discourse-data
          persistentVolumeClaim:
            claimName: discourse
```
**apps/discourse/ingress.yaml** (new file, +26 lines):

```yaml
---
# Source: discourse/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: discourse
  namespace: "discourse"
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    external-dns.alpha.kubernetes.io/target: "{{ .cloud.domain }}"
spec:
  rules:
    - host: "{{ .apps.discourse.domain }}"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: discourse
                port:
                  name: http
  tls:
    - hosts:
        - "{{ .apps.discourse.domain }}"
      secretName: wildcard-external-wild-cloud-tls
```
**apps/discourse/kustomization.yaml** (new file, +18 lines):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: discourse
labels:
  - includeSelectors: true
    pairs:
      app: discourse
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - serviceaccount.yaml
  - configmap.yaml
  - pvc.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - db-init-job.yaml
```
**apps/discourse/manifest.yaml** (new file, +36 lines):

```yaml
name: discourse
description: Discourse is a modern, open-source discussion platform designed for online communities and forums.
version: 3.4.7
icon: https://www.discourse.org/img/icon.png
requires:
  - name: postgres
  - name: redis
defaultConfig:
  timezone: UTC
  port: 8080
  storage: 10Gi
  adminEmail: admin@{{ .cloud.domain }}
  adminUsername: admin
  siteName: "Community"
  domain: discourse.{{ .cloud.domain }}
  dbHostname: postgres.postgres.svc.cluster.local
  dbUsername: discourse
  dbName: discourse
  redisHostname: redis.redis.svc.cluster.local
  tlsSecretName: wildcard-wild-cloud-tls
  smtp:
    enabled: false
    host: "{{ .cloud.smtp.host }}"
    port: "{{ .cloud.smtp.port }}"
    user: "{{ .cloud.smtp.user }}"
    from: "{{ .cloud.smtp.from }}"
    tls: {{ .cloud.smtp.tls }}
    startTls: {{ .cloud.smtp.startTls }}
requiredSecrets:
  - apps.discourse.adminPassword
  - apps.discourse.dbPassword
  - apps.discourse.dbUrl
  - apps.redis.password
  - apps.discourse.secretKeyBase
  - apps.discourse.smtpPassword
  - apps.postgres.password
```
```
@@ -1,5 +1,4 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: jellyfin
  name: discourse
```
**apps/discourse/pvc.yaml** (new file, +13 lines):

```yaml
---
# Source: discourse/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: discourse
  namespace: discourse
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "{{ .apps.discourse.storage }}"
```
**apps/discourse/service.yaml** (new file, +18 lines):

```yaml
---
# Source: discourse/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: discourse
  namespace: discourse
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: null
  selector:
    component: web
```
**apps/discourse/serviceaccount.yaml** (new file, +8 lines):

```yaml
---
# Source: discourse/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: discourse
  namespace: discourse
automountServiceAccountToken: false
```
```
@@ -2,3 +2,5 @@ name: example-admin
install: true
description: An example application that is deployed with internal-only access.
version: 1.0.0
defaultConfig:
  tlsSecretName: wildcard-internal-wild-cloud-tls
```
```
@@ -2,3 +2,5 @@ name: example-app
install: true
description: An example application that is deployed with public access.
version: 1.0.0
defaultConfig:
  tlsSecretName: wildcard-wild-cloud-tls
```
@@ -1,36 +0,0 @@

# Future Apps to be added to Wild Cloud

## Productivity

- [Affine](https://docs.affine.pro/self-host-affine): A collaborative document editor with a focus on real-time collaboration and rich media support.
- [Vaultwarden](https://github.com/dani-garcia/vaultwarden): A lightweight, self-hosted password manager that is compatible with Bitwarden clients.

## Automation

- [Home Assistant](https://www.home-assistant.io/installation/linux): A powerful home automation platform that focuses on privacy and local control.

## Social

- [Mastodon](https://docs.joinmastodon.org/admin/install/): A decentralized social network server that allows users to create their own instances.

## Development

- [Gitea](https://docs.gitea.io/en-us/install-from-binary/): A self-hosted Git service that is lightweight and easy to set up.

## Media

- [Jellyfin](https://jellyfin.org/downloads/server): A free software media system that allows you to organize, manage, and share your media files.
- [Glance](https://github.com/glanceapp/glance): RSS aggregator.

## Collaboration

- [Discourse](https://github.com/discourse/discourse): A modern forum software that is designed for community engagement and discussion.
- [Mattermost](https://docs.mattermost.com/guides/install.html): An open-source messaging platform that provides team collaboration features similar to Slack.
- [Outline](https://docs.getoutline.com/install): A collaborative knowledge base and wiki platform that allows teams to create and share documentation.
- [Rocket.Chat](https://rocket.chat/docs/installation/manual-installation/): An open-source team communication platform that provides real-time messaging, video conferencing, and file sharing.

## Infrastructure

- [Umami](https://umami.is/docs/installation): A self-hosted web analytics solution that provides insights into website traffic and user behavior.
- [Authelia](https://authelia.com/docs/): A self-hosted authentication and authorization server that provides two-factor authentication and single sign-on capabilities.
```
@@ -1,13 +0,0 @@
GHOST_NAMESPACE=ghost
GHOST_HOST=blog.${DOMAIN}
GHOST_TITLE="My Blog"
GHOST_EMAIL=
GHOST_STORAGE_SIZE=10Gi
GHOST_MARIADB_STORAGE_SIZE=8Gi
GHOST_DATABASE_HOST=mariadb.mariadb.svc.cluster.local
GHOST_DATABASE_USER=ghost
GHOST_DATABASE_NAME=ghost

# Secrets
GHOST_PASSWORD=
GHOST_DATABASE_PASSWORD=
```
**apps/ghost/db-init-job.yaml** (new file, +44 lines):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ghost-db-init
  labels:
    component: db-init
spec:
  template:
    metadata:
      labels:
        component: db-init
    spec:
      containers:
        - name: db-init
          image: {{ .apps.mysql.image }}
          command: ["/bin/bash", "-c"]
          args:
            - |
              mysql -h ${DB_HOSTNAME} -P ${DB_PORT} -u root -p${MYSQL_ROOT_PASSWORD} <<EOF
              CREATE DATABASE IF NOT EXISTS ${DB_DATABASE_NAME};
              CREATE USER IF NOT EXISTS '${DB_USERNAME}'@'%' IDENTIFIED BY '${DB_PASSWORD}';
              GRANT ALL PRIVILEGES ON ${DB_DATABASE_NAME}.* TO '${DB_USERNAME}'@'%';
              FLUSH PRIVILEGES;
              EOF
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: apps.mysql.rootPassword
            - name: DB_HOSTNAME
              value: "{{ .apps.ghost.dbHost }}"
            - name: DB_PORT
              value: "{{ .apps.ghost.dbPort }}"
            - name: DB_DATABASE_NAME
              value: "{{ .apps.ghost.dbName }}"
            - name: DB_USERNAME
              value: "{{ .apps.ghost.dbUser }}"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ghost-secrets
                  key: apps.ghost.dbPassword
      restartPolicy: OnFailure
```
```
@@ -1,111 +1,26 @@
kind: Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: ghost
  uid: d01c62ff-68a6-456a-a630-77c730bffc9b
  resourceVersion: "2772014"
  generation: 1
  creationTimestamp: "2025-04-27T02:01:30Z"
  labels:
    app.kubernetes.io/component: ghost
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Wild
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: ghost
    meta.helm.sh/release-namespace: ghost
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
      component: web
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: ghost
        app.kubernetes.io/instance: ghost
        app.kubernetes.io/managed-by: Wild
        app.kubernetes.io/name: ghost
        app.kubernetes.io/version: 5.118.1
      annotations:
        checksum/secrets: b1cef92e7f73650dddfb455a7519d7b2bcf051c9cb9136b34f504ee120c63ae6
        component: web
    spec:
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: ghost-secrets
          projected:
            sources:
              - secret:
                  name: ghost-mysql
              - secret:
                  name: ghost
            defaultMode: 420
        - name: ghost-data
          persistentVolumeClaim:
            claimName: ghost
      initContainers:
        - name: prepare-base-dir
          image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
          command:
            - /bin/bash
          args:
            - "-ec"
            - >
              #!/bin/bash

              . /opt/bitnami/scripts/liblog.sh

              info "Copying base dir to empty dir"

              # In order to not break the application functionality (such as upgrades or plugins) we need
              # to make the base directory writable, so we need to copy it to an empty dir volume
              cp -r --preserve=mode /opt/bitnami/ghost /emptydir/app-base-dir
          resources:
            limits:
              cpu: 375m
              ephemeral-storage: 2Gi
              memory: 384Mi
            requests:
              cpu: 250m
              ephemeral-storage: 50Mi
              memory: 256Mi
          volumeMounts:
            - name: empty-dir
              mountPath: /emptydir
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            privileged: false
            seLinuxOptions: {}
            runAsUser: 1001
            runAsGroup: 1001
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
      containers:
        - name: ghost
          image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
          image: {{ .apps.ghost.image }}
          ports:
            - name: https
              containerPort: 2368
            - name: http
              containerPort: {{ .apps.ghost.port }}
              protocol: TCP
          env:
            - name: BITNAMI_DEBUG
@@ -113,27 +28,33 @@ spec:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: GHOST_DATABASE_HOST
              value: ghost-mysql
              value: {{ .apps.ghost.dbHost }}
            - name: GHOST_DATABASE_PORT_NUMBER
              value: "3306"
              value: "{{ .apps.ghost.dbPort }}"
            - name: GHOST_DATABASE_NAME
              value: ghost
              value: {{ .apps.ghost.dbName }}
            - name: GHOST_DATABASE_USER
              value: ghost
            - name: GHOST_DATABASE_PASSWORD_FILE
              value: /opt/bitnami/ghost/secrets/mysql-password
              value: {{ .apps.ghost.dbUser }}
            - name: GHOST_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ghost-secrets
                  key: apps.ghost.dbPassword
            - name: GHOST_HOST
              value: blog.cloud.payne.io/
              value: {{ .apps.ghost.domain }}
            - name: GHOST_PORT_NUMBER
              value: "2368"
              value: "{{ .apps.ghost.port }}"
            - name: GHOST_USERNAME
              value: admin
            - name: GHOST_PASSWORD_FILE
              value: /opt/bitnami/ghost/secrets/ghost-password
              value: {{ .apps.ghost.adminUser }}
            - name: GHOST_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ghost-secrets
                  key: apps.ghost.adminPassword
            - name: GHOST_EMAIL
              value: paul@payne.io
              value: {{ .apps.ghost.adminEmail }}
            - name: GHOST_BLOG_TITLE
              value: User's Blog
              value: {{ .apps.ghost.blogTitle }}
            - name: GHOST_ENABLE_HTTPS
              value: "yes"
            - name: GHOST_EXTERNAL_HTTP_PORT_NUMBER
@@ -142,6 +63,21 @@ spec:
              value: "443"
            - name: GHOST_SKIP_BOOTSTRAP
              value: "no"
            - name: GHOST_SMTP_SERVICE
              value: SMTP
            - name: GHOST_SMTP_HOST
              value: {{ .apps.ghost.smtp.host }}
            - name: GHOST_SMTP_PORT
              value: "{{ .apps.ghost.smtp.port }}"
            - name: GHOST_SMTP_USER
              value: {{ .apps.ghost.smtp.user }}
            - name: GHOST_SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ghost-secrets
                  key: apps.ghost.smtpPassword
            - name: GHOST_SMTP_FROM_ADDRESS
              value: {{ .apps.ghost.smtp.from }}
          resources:
            limits:
              cpu: 375m
@@ -152,22 +88,11 @@ spec:
              ephemeral-storage: 50Mi
              memory: 256Mi
          volumeMounts:
            - name: empty-dir
              mountPath: /opt/bitnami/ghost
              subPath: app-base-dir
            - name: empty-dir
              mountPath: /.ghost
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: ghost-data
              mountPath: /bitnami/ghost
            - name: ghost-secrets
              mountPath: /opt/bitnami/ghost/secrets
          livenessProbe:
            tcpSocket:
              port: 2368
              port: {{ .apps.ghost.port }}
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 10
@@ -176,7 +101,7 @@ spec:
          readinessProbe:
            httpGet:
              path: /
              port: https
              port: http
              scheme: HTTP
              httpHeaders:
                - name: x-forwarded-proto
@@ -186,64 +111,22 @@ spec:
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            privileged: false
            seLinuxOptions: {}
            runAsUser: 1001
            runAsGroup: 1001
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            readOnlyRootFilesystem: false
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
      volumes:
        - name: ghost-data
          persistentVolumeClaim:
            claimName: ghost-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: ghost
      serviceAccount: ghost
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: ghost
                    app.kubernetes.io/name: ghost
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 1
  updatedReplicas: 1
  unavailableReplicas: 1
  conditions:
    - type: Available
      status: "False"
      lastUpdateTime: "2025-04-27T02:01:30Z"
      lastTransitionTime: "2025-04-27T02:01:30Z"
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: "False"
      lastUpdateTime: "2025-04-27T02:11:32Z"
      lastTransitionTime: "2025-04-27T02:11:32Z"
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "ghost-586bbc6ddd" has timed out progressing.
        fsGroup: 1001
```
```
@@ -1,19 +1,18 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  namespace: {{ .Values.namespace }}
  namespace: ghost
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    external-dns.alpha.kubernetes.io/target: "cloud.payne.io"
    external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
    external-dns.alpha.kubernetes.io/ttl: "60"
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: {{ .Values.ghost.host }}
    - host: {{ .apps.ghost.domain }}
      http:
        paths:
          - path: /
@@ -25,5 +24,5 @@ spec:
                number: 80
  tls:
    - hosts:
        - {{ .Values.ghost.host }}
      secretName: ghost-tls
        - {{ .apps.ghost.domain }}
      secretName: {{ .apps.ghost.tlsSecretName }}
```
@@ -0,0 +1,16 @@

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ghost
labels:
  - includeSelectors: true
    pairs:
      app: ghost
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - db-init-job.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - pvc.yaml
```
**apps/ghost/manifest.yaml** (new file, +30 lines):

```yaml
name: ghost
description: Ghost is a powerful app for new-media creators to publish, share, and grow a business around their content.
version: 5.118.1
icon: https://ghost.org/images/logos/ghost-logo-orb.png
requires:
  - name: mysql
defaultConfig:
  image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
  domain: ghost.{{ .cloud.domain }}
  tlsSecretName: wildcard-wild-cloud-tls
  port: 2368
  storage: 10Gi
  dbHost: mysql.mysql.svc.cluster.local
  dbPort: 3306
  dbName: ghost
  dbUser: ghost
  adminUser: admin
  adminEmail: "admin@{{ .cloud.domain }}"
  blogTitle: "My Blog"
  timezone: UTC
  tlsSecretName: wildcard-wild-cloud-tls
  smtp:
    host: "{{ .cloud.smtp.host }}"
    port: "{{ .cloud.smtp.port }}"
    from: "{{ .cloud.smtp.from }}"
    user: "{{ .cloud.smtp.user }}"
requiredSecrets:
  - apps.ghost.adminPassword
  - apps.ghost.dbPassword
  - apps.ghost.smtpPassword
```
**apps/ghost/namespace.yaml** (new file, +4 lines):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ghost
```
```
@@ -1,26 +0,0 @@
---
# Source: ghost/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Wild
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    - ports:
        - port: 2368
        - port: 2368
```
```
@@ -1,20 +0,0 @@
---
# Source: ghost/templates/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Wild
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
      app.kubernetes.io/component: ghost
```
**apps/ghost/pvc.yaml** (new file, +11 lines):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-data
  namespace: ghost
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .apps.ghost.storage }}
```
```
@@ -1,14 +0,0 @@
---
# Source: ghost/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Wild
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
automountServiceAccountToken: false
```
**apps/ghost/service.yaml** (new file, +14 lines):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: {{ .apps.ghost.port }}
  selector:
    component: web
```
```
@@ -1,26 +0,0 @@
---
# Source: ghost/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Wild
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  sessionAffinity: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: ghost
    app.kubernetes.io/component: ghost
```
```
@@ -36,7 +36,7 @@ spec:
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: password
              key: apps.postgres.password
        - name: DB_HOSTNAME
          value: "{{ .apps.gitea.dbHost }}"
        - name: DB_DATABASE_NAME
@@ -47,5 +47,5 @@ spec:
          valueFrom:
            secretKeyRef:
              name: gitea-secrets
              key: dbPassword
              key: apps.gitea.dbPassword
      restartPolicy: OnFailure
```
```
@@ -33,27 +33,27 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: gitea-secrets
                  key: adminPassword
                  key: apps.gitea.adminPassword
            - name: GITEA__security__SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: gitea-secrets
                  key: secretKey
                  key: apps.gitea.secretKey
            - name: GITEA__security__INTERNAL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: gitea-secrets
                  key: jwtSecret
                  key: apps.gitea.jwtSecret
            - name: GITEA__database__PASSWD
              valueFrom:
                secretKeyRef:
                  name: gitea-secrets
                  key: dbPassword
                  key: apps.gitea.dbPassword
            - name: GITEA__mailer__PASSWD
              valueFrom:
                secretKeyRef:
                  name: gitea-secrets
                  key: smtpPassword
                  key: apps.gitea.smtpPassword
          ports:
            - name: ssh
              containerPort: 2222
```
```
@@ -1,19 +1,15 @@
SSH_LISTEN_PORT=2222
SSH_PORT=22
GITEA_APP_INI=/data/gitea/conf/app.ini
GITEA_CUSTOM=/data/gitea
GITEA_WORK_DIR=/data
GITEA_TEMP=/tmp/gitea
TMPDIR=/tmp/gitea
HOME=/data/gitea/git
GITEA_ADMIN_USERNAME={{ .apps.gitea.adminUser }}
GITEA_ADMIN_PASSWORD_MODE=keepUpdated

# Core app settings
APP_NAME={{ .apps.gitea.appName }}
RUN_MODE={{ .apps.gitea.runMode }}
RUN_USER=git
WORK_PATH=/data
GITEA____APP_NAME={{ .apps.gitea.appName }}
GITEA____RUN_MODE={{ .apps.gitea.runMode }}
GITEA____RUN_USER=git

# Security settings
GITEA__security__INSTALL_LOCK=true
```
```
@@ -52,8 +52,8 @@ spec:
        - name: POSTGRES_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: password
              name: immich-secrets
              key: apps.postgres.password
        - name: DB_HOSTNAME
          value: "{{ .apps.immich.dbHostname }}"
        - name: DB_DATABASE_NAME
@@ -64,5 +64,5 @@ spec:
          valueFrom:
            secretKeyRef:
              name: immich-secrets
              key: dbPassword
              key: apps.immich.dbPassword
      restartPolicy: OnFailure
```
```
@@ -25,6 +25,11 @@ spec:
          env:
            - name: REDIS_HOSTNAME
              value: "{{ .apps.immich.redisHostname }}"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: immich-secrets
                  key: apps.redis.password
            - name: DB_HOSTNAME
              value: "{{ .apps.immich.dbHostname }}"
            - name: DB_USERNAME
@@ -33,7 +38,7 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: immich-secrets
                  key: dbPassword
                  key: apps.immich.dbPassword
            - name: TZ
              value: "{{ .apps.immich.timezone }}"
            - name: IMMICH_WORKERS_EXCLUDE
@@ -46,3 +51,11 @@ spec:
        - name: immich-storage
          persistentVolumeClaim:
            claimName: immich-pvc
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: immich
                  component: server
              topologyKey: kubernetes.io/hostname
```
```
@@ -28,6 +28,11 @@ spec:
          env:
            - name: REDIS_HOSTNAME
              value: "{{ .apps.immich.redisHostname }}"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: immich-secrets
                  key: apps.redis.password
            - name: DB_HOSTNAME
              value: "{{ .apps.immich.dbHostname }}"
            - name: DB_USERNAME
@@ -36,7 +41,7 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: immich-secrets
                  key: dbPassword
                  key: apps.immich.dbPassword
            - name: TZ
              value: "{{ .apps.immich.timezone }}"
            - name: IMMICH_WORKERS_EXCLUDE
```
```
@@ -18,6 +18,8 @@ defaultConfig:
  dbHostname: postgres.postgres.svc.cluster.local
  dbUsername: immich
  domain: immich.{{ .cloud.domain }}
  tlsSecretName: wildcard-wild-cloud-tls
requiredSecrets:
  - apps.immich.dbPassword
  - apps.postgres.password
  - apps.redis.password
```
```
@@ -1,12 +0,0 @@
# Config
JELLYFIN_DOMAIN=jellyfin.$DOMAIN
JELLYFIN_CONFIG_STORAGE=1Gi
JELLYFIN_CACHE_STORAGE=10Gi
JELLYFIN_MEDIA_STORAGE=100Gi
TZ=UTC

# Docker Images
JELLYFIN_IMAGE=jellyfin/jellyfin:latest

# Jellyfin Configuration
JELLYFIN_PublishedServerUrl=https://jellyfin.$DOMAIN
```
```
@@ -1,49 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - image: jellyfin/jellyfin:latest
          name: jellyfin
          ports:
            - containerPort: 8096
              protocol: TCP
          envFrom:
            - configMapRef:
                name: config
          env:
            - name: TZ
              valueFrom:
                configMapKeyRef:
                  key: TZ
                  name: config
          volumeMounts:
            - mountPath: /config
              name: jellyfin-config
            - mountPath: /cache
              name: jellyfin-cache
            - mountPath: /media
              name: jellyfin-media
      volumes:
        - name: jellyfin-config
          persistentVolumeClaim:
            claimName: jellyfin-config-pvc
        - name: jellyfin-cache
          persistentVolumeClaim:
            claimName: jellyfin-cache-pvc
        - name: jellyfin-media
          persistentVolumeClaim:
            claimName: jellyfin-media-pvc
```
```
@@ -1,24 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin-public
  annotations:
    external-dns.alpha.kubernetes.io/target: your.jellyfin.domain
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  rules:
    - host: your.jellyfin.domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  number: 8096
  tls:
    - secretName: wildcard-internal-wild-cloud-tls
      hosts:
        - your.jellyfin.domain
```
```
@@ -1,82 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jellyfin
labels:
  - includeSelectors: true
    pairs:
      app: jellyfin
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - deployment.yaml
  - ingress.yaml
  - namespace.yaml
  - pvc.yaml
  - service.yaml
configMapGenerator:
  - name: config
    envs:
      - config/config.env

replacements:
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.DOMAIN
    targets:
      - select:
          kind: Ingress
          name: jellyfin-public
        fieldPaths:
          - metadata.annotations.[external-dns.alpha.kubernetes.io/target]
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.JELLYFIN_DOMAIN
    targets:
      - select:
          kind: Ingress
          name: jellyfin-public
        fieldPaths:
          - spec.rules.0.host
          - spec.tls.0.hosts.0
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.JELLYFIN_CONFIG_STORAGE
    targets:
      - select:
          kind: PersistentVolumeClaim
          name: jellyfin-config-pvc
        fieldPaths:
          - spec.resources.requests.storage
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.JELLYFIN_CACHE_STORAGE
    targets:
      - select:
          kind: PersistentVolumeClaim
          name: jellyfin-cache-pvc
        fieldPaths:
          - spec.resources.requests.storage
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.JELLYFIN_MEDIA_STORAGE
    targets:
      - select:
          kind: PersistentVolumeClaim
          name: jellyfin-media-pvc
        fieldPaths:
          - spec.resources.requests.storage
  - source:
      kind: ConfigMap
      name: config
      fieldPath: data.JELLYFIN_IMAGE
    targets:
      - select:
          kind: Deployment
          name: jellyfin
        fieldPaths:
          - spec.template.spec.containers.0.image
```
```
@@ -1,37 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config-pvc
  namespace: jellyfin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-cache-pvc
  namespace: jellyfin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media-pvc
  namespace: jellyfin
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 100Gi
```
```
@@ -1,15 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
  namespace: jellyfin
  labels:
    app: jellyfin
spec:
  ports:
    - port: 8096
      targetPort: 8096
      protocol: TCP
  selector:
    app: jellyfin
```
**apps/keila/db-init-job.yaml** (new file, +63 lines):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: keila-db-init
spec:
  template:
    metadata:
      labels:
        component: db-init
    spec:
      restartPolicy: OnFailure
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: postgres-init
          image: postgres:15
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
          env:
            - name: PGHOST
              value: {{ .apps.keila.dbHostname }}
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.postgres.password
            - name: DB_NAME
              value: {{ .apps.keila.dbName }}
            - name: DB_USER
              value: {{ .apps.keila.dbUsername }}
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.keila.dbPassword
          command:
            - /bin/bash
            - -c
            - |
              set -e
              echo "Waiting for PostgreSQL to be ready..."
              until pg_isready; do
                echo "PostgreSQL is not ready - sleeping"
                sleep 2
              done
              echo "PostgreSQL is ready"

              echo "Creating database and user for Keila..."
              psql -c "CREATE DATABASE ${DB_NAME};" || echo "Database ${DB_NAME} already exists"
              psql -c "CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASSWORD}';" || echo "User ${DB_USER} already exists"
              psql -c "GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME} TO ${DB_USER};"
              psql -d ${DB_NAME} -c "GRANT ALL ON SCHEMA public TO ${DB_USER};"
              echo "Database initialization complete"
```
**apps/keila/deployment.yaml** (new file, +85 lines):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keila
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: keila
          image: {{ .apps.keila.image }}
          ports:
            - containerPort: {{ .apps.keila.port }}
          env:
            - name: DB_URL
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.keila.dbUrl
            - name: URL_HOST
              value: {{ .apps.keila.domain }}
            - name: URL_SCHEMA
              value: https
            - name: URL_PORT
              value: "443"
            - name: PORT
              value: "{{ .apps.keila.port }}"
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.keila.secretKeyBase
            - name: MAILER_SMTP_HOST
              value: {{ .apps.keila.smtp.host }}
            - name: MAILER_SMTP_PORT
              value: "{{ .apps.keila.smtp.port }}"
            - name: MAILER_ENABLE_SSL
              value: "{{ .apps.keila.smtp.tls }}"
            - name: MAILER_ENABLE_STARTTLS
              value: "{{ .apps.keila.smtp.startTls }}"
            - name: MAILER_SMTP_USER
              value: {{ .apps.keila.smtp.user }}
            - name: MAILER_SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.keila.smtpPassword
            - name: MAILER_SMTP_FROM_EMAIL
              value: {{ .apps.keila.smtp.from }}
            - name: DISABLE_REGISTRATION
              value: "{{ .apps.keila.disableRegistration }}"
            - name: KEILA_USER
              value: "{{ .apps.keila.adminUser }}"
            - name: KEILA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keila-secrets
                  key: apps.keila.adminPassword
            - name: USER_CONTENT_DIR
              value: /var/lib/keila/uploads
          volumeMounts:
            - name: uploads
              mountPath: /var/lib/keila/uploads
          livenessProbe:
            httpGet:
              path: /
              port: {{ .apps.keila.port }}
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: {{ .apps.keila.port }}
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: keila-uploads
```
apps/keila/ingress.yaml (new file)
@@ -0,0 +1,26 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keila
  annotations:
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
    external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    traefik.ingress.kubernetes.io/router.middlewares: keila-cors@kubernetescrd
spec:
  rules:
    - host: {{ .apps.keila.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keila
                port:
                  number: 80
  tls:
    - secretName: "wildcard-wild-cloud-tls"
      hosts:
        - "{{ .apps.keila.domain }}"
apps/keila/kustomization.yaml (new file)
@@ -0,0 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: keila
labels:
  - includeSelectors: true
    pairs:
      app: keila
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - pvc.yaml
  - db-init-job.yaml
  - middleware-cors.yaml
apps/keila/manifest.yaml (new file)
@@ -0,0 +1,31 @@
name: keila
description: Keila is an open-source email marketing platform that allows you to send newsletters and manage mailing lists with privacy and control.
version: 1.0.0
icon: https://www.keila.io/images/logo.svg
requires:
  - name: postgres
defaultConfig:
  image: pentacent/keila:latest
  port: 4000
  storage: 1Gi
  domain: keila.{{ .cloud.domain }}
  dbHostname: postgres.postgres.svc.cluster.local
  dbName: keila
  dbUsername: keila
  disableRegistration: "true"
  adminUser: admin@{{ .cloud.domain }}
  tlsSecretName: wildcard-wild-cloud-tls
  smtp:
    host: "{{ .cloud.smtp.host }}"
    port: "{{ .cloud.smtp.port }}"
    from: "{{ .cloud.smtp.from }}"
    user: "{{ .cloud.smtp.user }}"
    tls: {{ .cloud.smtp.tls }}
    startTls: {{ .cloud.smtp.startTls }}
requiredSecrets:
  - apps.keila.secretKeyBase
  - apps.keila.dbPassword
  - apps.keila.dbUrl
  - apps.keila.adminPassword
  - apps.keila.smtpPassword
  - apps.postgres.password
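For reference, wild-app-add (changed later in this diff) renders manifests like this one with gomplate; a minimal sketch, assuming config.yaml and secrets.yaml in the Wild Cloud home and a hypothetical output path:

    # Render template variables using the operator's config and secrets as contexts
    gomplate -c .=config.yaml -c secrets=secrets.yaml \
      -f apps/keila/manifest.yaml -o /tmp/manifest.rendered.yaml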
apps/keila/middleware-cors.yaml (new file)
@@ -0,0 +1,28 @@
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: cors
spec:
  headers:
    accessControlAllowCredentials: true
    accessControlAllowHeaders:
      - "Content-Type"
      - "Authorization"
      - "X-Requested-With"
      - "Accept"
      - "Origin"
      - "Cache-Control"
      - "X-File-Name"
    accessControlAllowMethods:
      - "GET"
      - "POST"
      - "PUT"
      - "DELETE"
      - "OPTIONS"
    accessControlAllowOriginList:
      - "http://localhost:1313"
      - "https://*.{{ .cloud.domain }}"
      - "https://{{ .cloud.domain }}"
    accessControlExposeHeaders:
      - "*"
    accessControlMaxAge: 86400
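The keila ingress references this middleware as keila-cors@kubernetescrd, following Traefik's <namespace>-<name>@kubernetescrd naming. A hedged spot-check of the CORS headers once deployed, with cloud.example.com standing in for the operator's domain:

    # Send a CORS preflight and look for the Access-Control-* response headers
    curl -si -X OPTIONS "https://keila.cloud.example.com/" \
      -H "Origin: https://cloud.example.com" \
      -H "Access-Control-Request-Method: POST" | grep -i '^access-control'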
apps/keila/namespace.yaml (new file)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: keila
apps/keila/pvc.yaml (new file)
@@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keila-uploads
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .apps.keila.storage }}
apps/keila/service.yaml (new file)
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: keila
spec:
  selector:
    component: web
  ports:
    - port: 80
      targetPort: {{ .apps.keila.port }}
      protocol: TCP
apps/listmonk/db-init-job.yaml (new file)
@@ -0,0 +1,65 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: listmonk-db-init
  labels:
    component: db-init
spec:
  template:
    metadata:
      labels:
        component: db-init
    spec:
      restartPolicy: OnFailure
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: postgres-init
          image: postgres:15
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
          env:
            - name: PGHOST
              value: {{ .apps.listmonk.dbHost }}
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: listmonk-secrets
                  key: apps.postgres.password
            - name: DB_NAME
              value: {{ .apps.listmonk.dbName }}
            - name: DB_USER
              value: {{ .apps.listmonk.dbUser }}
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: listmonk-secrets
                  key: apps.listmonk.dbPassword
          command:
            - /bin/bash
            - -c
            - |
              set -e
              echo "Waiting for PostgreSQL to be ready..."
              until pg_isready; do
                echo "PostgreSQL is not ready - sleeping"
                sleep 2
              done
              echo "PostgreSQL is ready"

              echo "Creating database and user for Listmonk..."
              psql -c "CREATE DATABASE ${DB_NAME};" || echo "Database ${DB_NAME} already exists"
              psql -c "CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASSWORD}';" || echo "User ${DB_USER} already exists"
              psql -c "GRANT ALL PRIVILEGES ON DATABASE ${DB_NAME} TO ${DB_USER};"
              psql -d ${DB_NAME} -c "GRANT ALL ON SCHEMA public TO ${DB_USER};"
              echo "Database initialization complete"
apps/listmonk/deployment.yaml (new file)
@@ -0,0 +1,86 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: listmonk
  namespace: listmonk
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: listmonk
          image: listmonk/listmonk:v5.0.3
          command: ["/bin/sh"]
          args: ["-c", "./listmonk --install --idempotent --yes && ./listmonk --upgrade --yes && ./listmonk"]
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          env:
            - name: LISTMONK_app__address
              value: "0.0.0.0:9000"
            - name: LISTMONK_db__host
              value: {{ .apps.listmonk.dbHost }}
            - name: LISTMONK_db__port
              value: "{{ .apps.listmonk.dbPort }}"
            - name: LISTMONK_db__user
              value: {{ .apps.listmonk.dbUser }}
            - name: LISTMONK_db__database
              value: {{ .apps.listmonk.dbName }}
            - name: LISTMONK_db__ssl_mode
              value: {{ .apps.listmonk.dbSSLMode }}
            - name: LISTMONK_db__password
              valueFrom:
                secretKeyRef:
                  name: listmonk-secrets
                  key: apps.listmonk.dbPassword
          resources:
            limits:
              cpu: 500m
              ephemeral-storage: 1Gi
              memory: 512Mi
            requests:
              cpu: 100m
              ephemeral-storage: 50Mi
              memory: 128Mi
          volumeMounts:
            - name: listmonk-data
              mountPath: /listmonk/data
          livenessProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
      volumes:
        - name: listmonk-data
          persistentVolumeClaim:
            claimName: listmonk-data
      restartPolicy: Always
apps/listmonk/ingress.yaml (new file)
@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: listmonk
  namespace: listmonk
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: {{ .cloud.domain }}
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - {{ .apps.listmonk.domain }}
      secretName: {{ .apps.listmonk.tlsSecretName }}
  rules:
    - host: {{ .apps.listmonk.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: listmonk
                port:
                  number: 80
apps/listmonk/kustomization.yaml (new file)
@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: listmonk
labels:
  - includeSelectors: true
    pairs:
      app: listmonk
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - db-init-job.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - pvc.yaml
apps/listmonk/manifest.yaml (new file)
@@ -0,0 +1,20 @@
name: listmonk
description: Listmonk is a standalone, self-hosted, newsletter and mailing list manager. It is fast, feature-rich, and packed into a single binary.
version: 5.0.3
icon: https://listmonk.app/static/images/logo.svg
requires:
  - name: postgres
defaultConfig:
  domain: listmonk.{{ .cloud.domain }}
  tlsSecretName: wildcard-wild-cloud-tls
  storage: 1Gi
  dbHost: postgres.postgres.svc.cluster.local
  dbPort: 5432
  dbName: listmonk
  dbUser: listmonk
  dbSSLMode: disable
  timezone: UTC
requiredSecrets:
  - apps.listmonk.dbPassword
  - apps.listmonk.dbUrl
  - apps.postgres.password
apps/listmonk/namespace.yaml (new file)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: listmonk
apps/listmonk/pvc.yaml (new file)
@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: listmonk-data
  namespace: listmonk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .apps.listmonk.storage }}
apps/listmonk/service.yaml (new file)
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: listmonk
  namespace: listmonk
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    component: web
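Together these files form a complete Kustomize app; once compiled into the Wild Cloud home by wild-app-add, deploying is a single apply (path assumed relative to the Wild Cloud home):

    # Build and apply the compiled app with Kustomize
    kubectl apply -k apps/listmonk/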
@@ -1,11 +0,0 @@
MARIADB_NAMESPACE=mariadb
MARIADB_RELEASE_NAME=mariadb
MARIADB_USER=app
MARIADB_DATABASE=app_database
MARIADB_STORAGE=8Gi
MARIADB_TAG=11.4.5
MARIADB_PORT=3306

# Secrets
MARIADB_PASSWORD=
MARIADB_ROOT_PASSWORD=
@@ -1,27 +1,17 @@
---
# Source: ghost/charts/mysql/templates/primary/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
  name: mysql
  namespace: mysql
data:
  my.cnf: |-
  my.cnf: |
    [mysqld]
    authentication_policy='* ,,'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port=3306
    port={{ .apps.mysql.port }}
    mysqlx=0
    mysqlx_port=33060
    socket=/opt/bitnami/mysql/tmp/mysql.sock
@@ -36,12 +26,12 @@ data:
    long_query_time=10.0

    [client]
    port=3306
    port={{ .apps.mysql.port }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port=3306
    port={{ .apps.mysql.port }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
apps/mysql/kustomization.yaml (new file)
@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mysql
labels:
  - includeSelectors: true
    pairs:
      app: mysql
      managedBy: kustomize
      partOf: wild-cloud
resources:
  - namespace.yaml
  - statefulset.yaml
  - service.yaml
  - service-headless.yaml
  - configmap.yaml
apps/mysql/manifest.yaml (new file)
@@ -0,0 +1,17 @@
name: mysql
description: MySQL is an open-source relational database management system
version: 8.4.5
icon: https://www.mysql.com/common/logos/logo-mysql-170x115.png
requires: []
defaultConfig:
  image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
  port: 3306
  storage: 20Gi
  dbName: mysql
  rootUser: root
  user: mysql
  timezone: UTC
  enableSSL: false
requiredSecrets:
  - apps.mysql.rootPassword
  - apps.mysql.password
apps/mysql/namespace.yaml (new file)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: mysql
@@ -1,31 +0,0 @@
---
# Source: ghost/charts/mysql/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: mysql
      app.kubernetes.io/version: 8.4.5
      helm.sh/chart: mysql-12.3.4
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow connection from other cluster pods
    - ports:
        - port: 3306
@@ -1,23 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: mysql
      app.kubernetes.io/part-of: mysql
      app.kubernetes.io/component: primary
@@ -1,27 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: mysql
      port: 3306
      targetPort: mysql
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: mysql
    app.kubernetes.io/component: primary
@@ -1,29 +0,0 @@
---
# Source: ghost/charts/mysql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: mysql
      nodePort: null
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: mysql
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
apps/mysql/service-headless.yaml (new file)
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
  namespace: mysql
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: mysql
      port: {{ .apps.mysql.port }}
      protocol: TCP
      targetPort: mysql
  selector:
    component: primary
apps/mysql/service.yaml (new file)
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  type: ClusterIP
  ports:
    - name: mysql
      port: {{ .apps.mysql.port }}
      protocol: TCP
      targetPort: mysql
  selector:
    component: primary
@@ -1,17 +0,0 @@
---
# Source: ghost/charts/mysql/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
automountServiceAccountToken: false
secrets:
  - name: ghost-mysql
@@ -1,97 +1,57 @@
---
# Source: ghost/charts/mysql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
  name: mysql
  namespace: mysql
spec:
  replicas: 1
  podManagementPolicy: ""
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: mysql
      app.kubernetes.io/part-of: mysql
      app.kubernetes.io/component: primary
  serviceName: ghost-mysql-headless
  podManagementPolicy: Parallel
  serviceName: mysql-headless
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      component: primary
  template:
    metadata:
      annotations:
        checksum/configuration: 959b0f76ba7e6be0aaaabf97932398c31b17bc9f86d3839a26a3bbbc48673cd9
      labels:
        app.kubernetes.io/instance: ghost
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: mysql
        app.kubernetes.io/version: 8.4.5
        helm.sh/chart: mysql-12.3.4
        app.kubernetes.io/part-of: mysql
        app.kubernetes.io/component: primary
        component: primary
    spec:
      serviceAccountName: ghost-mysql

      serviceAccountName: default
      automountServiceAccountToken: false
      affinity:
        podAffinity:

        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: ghost
                    app.kubernetes.io/name: mysql
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:

      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      initContainers:
        - name: preserve-logs-symlinks
          image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          image: {{ .apps.mysql.image }}
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          resources:
            limits:
              cpu: 750m
              ephemeral-storage: 2Gi
              memory: 768Mi
              cpu: 250m
              ephemeral-storage: 1Gi
              memory: 256Mi
            requests:
              cpu: 500m
              cpu: 100m
              ephemeral-storage: 50Mi
              memory: 512Mi
              memory: 128Mi
          command:
            - /bin/bash
          args:
            - -ec
            - |
              #!/bin/bash

              . /opt/bitnami/scripts/libfs.sh
              # We copy the logs folder because it has symlinks to stdout and stderr
              if ! is_dir_empty /opt/bitnami/mysql/logs; then
@@ -102,39 +62,41 @@ spec:
              mountPath: /emptydir
      containers:
        - name: mysql
          image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          image: {{ .apps.mysql.image }}
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /opt/bitnami/mysql/secrets/mysql-root-password
            - name: MYSQL_ENABLE_SSL
              value: "no"
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: apps.mysql.rootPassword
            - name: MYSQL_USER
              value: "bn_ghost"
            - name: MYSQL_PASSWORD_FILE
              value: /opt/bitnami/mysql/secrets/mysql-password
            - name: MYSQL_PORT
              value: "3306"
              value: {{ .apps.mysql.user }}
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: apps.mysql.password
            - name: MYSQL_DATABASE
              value: "bitnami_ghost"
          envFrom:
              value: {{ .apps.mysql.dbName }}
            - name: MYSQL_PORT
              value: "{{ .apps.mysql.port }}"
          ports:
            - name: mysql
              containerPort: 3306
              containerPort: {{ .apps.mysql.port }}
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
@@ -147,9 +109,6 @@ spec:
              - -ec
              - |
                password_aux="${MYSQL_ROOT_PASSWORD:-}"
                if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                  password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                fi
                mysqladmin status -uroot -p"${password_aux}"
          readinessProbe:
            failureThreshold: 3
@@ -163,9 +122,6 @@ spec:
              - -ec
              - |
                password_aux="${MYSQL_ROOT_PASSWORD:-}"
                if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                  password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                fi
                mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
          startupProbe:
            failureThreshold: 10
@@ -179,9 +135,6 @@ spec:
              - -ec
              - |
                password_aux="${MYSQL_ROOT_PASSWORD:-}"
                if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                  password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                fi
                mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
          resources:
            limits:
@@ -210,32 +163,18 @@ spec:
            - name: config
              mountPath: /opt/bitnami/mysql/conf/my.cnf
              subPath: my.cnf
            - name: mysql-credentials
              mountPath: /opt/bitnami/mysql/secrets/
      volumes:
        - name: config
          configMap:
            name: ghost-mysql
        - name: mysql-credentials
          secret:
            secretName: ghost-mysql
            items:
              - key: mysql-root-password
                path: mysql-root-password
              - key: mysql-password
                path: mysql-password
            name: mysql
        - name: empty-dir
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app.kubernetes.io/instance: ghost
          app.kubernetes.io/name: mysql
          app.kubernetes.io/component: primary
      spec:
        accessModes:
          - "ReadWriteOnce"
          - ReadWriteOnce
        resources:
          requests:
            storage: "8Gi"
            storage: {{ .apps.mysql.storage }}
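A hedged smoke test for the templated StatefulSet above (with one replica the pod is mysql-0; the secret name and key follow the env wiring in this diff):

    # Ping the MySQL primary with the root password from the mysql-secrets Secret
    MYSQL_ROOT_PASSWORD=$(kubectl get secret -n mysql mysql-secrets \
      -o jsonpath='{.data.apps\.mysql\.rootPassword}' | base64 -d)
    kubectl exec -n mysql mysql-0 -- mysqladmin ping -uroot -p"$MYSQL_ROOT_PASSWORD"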
@@ -36,7 +36,7 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
                  key: apps.postgres.password
            - name: DB_HOSTNAME
              value: "{{ .apps.openproject.dbHostname }}"
            - name: DB_DATABASE_NAME
@@ -47,5 +47,5 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
      restartPolicy: OnFailure
@@ -20,13 +20,14 @@ defaultConfig:
  hsts: true
  seedLocale: en
  adminUserName: OpenProject Admin
  adminUserEmail: '{{ .operator.email }}'
  adminUserEmail: "{{ .operator.email }}"
  adminPasswordReset: true
  postgresStatementTimeout: 120s
  tmpVolumesStorage: 2Gi
  tlsSecretName: wildcard-wild-cloud-tls
  cacheStore: memcache
  railsRelativeUrlRoot: ""
requiredSecrets:
  - apps.openproject.dbPassword
  - apps.openproject.adminPassword
  - apps.postgres.password
  - apps.postgres.password
@@ -62,12 +62,12 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: adminPassword
                  key: apps.openproject.adminPassword
          resources:
            limits:
              memory: 200Mi
@@ -106,12 +106,12 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: adminPassword
                  key: apps.openproject.adminPassword
          resources:
            limits:
              memory: 512Mi
@@ -84,12 +84,12 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: adminPassword
                  key: apps.openproject.adminPassword
          args:
            - /app/docker/prod/wait-for-db
          resources:
@@ -127,12 +127,12 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: adminPassword
                  key: apps.openproject.adminPassword
          args:
            - /app/docker/prod/web
          volumeMounts:
@@ -84,12 +84,12 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: OPENPROJECT_SEED_ADMIN_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: adminPassword
                  key: apps.openproject.adminPassword
          args:
            - bash
            - /app/docker/prod/wait-for-db
@@ -132,7 +132,7 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: openproject-secrets
                  key: dbPassword
                  key: apps.openproject.dbPassword
            - name: "OPENPROJECT_GOOD_JOB_QUEUES"
              value: ""
          volumeMounts:
@@ -44,7 +44,7 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
                  key: apps.postgres.password
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
@@ -73,5 +73,5 @@ spec:
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
                  key: apps.postgres.password
      restartPolicy: Never
@@ -21,4 +21,13 @@ spec:
          env:
            - name: TZ
              value: "{{ .apps.redis.timezone }}"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis-secrets
                  key: apps.redis.password
          command:
            - redis-server
            - --requirepass
            - $(REDIS_PASSWORD)
      restartPolicy: Always
@@ -7,3 +7,5 @@ defaultConfig:
  image: redis:alpine
  timezone: UTC
  port: 6379
requiredSecrets:
  - apps.redis.password
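Kubernetes expands $(REDIS_PASSWORD) in the container args from the env var defined just above, so the password itself never appears in the manifest. A hedged check that auth works, assuming the Deployment is named redis:

    # Authenticate against the password-protected server and expect PONG
    REDIS_PASSWORD=$(kubectl get secret -n redis redis-secrets \
      -o jsonpath='{.data.apps\.redis\.password}' | base64 -d)
    kubectl exec -n redis deploy/redis -- redis-cli -a "$REDIS_PASSWORD" ping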
bin/backup
@@ -1,23 +0,0 @@
#!/bin/bash
# Simple backup script for your personal cloud
# This is a placeholder for future implementation

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

BACKUP_DIR="${PROJECT_DIR}/backups/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"

# Back up Kubernetes resources
kubectl get all -A -o yaml > "$BACKUP_DIR/all-resources.yaml"
kubectl get secrets -A -o yaml > "$BACKUP_DIR/secrets.yaml"
kubectl get configmaps -A -o yaml > "$BACKUP_DIR/configmaps.yaml"

# Back up persistent volumes
# TODO: Add logic to back up persistent volume data

echo "Backup completed: $BACKUP_DIR"
@@ -1,85 +0,0 @@
#!/usr/bin/env bash
# This script generates config.env and secrets.env files for an app
# by evaluating variables in the app's .env file and splitting them
# into regular config and secret variables based on the "# Secrets" marker
#
# Usage: bin/generate-config [app-name]

set -e

# Source environment variables from load-env.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_DIR="$(dirname "$SCRIPT_DIR")"
if [ -f "$REPO_DIR/load-env.sh" ]; then
  source "$REPO_DIR/load-env.sh"
fi

# Function to process a single app
process_app() {
  local APP_NAME="$1"
  local APP_DIR="$APPS_DIR/$APP_NAME"
  local ENV_FILE="$APP_DIR/config/.env"
  local CONFIG_FILE="$APP_DIR/config/config.env"
  local SECRETS_FILE="$APP_DIR/config/secrets.env"

  # Check if the app exists
  if [ ! -d "$APP_DIR" ]; then
    echo "Error: App '$APP_NAME' not found"
    return 1
  fi

  # Check if the .env file exists
  if [ ! -f "$ENV_FILE" ]; then
    echo "Warning: Environment file not found: $ENV_FILE"
    return 0
  fi

  # Process the .env file
  echo "Generating config files for $APP_NAME..."

  # Create temporary files for processed content
  local TMP_FILE="$APP_DIR/config/processed.env"

  # Process the file with envsubst to expand variables
  envsubst < "$ENV_FILE" > $TMP_FILE

  # Initialize header for output files
  echo "# Generated by \`generate-config\` on $(date)" > "$CONFIG_FILE"
  echo "# Generated by \`generate-config\` on $(date)" > "$SECRETS_FILE"

  # Find the line number of the "# Secrets" marker
  local SECRETS_LINE=$(grep -n "^# Secrets" $TMP_FILE | cut -d':' -f1)

  if [ -n "$SECRETS_LINE" ]; then
    # Extract non-comment lines with "=" before the "# Secrets" marker
    head -n $((SECRETS_LINE - 1)) $TMP_FILE | grep -v "^#" | grep "=" >> "$CONFIG_FILE"

    # Extract non-comment lines with "=" after the "# Secrets" marker
    tail -n +$((SECRETS_LINE + 1)) $TMP_FILE | grep -v "^#" | grep "=" >> "$SECRETS_FILE"
  else
    # No secrets marker found, put everything in config
    grep -v "^#" $TMP_FILE | grep "=" >> "$CONFIG_FILE"
  fi

  # Clean up
  rm -f "$TMP_FILE"

  echo "Generated:"
  echo " - $CONFIG_FILE"
  echo " - $SECRETS_FILE"
}

# Process all apps or specific app
if [ $# -lt 1 ]; then
  # No app name provided - process all apps
  for app_dir in "$APPS_DIR"/*; do
    if [ -d "$app_dir" ]; then
      APP_NAME="$(basename "$app_dir")"
      process_app "$APP_NAME"
    fi
  done
  exit 0
fi

APP_NAME="$1"
process_app "$APP_NAME"
@@ -1,67 +0,0 @@
#!/bin/bash

# This script installs the local CA certificate on Ubuntu systems to avoid
# certificate warnings in browsers when accessing internal cloud services.

# Set up error handling
set -e

# Define colors for better readability
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color

CA_DIR="/home/payne/repos/cloud.payne.io-setup/ca"
CA_FILE="$CA_DIR/ca.crt"
TARGET_DIR="/usr/local/share/ca-certificates"
TARGET_FILE="cloud-payne-local-ca.crt"

echo -e "${BLUE}=== Installing Local CA Certificate on Ubuntu ===${NC}"
echo

# Check if CA file exists
if [ ! -f "$CA_FILE" ]; then
  echo -e "${RED}CA certificate not found at $CA_FILE${NC}"
  echo -e "${YELLOW}Please run the create-local-ca script first:${NC}"
  echo -e "${BLUE}./bin/create-local-ca${NC}"
  exit 1
fi

# Copy to the system certificate directory
echo -e "${YELLOW}Copying CA certificate to $TARGET_DIR/$TARGET_FILE...${NC}"
sudo cp "$CA_FILE" "$TARGET_DIR/$TARGET_FILE"

# Update the CA certificates
echo -e "${YELLOW}Updating system CA certificates...${NC}"
sudo update-ca-certificates

# Update browsers' CA store (optional, for Firefox)
if [ -d "$HOME/.mozilla" ]; then
  echo -e "${YELLOW}You may need to manually import the certificate in Firefox:${NC}"
  echo -e "1. Open Firefox"
  echo -e "2. Go to Preferences > Privacy & Security > Certificates"
  echo -e "3. Click 'View Certificates' > 'Authorities' tab"
  echo -e "4. Click 'Import' and select $CA_FILE"
  echo -e "5. Check 'Trust this CA to identify websites' and click OK"
fi

# Check popular browsers
if command -v google-chrome &> /dev/null; then
  echo -e "${YELLOW}For Chrome, the system-wide certificate should now be recognized${NC}"
  echo -e "${YELLOW}You may need to restart the browser${NC}"
fi

echo
echo -e "${GREEN}=== CA Certificate Installation Complete ===${NC}"
echo
echo -e "${YELLOW}System-wide CA certificate has been installed.${NC}"
echo -e "${YELLOW}You should now be able to access the Kubernetes Dashboard without certificate warnings:${NC}"
echo -e "${BLUE}https://kubernetes-dashboard.in.cloud.payne.io${NC}"
echo
echo -e "${YELLOW}If you still see certificate warnings, try:${NC}"
echo "1. Restart your browser"
echo "2. Clear your browser's cache and cookies"
echo "3. If using a non-standard browser, you may need to import the certificate manually"
echo
bin/wild-app-add
@@ -56,7 +56,7 @@ fi

CONFIG_FILE="${WC_HOME}/config.yaml"
if [ ! -f "${CONFIG_FILE}" ]; then
  echo "Creating config file at ${CONFIG_FILE}"
  echo "Creating config file at '${CONFIG_FILE}'."
  echo "# Wild Cloud Configuration" > "${CONFIG_FILE}"
  echo "# This file contains app configurations and should be committed to git" >> "${CONFIG_FILE}"
  echo "" >> "${CONFIG_FILE}"
@@ -64,7 +64,7 @@ fi

SECRETS_FILE="${WC_HOME}/secrets.yaml"
if [ ! -f "${SECRETS_FILE}" ]; then
  echo "Creating secrets file at ${SECRETS_FILE}"
  echo "Creating secrets file at '${SECRETS_FILE}'."
  echo "# Wild Cloud Secrets Configuration" > "${SECRETS_FILE}"
  echo "# This file contains sensitive data and should NOT be committed to git" >> "${SECRETS_FILE}"
  echo "# Add this file to your .gitignore" >> "${SECRETS_FILE}"
@@ -74,8 +74,8 @@ fi
# Check if app is cached, if not fetch it first
CACHE_APP_DIR="${WC_HOME}/.wildcloud/cache/apps/${APP_NAME}"
if [ ! -d "${CACHE_APP_DIR}" ]; then
  echo "Cache directory for app '${APP_NAME}' not found at ${CACHE_APP_DIR}"
  echo "Please fetch the app first using 'wild-app-fetch ${APP_NAME}'"
  echo "Cache directory for app '${APP_NAME}' not found at '${CACHE_APP_DIR}'."
  echo "Please fetch the app first using 'wild-app-fetch ${APP_NAME}'."
  exit 1
fi
if [ ! -d "${CACHE_APP_DIR}" ]; then
@@ -89,166 +89,116 @@ fi

APPS_DIR="${WC_HOME}/apps"
if [ ! -d "${APPS_DIR}" ]; then
  echo "Creating apps directory at ${APPS_DIR}"
  echo "Creating apps directory at '${APPS_DIR}'."
  mkdir -p "${APPS_DIR}"
fi

DEST_APP_DIR="${WC_HOME}/apps/${APP_NAME}"
if [ -d "${DEST_APP_DIR}" ]; then
  if [ "${UPDATE}" = true ]; then
    echo "Updating app '${APP_NAME}'"
    echo "Updating app '${APP_NAME}'."
    rm -rf "${DEST_APP_DIR}"
  else
    echo "Warning: Destination directory ${DEST_APP_DIR} already exists"
    echo "Warning: Destination directory ${DEST_APP_DIR} already exists."
    read -p "Do you want to overwrite it? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
      echo "Configuration cancelled"
      echo "Configuration cancelled."
      exit 1
    fi
    rm -rf "${DEST_APP_DIR}"
  fi
else
  echo "Adding app '${APP_NAME}' to '${DEST_APP_DIR}'."
fi
mkdir -p "${DEST_APP_DIR}"

echo "Adding app '${APP_NAME}' from cache to ${DEST_APP_DIR}"

# Step 1: Copy only manifest.yaml from cache first
MANIFEST_FILE="${CACHE_APP_DIR}/manifest.yaml"
if [ -f "${MANIFEST_FILE}" ]; then
  echo "Copying manifest.yaml from cache"
  cp "${MANIFEST_FILE}" "${DEST_APP_DIR}/manifest.yaml"
  # manifest.yaml is allowed to have gomplate variables in the defaultConfig and requiredSecrets sections.
  # We need to use gomplate to process these variables before using yq.
  echo "Copying app manifest from cache."
  DEST_MANIFEST="${DEST_APP_DIR}/manifest.yaml"
  if [ -f "${SECRETS_FILE}" ]; then
    gomplate_cmd="gomplate -c .=${CONFIG_FILE} -c secrets=${SECRETS_FILE} -f ${MANIFEST_FILE} -o ${DEST_MANIFEST}"
  else
    gomplate_cmd="gomplate -c .=${CONFIG_FILE} -f ${MANIFEST_FILE} -o ${DEST_MANIFEST}"
  fi
  if ! eval "${gomplate_cmd}"; then
    echo "Error processing manifest.yaml with gomplate"
    exit 1
  fi
else
  echo "Warning: manifest.yaml not found in cache for app '${APP_NAME}'"
  echo "Warning: App manifest not found in cache."
  exit 1
fi

# Step 2: Add missing config and secret values based on manifest
echo "Processing configuration and secrets from manifest.yaml"

# Check if the app section exists in config.yaml, if not create it
if ! yq eval ".apps.${APP_NAME}" "${CONFIG_FILE}" >/dev/null 2>&1; then
  yq eval ".apps.${APP_NAME} = {}" -i "${CONFIG_FILE}"
fi

# Extract defaultConfig from manifest.yaml and merge into config.yaml
if yq eval '.defaultConfig' "${DEST_APP_DIR}/manifest.yaml" | grep -q -v '^null$'; then
  echo "Merging defaultConfig from manifest.yaml into .wildcloud/config.yaml"
# Check if apps section exists in the secrets.yaml, if not create it
if ! yq eval ".apps" "${SECRETS_FILE}" >/dev/null 2>&1; then
  yq eval ".apps = {}" -i "${SECRETS_FILE}"
fi

# Merge processed defaultConfig into the app config, preserving existing values
# This preserves existing configuration values while adding missing defaults
if yq eval '.defaultConfig' "${DEST_MANIFEST}" | grep -q -v '^null$'; then

  # Check if the app config already exists
  if yq eval ".apps.${APP_NAME}" "${CONFIG_FILE}" | grep -q '^null$'; then
    yq eval ".apps.${APP_NAME} = {}" -i "${CONFIG_FILE}"
  fi
  # Extract defaultConfig from the processed manifest and merge with existing app config
  # The * operator merges objects, with the right side taking precedence for conflicting keys
  # So (.apps.${APP_NAME} // {}) preserves existing values, defaultConfig adds missing ones
  temp_default_config=$(mktemp)
  yq eval '.defaultConfig' "${DEST_MANIFEST}" > "$temp_default_config"
  yq eval ".apps.${APP_NAME} = load(\"$temp_default_config\") * (.apps.${APP_NAME} // {})" -i "${CONFIG_FILE}"
  rm "$temp_default_config"

  # Merge defaultConfig into the app config, preserving nested structure
  # This preserves the nested structure for objects like resources.requests.memory
  temp_manifest=$(mktemp)
  yq eval '.defaultConfig' "${DEST_APP_DIR}/manifest.yaml" > "$temp_manifest"
  yq eval ".apps.${APP_NAME} = (.apps.${APP_NAME} // {}) * load(\"$temp_manifest\")" -i "${CONFIG_FILE}"
  rm "$temp_manifest"

  # Process template variables in the merged config
  echo "Processing template variables in app config"
  temp_config=$(mktemp)

  # Build gomplate command with config context
  gomplate_cmd="gomplate -c .=${CONFIG_FILE}"

  # Add secrets context if secrets.yaml exists
  if [ -f "${SECRETS_FILE}" ]; then
    gomplate_cmd="${gomplate_cmd} -c secrets=${SECRETS_FILE}"
  fi

  # Process the entire config file through gomplate to resolve template variables
  ${gomplate_cmd} -f "${CONFIG_FILE}" > "$temp_config"
  mv "$temp_config" "${CONFIG_FILE}"

  echo "Merged defaultConfig for app '${APP_NAME}'"
  echo "Merged default configuration from app manifest into '${CONFIG_FILE}'."
  # Remove defaultConfig from the copied manifest since it's now in config.yaml.
  yq eval 'del(.defaultConfig)' -i "${DEST_MANIFEST}"
fi

# Scaffold required secrets into .wildcloud/secrets.yaml if they don't exist
if yq eval '.requiredSecrets' "${DEST_APP_DIR}/manifest.yaml" | grep -q -v '^null$'; then
  echo "Scaffolding required secrets for app '${APP_NAME}'"
if yq eval '.requiredSecrets' "${DEST_MANIFEST}" | grep -q -v '^null$'; then

  # Ensure .wildcloud/secrets.yaml exists
  if [ ! -f "${SECRETS_FILE}" ]; then
    echo "Creating secrets file at '${SECRETS_FILE}'"
    echo "# Wild Cloud Secrets Configuration" > "${SECRETS_FILE}"
    echo "# This file contains sensitive data and should NOT be committed to git" >> "${SECRETS_FILE}"
    echo "# Add this file to your .gitignore" >> "${SECRETS_FILE}"
    echo "" >> "${SECRETS_FILE}"
  fi

  # Check if apps section exists, if not create it
  if ! yq eval ".apps" "${SECRETS_FILE}" >/dev/null 2>&1; then
    yq eval ".apps = {}" -i "${SECRETS_FILE}"
  fi

  # Check if app section exists, if not create it
  if ! yq eval ".apps.${APP_NAME}" "${SECRETS_FILE}" >/dev/null 2>&1; then
    yq eval ".apps.${APP_NAME} = {}" -i "${SECRETS_FILE}"
  fi

  # Add dummy values for each required secret if not already present
  yq eval '.requiredSecrets[]' "${DEST_APP_DIR}/manifest.yaml" | while read -r secret_path; do
  # Add random values for each required secret if not already present
  while read -r secret_path; do
    current_value=$(yq eval ".${secret_path} // \"null\"" "${SECRETS_FILE}")

    if [ "${current_value}" = "null" ]; then
      echo "Adding random secret: ${secret_path}"
      random_secret=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 6)
      random_secret=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-32)
      yq eval ".${secret_path} = \"${random_secret}\"" -i "${SECRETS_FILE}"
    fi
  done

  echo "Required secrets scaffolded for app '${APP_NAME}'"
  done < <(yq eval '.requiredSecrets[]' "${DEST_MANIFEST}")
  echo "Required secrets declared in app manifest added to '${SECRETS_FILE}'."
fi

# Step 3: Copy and compile all other files from cache to app directory
echo "Copying and compiling remaining files from cache"
echo "Copying and compiling remaining files from cache."

# Function to process a file with gomplate if it's a YAML file
process_file() {
  local src_file="$1"
  local dest_file="$2"
cp -r "${CACHE_APP_DIR}/." "${DEST_APP_DIR}/"
find "${DEST_APP_DIR}" -type f | while read -r dest_file; do
  rel_path="${dest_file#${DEST_APP_DIR}/}"

  echo "Processing file: ${dest_file}"

  # Build gomplate command with config context (enables .config shorthand)
  gomplate_cmd="gomplate -c .=${CONFIG_FILE}"

  # Add secrets context if secrets.yaml exists (enables .secrets shorthand)
  if [ -f "${SECRETS_FILE}" ]; then
    gomplate_cmd="${gomplate_cmd} -c secrets=${SECRETS_FILE}"
  fi

  # Execute gomplate with the file
  ${gomplate_cmd} -f "${src_file}" > "${dest_file}"
}

# Copy directory structure and process files (excluding manifest.yaml which was already copied)
find "${CACHE_APP_DIR}" -type d | while read -r src_dir; do
  rel_path="${src_dir#${CACHE_APP_DIR}}"
  rel_path="${rel_path#/}" # Remove leading slash if present
  if [ -n "${rel_path}" ]; then
    mkdir -p "${DEST_APP_DIR}/${rel_path}"
  fi
done

find "${CACHE_APP_DIR}" -type f | while read -r src_file; do
  rel_path="${src_file#${CACHE_APP_DIR}}"
  rel_path="${rel_path#/}" # Remove leading slash if present

  # Skip manifest.yaml since it was already copied in step 1
  if [ "${rel_path}" = "manifest.yaml" ]; then
    continue
  fi

  dest_file="${DEST_APP_DIR}/${rel_path}"

  # Ensure destination directory exists
  dest_dir=$(dirname "${dest_file}")
  mkdir -p "${dest_dir}"

  process_file "${src_file}" "${dest_file}"
  temp_file=$(mktemp)
  gomplate -c .=${CONFIG_FILE} -c secrets=${SECRETS_FILE} -f "${dest_file}" > "${temp_file}"
  mv "${temp_file}" "${dest_file}"
done

echo "Successfully added app '${APP_NAME}' with template processing"
echo "Added '${APP_NAME}'."
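The yq merge is the heart of the config handling here: load(defaults) * existing keeps operator overrides while filling in missing defaults, since the right-hand side of * wins on conflicting keys. A small standalone sketch with hypothetical values:

    # defaults.yaml: {port: 4000, storage: 1Gi}
    # config.yaml:   {apps: {keila: {storage: 5Gi}}}
    yq eval '.apps.keila = load("defaults.yaml") * (.apps.keila // {})' -i config.yaml
    # Result: apps.keila is {port: 4000, storage: 5Gi} - the existing 5Gi survives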
bin/wild-app-backup (new executable file)
@@ -0,0 +1,379 @@
#!/usr/bin/env bash
set -Eeuo pipefail

# wild-app-backup - Generic backup script for wild-cloud apps
# Usage: wild-app-backup <app-name> [--all]

# --- Initialize Wild Cloud environment ---------------------------------------
if [ -z "${WC_ROOT:-}" ]; then
  echo "WC_ROOT is not set." >&2
  exit 1
else
  source "${WC_ROOT}/scripts/common.sh"
  init_wild_env
fi

# --- Configuration ------------------------------------------------------------
get_staging_dir() {
  if wild-config cloud.backup.staging --check; then
    wild-config cloud.backup.staging
  else
    echo "Staging directory is not set. Configure 'cloud.backup.staging' in config.yaml." >&2
    exit 1
  fi
}

# --- Helpers ------------------------------------------------------------------
require_k8s() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found." >&2
    exit 1
  fi
}

require_yq() {
  if ! command -v yq >/dev/null 2>&1; then
    echo "yq not found. Required for parsing manifest.yaml files." >&2
    exit 1
  fi
}

get_timestamp() {
  date -u +'%Y%m%dT%H%M%SZ'
}

# --- App Discovery ------------------------------------------------------------
discover_database_deps() {
  local app_name="$1"
  local manifest_file="${WC_HOME}/apps/${app_name}/manifest.yaml"

  if [[ -f "$manifest_file" ]]; then
    yq eval '.requires[].name' "$manifest_file" 2>/dev/null | grep -E '^(postgres|mysql|redis)$' || true
  fi
}

discover_app_pvcs() {
  local app_name="$1"
  kubectl get pvc -n "$app_name" -l "app=$app_name" --no-headers -o custom-columns=":metadata.name" 2>/dev/null || true
}

get_app_pods() {
  local app_name="$1"
  kubectl get pods -n "$app_name" -l "app=$app_name" \
    -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' 2>/dev/null | \
    tr ' ' '\n' | head -1 || true
}

discover_pvc_mount_paths() {
  local app_name="$1" pvc_name="$2"

  # Find the volume name that uses this PVC
  local volume_name
  volume_name=$(kubectl get deploy -n "$app_name" -l "app=$app_name" \
    -o jsonpath='{.items[*].spec.template.spec.volumes[?(@.persistentVolumeClaim.claimName=="'$pvc_name'")].name}' 2>/dev/null | awk 'NR==1{print; exit}')

  if [[ -n "$volume_name" ]]; then
    # Find the mount path for this volume (get first mount path)
    local mount_path
    mount_path=$(kubectl get deploy -n "$app_name" -l "app=$app_name" \
      -o jsonpath='{.items[*].spec.template.spec.containers[*].volumeMounts[?(@.name=="'$volume_name'")].mountPath}' 2>/dev/null | \
      tr ' ' '\n' | head -1)

    if [[ -n "$mount_path" ]]; then
      echo "$mount_path"
      return 0
    fi
  fi

  # No mount path found
  return 1
}

# --- Database Backup Functions -----------------------------------------------
backup_postgres_database() {
  local app_name="$1"
  local backup_dir="$2"
  local timestamp="$3"
  local db_name="${app_name}"

  local pg_ns="postgres"
  local pg_deploy="postgres-deployment"
  local db_superuser="postgres"

  echo "Backing up PostgreSQL database '$db_name'..." >&2

  # Check if postgres is available
  if ! kubectl get pods -n "$pg_ns" >/dev/null 2>&1; then
    echo "PostgreSQL namespace '$pg_ns' not accessible. Skipping database backup." >&2
    return 1
  fi

  local db_dump="${backup_dir}/database_${timestamp}.dump"
  local db_globals="${backup_dir}/globals_${timestamp}.sql"

  # Database dump (custom format, compressed)
  if ! kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc \
    "pg_dump -U ${db_superuser} -Fc -Z 9 ${db_name}" > "$db_dump"
  then
    echo "Database dump failed for '$app_name'." >&2
    return 1
  fi

  # Verify dump integrity
  # if ! kubectl exec -i -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "pg_restore -l >/dev/null" < "$db_dump"; then
  #   echo "Database dump integrity check failed for '$app_name'." >&2
  #   return 1
  # fi

  # Dump globals (roles, permissions)
  if ! kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc \
    "pg_dumpall -U ${db_superuser} -g" > "$db_globals"
  then
    echo "Globals dump failed for '$app_name'." >&2
    return 1
  fi

  echo " Database dump: $db_dump" >&2
  echo " Globals dump: $db_globals" >&2

  # Return paths for manifest generation
  echo "$db_dump $db_globals"
}

backup_mysql_database() {
  local app_name="$1"
  local backup_dir="$2"
  local timestamp="$3"
  local db_name="${app_name}"

  local mysql_ns="mysql"
  local mysql_deploy="mysql-deployment"
  local mysql_user="root"

  echo "Backing up MySQL database '$db_name'..." >&2

  if ! kubectl get pods -n "$mysql_ns" >/dev/null 2>&1; then
    echo "MySQL namespace '$mysql_ns' not accessible. Skipping database backup." >&2
    return 1
  fi

  local db_dump="${backup_dir}/database_${timestamp}.sql"

  # Get MySQL root password from secret
  local mysql_password
  if mysql_password=$(kubectl get secret -n "$mysql_ns" mysql-secret -o jsonpath='{.data.password}' 2>/dev/null | base64 -d); then
    # MySQL dump with password
    if ! kubectl exec -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
      "mysqldump -u${mysql_user} -p'${mysql_password}' --single-transaction --routines --triggers ${db_name}" > "$db_dump"
    then
      echo "MySQL dump failed for '$app_name'." >&2
      return 1
    fi
  else
    echo "Could not retrieve MySQL password. Skipping database backup." >&2
    return 1
  fi

  echo " Database dump: $db_dump" >&2
  echo "$db_dump"
}

# --- PVC Backup Functions ----------------------------------------------------
backup_pvc() {
  local app_name="$1"
  local pvc_name="$2"
  local backup_dir="$3"
  local timestamp="$4"

  echo "Backing up PVC '$pvc_name' from namespace '$app_name'..." >&2

  # Get a running pod that actually uses this specific PVC
  local app_pod
  # First try to find a pod that has this exact PVC volume mounted
  local pvc_volume_id=$(kubectl get pvc -n "$app_name" "$pvc_name" -o jsonpath='{.spec.volumeName}' 2>/dev/null)
  if [[ -n "$pvc_volume_id" ]]; then
    # Look for a pod that has a mount from this specific volume
    app_pod=$(kubectl get pods -n "$app_name" -l "app=$app_name" -o json 2>/dev/null | \
      jq -r '.items[] | select(.status.phase=="Running") | select(.spec.volumes[]?.persistentVolumeClaim.claimName=="'$pvc_name'") | .metadata.name' | head -1)
  fi

  # Fallback to any running pod
  if [[ -z "$app_pod" ]]; then
    app_pod=$(get_app_pods "$app_name")
  fi

  if [[ -z "$app_pod" ]]; then
    echo "No running pods found for app '$app_name'. Skipping PVC backup." >&2
    return 1
  fi

  echo "Using pod '$app_pod' for PVC backup" >&2

  # Discover mount path for this PVC
  local mount_path
  mount_path=$(discover_pvc_mount_paths "$app_name" "$pvc_name" | awk 'NR==1{print; exit}')

  if [[ -z "$mount_path" ]]; then
    echo "Could not determine mount path for PVC '$pvc_name'. Trying to detect..." >&2
    # Try to find any volume mount that might be the PVC by looking at df output
    mount_path=$(kubectl exec -n "$app_name" "$app_pod" -- sh -c "df | grep longhorn | awk '{print \$6}' | head -1" 2>/dev/null)
    if [[ -z "$mount_path" ]]; then
      mount_path="/data" # Final fallback
    fi
    echo "Using detected/fallback mount path: $mount_path" >&2
  fi

  local pvc_backup_dir="${backup_dir}/${pvc_name}"
  mkdir -p "$pvc_backup_dir"

  # Stream tar directly from pod to staging directory for restic deduplication
  local parent_dir=$(dirname "$mount_path")
  local dir_name=$(basename "$mount_path")

  echo " Streaming PVC data directly to staging..." >&2
  if kubectl exec -n "$app_name" "$app_pod" -- tar -C "$parent_dir" -cf - "$dir_name" | tar -xf - -C "$pvc_backup_dir" 2>/dev/null; then
    echo " PVC data streamed successfully" >&2
  else
    echo "PVC backup failed for '$pvc_name' in '$app_name'." >&2
    return 1
  fi

  echo " PVC backup directory: $pvc_backup_dir" >&2
  echo "$pvc_backup_dir"
}

# --- Main Backup Function ----------------------------------------------------
backup_app() {
  local app_name="$1"
  local staging_dir="$2"

  echo "=========================================="
  echo "Starting backup of app: $app_name"
  echo "=========================================="

  local timestamp
  timestamp=$(get_timestamp)

  local backup_dir="${staging_dir}/apps/${app_name}"

  # Clean up any existing backup files for this app
  if [[ -d "$backup_dir" ]]; then
    echo "Cleaning up existing backup files for '$app_name'..." >&2
    rm -rf "$backup_dir"
  fi
  mkdir -p "$backup_dir"

  local backup_files=()

  # Check if app has custom backup script first
  local custom_backup_script="${WC_HOME}/apps/${app_name}/backup.sh"
  if [[ -x "$custom_backup_script" ]]; then
    echo "Found custom backup script for '$app_name'. Running..."
    "$custom_backup_script"
    echo "Custom backup completed for '$app_name'."
    return 0
  fi

  # Generic backup based on manifest discovery
  local database_deps
  database_deps=$(discover_database_deps "$app_name")

  local pvcs
  pvcs=$(discover_app_pvcs "$app_name")

  if [[ -z "$database_deps" && -z "$pvcs" ]]; then
    echo "No databases or PVCs found for app '$app_name'. Nothing to backup." >&2
    return 0
  fi

  # Backup databases
  for db_type in $database_deps; do
    case "$db_type" in
      postgres)
        if db_files=$(backup_postgres_database "$app_name" "$backup_dir" "$timestamp"); then
          read -ra db_file_array <<< "$db_files"
          backup_files+=("${db_file_array[@]}")
        fi
        ;;
      mysql)
        if db_files=$(backup_mysql_database "$app_name" "$backup_dir" "$timestamp"); then
          backup_files+=("$db_files")
        fi
        ;;
      redis)
        echo "Redis backup not implemented yet. Skipping."
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Backup PVCs
|
||||
for pvc in $pvcs; do
|
||||
if pvc_file=$(backup_pvc "$app_name" "$pvc" "$backup_dir" "$timestamp"); then
|
||||
backup_files+=("$pvc_file")
|
||||
fi
|
||||
done
|
||||
|
||||
# Summary
|
||||
if [[ ${#backup_files[@]} -gt 0 ]]; then
|
||||
echo "----------------------------------------"
|
||||
echo "Backup completed for '$app_name'"
|
||||
echo "Files backed up:"
|
||||
printf ' - %s\n' "${backup_files[@]}"
|
||||
echo "----------------------------------------"
|
||||
else
|
||||
echo "No files were successfully backed up for '$app_name'." >&2
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# --- Main Script Logic -------------------------------------------------------
|
||||
main() {
|
||||
|
||||
if [[ $# -eq 0 || "$1" == "--help" || "$1" == "-h" ]]; then
|
||||
echo "Usage: $0 <app-name> [app-name2...] | --all"
|
||||
echo " $0 --list # List available apps"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
require_k8s
|
||||
require_yq
|
||||
|
||||
local staging_dir
|
||||
staging_dir=$(get_staging_dir)
|
||||
mkdir -p "$staging_dir"
|
||||
echo "Staging backups at: $staging_dir"
|
||||
|
||||
if [[ "$1" == "--list" ]]; then
|
||||
echo "Available apps:"
|
||||
find "${WC_HOME}/apps" -maxdepth 1 -type d -not -path "${WC_HOME}/apps" -exec basename {} \; | sort
|
||||
exit 0
|
||||
fi
|
||||
|
||||
if [[ "$1" == "--all" ]]; then
|
||||
echo "Backing up all apps..."
|
||||
local apps
|
||||
mapfile -t apps < <(find "${WC_HOME}/apps" -maxdepth 1 -type d -not -path "${WC_HOME}/apps" -exec basename {} \;)
|
||||
for app in "${apps[@]}"; do
|
||||
if ! backup_app "$app" "$staging_dir"; then
|
||||
echo "Backup failed for '$app', continuing with next app..." >&2
|
||||
fi
|
||||
done
|
||||
else
|
||||
# Backup specific apps
|
||||
local failed_apps=()
|
||||
for app in "$@"; do
|
||||
if ! backup_app "$app" "$staging_dir"; then
|
||||
failed_apps+=("$app")
|
||||
fi
|
||||
done
|
||||
|
||||
if [[ ${#failed_apps[@]} -gt 0 ]]; then
|
||||
echo "The following app backups failed: ${failed_apps[*]}" >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
echo "All backups completed successfully."
|
||||
}
|
||||
|
||||
main "$@"
|
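For context, a typical invocation of the staging script above (a sketch; the app names are examples):

```sh
# Stage backups for specific apps
wild-app-backup immich discourse

# Stage every app found under $WC_HOME/apps
wild-app-backup --all
```

Note that this script only stages data locally; uploading the staged files to restic is handled by `wild-backup` (below).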
@@ -79,42 +79,35 @@ deploy_secrets() {

    echo "Deploying secrets for app '${app_name}' in namespace '${namespace}'"

    # Create secret data
    # Gather data for app secret
    local secret_data=""
    while IFS= read -r secret_path; do
        # Get the secret value using full path
        secret_value=$(yq eval ".${secret_path} // \"\"" "${SECRETS_FILE}")

        # Extract just the key name for the Kubernetes secret (handle dotted paths)
        secret_key="${secret_path##*.}"

        if [ -n "${secret_value}" ] && [ "${secret_value}" != "null" ]; then
            if [[ "${secret_value}" == CHANGE_ME_* ]]; then
                echo "Warning: Secret '${secret_path}' for app '${app_name}' still has dummy value: ${secret_value}"
            fi
            secret_data="${secret_data} --from-literal=${secret_key}=${secret_value}"
            secret_data="${secret_data} --from-literal=${secret_path}=${secret_value}"
        else
            echo "Error: Required secret '${secret_path}' not found in ${SECRETS_FILE} for app '${app_name}'"
            exit 1
        fi
    done < <(yq eval '.requiredSecrets[]' "${manifest_file}")

    # Create the secret if we have data
    # Create/update app secret in cluster
    if [ -n "${secret_data}" ]; then
        echo "Creating/updating secret '${app_name}-secrets' in namespace '${namespace}'"
        if [ "${DRY_RUN:-}" = "--dry-run=client" ]; then
            echo "DRY RUN: kubectl create secret generic ${app_name}-secrets ${secret_data} --namespace=${namespace} --dry-run=client -o yaml"
        else
            # Delete existing secret if it exists, then create new one
            kubectl delete secret "${app_name}-secrets" --namespace="${namespace}" --ignore-not-found=true
            kubectl create secret generic "${app_name}-secrets" ${secret_data} --namespace="${namespace}"
        fi
    fi
}

# Step 1: Create namespaces first (dependencies and main app)
# Step 1: Create namespaces first
echo "Creating namespaces..."
MANIFEST_FILE="apps/${APP_NAME}/manifest.yaml"

# Create dependency namespaces.
if [ -f "${MANIFEST_FILE}" ]; then
    if yq eval '.requires' "${MANIFEST_FILE}" | grep -q -v '^null$'; then
        yq eval '.requires[].name' "${MANIFEST_FILE}" | while read -r required_app; do
@@ -152,32 +145,6 @@ if [ -f "apps/${APP_NAME}/namespace.yaml" ]; then
    wild-cluster-secret-copy cert-manager:wildcard-wild-cloud-tls "$NAMESPACE" || echo "Warning: Failed to copy external wildcard certificate"
fi

# Step 2: Deploy secrets (dependencies and main app)
echo "Deploying secrets..."
if [ -f "${MANIFEST_FILE}" ]; then
    if yq eval '.requires' "${MANIFEST_FILE}" | grep -q -v '^null$'; then
        echo "Deploying secrets for required dependencies..."
        yq eval '.requires[].name' "${MANIFEST_FILE}" | while read -r required_app; do
            if [ -z "${required_app}" ] || [ "${required_app}" = "null" ]; then
                echo "Warning: Empty or null dependency found, skipping"
                continue
            fi

            if [ ! -d "apps/${required_app}" ]; then
                echo "Error: Required dependency '${required_app}' not found in apps/ directory"
                exit 1
            fi

            echo "Deploying secrets for dependency: ${required_app}"
            # Deploy secrets in dependency's own namespace
            deploy_secrets "${required_app}"
            # Also deploy dependency secrets in consuming app's namespace
            echo "Copying dependency secrets to app namespace: ${APP_NAME}"
            deploy_secrets "${required_app}" "${APP_NAME}"
        done
    fi
fi

# Deploy secrets for this app
deploy_secrets "${APP_NAME}"
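The `deploy_secrets` loop above is driven by dotted paths listed in the app manifest. A hypothetical `manifest.yaml` fragment showing the shape it expects (the names are illustrative, not from the repo):

```yaml
# apps/myapp/manifest.yaml (illustrative)
requires:
  - name: postgres
requiredSecrets:
  - apps.myapp.dbPassword    # looked up in secrets.yaml; the secret key is derived from this path
  - apps.myapp.adminToken
```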
602
bin/wild-app-restore
Executable file
@@ -0,0 +1,602 @@
#!/usr/bin/env bash
set -Eeuo pipefail

# wild-app-restore - Generic restore script for wild-cloud apps
# Usage: wild-app-restore <app-name> [snapshot-id] [--db-only|--pvc-only] [--skip-globals]

# --- Initialize Wild Cloud environment ---------------------------------------
if [ -z "${WC_ROOT:-}" ]; then
    echo "WC_ROOT is not set." >&2
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

# --- Configuration ------------------------------------------------------------
get_staging_dir() {
    if wild-config cloud.backup.staging --check; then
        wild-config cloud.backup.staging
    else
        echo "Staging directory is not set. Configure 'cloud.backup.staging' in config.yaml." >&2
        exit 1
    fi
}

get_restic_config() {
    if wild-config cloud.backup.root --check; then
        export RESTIC_REPOSITORY="$(wild-config cloud.backup.root)"
    else
        echo "WARNING: Could not get cloud backup root." >&2
        exit 1
    fi

    if wild-secret cloud.backupPassword --check; then
        export RESTIC_PASSWORD="$(wild-secret cloud.backupPassword)"
    else
        echo "WARNING: Could not get cloud backup secret." >&2
        exit 1
    fi
}

# --- Helpers ------------------------------------------------------------------
require_k8s() {
    if ! command -v kubectl >/dev/null 2>&1; then
        echo "kubectl not found." >&2
        exit 1
    fi
}

require_yq() {
    if ! command -v yq >/dev/null 2>&1; then
        echo "yq not found. Required for parsing manifest.yaml files." >&2
        exit 1
    fi
}

require_restic() {
    if ! command -v restic >/dev/null 2>&1; then
        echo "restic not found. Required for snapshot operations." >&2
        exit 1
    fi
}

show_help() {
    echo "Usage: $0 <app-name> [snapshot-id] [OPTIONS]"
    echo "Restore application data from restic snapshots"
    echo ""
    echo "Arguments:"
    echo "  app-name       Name of the application to restore"
    echo "  snapshot-id    Specific snapshot ID to restore (optional, uses latest if not provided)"
    echo ""
    echo "Options:"
    echo "  --db-only        Restore only database data"
    echo "  --pvc-only       Restore only PVC data"
    echo "  --skip-globals   Skip restoring database globals (roles, permissions)"
    echo "  --list           List available snapshots for the app"
    echo "  -h, --help       Show this help message"
    echo ""
    echo "Examples:"
    echo "  $0 discourse                    # Restore latest discourse snapshot (all data)"
    echo "  $0 discourse abc123 --db-only   # Restore specific snapshot, database only"
    echo "  $0 discourse --list             # List available discourse snapshots"
}

# --- App Discovery Functions (from wild-app-backup) --------------------------
discover_database_deps() {
    local app_name="$1"
    local manifest_file="${WC_HOME}/apps/${app_name}/manifest.yaml"

    if [[ -f "$manifest_file" ]]; then
        yq eval '.requires[].name' "$manifest_file" 2>/dev/null | grep -E '^(postgres|mysql|redis)$' || true
    fi
}

discover_app_pvcs() {
    local app_name="$1"
    kubectl get pvc -n "$app_name" -l "app=$app_name" --no-headers -o custom-columns=":metadata.name" 2>/dev/null || true
}

get_app_pods() {
    local app_name="$1"
    kubectl get pods -n "$app_name" -l "app=$app_name" \
        -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' 2>/dev/null | \
        tr ' ' '\n' | head -1 || true
}

# --- Restic Snapshot Functions -----------------------------------------------
list_app_snapshots() {
    local app_name="$1"
    echo "Available snapshots for app '$app_name':"
    restic snapshots --tag "$app_name" --json | jq -r '.[] | "\(.short_id)  \(.time)  \(.hostname)  \(.paths | join(" "))"' | \
        sort -k2 -r | head -20
}

get_latest_snapshot() {
    local app_name="$1"
    restic snapshots --tag "$app_name" --json | jq -r '.[0].short_id' 2>/dev/null || echo ""
}

restore_from_snapshot() {
    local app_name="$1"
    local snapshot_id="$2"
    local staging_dir="$3"

    local restore_dir="$staging_dir/restore/$app_name"
    mkdir -p "$restore_dir"

    # Progress output goes to stderr so the caller's command substitution
    # captures only the restore path printed at the end of this function.
    echo "Restoring snapshot $snapshot_id to $restore_dir..." >&2
    if ! restic restore "$snapshot_id" --target "$restore_dir" >&2; then
        echo "Failed to restore snapshot $snapshot_id" >&2
        return 1
    fi

    echo "$restore_dir"
}

# --- Database Restore Functions ----------------------------------------------
restore_postgres_database() {
    local app_name="$1"
    local restore_dir="$2"
    local skip_globals="$3"

    local pg_ns="postgres"
    local pg_deploy="postgres-deployment"
    local db_superuser="postgres"
    local db_name="$app_name"
    local db_role="$app_name"

    echo "Restoring PostgreSQL database '$db_name'..."

    # Check if postgres is available
    if ! kubectl get pods -n "$pg_ns" >/dev/null 2>&1; then
        echo "PostgreSQL namespace '$pg_ns' not accessible. Cannot restore database." >&2
        return 1
    fi

    # Find database dump file
    local db_dump
    db_dump=$(find "$restore_dir" -name "database_*.dump" -o -name "*_db_*.dump" | head -1)
    if [[ -z "$db_dump" ]]; then
        echo "No database dump found for '$app_name'" >&2
        return 1
    fi

    # Find globals file
    local globals_file
    globals_file=$(find "$restore_dir" -name "globals_*.sql" | head -1)

    # Helper functions for postgres operations
    pg_exec() {
        kubectl exec -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "$*"
    }

    pg_exec_i() {
        kubectl exec -i -n "$pg_ns" deploy/"$pg_deploy" -- bash -lc "$*"
    }

    # Restore globals first if available and not skipped
    if [[ "$skip_globals" != "true" && -n "$globals_file" && -f "$globals_file" ]]; then
        echo "Restoring database globals..."
        pg_exec_i "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres" < "$globals_file"
    fi

    # Ensure role exists
    pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
        DO \$\$
        BEGIN
            IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname='${db_role}') THEN
                CREATE ROLE ${db_role} LOGIN;
            END IF;
        END
        \$\$;\""

    # Terminate existing connections
    pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
        SELECT pg_terminate_backend(pid)
        FROM pg_stat_activity
        WHERE datname='${db_name}' AND pid <> pg_backend_pid();\""

    # Drop and recreate database
    pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"
        DROP DATABASE IF EXISTS ${db_name};
        CREATE DATABASE ${db_name} OWNER ${db_role};\""

    # Restore database from dump
    echo "Restoring database from $db_dump..."
    if ! pg_exec_i "pg_restore -v -j 4 -U ${db_superuser} --clean --if-exists --no-owner --role=${db_role} -d ${db_name}" < "$db_dump"; then
        echo "Database restore failed for '$app_name'" >&2
        return 1
    fi

    # Ensure proper ownership
    pg_exec "psql -v ON_ERROR_STOP=1 -U ${db_superuser} -d postgres -c \"ALTER DATABASE ${db_name} OWNER TO ${db_role};\""

    echo "Database restore completed for '$app_name'"
}

restore_mysql_database() {
    local app_name="$1"
    local restore_dir="$2"

    local mysql_ns="mysql"
    local mysql_deploy="mysql-deployment"
    local mysql_user="root"
    local db_name="$app_name"

    echo "Restoring MySQL database '$db_name'..."

    if ! kubectl get pods -n "$mysql_ns" >/dev/null 2>&1; then
        echo "MySQL namespace '$mysql_ns' not accessible. Cannot restore database." >&2
        return 1
    fi

    # Find database dump file
    local db_dump
    db_dump=$(find "$restore_dir" -name "database_*.sql" -o -name "*_db_*.sql" | head -1)
    if [[ -z "$db_dump" ]]; then
        echo "No database dump found for '$app_name'" >&2
        return 1
    fi

    # Get MySQL root password from secret
    local mysql_password
    if ! mysql_password=$(kubectl get secret -n "$mysql_ns" mysql-secret -o jsonpath='{.data.password}' 2>/dev/null | base64 -d); then
        echo "Could not retrieve MySQL password. Cannot restore database." >&2
        return 1
    fi

    # Drop and recreate database
    kubectl exec -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
        "mysql -u${mysql_user} -p'${mysql_password}' -e 'DROP DATABASE IF EXISTS ${db_name}; CREATE DATABASE ${db_name};'"

    # Restore database from dump
    echo "Restoring database from $db_dump..."
    if ! kubectl exec -i -n "$mysql_ns" deploy/"$mysql_deploy" -- bash -c \
        "mysql -u${mysql_user} -p'${mysql_password}' ${db_name}" < "$db_dump"; then
        echo "Database restore failed for '$app_name'" >&2
        return 1
    fi

    echo "Database restore completed for '$app_name'"
}

# --- PVC Restore Functions ---------------------------------------------------
scale_app() {
    local app_name="$1"
    local replicas="$2"

    echo "Scaling app '$app_name' to $replicas replicas..."

    # Find deployments for this app and scale them
    local deployments
    deployments=$(kubectl get deploy -n "$app_name" -l "app=$app_name" -o name 2>/dev/null || true)

    if [[ -z "$deployments" ]]; then
        echo "No deployments found for app '$app_name'" >&2
        return 1
    fi

    for deploy in $deployments; do
        kubectl scale "$deploy" -n "$app_name" --replicas="$replicas"
        if [[ "$replicas" -gt 0 ]]; then
            kubectl rollout status "$deploy" -n "$app_name"
        fi
    done
}

restore_app_pvc() {
    local app_name="$1"
    local pvc_name="$2"
    local restore_dir="$3"

    echo "Restoring PVC '$pvc_name' for app '$app_name'..."

    # Find the PVC backup directory in the restore directory
    local pvc_backup_dir
    pvc_backup_dir=$(find "$restore_dir" -type d -name "$pvc_name" | head -1)

    if [[ -z "$pvc_backup_dir" || ! -d "$pvc_backup_dir" ]]; then
        echo "No backup directory found for PVC '$pvc_name'" >&2
        return 1
    fi

    # Get the Longhorn volume name for this PVC
    local pv_name
    pv_name=$(kubectl get pvc -n "$app_name" "$pvc_name" -o jsonpath='{.spec.volumeName}')
    if [[ -z "$pv_name" ]]; then
        echo "Could not find PersistentVolume for PVC '$pvc_name'" >&2
        return 1
    fi

    local longhorn_volume
    longhorn_volume=$(kubectl get pv "$pv_name" -o jsonpath='{.spec.csi.volumeHandle}' 2>/dev/null)
    if [[ -z "$longhorn_volume" ]]; then
        echo "Could not find Longhorn volume for PV '$pv_name'" >&2
        return 1
    fi

    # Create safety snapshot before destructive restore
    local safety_snapshot="restore-safety-$(date +%s)"
    echo "Creating safety snapshot '$safety_snapshot' for volume '$longhorn_volume'..."

    kubectl apply -f - <<EOF
apiVersion: longhorn.io/v1beta2
kind: Snapshot
metadata:
  name: $safety_snapshot
  namespace: longhorn-system
  labels:
    app: wild-app-restore
    volume: $longhorn_volume
    pvc: $pvc_name
    original-app: $app_name
spec:
  volume: $longhorn_volume
EOF

    # Wait for snapshot to be ready
    echo "Waiting for safety snapshot to be ready..."
    local snapshot_timeout=60
    local elapsed=0
    while [[ $elapsed -lt $snapshot_timeout ]]; do
        local snapshot_ready
        snapshot_ready=$(kubectl get snapshot.longhorn.io -n longhorn-system "$safety_snapshot" -o jsonpath='{.status.readyToUse}' 2>/dev/null || echo "false")

        if [[ "$snapshot_ready" == "true" ]]; then
            echo "Safety snapshot created successfully"
            break
        fi

        sleep 2
        elapsed=$((elapsed + 2))
    done

    if [[ $elapsed -ge $snapshot_timeout ]]; then
        echo "Warning: Safety snapshot may not be ready, but proceeding with restore..."
    fi

    # Scale app down to avoid conflicts during restore
    scale_app "$app_name" 0

    # Wait for pods to terminate and PVC to be unmounted
    echo "Waiting for pods to terminate and PVC to be released..."
    sleep 10

    # Get the node where this Longhorn volume is available
    # ($pv_name was already resolved above)
    local target_node
    target_node=$(kubectl get pv "$pv_name" -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}' 2>/dev/null || \
        kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name | head -1)

    echo "Creating restore utility pod on node: $target_node"

    # Create temporary pod with node affinity and the PVC mounted.
    # Note: fsGroup is a pod-level security setting; runAsUser stays on the container.
    local temp_pod="restore-util-$(date +%s)"
    kubectl apply -n "$app_name" -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $temp_pod
  labels:
    app: restore-utility
spec:
  nodeSelector:
    kubernetes.io/hostname: $target_node
  securityContext:
    fsGroup: 0
  containers:
  - name: restore-util
    image: alpine:latest
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /restore-target
    securityContext:
      runAsUser: 0
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: $pvc_name
  restartPolicy: Never
  tolerations:
  - operator: Exists
EOF

    # Wait for pod to be ready with longer timeout
    echo "Waiting for restore utility pod to be ready..."
    if ! kubectl wait --for=condition=Ready pod/"$temp_pod" -n "$app_name" --timeout=120s; then
        echo "Restore utility pod failed to start. Checking status..."
        kubectl describe pod -n "$app_name" "$temp_pod"
        kubectl delete pod -n "$app_name" "$temp_pod" --force --grace-period=0 || true
        echo "ERROR: Restore failed. Safety snapshot '$safety_snapshot' has been preserved for manual recovery." >&2
        echo "To recover from safety snapshot, use: kubectl get snapshot.longhorn.io -n longhorn-system $safety_snapshot" >&2
        return 1
    fi

    echo "Clearing existing PVC data..."
    kubectl exec -n "$app_name" "$temp_pod" -- sh -c "rm -rf /restore-target/* /restore-target/.*" 2>/dev/null || true

    echo "Copying backup data to PVC..."
    # Use tar to stream data into the pod, preserving permissions
    if ! tar -C "$pvc_backup_dir" -cf - . | kubectl exec -i -n "$app_name" "$temp_pod" -- tar -C /restore-target -xf -; then
        echo "Failed to copy data to PVC. Cleaning up..." >&2
        kubectl delete pod -n "$app_name" "$temp_pod" --force --grace-period=0 || true
        echo "ERROR: Restore failed. Safety snapshot '$safety_snapshot' has been preserved for manual recovery." >&2
        echo "To recover from safety snapshot, use: kubectl get snapshot.longhorn.io -n longhorn-system $safety_snapshot" >&2
        return 1
    fi

    echo "Verifying restored data..."
    kubectl exec -n "$app_name" "$temp_pod" -- sh -c "ls -la /restore-target | head -10"

    # Clean up temporary pod
    kubectl delete pod -n "$app_name" "$temp_pod"

    # Scale app back up
    scale_app "$app_name" 1

    # Clean up safety snapshot if restore was successful
    echo "Cleaning up safety snapshot '$safety_snapshot'..."
    if kubectl delete snapshot.longhorn.io -n longhorn-system "$safety_snapshot" 2>/dev/null; then
        echo "Safety snapshot cleaned up successfully"
    else
        echo "Warning: Could not clean up safety snapshot '$safety_snapshot'. You may need to delete it manually."
    fi

    echo "PVC '$pvc_name' restore completed successfully"
}

# --- Main Restore Function ---------------------------------------------------
restore_app() {
    local app_name="$1"
    local snapshot_id="$2"
    local mode="$3"
    local skip_globals="$4"
    local staging_dir="$5"

    echo "=========================================="
    echo "Starting restore of app: $app_name"
    echo "Snapshot: $snapshot_id"
    echo "Mode: $mode"
    echo "=========================================="

    # Restore snapshot to staging directory
    local restore_dir
    restore_dir=$(restore_from_snapshot "$app_name" "$snapshot_id" "$staging_dir")

    if [[ ! -d "$restore_dir" ]]; then
        echo "Failed to restore snapshot for '$app_name'" >&2
        return 1
    fi

    # Discover what components this app has
    local database_deps
    database_deps=$(discover_database_deps "$app_name")

    local pvcs
    pvcs=$(discover_app_pvcs "$app_name")

    # Restore database components
    if [[ "$mode" == "all" || "$mode" == "db" ]]; then
        for db_type in $database_deps; do
            case "$db_type" in
                postgres)
                    restore_postgres_database "$app_name" "$restore_dir" "$skip_globals"
                    ;;
                mysql)
                    restore_mysql_database "$app_name" "$restore_dir"
                    ;;
                redis)
                    echo "Redis restore not implemented yet. Skipping."
                    ;;
            esac
        done
    fi

    # Restore PVC components
    if [[ "$mode" == "all" || "$mode" == "pvc" ]]; then
        for pvc in $pvcs; do
            restore_app_pvc "$app_name" "$pvc" "$restore_dir"
        done
    fi

    # Clean up restore directory
    rm -rf "$restore_dir"

    echo "=========================================="
    echo "Restore completed for app: $app_name"
    echo "=========================================="
}

# --- Main Script Logic -------------------------------------------------------
main() {
    require_k8s
    require_yq
    require_restic

    get_restic_config

    local staging_dir
    staging_dir=$(get_staging_dir)
    mkdir -p "$staging_dir/restore"

    # Parse arguments
    if [[ $# -eq 0 || "$1" == "--help" || "$1" == "-h" ]]; then
        show_help
        exit 0
    fi

    local app_name="$1"
    shift

    local snapshot_id=""
    local mode="all"
    local skip_globals="false"
    local list_snapshots="false"

    # Parse remaining arguments
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --db-only)
                mode="db"
                shift
                ;;
            --pvc-only)
                mode="pvc"
                shift
                ;;
            --skip-globals)
                skip_globals="true"
                shift
                ;;
            --list)
                list_snapshots="true"
                shift
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            *)
                if [[ -z "$snapshot_id" ]]; then
                    snapshot_id="$1"
                else
                    echo "Unknown option: $1" >&2
                    show_help
                    exit 1
                fi
                shift
                ;;
        esac
    done

    # List snapshots if requested
    if [[ "$list_snapshots" == "true" ]]; then
        list_app_snapshots "$app_name"
        exit 0
    fi

    # Get latest snapshot if none specified
    if [[ -z "$snapshot_id" ]]; then
        snapshot_id=$(get_latest_snapshot "$app_name")
        if [[ -z "$snapshot_id" ]]; then
            echo "No snapshots found for app '$app_name'" >&2
            exit 1
        fi
        echo "Using latest snapshot: $snapshot_id"
    fi

    # Perform the restore
    restore_app "$app_name" "$snapshot_id" "$mode" "$skip_globals" "$staging_dir"

    echo "Restore operation completed successfully."
}

main "$@"
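A sketch of the intended restore workflow using the script above (the snapshot ID is illustrative):

```sh
# List what restic holds for an app
wild-app-restore discourse --list

# Restore only the database from a specific snapshot, leaving roles untouched
wild-app-restore discourse 4f21ab9 --db-only --skip-globals
```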
245
bin/wild-backup
Executable file
@@ -0,0 +1,245 @@
#!/bin/bash
# Simple backup script for your personal cloud

set -e
set -o pipefail

# Parse command line flags
BACKUP_HOME=true
BACKUP_APPS=true
BACKUP_CLUSTER=true

show_help() {
    echo "Usage: $0 [OPTIONS]"
    echo "Backup components of your wild-cloud infrastructure"
    echo ""
    echo "Options:"
    echo "  --home-only     Backup only WC_HOME (wild-cloud configuration)"
    echo "  --apps-only     Backup only applications (databases and PVCs)"
    echo "  --cluster-only  Backup only Kubernetes cluster resources"
    echo "  --no-home       Skip WC_HOME backup"
    echo "  --no-apps       Skip application backups"
    echo "  --no-cluster    Skip cluster resource backup"
    echo "  -h, --help      Show this help message"
    echo ""
    echo "Default: Backup all components (home, apps, cluster)"
}

# Process command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --home-only)
            BACKUP_HOME=true
            BACKUP_APPS=false
            BACKUP_CLUSTER=false
            shift
            ;;
        --apps-only)
            BACKUP_HOME=false
            BACKUP_APPS=true
            BACKUP_CLUSTER=false
            shift
            ;;
        --cluster-only)
            BACKUP_HOME=false
            BACKUP_APPS=false
            BACKUP_CLUSTER=true
            shift
            ;;
        --no-home)
            BACKUP_HOME=false
            shift
            ;;
        --no-apps)
            BACKUP_APPS=false
            shift
            ;;
        --no-cluster)
            BACKUP_CLUSTER=false
            shift
            ;;
        -h|--help)
            show_help
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            show_help
            exit 1
            ;;
    esac
done

# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
    echo "WC_ROOT is not set."
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

if wild-config cloud.backup.root --check; then
    export RESTIC_REPOSITORY="$(wild-config cloud.backup.root)"
else
    echo "WARNING: Could not get cloud backup root."
    exit 1
fi

if wild-secret cloud.backupPassword --check; then
    export RESTIC_PASSWORD="$(wild-secret cloud.backupPassword)"
else
    echo "WARNING: Could not get cloud backup secret."
    exit 1
fi

if wild-config cloud.backup.staging --check; then
    STAGING_DIR="$(wild-config cloud.backup.staging)"
else
    echo "WARNING: Could not get cloud backup staging directory."
    exit 1
fi

echo "Backup at '$RESTIC_REPOSITORY'."

# Initialize the repository if needed.
echo "Checking if restic repository exists..."
if restic cat config >/dev/null 2>&1; then
    echo "Using existing backup repository."
else
    echo "No existing backup repository found. Initializing restic repository..."
    restic init
    echo "Repository initialized successfully."
fi

# Backup entire WC_HOME
if [ "$BACKUP_HOME" = true ]; then
    echo "Backing up WC_HOME..."
    restic --verbose --tag wild-cloud --tag wc-home --tag "$(date +%Y-%m-%d)" backup "$WC_HOME"
    echo "WC_HOME backup completed."
    # TODO: Ignore wild cloud cache?
else
    echo "Skipping WC_HOME backup."
fi

mkdir -p "$STAGING_DIR"

# Run backup for all apps at once
if [ "$BACKUP_APPS" = true ]; then
    echo "Running backup for all apps..."
    wild-app-backup --all

    # Upload each app's backup to restic individually
    for app_dir in "$STAGING_DIR"/apps/*; do
        if [ ! -d "$app_dir" ]; then
            continue
        fi
        app="$(basename "$app_dir")"
        echo "Uploading backup for app: $app"
        restic --verbose --tag wild-cloud --tag "$app" --tag "$(date +%Y-%m-%d)" backup "$app_dir"
        echo "Backup for app '$app' completed."
    done
else
    echo "Skipping application backups."
fi

# --- etcd Backup Function ----------------------------------------------------
backup_etcd() {
    local cluster_backup_dir="$1"
    local etcd_backup_file="$cluster_backup_dir/etcd-snapshot.db"

    echo "Creating etcd snapshot..."

    # For Talos, we use talosctl to create etcd snapshots
    if command -v talosctl >/dev/null 2>&1; then
        # Try to get an etcd snapshot via talosctl (works for Talos clusters)
        local control_plane_nodes
        control_plane_nodes=$(kubectl get nodes -l node-role.kubernetes.io/control-plane -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' | tr ' ' '\n' | head -1)

        if [[ -n "$control_plane_nodes" ]]; then
            echo "Using talosctl to backup etcd from control plane node: $control_plane_nodes"
            if talosctl --nodes "$control_plane_nodes" etcd snapshot "$etcd_backup_file"; then
                echo "  etcd backup created: $etcd_backup_file"
                return 0
            else
                echo "  talosctl etcd snapshot failed, trying alternative method..."
            fi
        else
            echo "  No control plane nodes found for talosctl method"
        fi
    fi

    # Alternative: Try to backup via etcd pod if available
    local etcd_pod
    etcd_pod=$(kubectl get pods -n kube-system -l component=etcd -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)

    if [[ -n "$etcd_pod" ]]; then
        echo "Using etcd pod: $etcd_pod"
        # Create snapshot using etcdctl inside the etcd pod
        if kubectl exec -n kube-system "$etcd_pod" -- etcdctl \
            --endpoints=https://127.0.0.1:2379 \
            --cacert=/etc/kubernetes/pki/etcd/ca.crt \
            --cert=/etc/kubernetes/pki/etcd/server.crt \
            --key=/etc/kubernetes/pki/etcd/server.key \
            snapshot save /tmp/etcd-snapshot.db; then

            # Copy snapshot out of pod
            kubectl cp -n kube-system "$etcd_pod:/tmp/etcd-snapshot.db" "$etcd_backup_file"

            # Clean up temporary file in pod
            kubectl exec -n kube-system "$etcd_pod" -- rm -f /tmp/etcd-snapshot.db

            echo "  etcd backup created: $etcd_backup_file"
            return 0
        else
            echo "  etcd pod snapshot failed"
        fi
    else
        echo "  No etcd pod found in kube-system namespace"
    fi

    # Final fallback: Try direct etcdctl if available on the local system
    if command -v etcdctl >/dev/null 2>&1; then
        echo "Attempting local etcdctl backup..."
        # This would need proper certificates and endpoints configured
        echo "  Local etcdctl backup not implemented (requires certificate configuration)"
    fi

    echo "  Warning: Could not create etcd backup - no working method found"
    echo "  Consider installing talosctl or ensuring etcd pods are accessible"
    return 1
}
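# Tip (a sketch, not part of the original script): a snapshot produced above can
# be sanity-checked offline before it is relied on, e.g.
#   etcdctl snapshot status "$STAGING_DIR/cluster/etcd-snapshot.db"
# Newer etcd releases expose the same check as `etcdutl snapshot status`.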
# Back up Kubernetes cluster resources
if [ "$BACKUP_CLUSTER" = true ]; then
    echo "Backing up Kubernetes cluster resources..."
    CLUSTER_BACKUP_DIR="$STAGING_DIR/cluster"

    # Clean up any existing cluster backup files
    if [[ -d "$CLUSTER_BACKUP_DIR" ]]; then
        echo "Cleaning up existing cluster backup files..."
        rm -rf "$CLUSTER_BACKUP_DIR"
    fi
    mkdir -p "$CLUSTER_BACKUP_DIR"

    kubectl get all -A -o yaml > "$CLUSTER_BACKUP_DIR/all-resources.yaml"
    kubectl get secrets -A -o yaml > "$CLUSTER_BACKUP_DIR/secrets.yaml"
    kubectl get configmaps -A -o yaml > "$CLUSTER_BACKUP_DIR/configmaps.yaml"
    kubectl get persistentvolumes -o yaml > "$CLUSTER_BACKUP_DIR/persistentvolumes.yaml"
    kubectl get persistentvolumeclaims -A -o yaml > "$CLUSTER_BACKUP_DIR/persistentvolumeclaims.yaml"
    kubectl get storageclasses -o yaml > "$CLUSTER_BACKUP_DIR/storageclasses.yaml"

    echo "Backing up etcd..."
    backup_etcd "$CLUSTER_BACKUP_DIR"

    echo "Cluster resources backed up to $CLUSTER_BACKUP_DIR"

    # Upload cluster backup to restic
    echo "Uploading cluster backup to restic..."
    restic --verbose --tag wild-cloud --tag cluster --tag "$(date +%Y-%m-%d)" backup "$CLUSTER_BACKUP_DIR"
    echo "Cluster backup completed."
else
    echo "Skipping cluster backup."
fi

echo "Backup completed to repository: $RESTIC_REPOSITORY"
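Because wild-backup re-stages app data and lets restic deduplicate uploads, it lends itself to scheduling; a sketch of a crontab entry, with paths and timing as assumptions rather than repo conventions:

```sh
# Run a full wild-cloud backup nightly at 02:30 (illustrative paths)
30 2 * * * WC_ROOT=/opt/wild-cloud /opt/wild-cloud/bin/wild-backup >> /var/log/wild-backup.log 2>&1
```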
@@ -58,10 +58,8 @@ fi

print_header "Talos Cluster Configuration Generation"

# Ensure required directories exist
NODE_SETUP_DIR="${WC_HOME}/setup/cluster-nodes"

# Check if generated directory already exists and has content
NODE_SETUP_DIR="${WC_HOME}/setup/cluster-nodes"
if [ -d "${NODE_SETUP_DIR}/generated" ] && [ "$(ls -A "${NODE_SETUP_DIR}/generated" 2>/dev/null)" ] && [ "$FORCE" = false ]; then
    print_success "Cluster configuration already exists in ${NODE_SETUP_DIR}/generated/"
    print_info "Skipping cluster configuration generation"
@@ -77,8 +75,6 @@ if [ -d "${NODE_SETUP_DIR}/generated" ]; then
    rm -rf "${NODE_SETUP_DIR}/generated"
fi
mkdir -p "${NODE_SETUP_DIR}/generated"
talosctl gen secrets
print_info "New secrets will be generated in ${NODE_SETUP_DIR}/generated/"

# Ensure we have the configuration we need.

@@ -94,9 +90,8 @@ print_info "Cluster name: $CLUSTER_NAME"
print_info "Control plane endpoint: https://$VIP:6443"

cd "${NODE_SETUP_DIR}/generated"
talosctl gen secrets
talosctl gen config --with-secrets secrets.yaml "$CLUSTER_NAME" "https://$VIP:6443"
cd - >/dev/null

# Verify generated files

print_success "Cluster configuration generation completed!"
print_success "Cluster configuration generation completed!"
@@ -51,76 +51,32 @@ else
|
||||
init_wild_env
|
||||
fi
|
||||
|
||||
# Check for required configuration
|
||||
if [ -z "$(get_current_config "cluster.nodes.talos.version")" ] || [ -z "$(get_current_config "cluster.nodes.talos.schematicId")" ]; then
|
||||
print_header "Talos Configuration Required"
|
||||
print_error "Missing required Talos configuration"
|
||||
print_info "Please run 'wild-setup' first to configure your cluster"
|
||||
print_info "Or set the required configuration manually:"
|
||||
print_info " wild-config-set cluster.nodes.talos.version v1.10.4"
|
||||
print_info " wild-config-set cluster.nodes.talos.schematicId YOUR_SCHEMATIC_ID"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# =============================================================================
|
||||
# INSTALLER IMAGE GENERATION AND ASSET DOWNLOADING
|
||||
# =============================================================================
|
||||
|
||||
print_header "Talos Installer Image Generation and Asset Download"
|
||||
print_header "Talos asset download"
|
||||
|
||||
# Get Talos version and schematic ID from config
|
||||
TALOS_VERSION=$(get_current_config cluster.nodes.talos.version)
|
||||
SCHEMATIC_ID=$(get_current_config cluster.nodes.talos.schematicId)
|
||||
# Talos version
|
||||
prompt_if_unset_config "cluster.nodes.talos.version" "Talos version" "v1.11.0"
|
||||
TALOS_VERSION=$(wild-config "cluster.nodes.talos.version")
|
||||
|
||||
# Talos schematic ID
|
||||
prompt_if_unset_config "cluster.nodes.talos.schematicId" "Talos schematic ID" "56774e0894c8a3a3a9834a2aea65f24163cacf9506abbcbdc3ba135eaca4953f"
|
||||
SCHEMATIC_ID=$(wild-config "cluster.nodes.talos.schematicId")
|
||||
|
||||
print_info "Creating custom Talos installer image..."
|
||||
print_info "Talos version: $TALOS_VERSION"
|
||||
|
||||
# Validate schematic ID
|
||||
if [ -z "$SCHEMATIC_ID" ] || [ "$SCHEMATIC_ID" = "null" ]; then
|
||||
print_error "No schematic ID found in config.yaml"
|
||||
print_info "Please run 'wild-setup' first to configure your cluster"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
print_info "Schematic ID: $SCHEMATIC_ID"
|
||||
|
||||
if [ -f "${WC_HOME}/config.yaml" ] && yq eval '.cluster.nodes.talos.schematic.customization.systemExtensions.officialExtensions' "${WC_HOME}/config.yaml" >/dev/null 2>&1; then
|
||||
echo ""
|
||||
print_info "Schematic includes:"
|
||||
yq eval '.cluster.nodes.talos.schematic.customization.systemExtensions.officialExtensions[]' "${WC_HOME}/config.yaml" | sed 's/^/ - /' || true
|
||||
echo ""
|
||||
fi
|
||||
|
||||
# Generate installer image URL
|
||||
INSTALLER_URL="factory.talos.dev/metal-installer/$SCHEMATIC_ID:$TALOS_VERSION"
|
||||
|
||||
print_success "Custom installer image URL generated!"
|
||||
echo ""
|
||||
print_info "Installer URL: $INSTALLER_URL"
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# ASSET DOWNLOADING AND CACHING
|
||||
# =============================================================================
|
||||
|
||||
print_header "Downloading and Caching PXE Boot Assets"
|
||||
|
||||
# Create cache directories organized by schematic ID
|
||||
CACHE_DIR="${WC_HOME}/.wildcloud"
|
||||
SCHEMATIC_CACHE_DIR="${CACHE_DIR}/node-boot-assets/${SCHEMATIC_ID}"
|
||||
PXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/pxe"
|
||||
IPXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/ipxe"
|
||||
ISO_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/iso"
|
||||
mkdir -p "$PXE_CACHE_DIR/amd64"
|
||||
mkdir -p "$IPXE_CACHE_DIR"
|
||||
mkdir -p "$ISO_CACHE_DIR"
|
||||
|
||||
# Download Talos kernel and initramfs for PXE boot
|
||||
print_info "Downloading Talos PXE assets..."
|
||||
KERNEL_URL="https://pxe.factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/kernel-amd64"
|
||||
INITRAMFS_URL="https://pxe.factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/initramfs-amd64.xz"
|
||||
|
||||
KERNEL_PATH="${PXE_CACHE_DIR}/amd64/vmlinuz"
|
||||
INITRAMFS_PATH="${PXE_CACHE_DIR}/amd64/initramfs.xz"
|
||||
print_header "Downloading and caching boot assets"
|
||||
|
||||
# Function to download with progress
|
||||
download_asset() {
|
||||
@@ -129,17 +85,19 @@ download_asset() {
|
||||
local description="$3"
|
||||
|
||||
if [ -f "$path" ]; then
|
||||
print_info "$description already cached at $path"
|
||||
print_success "$description already cached at $path"
|
||||
return 0
|
||||
fi
|
||||
|
||||
print_info "Downloading $description..."
|
||||
print_info "URL: $url"
|
||||
|
||||
if command -v wget >/dev/null 2>&1; then
|
||||
wget --progress=bar:force -O "$path" "$url"
|
||||
elif command -v curl >/dev/null 2>&1; then
|
||||
curl -L --progress-bar -o "$path" "$url"
|
||||
if command -v curl >/dev/null 2>&1; then
|
||||
curl -L -o "$path" "$url" \
|
||||
--progress-bar \
|
||||
--write-out "✓ Downloaded %{size_download} bytes at %{speed_download} B/s\n"
|
||||
elif command -v wget >/dev/null 2>&1; then
|
||||
wget --progress=bar:force:noscroll -O "$path" "$url"
|
||||
else
|
||||
print_error "Neither wget nor curl is available for downloading"
|
||||
return 1
|
||||
@@ -153,42 +111,51 @@ download_asset() {
|
||||
fi
|
||||
|
||||
print_success "$description downloaded successfully"
|
||||
echo
|
||||
}
|
||||
|
||||
# Download Talos PXE assets
|
||||
CACHE_DIR="${WC_HOME}/.wildcloud"
|
||||
SCHEMATIC_CACHE_DIR="${CACHE_DIR}/node-boot-assets/${SCHEMATIC_ID}"
|
||||
PXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/pxe"
|
||||
IPXE_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/ipxe"
|
||||
ISO_CACHE_DIR="${SCHEMATIC_CACHE_DIR}/iso"
|
||||
mkdir -p "$PXE_CACHE_DIR/amd64"
|
||||
mkdir -p "$IPXE_CACHE_DIR"
|
||||
mkdir -p "$ISO_CACHE_DIR"
|
||||
|
||||
# Download Talos kernel and initramfs for PXE boot
|
||||
KERNEL_URL="https://pxe.factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/kernel-amd64"
|
||||
KERNEL_PATH="${PXE_CACHE_DIR}/amd64/vmlinuz"
|
||||
download_asset "$KERNEL_URL" "$KERNEL_PATH" "Talos kernel"
|
||||
|
||||
INITRAMFS_URL="https://pxe.factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/initramfs-amd64.xz"
|
||||
INITRAMFS_PATH="${PXE_CACHE_DIR}/amd64/initramfs.xz"
|
||||
download_asset "$INITRAMFS_URL" "$INITRAMFS_PATH" "Talos initramfs"
|
||||
|
||||
# Download iPXE bootloader files
|
||||
print_info "Downloading iPXE bootloader assets..."
|
||||
download_asset "http://boot.ipxe.org/ipxe.efi" "${IPXE_CACHE_DIR}/ipxe.efi" "iPXE EFI bootloader"
|
||||
download_asset "http://boot.ipxe.org/undionly.kpxe" "${IPXE_CACHE_DIR}/undionly.kpxe" "iPXE BIOS bootloader"
|
||||
download_asset "http://boot.ipxe.org/arm64-efi/ipxe.efi" "${IPXE_CACHE_DIR}/ipxe-arm64.efi" "iPXE ARM64 EFI bootloader"
|
||||
|
||||
# Download Talos ISO
|
||||
print_info "Downloading Talos ISO..."
|
||||
ISO_URL="https://factory.talos.dev/image/${SCHEMATIC_ID}/${TALOS_VERSION}/metal-amd64.iso"
|
||||
ISO_FILENAME="talos-${TALOS_VERSION}-metal-amd64.iso"
|
||||
ISO_PATH="${ISO_CACHE_DIR}/${ISO_FILENAME}"
|
||||
ISO_PATH="${ISO_CACHE_DIR}/talos-${TALOS_VERSION}-metal-amd64.iso"
|
||||
download_asset "$ISO_URL" "$ISO_PATH" "Talos ISO"
|
||||
|
||||
echo ""
|
||||
print_success "All assets downloaded and cached!"
|
||||
echo ""
|
||||
print_info "Cached assets for schematic $SCHEMATIC_ID:"
|
||||
echo " Talos kernel: $KERNEL_PATH"
|
||||
echo " Talos initramfs: $INITRAMFS_PATH"
|
||||
echo " Talos ISO: $ISO_PATH"
|
||||
echo " iPXE EFI: ${IPXE_CACHE_DIR}/ipxe.efi"
|
||||
echo " iPXE BIOS: ${IPXE_CACHE_DIR}/undionly.kpxe"
|
||||
echo " iPXE ARM64: ${IPXE_CACHE_DIR}/ipxe-arm64.efi"
|
||||
print_header "Summary"
|
||||
print_success "Cached assets for schematic $SCHEMATIC_ID:"
|
||||
echo "- Talos kernel: $KERNEL_PATH"
|
||||
echo "- Talos initramfs: $INITRAMFS_PATH"
|
||||
echo "- Talos ISO: $ISO_PATH"
|
||||
echo "- iPXE EFI: ${IPXE_CACHE_DIR}/ipxe.efi"
|
||||
echo "- iPXE BIOS: ${IPXE_CACHE_DIR}/undionly.kpxe"
|
||||
echo "- iPXE ARM64: ${IPXE_CACHE_DIR}/ipxe-arm64.efi"
|
||||
echo ""
|
||||
print_info "Cache location: $SCHEMATIC_CACHE_DIR"
|
||||
echo ""
|
||||
print_info "Use these assets for:"
|
||||
echo " - PXE boot: Use kernel and initramfs from cache"
|
||||
echo " - USB creation: Use ISO file for dd or imaging tools"
|
||||
echo " Example: sudo dd if=$ISO_PATH of=/dev/sdX bs=4M status=progress"
|
||||
echo " - Custom installer: https://$INSTALLER_URL"
|
||||
echo "- PXE boot: Use kernel and initramfs from cache"
|
||||
echo "- USB creation: Use ISO file for dd or imaging tools"
|
||||
echo " Example: sudo dd if=$ISO_PATH of=/dev/sdX bs=4M status=progress"
|
||||
echo "- Custom installer: https://$INSTALLER_URL"
|
||||
echo ""
|
||||
print_success "Installer image generation and asset caching completed!"
|
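Note that the dd example printed above is destructive if pointed at the wrong disk. A cautious sequence, with /dev/sdX as a placeholder to be verified first:

# List block devices and confirm which one is the USB stick.
lsblk -o NAME,SIZE,MODEL
# Write the cached Talos ISO to the verified device (replace /dev/sdX).
sudo dd if="$ISO_PATH" of=/dev/sdX bs=4M status=progress conv=fsync
sync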
@@ -96,7 +96,7 @@ else
    init_wild_env
fi

print_header "Talos Node Configuration Application"
print_header "Talos node configuration"

# Check if the specified node is registered
NODE_INTERFACE=$(yq eval ".cluster.nodes.active.\"${NODE_NAME}\".interface" "${WC_HOME}/config.yaml" 2>/dev/null)
@@ -156,10 +156,7 @@ PATCH_FILE="${NODE_SETUP_DIR}/patch/${NODE_NAME}.yaml"

# Check if patch file exists
if [ ! -f "$PATCH_FILE" ]; then
    print_error "Patch file not found: $PATCH_FILE"
    print_info "Generate the patch file first:"
    print_info "  wild-cluster-node-patch-generate $NODE_NAME"
    exit 1
    wild-cluster-node-patch-generate "$NODE_NAME"
fi

# Determine base config file
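The yq lookup above implies a node registry in config.yaml shaped roughly like this (the node name and interface are illustrative placeholders):

cluster:
  nodes:
    active:
      "node-1":
        interface: eth0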
bin/wild-cluster-services-configure (Executable file, 124 lines)
@@ -0,0 +1,124 @@
#!/bin/bash

set -e
set -o pipefail

# Usage function
usage() {
    echo "Usage: wild-cluster-services-configure [options] [service...]"
    echo ""
    echo "Compile service templates with configuration"
    echo ""
    echo "Arguments:"
    echo "  service     Specific service(s) to compile (optional)"
    echo ""
    echo "Options:"
    echo "  -h, --help  Show this help message"
    echo ""
    echo "Examples:"
    echo "  wild-cluster-services-configure                   # Compile all services"
    echo "  wild-cluster-services-configure metallb traefik   # Compile specific services"
    echo ""
    echo "Available services:"
    echo "  metallb, longhorn, traefik, coredns, cert-manager,"
    echo "  externaldns, kubernetes-dashboard, nfs, docker-registry"
}

# Parse arguments
DRY_RUN=false
LIST_SERVICES=false
SPECIFIC_SERVICES=()

while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            usage
            exit 0
            ;;
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        -*)
            echo "Unknown option $1"
            usage
            exit 1
            ;;
        *)
            SPECIFIC_SERVICES+=("$1")
            shift
            ;;
    esac
done

# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
    echo "WC_ROOT is not set." >&2
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster-services"

# Check if cluster setup directory exists
if [ ! -d "$CLUSTER_SETUP_DIR" ]; then
    print_error "Cluster services setup directory not found: $CLUSTER_SETUP_DIR"
    print_info "Run 'wild-cluster-services-generate' first to generate setup files"
    exit 1
fi

# =============================================================================
# CLUSTER SERVICES TEMPLATE COMPILATION
# =============================================================================

print_header "Cluster services template compilation"

# Get list of services to compile
if [ ${#SPECIFIC_SERVICES[@]} -gt 0 ]; then
    SERVICES_TO_INSTALL=("${SPECIFIC_SERVICES[@]}")
    print_info "Compiling specific services: ${SERVICES_TO_INSTALL[*]}"
else
    # Compile all available services in a specific order for dependencies
    SERVICES_TO_INSTALL=(
        "metallb"
        "longhorn"
        "traefik"
        "coredns"
        "cert-manager"
        "externaldns"
        "kubernetes-dashboard"
        "nfs"
        "docker-registry"
    )
    print_info "Compiling all available services"
fi

print_info "Services to compile: ${SERVICES_TO_INSTALL[*]}"

# Compile services
cd "$CLUSTER_SETUP_DIR"
INSTALLED_COUNT=0
FAILED_COUNT=0

for service in "${SERVICES_TO_INSTALL[@]}"; do
    print_info "Compiling $service"

    service_dir="$CLUSTER_SETUP_DIR/$service"
    source_service_dir="$service_dir/kustomize.template"
    dest_service_dir="$service_dir/kustomize"

    # Run configuration to make sure we have the template values we need.
    config_script="$service_dir/configure.sh"
    if [ -f "$config_script" ]; then
        source "$config_script"
    fi

    wild-compile-template-dir --clean "$source_service_dir" "$dest_service_dir"
    INSTALLED_COUNT=$((INSTALLED_COUNT + 1))
    echo ""
done

cd - >/dev/null

print_success "Successfully compiled: $INSTALLED_COUNT services"
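wild-compile-template-dir itself is not part of this diff. Conceptually it renders each file in kustomize.template/ through gomplate with the Wild Cloud configuration as context. A minimal sketch of that idea, assuming config.yaml is passed as the gomplate context (the real tool may differ):

#!/bin/bash
# Sketch: render every file under SRC into DEST, mirroring the --clean behavior.
SRC="$1"; DEST="$2"
rm -rf "$DEST" && mkdir -p "$DEST"
find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
    rel="${f#"$SRC"/}"
    mkdir -p "$DEST/$(dirname "$rel")"
    # Expose config.yaml as the root context ("."), render one file.
    gomplate -c .="${WC_HOME}/config.yaml" -f "$f" -o "$DEST/$rel"
done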
bin/wild-cluster-services-fetch (Executable file, 148 lines)
@@ -0,0 +1,148 @@
#!/bin/bash

set -e
set -o pipefail

# Usage function
usage() {
    echo "Usage: wild-cluster-services-fetch [options] [service...]"
    echo ""
    echo "Fetch cluster services setup files from the repository."
    echo ""
    echo "Arguments:"
    echo "  service     Specific service(s) to fetch (optional)"
    echo ""
    echo "Options:"
    echo "  -h, --help  Show this help message"
    echo "  --force     Force fetching even if files exist"
    echo ""
    echo "Examples:"
    echo "  wild-cluster-services-fetch                   # Fetch all services"
    echo "  wild-cluster-services-fetch metallb traefik   # Fetch specific services"
    echo ""
    echo "Available services:"
    echo "  metallb, longhorn, traefik, coredns, cert-manager,"
    echo "  externaldns, kubernetes-dashboard, nfs, docker-registry"
}

# Parse arguments
FORCE=false
SPECIFIC_SERVICES=()
while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            usage
            exit 0
            ;;
        --force)
            FORCE=true
            shift
            ;;
        -*)
            echo "Unknown option $1"
            usage
            exit 1
            ;;
        *)
            SPECIFIC_SERVICES+=("$1")
            shift
            ;;
    esac
done

# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
    echo "WC_ROOT is not set." >&2
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

print_header "Fetching cluster services templates"

SOURCE_DIR="${WC_ROOT}/setup/cluster-services"
DEST_DIR="${WC_HOME}/setup/cluster-services"

# Check if source directory exists
if [ ! -d "$SOURCE_DIR" ]; then
    print_error "Cluster setup source directory not found: $SOURCE_DIR"
    print_info "Make sure the wild-cloud repository is properly set up"
    exit 1
fi

# Check if destination already exists
if [ -d "$DEST_DIR" ] && [ "$FORCE" = false ]; then
    print_warning "Cluster setup directory already exists: $DEST_DIR"
    read -p "Overwrite existing files? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        FORCE=true
    fi
else
    mkdir -p "$DEST_DIR"
fi

# Copy README
if [ ! -f "${WC_HOME}/setup/README.md" ]; then
    cp "${WC_ROOT}/setup/README.md" "${WC_HOME}/setup/README.md"
fi

# Get list of services to fetch
if [ ${#SPECIFIC_SERVICES[@]} -gt 0 ]; then
    SERVICES_TO_INSTALL=("${SPECIFIC_SERVICES[@]}")
    print_info "Fetching specific services: ${SERVICES_TO_INSTALL[*]}"
else
    # Fetch all available services in a specific order for dependencies
    SERVICES_TO_INSTALL=(
        "metallb"
        "longhorn"
        "traefik"
        "coredns"
        "cert-manager"
        "externaldns"
        "kubernetes-dashboard"
        "nfs"
        "docker-registry"
    )
    print_info "Fetching all available services."
fi

for service in "${SERVICES_TO_INSTALL[@]}"; do

    SERVICE_SOURCE_DIR="$SOURCE_DIR/$service"
    SERVICE_DEST_DIR="$DEST_DIR/$service"
    TEMPLATE_SOURCE_DIR="$SERVICE_SOURCE_DIR/kustomize.template"
    TEMPLATE_DEST_DIR="$SERVICE_DEST_DIR/kustomize.template"

    if [ ! -d "$TEMPLATE_SOURCE_DIR" ]; then
        print_error "Source directory not found: $TEMPLATE_SOURCE_DIR"
        continue
    fi

    if $FORCE && [ -d "$TEMPLATE_DEST_DIR" ]; then
        print_info "Removing existing $service templates in: $TEMPLATE_DEST_DIR"
        rm -rf "$TEMPLATE_DEST_DIR"
    elif [ -d "$TEMPLATE_DEST_DIR" ]; then
        print_info "Files already exist for $service, skipping (use --force to overwrite)."
        continue
    fi

    mkdir -p "$SERVICE_DEST_DIR"
    mkdir -p "$TEMPLATE_DEST_DIR"
    cp -f "$SERVICE_SOURCE_DIR/README.md" "$SERVICE_DEST_DIR/"

    if [ -f "$SERVICE_SOURCE_DIR/configure.sh" ]; then
        cp -f "$SERVICE_SOURCE_DIR/configure.sh" "$SERVICE_DEST_DIR/"
    fi

    if [ -f "$SERVICE_SOURCE_DIR/install.sh" ]; then
        cp -f "$SERVICE_SOURCE_DIR/install.sh" "$SERVICE_DEST_DIR/"
    fi

    if [ -d "$TEMPLATE_SOURCE_DIR" ]; then
        cp -r "$TEMPLATE_SOURCE_DIR/"* "$TEMPLATE_DEST_DIR/"
    fi

    print_success "Fetched $service templates."
done
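Per the copy logic above, a fetched service directory under $WC_HOME ends up looking like this (metallb shown as an example; configure.sh and install.sh are copied only when they exist upstream):

setup/cluster-services/
  metallb/
    README.md
    configure.sh
    install.sh
    kustomize.template/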
@@ -1,208 +0,0 @@
#!/bin/bash

set -e
set -o pipefail

# Usage function
usage() {
    echo "Usage: wild-cluster-services-generate [options]"
    echo ""
    echo "Generate cluster services setup files by compiling templates."
    echo ""
    echo "Options:"
    echo "  -h, --help  Show this help message"
    echo "  --force     Force regeneration even if files exist"
    echo ""
    echo "This script will:"
    echo "  - Copy cluster service templates from WC_ROOT to WC_HOME"
    echo "  - Compile all templates with current configuration"
    echo "  - Prepare services for installation"
    echo ""
    echo "Requirements:"
    echo "  - Must be run from a wild-cloud directory"
    echo "  - Basic cluster configuration must be completed"
    echo "  - Service configuration (DNS, storage, etc.) must be completed"
}

# Parse arguments
FORCE=false
while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            usage
            exit 0
            ;;
        --force)
            FORCE=true
            shift
            ;;
        -*)
            echo "Unknown option $1"
            usage
            exit 1
            ;;
        *)
            echo "Unexpected argument: $1"
            usage
            exit 1
            ;;
    esac
done

# Initialize Wild Cloud environment
if [ -z "${WC_ROOT}" ]; then
    echo "WC_ROOT is not set." >&2
    exit 1
else
    source "${WC_ROOT}/scripts/common.sh"
    init_wild_env
fi

# =============================================================================
# CLUSTER SERVICES SETUP GENERATION
# =============================================================================

print_header "Cluster Services Setup Generation"

SOURCE_DIR="${WC_ROOT}/setup/cluster-services"
DEST_DIR="${WC_HOME}/setup/cluster-services"

# Check if source directory exists
if [ ! -d "$SOURCE_DIR" ]; then
    print_error "Cluster setup source directory not found: $SOURCE_DIR"
    print_info "Make sure the wild-cloud repository is properly set up"
    exit 1
fi

# Check if destination already exists
if [ -d "$DEST_DIR" ] && [ "$FORCE" = false ]; then
    print_warning "Cluster setup directory already exists: $DEST_DIR"
    read -p "Overwrite existing files? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        print_info "Skipping cluster services generation"
        exit 0
    fi
    print_info "Regenerating cluster setup files..."
    rm -rf "$DEST_DIR"
elif [ "$FORCE" = true ] && [ -d "$DEST_DIR" ]; then
    print_info "Force regeneration enabled, removing existing files..."
    rm -rf "$DEST_DIR"
fi

# Copy and compile cluster setup files
print_info "Copying and compiling cluster setup files from repository..."
mkdir -p "${WC_HOME}/setup"

# Copy README if it doesn't exist
if [ ! -f "${WC_HOME}/setup/README.md" ]; then
    cp "${WC_ROOT}/setup/README.md" "${WC_HOME}/setup/README.md"
fi

# Create destination directory
mkdir -p "$DEST_DIR"

# First, copy root-level files from setup/cluster/ (install-all.sh, get_helm.sh, etc.)
print_info "Copying root-level cluster setup files..."
for item in "$SOURCE_DIR"/*; do
    if [ -f "$item" ]; then
        item_name=$(basename "$item")
        print_info "  Copying: ${item_name}"
        cp "$item" "$DEST_DIR/$item_name"
    fi
done

# Then, process each service directory in the source
print_info "Processing service directories..."
for service_dir in "$SOURCE_DIR"/*; do
    if [ ! -d "$service_dir" ]; then
        continue
    fi

    service_name=$(basename "$service_dir")
    dest_service_dir="$DEST_DIR/$service_name"

    print_info "Processing service: $service_name"

    # Create destination service directory
    mkdir -p "$dest_service_dir"

    # Copy all files except kustomize.template directory
    for item in "$service_dir"/*; do
        item_name=$(basename "$item")

        if [ "$item_name" = "kustomize.template" ]; then
            # Compile kustomize.template to kustomize directory
            if [ -d "$item" ]; then
                print_info "  Compiling kustomize templates for $service_name"
                wild-compile-template-dir --clean "$item" "$dest_service_dir/kustomize"
            fi
        else
            # Copy other files as-is (install.sh, README.md, etc.)
            if [ -f "$item" ]; then
                # Compile individual template files
                if grep -q "{{" "$item" 2>/dev/null; then
                    print_info "  Compiling: ${item_name}"
                    wild-compile-template < "$item" > "$dest_service_dir/$item_name"
                else
                    cp "$item" "$dest_service_dir/$item_name"
                fi
            elif [ -d "$item" ]; then
                cp -r "$item" "$dest_service_dir/"
            fi
        fi
    done
done

print_success "Cluster setup files copied and compiled"

# Verify required configuration
print_info "Verifying service configuration..."

MISSING_CONFIG=()

# Check essential configuration values
if [ -z "$(wild-config cluster.name 2>/dev/null)" ]; then
    MISSING_CONFIG+=("cluster.name")
fi

if [ -z "$(wild-config cloud.domain 2>/dev/null)" ]; then
    MISSING_CONFIG+=("cloud.domain")
fi

if [ -z "$(wild-config cluster.ipAddressPool 2>/dev/null)" ]; then
    MISSING_CONFIG+=("cluster.ipAddressPool")
fi

if [ -z "$(wild-config operator.email 2>/dev/null)" ]; then
    MISSING_CONFIG+=("operator.email")
fi

if [ ${#MISSING_CONFIG[@]} -gt 0 ]; then
    print_warning "Some required configuration values are missing:"
    for config in "${MISSING_CONFIG[@]}"; do
        print_warning "  - $config"
    done
    print_info "Run 'wild-setup' to complete the configuration"
fi

print_success "Cluster services setup generation completed!"
echo ""
print_info "Generated setup directory: $DEST_DIR"
echo ""
print_info "Available services:"
for service_dir in "$DEST_DIR"/*; do
    if [ -d "$service_dir" ] && [ -f "$service_dir/install.sh" ]; then
        service_name=$(basename "$service_dir")
        print_info "  - $service_name"
    fi
done

echo ""
print_info "Next steps:"
echo "  1. Review the generated configuration files in $DEST_DIR"
echo "  2. Make sure your cluster is running and kubectl is configured"
echo "  3. Install services with: wild-cluster-services-up"
echo "  4. Or install individual services by running their install.sh scripts"

print_success "Ready for cluster services installation!"
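For reference, a config.yaml fragment that satisfies the four checks above could look like this (all values are placeholders):

cluster:
  name: demo-cluster
  ipAddressPool: 192.168.1.240-192.168.1.250
cloud:
  domain: cloud.example.com
operator:
  email: admin@example.com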
@@ -14,22 +14,15 @@ usage() {
    echo ""
    echo "Options:"
    echo "  -h, --help  Show this help message"
    echo "  --list      List available services"
    echo "  --dry-run   Show what would be installed without running"
    echo ""
    echo "Examples:"
    echo "  wild-cluster-services-up                   # Install all services"
    echo "  wild-cluster-services-up metallb traefik   # Install specific services"
    echo "  wild-cluster-services-up --list            # List available services"
    echo ""
    echo "Available services (when setup files exist):"
    echo "Available services:"
    echo "  metallb, longhorn, traefik, coredns, cert-manager,"
    echo "  externaldns, kubernetes-dashboard, nfs, docker-registry"
    echo ""
    echo "Requirements:"
    echo "  - Must be run from a wild-cloud directory"
    echo "  - Cluster services must be generated first (wild-cluster-services-generate)"
    echo "  - Kubernetes cluster must be running and kubectl configured"
}

# Parse arguments
@@ -43,10 +36,6 @@ while [[ $# -gt 0 ]]; do
            usage
            exit 0
            ;;
        --list)
            LIST_SERVICES=true
            shift
            ;;
        --dry-run)
            DRY_RUN=true
            shift
@@ -81,43 +70,11 @@ if [ ! -d "$CLUSTER_SETUP_DIR" ]; then
    exit 1
fi

# Function to get available services
get_available_services() {
    local services=()
    for service_dir in "$CLUSTER_SETUP_DIR"/*; do
        if [ -d "$service_dir" ] && [ -f "$service_dir/install.sh" ]; then
            services+=($(basename "$service_dir"))
        fi
    done
    echo "${services[@]}"
}

# List services if requested
if [ "$LIST_SERVICES" = true ]; then
    print_header "Available Cluster Services"
    AVAILABLE_SERVICES=($(get_available_services))

    if [ ${#AVAILABLE_SERVICES[@]} -eq 0 ]; then
        print_warning "No services found in $CLUSTER_SETUP_DIR"
        print_info "Run 'wild-cluster-services-generate' first"
    else
        print_info "Services available for installation:"
        for service in "${AVAILABLE_SERVICES[@]}"; do
            if [ -f "$CLUSTER_SETUP_DIR/$service/install.sh" ]; then
                print_success "  ✓ $service"
            else
                print_warning "  ✗ $service (install.sh missing)"
            fi
        done
    fi
    exit 0
fi

# =============================================================================
# CLUSTER SERVICES INSTALLATION
# =============================================================================

print_header "Cluster Services Installation"
print_header "Cluster services installation"

# Check kubectl connectivity
if [ "$DRY_RUN" = false ]; then
@@ -151,28 +108,11 @@ else
    print_info "Installing all available services"
fi

# Filter to only include services that actually exist
EXISTING_SERVICES=()
for service in "${SERVICES_TO_INSTALL[@]}"; do
    if [ -d "$CLUSTER_SETUP_DIR/$service" ] && [ -f "$CLUSTER_SETUP_DIR/$service/install.sh" ]; then
        EXISTING_SERVICES+=("$service")
    elif [ ${#SPECIFIC_SERVICES[@]} -gt 0 ]; then
        # Only warn if user specifically requested this service
        print_warning "Service '$service' not found or missing install.sh"
    fi
done

if [ ${#EXISTING_SERVICES[@]} -eq 0 ]; then
    print_error "No installable services found"
    print_info "Run 'wild-cluster-services-generate' first to generate setup files"
    exit 1
fi

print_info "Services to install: ${EXISTING_SERVICES[*]}"
print_info "Services to install: ${SERVICES_TO_INSTALL[*]}"

if [ "$DRY_RUN" = true ]; then
    print_info "DRY RUN - would install the following services:"
    for service in "${EXISTING_SERVICES[@]}"; do
    for service in "${SERVICES_TO_INSTALL[@]}"; do
        print_info "  - $service: $CLUSTER_SETUP_DIR/$service/install.sh"
    done
    exit 0
@@ -183,10 +123,12 @@ cd "$CLUSTER_SETUP_DIR"
INSTALLED_COUNT=0
FAILED_COUNT=0

for service in "${EXISTING_SERVICES[@]}"; do
SOURCE_DIR="${WC_ROOT}/setup/cluster-services"

for service in "${SERVICES_TO_INSTALL[@]}"; do
    echo ""
    print_header "Installing $service"

    print_header "Installing $service"

    if [ -f "./$service/install.sh" ]; then
        print_info "Running $service installation..."
        if ./"$service"/install.sh; then
@@ -206,7 +148,7 @@ cd - >/dev/null

# Summary
echo ""
print_header "Installation Summary"
print_header "Installation summary"
print_success "Successfully installed: $INSTALLED_COUNT services"
if [ $FAILED_COUNT -gt 0 ]; then
    print_warning "Failed to install: $FAILED_COUNT services"
@@ -219,13 +161,13 @@ if [ $INSTALLED_COUNT -gt 0 ]; then
    echo "  2. Check service status with: kubectl get services --all-namespaces"

    # Service-specific next steps
    if [[ " ${EXISTING_SERVICES[*]} " =~ " kubernetes-dashboard " ]]; then
    if [[ " ${SERVICES_TO_INSTALL[*]} " =~ " kubernetes-dashboard " ]]; then
        INTERNAL_DOMAIN=$(wild-config cloud.internalDomain 2>/dev/null || echo "your-internal-domain")
        echo "  3. Access dashboard at: https://dashboard.${INTERNAL_DOMAIN}"
        echo "  4. Get dashboard token with: ${WC_ROOT}/bin/dashboard-token"
    fi

    if [[ " ${EXISTING_SERVICES[*]} " =~ " cert-manager " ]]; then
    if [[ " ${SERVICES_TO_INSTALL[*]} " =~ " cert-manager " ]]; then
        echo "  3. Check cert-manager: kubectl get clusterissuers"
    fi
fi
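The install loop above only requires each service to ship an executable install.sh in its setup directory. A minimal skeleton for such a script, assuming the service's manifests live in a compiled kustomize/ directory next to it (a sketch, not one of the actual Wild Cloud install scripts):

#!/bin/bash
set -e
# Apply this service's compiled kustomize output to the cluster.
kubectl apply -k ./kustomize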
@@ -5,7 +5,7 @@ set -o pipefail

# Usage function
usage() {
    echo "Usage: wild-config <yaml_key_path>"
    echo "Usage: wild-config [--check] <yaml_key_path>"
    echo ""
    echo "Read a value from \$WC_HOME/config.yaml using a YAML key path."
    echo ""
@@ -13,18 +13,25 @@ usage() {
    echo "  wild-config 'cluster.name'           # Get cluster name"
    echo "  wild-config 'apps.myapp.replicas'    # Get app replicas count"
    echo "  wild-config 'services[0].name'       # Get first service name"
    echo "  wild-config --check 'cluster.name'   # Exit 1 if key doesn't exist"
    echo ""
    echo "Options:"
    echo "  --check     Exit 1 if key doesn't exist (no output)"
    echo "  -h, --help  Show this help message"
}

# Parse arguments
CHECK_MODE=false
while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            usage
            exit 0
            ;;
        --check)
            CHECK_MODE=true
            shift
            ;;
        -*)
            echo "Unknown option $1"
            usage
@@ -61,17 +68,30 @@ fi
CONFIG_FILE="${WC_HOME}/config.yaml"

if [ ! -f "${CONFIG_FILE}" ]; then
    if [ "${CHECK_MODE}" = true ]; then
        exit 1
    fi
    echo "Error: config file not found at ${CONFIG_FILE}" >&2
    exit 1
    echo ""
    exit 0
fi

# Use yq to extract the value from the YAML file
result=$(yq eval ".${KEY_PATH}" "${CONFIG_FILE}") 2>/dev/null
result=$(yq eval ".${KEY_PATH}" "${CONFIG_FILE}" 2>/dev/null || echo "null")

# Check if result is null (key not found)
if [ "${result}" = "null" ]; then
    if [ "${CHECK_MODE}" = true ]; then
        exit 1
    fi
    echo "Error: Key path '${KEY_PATH}' not found in ${CONFIG_FILE}" >&2
    exit 1
    echo ""
    exit 0
fi

# In check mode, exit 0 if key exists (don't output value)
if [ "${CHECK_MODE}" = true ]; then
    exit 0
fi

echo "${result}"
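The --check mode makes wild-config usable as a quiet guard in other scripts, for example:

if wild-config --check 'cluster.ipAddressPool'; then
    POOL=$(wild-config 'cluster.ipAddressPool')
else
    echo "cluster.ipAddressPool is not configured; run wild-setup first" >&2
    exit 1
fi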
@@ -73,11 +73,9 @@ CONFIG_FILE="${WC_HOME}/config.yaml"

# Create config file if it doesn't exist
if [ ! -f "${CONFIG_FILE}" ]; then
    echo "Creating new config file at ${CONFIG_FILE}"
    print_info "Creating new config file at ${CONFIG_FILE}"
    echo "{}" > "${CONFIG_FILE}"
fi

# Use yq to set the value in the YAML file
yq eval ".${KEY_PATH} = \"${VALUE}\"" -i "${CONFIG_FILE}"

echo "Set ${KEY_PATH} = ${VALUE}"
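This last hunk's file header is not shown in the diff; judging from its behavior it belongs to the companion setter (wild-config-set is an inferred name). Together with wild-config it gives a simple round trip:

wild-config-set 'cluster.name' 'demo-cluster'   # writes the key into config.yaml via yq
wild-config 'cluster.name'                      # prints: demo-cluster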
Some files were not shown because too many files have changed in this diff.