Initial commit.
36 .env.example Normal file
@@ -0,0 +1,36 @@
# Basic configuration
DOMAIN=example.com
EMAIL=your.email@example.com
USE_HOSTNAME=your-hostname
KUBECONFIG=/etc/rancher/k3s/k3s.yaml
ENVIRONMENT=prod

# Dynamic DNS configuration for external access
DYNAMIC_DNS=your-dynamic-dns.example.com

# Dashboard access
ADMIN_USERNAME=admin

# PostgreSQL configuration
POSTGRES_NAMESPACE=postgres
POSTGRES_RELEASE_NAME=postgresql
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-secure-password
POSTGRES_STORAGE=10Gi

# MariaDB configuration
MARIADB_NAMESPACE=mariadb
MARIADB_RELEASE_NAME=mariadb
MARIADB_ROOT_PASSWORD=your-root-password
MARIADB_USER=app
MARIADB_PASSWORD=your-secure-password
MARIADB_DATABASE=app_database
MARIADB_STORAGE=8Gi
MARIADB_TAG=11.4.5

# Cert Manager configuration
CERT_MANAGER_NAMESPACE=cert-manager
CERT_MANAGER_RELEASE_NAME=cert-manager
CERT_ISSUERS_RELEASE_NAME=cert-issuers
CLOUDFLARE_API_TOKEN=your-cloudflare-api-token-here # Required for wildcard certificates
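The bin/ scripts below all `source` a `load-env.sh` at the repo root, which is referenced but not included in this diff. A minimal sketch of what such a loader might look like, assuming it simply exports every KEY=VALUE pair from a `.env` in this format (this is a hypothetical stand-in, not the project's actual loader):

```bash
#!/bin/bash
# load-env.sh — hypothetical sketch: export all variables defined in .env.
# Assumes .env lives next to this script and uses plain KEY=VALUE lines.
ENV_FILE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/.env"
set -a                       # auto-export every variable set while sourcing
[ -f "$ENV_FILE" ] && . "$ENV_FILE"
set +a
```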
7 .gitignore vendored Normal file
@@ -0,0 +1,7 @@
.env
ca
backups
backup
services
charts
88 README.md Normal file
@@ -0,0 +1,88 @@
# Sovereign Cloud

> Take control of your digital life with your own personal cloud infrastructure

## Why Build Your Own Cloud?

In a world where our digital lives are increasingly controlled by large corporations, having your own personal cloud puts you back in control:

- **Privacy**: Your data stays on your hardware, under your control
- **Ownership**: No subscription fees or sudden price increases
- **Freedom**: Run the apps you want, the way you want them
- **Learning**: Gain valuable skills in modern cloud technologies
- **Resilience**: Reduce reliance on third-party services that can disappear

## What is This Project?

This project provides a complete, production-ready Kubernetes infrastructure designed for personal use. It combines enterprise-grade technologies in an easy-to-deploy package, allowing you to:

- Host your own services like web apps, databases, and more
- Access services securely from anywhere with automatic HTTPS
- Keep some services private on your home network
- Deploy new applications with a single command
- Manage everything through a slick web dashboard

## What Can You Run?

The possibilities are endless! Here are just a few ideas:

- **Personal Websites & Blogs** (WordPress, Ghost, Hugo)
- **Photo Storage & Sharing** (PhotoPrism, Immich)
- **Document Management** (Paperless-ngx)
- **Media Servers** (Jellyfin, Plex)
- **Home Automation** (Home Assistant)
- **Password Managers** (Bitwarden, Vaultwarden)
- **Note-Taking Apps** (Joplin, Trilium)
- **Productivity Tools** (Nextcloud, Gitea, Plausible Analytics)
- **Database Servers** (PostgreSQL, MariaDB, MongoDB)
- **And much more!**

## Key Features

- **One-Command Setup**: Get a complete Kubernetes infrastructure with a single script
- **Secure by Default**: Automatic HTTPS certificates for all services
- **Split-Horizon DNS**: Access services internally or externally with the same domain
- **Custom Domains**: Use your own domain name for all services
- **Service Templates**: Deploy new applications with a simple command
- **Dashboard**: Web UI for monitoring and managing your infrastructure
- **No Cloud Vendor Lock-In**: Run on your own hardware, from a Raspberry Pi to old laptops

## Getting Started

For detailed instructions, check out our documentation:

- [**Setup Guide**](./docs/SETUP.md) - Step-by-step instructions for setting up your infrastructure
- [**Applications Guide**](./docs/APPS.md) - How to deploy and manage applications on your cloud
- [**Charts Guide**](./charts/README.md) - Working with Helm charts and custom applications
- [**Maintenance Guide**](./docs/MAINTENANCE.md) - Troubleshooting, backups, updates, and security

After setup, visit your dashboard at `https://dashboard.internal.yourdomain.com` to start exploring your new personal cloud infrastructure!

## Project Structure

```
.
├── bin/                    # Helper scripts
├── apps/                   # Apps
├── docs/                   # Documentation
│   ├── SETUP.md            # Setup instructions
│   ├── APPS.md             # Application deployment guide
│   ├── MAINTENANCE.md      # Maintenance and troubleshooting
│   ├── OPS.md              # Operations guide
│   └── INGRESS.md          # Network configuration guide
├── infrastructure_setup/   # Infrastructure setup scripts
├── services/               # Custom service templates and deployed services
└── load-env.sh             # Environment variable loader
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

TBD
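For example, the end-to-end flow the README promises maps onto the helper scripts added later in this commit (service name is hypothetical):

```bash
# Generate a service definition from the "public" template, then deploy it
./bin/generate-service --type public --name blog
./bin/deploy-service blog
```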
7 apps/README.md Normal file
@@ -0,0 +1,7 @@
# Apps

This will be the primary home for Sovereign Cloud-maintained apps.

## Status

Currently none are functional.
249 apps/ghost/manifests/deployment.yaml Normal file
@@ -0,0 +1,249 @@
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ghost
  namespace: ghost
  uid: d01c62ff-68a6-456a-a630-77c730bffc9b
  resourceVersion: '2772014'
  generation: 1
  creationTimestamp: '2025-04-27T02:01:30Z'
  labels:
    app.kubernetes.io/component: ghost
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: ghost
    meta.helm.sh/release-namespace: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: ghost
        app.kubernetes.io/instance: ghost
        app.kubernetes.io/managed-by: Sovereign
        app.kubernetes.io/name: ghost
        app.kubernetes.io/version: 5.118.1
      annotations:
        checksum/secrets: b1cef92e7f73650dddfb455a7519d7b2bcf051c9cb9136b34f504ee120c63ae6
    spec:
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: ghost-secrets
          projected:
            sources:
              - secret:
                  name: ghost-mysql
              - secret:
                  name: ghost
            defaultMode: 420
        - name: ghost-data
          persistentVolumeClaim:
            claimName: ghost
      initContainers:
        - name: prepare-base-dir
          image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
          command:
            - /bin/bash
          args:
            - '-ec'
            - >
              #!/bin/bash

              . /opt/bitnami/scripts/liblog.sh

              info "Copying base dir to empty dir"

              # In order to not break the application functionality (such as upgrades or plugins) we need

              # to make the base directory writable, so we need to copy it to an empty dir volume

              cp -r --preserve=mode /opt/bitnami/ghost /emptydir/app-base-dir
          resources:
            limits:
              cpu: 375m
              ephemeral-storage: 2Gi
              memory: 384Mi
            requests:
              cpu: 250m
              ephemeral-storage: 50Mi
              memory: 256Mi
          volumeMounts:
            - name: empty-dir
              mountPath: /emptydir
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            privileged: false
            seLinuxOptions: {}
            runAsUser: 1001
            runAsGroup: 1001
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
      containers:
        - name: ghost
          image: docker.io/bitnami/ghost:5.118.1-debian-12-r0
          ports:
            - name: https
              containerPort: 2368
              protocol: TCP
          env:
            - name: BITNAMI_DEBUG
              value: 'false'
            - name: ALLOW_EMPTY_PASSWORD
              value: 'yes'
            - name: GHOST_DATABASE_HOST
              value: ghost-mysql
            - name: GHOST_DATABASE_PORT_NUMBER
              value: '3306'
            - name: GHOST_DATABASE_NAME
              value: ghost
            - name: GHOST_DATABASE_USER
              value: ghost
            - name: GHOST_DATABASE_PASSWORD_FILE
              value: /opt/bitnami/ghost/secrets/mysql-password
            - name: GHOST_HOST
              value: blog.cloud.payne.io/
            - name: GHOST_PORT_NUMBER
              value: '2368'
            - name: GHOST_USERNAME
              value: admin
            - name: GHOST_PASSWORD_FILE
              value: /opt/bitnami/ghost/secrets/ghost-password
            - name: GHOST_EMAIL
              value: paul@payne.io
            - name: GHOST_BLOG_TITLE
              value: User's Blog
            - name: GHOST_ENABLE_HTTPS
              value: 'yes'
            - name: GHOST_EXTERNAL_HTTP_PORT_NUMBER
              value: '80'
            - name: GHOST_EXTERNAL_HTTPS_PORT_NUMBER
              value: '443'
            - name: GHOST_SKIP_BOOTSTRAP
              value: 'no'
          resources:
            limits:
              cpu: 375m
              ephemeral-storage: 2Gi
              memory: 384Mi
            requests:
              cpu: 250m
              ephemeral-storage: 50Mi
              memory: 256Mi
          volumeMounts:
            - name: empty-dir
              mountPath: /opt/bitnami/ghost
              subPath: app-base-dir
            - name: empty-dir
              mountPath: /.ghost
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: ghost-data
              mountPath: /bitnami/ghost
            - name: ghost-secrets
              mountPath: /opt/bitnami/ghost/secrets
          livenessProbe:
            tcpSocket:
              port: 2368
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /
              port: https
              scheme: HTTP
              httpHeaders:
                - name: x-forwarded-proto
                  value: https
            initialDelaySeconds: 30
            timeoutSeconds: 3
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            privileged: false
            seLinuxOptions: {}
            runAsUser: 1001
            runAsGroup: 1001
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: ghost
      serviceAccount: ghost
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: ghost
                    app.kubernetes.io/name: ghost
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 1
  updatedReplicas: 1
  unavailableReplicas: 1
  conditions:
    - type: Available
      status: 'False'
      lastUpdateTime: '2025-04-27T02:01:30Z'
      lastTransitionTime: '2025-04-27T02:01:30Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2025-04-27T02:11:32Z'
      lastTransitionTime: '2025-04-27T02:11:32Z'
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "ghost-586bbc6ddd" has timed out progressing.
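Note that this is a live cluster dump (it includes `uid`, `resourceVersion`, and `status`), and the recorded status shows the rollout stalled: zero available replicas and `ProgressDeadlineExceeded`. Typical triage commands for a deployment in this state (standard kubectl, namespace taken from the manifest):

```bash
# Why did the ghost rollout never become available?
kubectl -n ghost rollout status deployment/ghost
kubectl -n ghost get pods -l app.kubernetes.io/instance=ghost
kubectl -n ghost describe pod -l app.kubernetes.io/instance=ghost
kubectl -n ghost logs deployment/ghost --all-containers --tail=50
```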
29 apps/ghost/manifests/ingress.yaml Normal file
@@ -0,0 +1,29 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  namespace: {{ .Values.namespace }}
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    external-dns.alpha.kubernetes.io/target: "cloud.payne.io"
    external-dns.alpha.kubernetes.io/ttl: "60"
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: {{ .Values.ghost.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost
                port:
                  number: 80
  tls:
    - hosts:
        - {{ .Values.ghost.host }}
      secretName: ghost-tls
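Despite living under `manifests/`, this file is Helm-templated (note the `{{ .Values.* }}` references), so it needs at least `namespace` and `ghost.host` set. A hypothetical values file and render check, with keys inferred from the template above (not a file from this commit):

```bash
# Sketch: minimal values the ingress template appears to require
cat > values.yaml <<'EOF'
namespace: ghost
ghost:
  host: blog.cloud.payne.io
EOF
# Render check, assuming this runs from a chart directory containing the template
helm template ghost . --values values.yaml
```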
0 apps/ghost/manifests/kustomization.yaml Normal file
26 apps/ghost/manifests/networkpolicy.yaml Normal file
@@ -0,0 +1,26 @@
---
# Source: ghost/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    - ports:
        - port: 2368
20 apps/ghost/manifests/pdb.yaml Normal file
@@ -0,0 +1,20 @@
---
# Source: ghost/templates/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: ghost
      app.kubernetes.io/component: ghost
15 apps/ghost/manifests/secrets.yaml Normal file
@@ -0,0 +1,15 @@
---
# Source: ghost/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
type: Opaque
data:
  ghost-password: "NnhXejdkNm9KTw=="
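Kubernetes Secret `data` fields are base64-encoded, not encrypted, so committing this value to git exposes the password. Producing (or inspecting) such a field is a one-liner:

```bash
# Encode a password for the data field (-n avoids a trailing newline in the value)
echo -n 'your-ghost-password' | base64
# Decode the committed value to see what was actually exposed
echo 'NnhXejdkNm9KTw==' | base64 -d
```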
14 apps/ghost/manifests/service-account.yaml Normal file
@@ -0,0 +1,14 @@
---
# Source: ghost/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
automountServiceAccountToken: false
26 apps/ghost/manifests/svc.yaml Normal file
@@ -0,0 +1,26 @@
---
# Source: ghost/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Sovereign
    app.kubernetes.io/name: ghost
    app.kubernetes.io/version: 5.118.1
    app.kubernetes.io/component: ghost
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  sessionAffinity: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: ghost
    app.kubernetes.io/component: ghost
31 apps/mysql/maniifests/networkpolicy.yaml Normal file
@@ -0,0 +1,31 @@
---
# Source: ghost/charts/mysql/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: mysql
      app.kubernetes.io/version: 8.4.5
      helm.sh/chart: mysql-12.3.4
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow connection from other cluster pods
    - ports:
        - port: 3306
apps/mysql/maniifests/primary/configmap.yaml
Normal file
47
apps/mysql/maniifests/primary/configmap.yaml
Normal file
@@ -0,0 +1,47 @@
|
---
# Source: ghost/charts/mysql/templates/primary/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
data:
  my.cnf: |-
    [mysqld]
    authentication_policy='* ,,'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port=3306
    mysqlx=0
    mysqlx_port=33060
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=*
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    slow_query_log=0
    long_query_time=10.0

    [client]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
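With this config, mysqld listens on 3306 with the X plugin disabled (`mysqlx=0`). A quick in-cluster connectivity check, using the service name, user, and database defined in the surrounding manifests (the client pod itself is throwaway):

```bash
# One-off client pod; prompts for the bn_ghost password from the ghost-mysql secret
kubectl -n default run mysql-client --rm -it --restart=Never \
  --image=docker.io/bitnami/mysql:8.4.5-debian-12-r0 -- \
  mysql -h ghost-mysql.default.svc.cluster.local -u bn_ghost -p bitnami_ghost
```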
23 apps/mysql/maniifests/primary/pdb.yaml Normal file
@@ -0,0 +1,23 @@
---
# Source: ghost/charts/mysql/templates/primary/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: mysql
      app.kubernetes.io/part-of: mysql
      app.kubernetes.io/component: primary
241 apps/mysql/maniifests/primary/statefulset.yaml Normal file
@@ -0,0 +1,241 @@
---
# Source: ghost/charts/mysql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  replicas: 1
  podManagementPolicy: ""
  selector:
    matchLabels:
      app.kubernetes.io/instance: ghost
      app.kubernetes.io/name: mysql
      app.kubernetes.io/part-of: mysql
      app.kubernetes.io/component: primary
  serviceName: ghost-mysql-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/configuration: 959b0f76ba7e6be0aaaabf97932398c31b17bc9f86d3839a26a3bbbc48673cd9
      labels:
        app.kubernetes.io/instance: ghost
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: mysql
        app.kubernetes.io/version: 8.4.5
        helm.sh/chart: mysql-12.3.4
        app.kubernetes.io/part-of: mysql
        app.kubernetes.io/component: primary
    spec:
      serviceAccountName: ghost-mysql
      automountServiceAccountToken: false
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: ghost
                    app.kubernetes.io/name: mysql
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      initContainers:
        - name: preserve-logs-symlinks
          image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          resources:
            limits:
              cpu: 750m
              ephemeral-storage: 2Gi
              memory: 768Mi
            requests:
              cpu: 500m
              ephemeral-storage: 50Mi
              memory: 512Mi
          command:
            - /bin/bash
          args:
            - -ec
            - |
              #!/bin/bash

              . /opt/bitnami/scripts/libfs.sh
              # We copy the logs folder because it has symlinks to stdout and stderr
              if ! is_dir_empty /opt/bitnami/mysql/logs; then
                cp -r /opt/bitnami/mysql/logs /emptydir/app-logs-dir
              fi
          volumeMounts:
            - name: empty-dir
              mountPath: /emptydir
      containers:
        - name: mysql
          image: docker.io/bitnami/mysql:8.4.5-debian-12-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /opt/bitnami/mysql/secrets/mysql-root-password
            - name: MYSQL_ENABLE_SSL
              value: "no"
            - name: MYSQL_USER
              value: "bn_ghost"
            - name: MYSQL_PASSWORD_FILE
              value: /opt/bitnami/mysql/secrets/mysql-password
            - name: MYSQL_PORT
              value: "3306"
            - name: MYSQL_DATABASE
              value: "bitnami_ghost"
          envFrom:
          ports:
            - name: mysql
              containerPort: 3306
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux="${MYSQL_ROOT_PASSWORD:-}"
                  if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin status -uroot -p"${password_aux}"
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux="${MYSQL_ROOT_PASSWORD:-}"
                  if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
          startupProbe:
            failureThreshold: 10
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux="${MYSQL_ROOT_PASSWORD:-}"
                  if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
          resources:
            limits:
              cpu: 750m
              ephemeral-storage: 2Gi
              memory: 768Mi
            requests:
              cpu: 500m
              ephemeral-storage: 50Mi
              memory: 512Mi
          volumeMounts:
            - name: data
              mountPath: /bitnami/mysql
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/mysql/conf
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/mysql/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/mysql/logs
              subPath: app-logs-dir
            - name: config
              mountPath: /opt/bitnami/mysql/conf/my.cnf
              subPath: my.cnf
            - name: mysql-credentials
              mountPath: /opt/bitnami/mysql/secrets/
      volumes:
        - name: config
          configMap:
            name: ghost-mysql
        - name: mysql-credentials
          secret:
            secretName: ghost-mysql
            items:
              - key: mysql-root-password
                path: mysql-root-password
              - key: mysql-password
                path: mysql-password
        - name: empty-dir
          emptyDir: {}
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app.kubernetes.io/instance: ghost
          app.kubernetes.io/name: mysql
          app.kubernetes.io/component: primary
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
27 apps/mysql/maniifests/primary/svc-headless.yaml Normal file
@@ -0,0 +1,27 @@
---
# Source: ghost/charts/mysql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: mysql
      port: 3306
      targetPort: mysql
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: mysql
    app.kubernetes.io/component: primary
29 apps/mysql/maniifests/primary/svc.yaml Normal file
@@ -0,0 +1,29 @@
---
# Source: ghost/charts/mysql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: mysql
      nodePort: null
  selector:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/name: mysql
    app.kubernetes.io/part-of: mysql
    app.kubernetes.io/component: primary
18 apps/mysql/maniifests/secrets.yaml Normal file
@@ -0,0 +1,18 @@
---
# Source: ghost/charts/mysql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
type: Opaque
data:
  mysql-root-password: "SnVOdks2T2tDdA=="
  mysql-password: "eGllcFBUZWJabw=="
17 apps/mysql/maniifests/serviceaccount.yaml Normal file
@@ -0,0 +1,17 @@
---
# Source: ghost/charts/mysql/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ghost-mysql
  namespace: "default"
  labels:
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mysql
    app.kubernetes.io/version: 8.4.5
    helm.sh/chart: mysql-12.3.4
    app.kubernetes.io/part-of: mysql
automountServiceAccountToken: false
secrets:
  - name: ghost-mysql
3 bin/README.md Normal file
@@ -0,0 +1,3 @@
# Sovereign Cloud Binaries

These are the scripts that help you manage your cloud.
69 bin/chart-diff Executable file
@@ -0,0 +1,69 @@
#!/bin/bash
# chart-diff
# Shows differences between current and new chart version using gomplate for values
# Usage: chart-diff CHART [RELEASE_NAME] [NAMESPACE]

set -e

# Get script directories
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"

# Load environment variables if needed
if [[ -z "$ENVIRONMENT" ]]; then
    if [[ -f "${REPO_ROOT}/load-env.sh" ]]; then
        source "${REPO_ROOT}/load-env.sh"
    fi
fi

# Print usage
if [[ $# -lt 1 || "$1" == "-h" || "$1" == "--help" ]]; then
    echo "Usage: $(basename "$0") CHART [RELEASE_NAME] [NAMESPACE]"
    echo ""
    echo "Shows differences between current and new chart version."
    echo ""
    echo "  CHART           Chart name in charts/ directory"
    echo "  RELEASE_NAME    Release name (defaults to CHART)"
    echo "  NAMESPACE       Namespace (defaults to RELEASE_NAME)"
    echo ""
    exit 1
fi

CHART="$1"
RELEASE_NAME="${2:-$CHART}"
NAMESPACE="${3:-$RELEASE_NAME}"

# We use kubectl diff, so the helm-diff plugin is not required

# Check if chart exists
CHART_PATH="${REPO_ROOT}/charts/${CHART}"
if [[ ! -d "$CHART_PATH" ]]; then
    echo "Error: Chart not found at ${CHART_PATH}"
    exit 1
fi

# chart-template handles the values files, so no checks are needed here

# Show what would change
echo "==> Showing differences for chart: $CHART"
echo "==> Release name: $RELEASE_NAME"
echo "==> Namespace: $NAMESPACE"
echo ""

# Create temporary files for the template output
TEMP_OUTPUT=$(mktemp)
CLEAN_OUTPUT=$(mktemp)

# Generate the template and filter out the header text
"${SCRIPT_DIR}/chart-template" "$CHART" "$RELEASE_NAME" "$NAMESPACE" > "$TEMP_OUTPUT"
sed -n '/^---$/,$p' "$TEMP_OUTPUT" > "$CLEAN_OUTPUT"

# Use kubectl diff to show actual differences between current state and template
echo "==> Showing differences between current cluster state and template for $CHART:"
echo "==> (+ indicates additions, - indicates removals)"
echo ""
kubectl diff -f "$CLEAN_OUTPUT" || true

# Clean up
rm "$TEMP_OUTPUT" "$CLEAN_OUTPUT"
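Typical invocation, assuming a chart directory exists at `charts/cert-manager` (the chart name here is hypothetical):

```bash
# Preview what an install/upgrade would change, without touching the cluster
./bin/chart-diff cert-manager                  # release and namespace default to the chart name
./bin/chart-diff cert-manager cm cert-manager  # explicit release name and namespace
```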
108 bin/chart-install Executable file
@@ -0,0 +1,108 @@
#!/bin/bash
# chart-install
# Installs a Helm chart using gomplate for templating values
# Usage: chart-install CHART [RELEASE_NAME] [NAMESPACE] [--dry-run]

set -e

# Get script directories
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"

# Load environment variables if needed
if [[ -z "$ENVIRONMENT" ]]; then
    if [[ -f "${REPO_ROOT}/load-env.sh" ]]; then
        source "${REPO_ROOT}/load-env.sh"
    fi
fi

# Print usage
if [[ $# -lt 1 || "$1" == "-h" || "$1" == "--help" ]]; then
    echo "Usage: $(basename "$0") CHART [RELEASE_NAME] [NAMESPACE] [--dry-run]"
    echo ""
    echo "Install or upgrade a chart with environment variable substitution."
    echo ""
    echo "  CHART           Chart name in charts/ directory"
    echo "  RELEASE_NAME    Release name (defaults to CHART)"
    echo "  NAMESPACE       Namespace (defaults to RELEASE_NAME)"
    echo ""
    echo "Options:"
    echo "  --dry-run       Show what would be installed without actually installing"
    echo ""
    exit 1
fi

# Separate the --dry-run flag from positional arguments so it can appear anywhere
# without being mistaken for RELEASE_NAME or NAMESPACE
DRY_RUN=false
POSITIONAL=()
for arg in "$@"; do
    if [ "$arg" == "--dry-run" ]; then
        DRY_RUN=true
    else
        POSITIONAL+=("$arg")
    fi
done

CHART="${POSITIONAL[0]}"
RELEASE_NAME="${POSITIONAL[1]:-$CHART}"
NAMESPACE="${POSITIONAL[2]:-$RELEASE_NAME}"

# Check if chart exists
CHART_PATH="${REPO_ROOT}/charts/${CHART}"
if [[ ! -d "$CHART_PATH" ]]; then
    echo "Error: Chart not found at ${CHART_PATH}"
    exit 1
fi

# Update chart dependencies if Chart.yaml has a dependencies section
if [ -f "${CHART_PATH}/Chart.yaml" ] && grep -q "dependencies:" "${CHART_PATH}/Chart.yaml"; then
    echo "Updating dependencies for chart: $CHART"
    helm dependency update "$CHART_PATH"
fi

# chart-template handles the values files, so no checks are needed here

# Create namespace (unless --dry-run was specified)
if [ "$DRY_RUN" == "false" ]; then
    kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -
fi

# Run the helm command
echo "Installing chart: $CHART as $RELEASE_NAME in namespace $NAMESPACE"

if [ "$DRY_RUN" == "true" ]; then
    echo "==> DRY RUN: Would install the following resources:"
    "${SCRIPT_DIR}/chart-template" "$CHART" "$RELEASE_NAME" "$NAMESPACE"
    exit 0
fi

# For actual installation, create a temporary file to capture template output
TEMP_OUTPUT=$(mktemp)
"${SCRIPT_DIR}/chart-template" "$CHART" "$RELEASE_NAME" "$NAMESPACE" > "$TEMP_OUTPUT"

# Apply the template to the cluster
kubectl apply -f "$TEMP_OUTPUT"

# Clean up
rm "$TEMP_OUTPUT"

# Print helpful information (if not dry run)
if [ "$DRY_RUN" == "false" ]; then
    echo ""
    echo "✅ Successfully installed/upgraded $RELEASE_NAME in namespace $NAMESPACE"
    echo ""
    echo "To check the status:"
    echo "  kubectl get all -n $NAMESPACE -l app.kubernetes.io/instance=$RELEASE_NAME"
    echo ""
fi

# Check for post-install instructions (only if not dry run)
if [ "$DRY_RUN" == "false" ]; then
    INSTRUCTIONS_FILE="${CHART_PATH}/POST_INSTALL_NOTES.txt"
    if [[ -f "$INSTRUCTIONS_FILE" ]]; then
        # Process environment variables in instructions (using the same environment)
        echo ""
        echo "Post-installation instructions:"
        echo "==============================="
        gomplate -f "$INSTRUCTIONS_FILE"
    else
        echo "For more information, see the documentation in ${CHART_PATH}/README.md"
    fi
fi
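Example usage, with a hypothetical `charts/postgresql` chart (the name matches `POSTGRES_RELEASE_NAME` in `.env.example` but is an assumption here):

```bash
# Render only (no cluster changes), then install for real
./bin/chart-install postgresql --dry-run
./bin/chart-install postgresql
```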
65 bin/chart-template Executable file
@@ -0,0 +1,65 @@
#!/bin/bash
# chart-template
# Renders the template for a Helm chart using gomplate for values files
# Usage: chart-template CHART [RELEASE_NAME] [NAMESPACE]

set -e

# Get script directories
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"

# Load environment variables if needed
if [[ -f "${REPO_ROOT}/load-env.sh" ]]; then
    source "${REPO_ROOT}/load-env.sh"
fi

# Print usage
if [[ $# -lt 1 || "$1" == "-h" || "$1" == "--help" ]]; then
    echo "Usage: $(basename "$0") CHART [RELEASE_NAME] [NAMESPACE]"
    echo ""
    echo "Renders the Kubernetes templates for a chart with environment variable substitution."
    echo ""
    echo "  CHART           Chart name in charts/ directory"
    echo "  RELEASE_NAME    Release name (defaults to CHART)"
    echo "  NAMESPACE       Namespace (defaults to RELEASE_NAME)"
    echo ""
    exit 1
fi

CHART="$1"
RELEASE_NAME="${2:-$CHART}"
NAMESPACE="${3:-$RELEASE_NAME}"

# Check if chart exists
CHART_PATH="${REPO_ROOT}/charts/${CHART}"
if [[ ! -d "$CHART_PATH" ]]; then
    echo "Error: Chart not found at ${CHART_PATH}"
    exit 1
fi

# Check if template values file exists
TPL_VALUES_FILE="${CHART_PATH}/values.template.yaml"
if [[ ! -f "$TPL_VALUES_FILE" ]]; then
    echo "Error: Template values file not found at ${TPL_VALUES_FILE}"
    exit 1
fi

# Set variables needed for template
export RELEASE_NAME="$RELEASE_NAME"
export NAMESPACE="$NAMESPACE"
export CHART="$CHART"

# No headers - just let helm template output the YAML

# Create temporary values file with gomplate
TEMP_VALUES=$(mktemp)
gomplate -f "$TPL_VALUES_FILE" > "$TEMP_VALUES"

# Run helm template
helm template "$RELEASE_NAME" "$CHART_PATH" \
    --namespace "$NAMESPACE" \
    --values "$TEMP_VALUES"

# Clean up
rm "$TEMP_VALUES"
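The `values.template.yaml` files are gomplate templates, so they can reference whatever `load-env.sh` exported. A sketch of the substitution step in isolation (gomplate's `.Env` map is a real feature; the `DOMAIN` key is just whatever your `.env` defines):

```bash
# gomplate replaces {{ .Env.DOMAIN }} with the exported DOMAIN variable
echo 'host: "blog.{{ .Env.DOMAIN }}"' > /tmp/values.template.yaml
DOMAIN=example.com gomplate -f /tmp/values.template.yaml
# -> host: "blog.example.com"
```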
30 bin/dashboard-token Executable file
@@ -0,0 +1,30 @@
#!/bin/bash

# Set up colors for better readability
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
NC='\033[0m' # No Color

# The namespace where the dashboard is installed
NAMESPACE=kubernetes-dashboard

# Get the token
TOKEN=$(kubectl -n $NAMESPACE get secret dashboard-admin-token -o jsonpath="{.data.token}" | base64 -d)

# Print instructions
echo -e "${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║                ${GREEN}Kubernetes Dashboard Token${BLUE}                  ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}"
echo
echo -e "${GREEN}Use this token to authenticate to the Kubernetes Dashboard:${NC}"
echo
echo -e "${PURPLE}$TOKEN${NC}"
echo

# Save token to clipboard if xclip is available
if command -v xclip &> /dev/null; then
    echo -n "$TOKEN" | xclip -selection clipboard
    echo -e "${GREEN}Token has been copied to your clipboard!${NC}"
fi
137 bin/deploy-service Executable file
@@ -0,0 +1,137 @@
#!/bin/bash
set -e

# Default values
SERVICE_NAME=""
DRY_RUN=false

# Source environment variables from load-env.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_DIR="$(dirname "$SCRIPT_DIR")"
if [ -f "$REPO_DIR/load-env.sh" ]; then
    source "$REPO_DIR/load-env.sh"
fi

function show_help {
    echo "Usage: $0 SERVICE_NAME [options]"
    echo ""
    echo "Arguments:"
    echo "  SERVICE_NAME       Name of the service to deploy (directory name in services/)"
    echo ""
    echo "Optional arguments:"
    echo "  --dry-run          Preview the processed configuration without applying"
    echo "  --help             Show this help message"
    echo ""
    echo "Examples:"
    echo "  $0 example-app"
    echo "  $0 blog --dry-run"
    exit 1
}

# Legacy mode check for type-based commands
if [[ "$1" == "--type" ]]; then
    echo "Warning: Using legacy mode (generate and deploy in one step)"
    echo "Consider using generate-service followed by deploy-service instead."
    echo "Continuing with legacy mode..."
    echo ""

    # Capture all arguments
    ALL_ARGS="$@"

    # Extract service name from arguments
    while [[ $# -gt 0 ]]; do
        key="$1"
        case $key in
            --name)
                SERVICE_NAME_LEGACY="$2"
                break
                ;;
            *)
                shift
                ;;
        esac
    done

    # Generate the service configuration first
    TMP_DIR=$(mktemp -d)
    TMP_FILE="$TMP_DIR/service.yaml"

    $SCRIPT_DIR/generate-service $ALL_ARGS --output "$TMP_DIR"

    # Now deploy it using the service name
    if [[ -n "$SERVICE_NAME_LEGACY" ]]; then
        exec $0 "$SERVICE_NAME_LEGACY"
    else
        echo "Error: Legacy mode requires --name parameter"
        exit 1
    fi
    exit $?
fi

# Parse command-line arguments
while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        --help)
            show_help
            ;;
        -*)
            echo "Unknown option: $1"
            show_help
            ;;
        *)
            # First non-option argument is the service name
            SERVICE_NAME="$1"
            shift
            ;;
    esac
done

# Validate service name
if [[ -z "$SERVICE_NAME" ]]; then
    echo "Error: SERVICE_NAME must be provided"
    show_help
fi

# Construct the service file path
SERVICE_FILE="$REPO_DIR/services/$SERVICE_NAME/service.yaml"
if [[ ! -f "$SERVICE_FILE" ]]; then
    echo "Error: Service file not found for $SERVICE_NAME at $SERVICE_FILE"
    exit 1
fi

# Create temporary file for the processed manifest
TEMP_FILE=$(mktemp)

# Ensure DOMAIN is exported for template substitution
export DOMAIN="$DOMAIN"

# Process the service file with variable substitution
echo "Processing service file: $SERVICE_FILE"
envsubst < "$SERVICE_FILE" > "$TEMP_FILE"

# Handle dry run mode
if [[ "$DRY_RUN" == "true" ]]; then
    cat "$TEMP_FILE"
    rm "$TEMP_FILE"
    exit 0
fi

# Extract namespace from the processed file (for creating it if needed)
NAMESPACE=$(grep -o "namespace: [a-zA-Z0-9_-]\+" "$TEMP_FILE" | head -1 | cut -d' ' -f2)
if [[ -n "$NAMESPACE" ]]; then
    # Create the namespace if it doesn't exist (idempotent: render with
    # --dry-run=client -o yaml, then apply)
    echo "Creating namespace $NAMESPACE if it doesn't exist..."
    kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -
fi

# Apply the service
echo "Applying service configuration..."
kubectl apply -f "$TEMP_FILE"
rm "$TEMP_FILE"

echo "✅ Service deployed successfully!"
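deploy-service relies on `envsubst` to expand `${VAR}` references in `service.yaml` using the environment loaded from `.env`. The substitution step in isolation (the manifest line is hypothetical):

```bash
# envsubst replaces ${DOMAIN} using the exported environment
export DOMAIN=example.com
echo 'host: blog.${DOMAIN}' | envsubst
# -> host: blog.example.com
```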
186 bin/generate-service Executable file
@@ -0,0 +1,186 @@
#!/bin/bash
set -e

# Source environment variables for defaults and domain settings
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$SCRIPT_DIR/../load-env.sh" ]; then
    source "$SCRIPT_DIR/../load-env.sh"
fi

# Default values
SERVICE_TYPE=""
SERVICE_NAME=""
NAMESPACE=""
IMAGE=""
PORT=""
SERVICE_DOMAIN=""
OUTPUT_DIR=""

function show_help {
    echo "Usage: $0 --type [public|internal|database|microservice] --name SERVICE_NAME [options]"
    echo ""
    echo "Required arguments:"
    echo "  --type TYPE            Service type (public, internal, database, or microservice)"
    echo "  --name NAME            Service name"
    echo ""
    echo "Optional arguments:"
    echo "  --namespace NAMESPACE  Kubernetes namespace (defaults to service name)"
    echo "  --image IMAGE          Container image (defaults to nginx:latest for most types)"
    echo "  --port PORT            Container port (defaults to 80)"
    echo "  --domain DOMAIN        Custom domain (defaults to TYPE-specific domain)"
    echo "  --output DIR           Output directory (defaults to services/NAME)"
    echo "  --help                 Show this help message"
    echo ""
    echo "Examples:"
    echo "  $0 --type public --name blog"
    echo "  $0 --type internal --name admin --image my-admin:v1 --port 8080"
    echo "  $0 --type database --name mysql --image mysql:8.0 --port 3306"
    echo "  $0 --type microservice --name auth --image auth-service:v1 --port 9000"
    exit 1
}

while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
        --type)
            SERVICE_TYPE="$2"
            shift 2
            ;;
        --name)
            SERVICE_NAME="$2"
            shift 2
            ;;
        --namespace)
            NAMESPACE="$2"
            shift 2
            ;;
        --image)
            IMAGE="$2"
            shift 2
            ;;
        --port)
            PORT="$2"
            shift 2
            ;;
        --domain)
            SERVICE_DOMAIN="$2"
            shift 2
            ;;
        --output)
            OUTPUT_DIR="$2"
            shift 2
            ;;
        --help)
            show_help
            ;;
        *)
            echo "Unknown option: $1"
            show_help
            ;;
    esac
done

# Validate required parameters
if [[ -z "$SERVICE_TYPE" ]]; then
    echo "Error: Service type is required"
    show_help
fi

if [[ -z "$SERVICE_NAME" ]]; then
    echo "Error: Service name is required"
    show_help
fi

# Validate service type
if [[ "$SERVICE_TYPE" != "public" && "$SERVICE_TYPE" != "internal" && "$SERVICE_TYPE" != "database" && "$SERVICE_TYPE" != "microservice" ]]; then
    echo "Error: Invalid service type. Must be public, internal, database, or microservice."
    show_help
fi

# Set defaults
if [[ -z "$NAMESPACE" ]]; then
    NAMESPACE="$SERVICE_NAME"
fi

if [[ -z "$IMAGE" ]]; then
    if [[ "$SERVICE_TYPE" == "database" ]]; then
        IMAGE="mariadb:10.6"
    else
        IMAGE="nginx:latest"
    fi
fi

if [[ -z "$PORT" ]]; then
    if [[ "$SERVICE_TYPE" == "database" ]]; then
        PORT="3306"
    else
        PORT="80"
    fi
fi

if [[ -z "$SERVICE_DOMAIN" ]]; then
    if [[ "$SERVICE_TYPE" == "public" ]]; then
        SERVICE_DOMAIN="\${SERVICE_NAME}.\${DOMAIN}"
    elif [[ "$SERVICE_TYPE" == "internal" ]]; then
        SERVICE_DOMAIN="\${SERVICE_NAME}.internal.\${DOMAIN}"
    elif [[ "$SERVICE_TYPE" == "microservice" ]]; then
        SERVICE_DOMAIN="\${SERVICE_NAME}.svc.\${DOMAIN}"
    else
        SERVICE_DOMAIN="\${SERVICE_NAME}.db.\${DOMAIN}"
    fi
fi

# Set default output directory if not provided
if [[ -z "$OUTPUT_DIR" ]]; then
    OUTPUT_DIR="$SCRIPT_DIR/../services/$SERVICE_NAME"
fi

echo "Generating $SERVICE_TYPE service configuration for: $SERVICE_NAME"
echo "Namespace: $NAMESPACE"
echo "Image: $IMAGE"
echo "Port: $PORT"
echo "Domain Template: $SERVICE_DOMAIN"
echo "Output Directory: $OUTPUT_DIR"
echo

# Get the appropriate template
if [[ "$SERVICE_TYPE" == "microservice" ]]; then
    TEMPLATE_FILE="$SCRIPT_DIR/../services/templates/microservice/service.yaml"
else
    TEMPLATE_FILE="$SCRIPT_DIR/../services/templates/${SERVICE_TYPE}-service/service.yaml"
fi

if [[ ! -f "$TEMPLATE_FILE" ]]; then
    echo "Error: Template file not found: $TEMPLATE_FILE"
    exit 1
fi

# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"

# Create the service YAML
echo "Creating service configuration..."

# Prepare variables for substitution
export SERVICE_NAME="$SERVICE_NAME"
export SERVICE_NAMESPACE="$NAMESPACE"
export SERVICE_IMAGE="\"$IMAGE\""
export SERVICE_PORT="$PORT"
export SERVICE_DOMAIN="$SERVICE_DOMAIN"

# Define which variables to replace - only those from command arguments
VARS_TO_REPLACE='${SERVICE_NAME},${SERVICE_NAMESPACE},${SERVICE_IMAGE},${SERVICE_PORT},${SERVICE_DOMAIN}'

# Process the template, only substituting the variables from arguments
envsubst "$VARS_TO_REPLACE" < "$TEMPLATE_FILE" > "$OUTPUT_DIR/service.yaml"

echo "✅ Service configuration generated successfully!"
echo "Configuration file: $OUTPUT_DIR/service.yaml"
echo ""
echo "To deploy this service configuration:"
echo "  ./bin/deploy-service $SERVICE_NAME"
echo ""
echo "To customize further, edit the generated file before deployment."
bin/install-ca-ubuntu
Executable file
67
bin/install-ca-ubuntu
Executable file
@@ -0,0 +1,67 @@
|
#!/bin/bash

# This script installs the local CA certificate on Ubuntu systems to avoid
# certificate warnings in browsers when accessing internal cloud services.

# Set up error handling
set -e

# Define colors for better readability
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color

CA_DIR="/home/payne/repos/cloud.payne.io-setup/ca"
CA_FILE="$CA_DIR/ca.crt"
TARGET_DIR="/usr/local/share/ca-certificates"
TARGET_FILE="cloud-payne-local-ca.crt"

echo -e "${BLUE}=== Installing Local CA Certificate on Ubuntu ===${NC}"
echo

# Check if CA file exists
if [ ! -f "$CA_FILE" ]; then
    echo -e "${RED}CA certificate not found at $CA_FILE${NC}"
    echo -e "${YELLOW}Please run the create-local-ca script first:${NC}"
    echo -e "${BLUE}./bin/create-local-ca${NC}"
    exit 1
fi

# Copy to the system certificate directory
echo -e "${YELLOW}Copying CA certificate to $TARGET_DIR/$TARGET_FILE...${NC}"
sudo cp "$CA_FILE" "$TARGET_DIR/$TARGET_FILE"

# Update the CA certificates
echo -e "${YELLOW}Updating system CA certificates...${NC}"
sudo update-ca-certificates

# Update browsers' CA store (optional, for Firefox)
if [ -d "$HOME/.mozilla" ]; then
    echo -e "${YELLOW}You may need to manually import the certificate in Firefox:${NC}"
    echo -e "1. Open Firefox"
    echo -e "2. Go to Preferences > Privacy & Security > Certificates"
    echo -e "3. Click 'View Certificates' > 'Authorities' tab"
    echo -e "4. Click 'Import' and select $CA_FILE"
    echo -e "5. Check 'Trust this CA to identify websites' and click OK"
fi

# Check popular browsers
if command -v google-chrome &> /dev/null; then
    echo -e "${YELLOW}For Chrome, the system-wide certificate should now be recognized${NC}"
    echo -e "${YELLOW}You may need to restart the browser${NC}"
fi

echo
echo -e "${GREEN}=== CA Certificate Installation Complete ===${NC}"
echo
echo -e "${YELLOW}System-wide CA certificate has been installed.${NC}"
echo -e "${YELLOW}You should now be able to access the Kubernetes Dashboard without certificate warnings:${NC}"
echo -e "${BLUE}https://kubernetes-dashboard.in.cloud.payne.io${NC}"
echo
echo -e "${YELLOW}If you still see certificate warnings, try:${NC}"
echo "1. Restart your browser"
echo "2. Clear your browser's cache and cookies"
echo "3. If using a non-standard browser, you may need to import the certificate manually"
echo
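A quick way to confirm the CA is actually trusted after `update-ca-certificates` (hostname taken from the script above; any TLS-protected internal service would do):

```bash
# curl consults the system trust store; an HTTP status line (rather than a
# certificate error) means the CA is trusted system-wide
curl -sI https://kubernetes-dashboard.in.cloud.payne.io | head -n 1
```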
211 bin/setup-systemd-resolved-dns Executable file
@@ -0,0 +1,211 @@
|
||||
#!/bin/bash

# Set up error handling
set -e

# Define colors for better readability
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Load environment variables
if [ -f "$(dirname "$0")/../load-env.sh" ]; then
    echo -e "${YELLOW}Loading environment variables...${NC}"
    source "$(dirname "$0")/../load-env.sh"
fi

# Get cluster IP
echo -e "${YELLOW}Getting cluster IP address...${NC}"
CLUSTER_IP=$(kubectl get -n kube-system service traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

if [ -z "$CLUSTER_IP" ]; then
    echo -e "${RED}Failed to get cluster IP. Is Traefik running?${NC}"
    exit 1
fi

echo -e "${YELLOW}Using cluster IP: ${CLUSTER_IP}${NC}"

# Domain settings (fall back to the default if load-env.sh didn't set DOMAIN)
DOMAIN="${DOMAIN:-cloud.payne.io}"
INTERNAL_DOMAIN="in.${DOMAIN}"
DASHBOARD_DOMAIN="kubernetes-dashboard.${INTERNAL_DOMAIN}"

echo -e "${BLUE}=== Setting up Split DNS with systemd-resolved ===${NC}"
echo -e "${YELLOW}Internal Domain: ${INTERNAL_DOMAIN}${NC}"
echo -e "${YELLOW}Dashboard Domain: ${DASHBOARD_DOMAIN}${NC}"
echo

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo -e "${RED}This script must be run as root to configure systemd-resolved.${NC}"
    echo -e "${YELLOW}Please run: sudo $0${NC}"
    exit 1
fi

# Create systemd-resolved configuration directory if it doesn't exist
echo -e "${YELLOW}Creating systemd-resolved configuration directory...${NC}"
mkdir -p /etc/systemd/resolved.conf.d/

# Create the configuration file for split DNS
echo -e "${YELLOW}Creating split DNS configuration...${NC}"
cat > /etc/systemd/resolved.conf.d/split-dns.conf << EOF
[Resolve]
# Use Google DNS servers as fallback
FallbackDNS=8.8.8.8 8.8.4.4

# Split DNS: Domains=~ marks the internal domain for special routing,
# and DNS= names the server that should answer for it
Domains=~${INTERNAL_DOMAIN}
DNS=${CLUSTER_IP}
EOF

# Create a static host entry for the dashboard domain.
# systemd-resolved has no static-record directive, so use /etc/hosts,
# which is consulted before DNS.
echo -e "${YELLOW}Creating static address mapping for ${DASHBOARD_DOMAIN}...${NC}"
if ! grep -q "${DASHBOARD_DOMAIN}" /etc/hosts; then
    echo "${CLUSTER_IP} ${DASHBOARD_DOMAIN}" >> /etc/hosts
fi

# Restart systemd-resolved
echo -e "${YELLOW}Restarting systemd-resolved...${NC}"
systemctl restart systemd-resolved

# Remove immutable flag from resolv.conf if set
if lsattr /etc/resolv.conf 2>/dev/null | grep -q 'i'; then
    echo -e "${YELLOW}Removing immutable flag from /etc/resolv.conf...${NC}"
    chattr -i /etc/resolv.conf
fi

# Configure resolv.conf to use systemd-resolved
echo -e "${YELLOW}Configuring /etc/resolv.conf to use systemd-resolved...${NC}"
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf

# Now for the Kubernetes parts
echo -e "${YELLOW}Setting up Kubernetes components...${NC}"

# Get email for Let's Encrypt
echo -e "${YELLOW}Please enter an email address for Let's Encrypt registration:${NC}"
read -p "Email: " EMAIL_ADDRESS

# Ensure cert-manager namespace exists
kubectl get namespace cert-manager >/dev/null 2>&1 || kubectl create namespace cert-manager

# Install cert-manager if needed
if ! kubectl get deployment -n cert-manager cert-manager >/dev/null 2>&1; then
    echo -e "${YELLOW}Installing cert-manager...${NC}"
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm upgrade --install cert-manager jetstack/cert-manager \
        --namespace cert-manager \
        --set installCRDs=true

    # Wait for cert-manager to be ready
    echo -e "${YELLOW}Waiting for cert-manager to be ready (this may take a minute)...${NC}"
    sleep 30
    kubectl wait --for=condition=available --timeout=300s deployment/cert-manager -n cert-manager
    kubectl wait --for=condition=available --timeout=300s deployment/cert-manager-webhook -n cert-manager
fi

# Ensure kubernetes-dashboard namespace exists
kubectl get namespace kubernetes-dashboard >/dev/null 2>&1 || kubectl create namespace kubernetes-dashboard

# Install the dashboard if not already installed
if ! kubectl get deployment -n kubernetes-dashboard kubernetes-dashboard >/dev/null 2>&1; then
    echo -e "${YELLOW}Installing Kubernetes Dashboard...${NC}"
    "$(dirname "$0")/install-simple-dashboard"
fi

# Create a ClusterIssuer for Let's Encrypt
echo -e "${YELLOW}Setting up Let's Encrypt ClusterIssuer...${NC}"
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ${EMAIL_ADDRESS}
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik
EOF

# Check dashboard service name and port
echo -e "${YELLOW}Checking kubernetes-dashboard service...${NC}"
DASHBOARD_SERVICE=$(kubectl get svc -n kubernetes-dashboard -o name | grep kubernetes-dashboard | grep -v metrics-scraper | grep -v api | grep -v auth | head -1)

if [ -z "$DASHBOARD_SERVICE" ]; then
    echo -e "${RED}Kubernetes Dashboard service not found. Please check your installation.${NC}"
    exit 1
fi

# Get the service name without the "service/" prefix
DASHBOARD_SERVICE_NAME=$(echo "$DASHBOARD_SERVICE" | cut -d'/' -f2)
echo -e "${YELLOW}Found dashboard service: ${DASHBOARD_SERVICE_NAME}${NC}"

# Get the service port
DASHBOARD_PORT=$(kubectl get svc "$DASHBOARD_SERVICE_NAME" -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].port}')
echo -e "${YELLOW}Dashboard port: ${DASHBOARD_PORT}${NC}"

# Create an Ingress with TLS
echo -e "${YELLOW}Creating ingress with TLS for ${DASHBOARD_DOMAIN}...${NC}"
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/service.serversscheme: https
    traefik.ingress.kubernetes.io/service.serverstransport.insecureskipverify: "true"
spec:
  tls:
  - hosts:
    - ${DASHBOARD_DOMAIN}
    secretName: kubernetes-dashboard-tls
  rules:
  - host: ${DASHBOARD_DOMAIN}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ${DASHBOARD_SERVICE_NAME}
            port:
              number: ${DASHBOARD_PORT}
EOF

echo
echo -e "${GREEN}=== Split DNS Setup Complete! ===${NC}"
echo
echo -e "${YELLOW}Your Kubernetes Dashboard will be available at:${NC}"
echo -e "${BLUE}https://${DASHBOARD_DOMAIN}${NC}"
echo
echo -e "${YELLOW}Key points:${NC}"
echo "1. systemd-resolved is now configured to resolve ${INTERNAL_DOMAIN} domains locally"
echo "2. The dashboard domain ${DASHBOARD_DOMAIN} is mapped to ${CLUSTER_IP}"
echo "3. Let's Encrypt will issue a valid certificate for secure HTTPS (may take a few minutes)"
echo "4. External users cannot access these domains (special DNS configuration required)"
echo
echo -e "${YELLOW}To test the DNS resolution:${NC}"
echo -e "${BLUE}nslookup ${DASHBOARD_DOMAIN}${NC}"
echo
echo -e "${YELLOW}To verify systemd-resolved configuration:${NC}"
echo -e "${BLUE}resolvectl status${NC}"
echo
echo -e "${YELLOW}Certificate status:${NC}"
echo -e "${BLUE}kubectl get certificate -n kubernetes-dashboard${NC}"
echo
3
dev/README.md
Normal file
3
dev/README.md
Normal file
@@ -0,0 +1,3 @@
|
||||
# Sovereign Cloud Development Resources

These docs and resources are for developers of the cloud project.
43
dev/TODO.md
Normal file
43
dev/TODO.md
Normal file
@@ -0,0 +1,43 @@
|
||||
# To Do

## Current Backlog

- Get custom data out of the CoreDNS config.
- Ensure everything comes from .env and nowhere else. .env is the source of
  truth (not, e.g., the environment, though that will be set up).
- Remove the Helm dependency in favor of kustomize and small scripts
  (declarative, Unix philosophy).
- Figure out how to manage Docker dependencies. Really, the containers are the
  things that need to be updated regularly. The manifests only need to change if
  a Docker version requires changes (e.g. a different env or secret required).
  - Can we rely on or join community efforts here? E.g.
    https://github.com/docker-library/official-images?
- Template out the 192.168.8 addresses in infrastructure_setup.
- Convert MetalLB from a Helm install to straight templates.
- Change all TLS references to sovereign-cloud-tls.
- Eliminate all `payne` references.

## Need to investigate

- k8s config and secrets
- Longhorn

## App packs to develop

- Manager
  - Cockpit?
  - Databases?
  - Tailscale?
  - Backups.
  - SSO?
- Productivity
  - Nextcloud?
  - Jellyfin?
- Communications Stack
  - Matrix/Synapse.
  - Email
- Blog platforms
  - Ghost
- Web hosting
  - Static web sites
- Intelligence stack
  - Set up cloud to utilize GPUs.
234
dev/kill.sh
Executable file
234
dev/kill.sh
Executable file
@@ -0,0 +1,234 @@
|
||||
#!/usr/bin/env bash

# kill.sh - Script to remove all cloud infrastructure resources

set -e

# Get script directory for relative paths
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"

# Source environment variables if not already loaded
if [[ -z "$ENVIRONMENT" ]]; then
    if [[ -f "${REPO_ROOT}/load-env.sh" ]]; then
        source "${REPO_ROOT}/load-env.sh"
    else
        echo "Warning: load-env.sh not found. Environment variables may not be available."
    fi
fi

# Print header
echo "====================================================="
echo "Cloud Infrastructure Resource Removal Tool"
echo "====================================================="
echo
echo "WARNING: This script will remove ALL cloud infrastructure components."
echo "This includes:"
echo "  - MetalLB (Load Balancer)"
echo "  - Traefik (Ingress Controller)"
echo "  - cert-manager (Certificate Management)"
echo "  - CoreDNS (Internal DNS)"
echo "  - ExternalDNS (External DNS Management)"
echo "  - Kubernetes Dashboard"
echo "  - Any associated ClusterIssuers, Certificates, etc."
echo
echo "This is a destructive operation and cannot be undone."
echo

# Ask for confirmation
read -p "Are you sure you want to proceed? (y/N): " confirm
if [[ "$confirm" != "y" && "$confirm" != "Y" ]]; then
    echo "Operation cancelled."
    exit 0
fi

echo
echo "Starting removal process..."
echo

# Function to safely remove resources
remove_resource() {
    local resource_type="$1"
    local resource_name="$2"
    local namespace="${3:-}"
    local ns_flag=""

    if [[ -n "$namespace" ]]; then
        ns_flag="-n $namespace"
    fi

    echo "Removing $resource_type: $resource_name ${namespace:+in namespace $namespace}"

    # Check if resource exists before trying to delete
    # ($ns_flag is intentionally unquoted so it expands to two words)
    if kubectl get "$resource_type" "$resource_name" $ns_flag &>/dev/null; then
        kubectl delete "$resource_type" "$resource_name" $ns_flag
        echo "  ✓ Removed $resource_type: $resource_name"
    else
        echo "  ✓ Resource not found, skipping: $resource_type/$resource_name"
    fi
}

# Function to remove all resources of a type in a namespace
remove_all_resources() {
    local resource_type="$1"
    local namespace="$2"

    echo "Removing all $resource_type in namespace $namespace"

    # Check if namespace exists before trying to list resources
    if ! kubectl get namespace "$namespace" &>/dev/null; then
        echo "  ✓ Namespace $namespace not found, skipping"
        return
    fi

    # Get resources of the specified type in the namespace
    local resources=$(kubectl get "$resource_type" -n "$namespace" -o name 2>/dev/null || echo "")

    if [[ -z "$resources" ]]; then
        echo "  ✓ No $resource_type found in namespace $namespace"
        return
    fi

    # Delete each resource
    while IFS= read -r resource; do
        if [[ -n "$resource" ]]; then
            resource_name=$(echo "$resource" | cut -d/ -f2)
            kubectl delete "$resource_type" "$resource_name" -n "$namespace"
            echo "  ✓ Removed $resource"
        fi
    done <<< "$resources"
}

# Function to remove helm releases
remove_helm_release() {
    local release_name="$1"
    local namespace="${2:-default}"

    echo "Removing Helm release: $release_name from namespace $namespace"

    # Check if release exists before trying to uninstall
    if helm status "$release_name" -n "$namespace" &>/dev/null; then
        helm uninstall "$release_name" -n "$namespace"
        echo "  ✓ Uninstalled Helm release: $release_name"
    else
        echo "  ✓ Helm release not found, skipping: $release_name"
    fi
}

# Function to safely remove namespaces
remove_namespace() {
    local namespace="$1"

    echo "Removing namespace: $namespace"

    # Check if namespace exists before trying to delete
    if kubectl get namespace "$namespace" &>/dev/null; then
        kubectl delete namespace "$namespace" --wait=false
        echo "  ✓ Namespace deletion initiated: $namespace"
    else
        echo "  ✓ Namespace not found, skipping: $namespace"
    fi
}

echo "=== Removing certificate and DNS resources ==="

# 1. Remove ClusterIssuers
echo "Removing ClusterIssuers..."
remove_resource clusterissuer letsencrypt-prod
remove_resource clusterissuer letsencrypt-staging
remove_resource clusterissuer selfsigned-issuer

# 2. Remove Certificates in various namespaces
echo "Removing Certificates..."
remove_all_resources certificates default
remove_all_resources certificates internal
remove_all_resources certificates kubernetes-dashboard

# 3. Remove Issuers
echo "Removing Issuers..."
remove_resource issuer kubernetes-dashboard-ca kubernetes-dashboard
remove_resource issuer selfsigned-issuer kubernetes-dashboard

# 4. Cleanup ExternalDNS records (if possible)
echo "Note: ExternalDNS records in CloudFlare will be orphaned. You may need to manually clean up DNS records."

echo "=== Removing MetalLB resources ==="

# 5. Remove MetalLB custom resources
echo "Removing MetalLB IPAddressPools and L2Advertisements..."
remove_all_resources ipaddresspools.metallb.io metallb-system
remove_all_resources l2advertisements.metallb.io metallb-system

# 5.1. Remove MetalLB core components
echo "Removing MetalLB core components..."
remove_all_resources deployments.apps metallb-system
remove_all_resources daemonsets.apps metallb-system
remove_all_resources services metallb-system
remove_all_resources serviceaccounts metallb-system
remove_all_resources configmaps metallb-system

# 5.2. Remove MetalLB webhook configs
echo "Removing MetalLB Webhook configurations..."
remove_resource validatingwebhookconfiguration metallb-webhook-configuration

echo "=== Removing Traefik resources ==="

# 6. Remove Traefik IngressRoutes and Middlewares
echo "Removing Traefik IngressRoutes and Middlewares..."
remove_all_resources ingressroutes.traefik.containo.us kubernetes-dashboard
remove_all_resources ingressroutes.traefik.containo.us default
remove_all_resources ingressroutes.traefik.containo.us internal
remove_all_resources middlewares.traefik.containo.us cloud-infra
remove_all_resources middlewares.traefik.containo.us default

echo "=== Removing ExternalDNS resources ==="

# 6.1. Remove ExternalDNS resources
echo "Removing ExternalDNS ClusterRole and ClusterRoleBinding..."
remove_resource clusterrole external-dns
remove_resource clusterrolebinding external-dns-viewer
remove_resource secret cloudflare-api-token cloud-infra

echo "=== Removing Helm releases ==="

# 7. Uninstall Helm releases
echo "Uninstalling Helm releases..."
remove_helm_release metallb metallb-system
remove_helm_release traefik cloud-infra
remove_helm_release cert-manager cert-manager
remove_helm_release coredns cloud-infra
remove_helm_release externaldns cloud-infra
remove_helm_release kubernetes-dashboard kubernetes-dashboard
# remove_helm_release postgresql postgres
# remove_helm_release mariadb mariadb

echo "=== Removing namespaces ==="

# 8. Remove namespaces
echo "Removing namespaces..."
remove_namespace cert-manager
remove_namespace cloud-infra
remove_namespace metallb-system
remove_namespace kubernetes-dashboard
remove_namespace internal
# remove_namespace postgres
# remove_namespace mariadb

echo
echo "====================================================="
echo "Cloud infrastructure resource removal completed!"
echo "====================================================="
echo
echo "To reinstall the infrastructure using the recommended approach:"
echo "1. Source environment variables:"
echo "   source load-env.sh"
echo
echo "2. Install components one by one:"
echo "   ./bin/helm-install metallb"
echo "   ./bin/helm-install traefik"
echo "   ./bin/helm-install cert-manager"
echo "   ./bin/helm-install coredns"
echo "   ./bin/helm-install externaldns"
echo
echo "Or use the unified setup script:"
echo "   ./bin/setup-cloud"
165
docs/APPS.md
Normal file
165
docs/APPS.md
Normal file
@@ -0,0 +1,165 @@
|
||||
# Deploying Applications

Once you have your personal cloud infrastructure up and running, you'll want to start deploying applications. This guide explains how to deploy and manage applications on your infrastructure.

## Application Charts

The `/charts` directory contains curated Helm charts for common applications that are ready to deploy on your personal cloud.

### Available Charts

| Chart | Description | Internal/Public |
|-------|-------------|----------------|
| mariadb | MariaDB database for applications | Internal |
| postgres | PostgreSQL database for applications | Internal |

### Installing Charts

Use the `bin/helm-install` script to easily deploy charts with the right configuration:

```bash
# Install PostgreSQL
./bin/helm-install postgres

# Install MariaDB
./bin/helm-install mariadb
```

The script automatically:
- Uses values from your environment variables
- Creates the necessary namespace
- Configures storage and networking
- Sets up appropriate secrets

### Customizing Chart Values

Each chart can be customized by:

1. Editing environment variables in your `.env` file
2. Creating a custom values file:

```bash
# Create a custom values file
cp charts/postgres/values.yaml my-postgres-values.yaml
nano my-postgres-values.yaml

# Install with custom values
./bin/helm-install postgres --values my-postgres-values.yaml
```

### Creating Your Own Charts

You can add your own applications to the charts directory (a sketch of the layout follows the steps):

1. Create a new directory: `mkdir -p charts/my-application`
2. Add the necessary templates and values
3. Document any required environment variables
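
For orientation, a new chart usually ends up with a layout like the following; the file names follow standard Helm conventions, and `my-application` is just a placeholder:

```
charts/my-application/
├── Chart.yaml        # chart metadata: apiVersion: v2, name, version
├── values.yaml       # defaults you expect users to override
└── templates/
    ├── deployment.yaml
    └── service.yaml
```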

## Deploying Custom Services

For simpler applications or services without existing charts, use the `deploy-service` script to quickly deploy from templates.

### Service Types

The system supports four types of services:

1. **Public** - Accessible from the internet
2. **Internal** - Only accessible within your local network
3. **Database** - Internal database services
4. **Microservice** - Services that are only accessible by other services

### Deployment Examples

```bash
# Deploy a public blog using Ghost
./bin/deploy-service --type public --name blog --image ghost:4.12 --port 2368

# Deploy an internal admin dashboard
./bin/deploy-service --type internal --name admin --image my-admin:v1 --port 8080

# Deploy a database service
./bin/deploy-service --type database --name postgres --image postgres:15 --port 5432

# Deploy a microservice
./bin/deploy-service --type microservice --name auth --image auth-service:v1 --port 9000
```

### Service Structure

When you deploy a service, a directory is created at `services/[service-name]/` containing:

- `service.yaml` - The Kubernetes manifest for your service

You can modify this file directly and reapply it with `kubectl apply -f services/[service-name]/service.yaml` to update your service; a representative sketch follows.
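
The exact manifest depends on the service type and template, but for the blog example above a generated `service.yaml` would look roughly like this sketch (the details are illustrative, not the script's verbatim output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  namespace: blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: ghost:4.12
          ports:
            - containerPort: 2368
---
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: blog
spec:
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 2368
```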

## Accessing Services

Services are automatically configured with proper URLs and TLS certificates.

### URL Patterns

- **Public services**: `https://[service-name].[domain]`
- **Internal services**: `https://[service-name].internal.[domain]`
- **Microservices**: `https://[service-name].svc.[domain]`
- **Databases**: `[service-name].[namespace].svc.cluster.local:[port]`

### Dashboard Access

Access the Kubernetes Dashboard at `https://dashboard.internal.[domain]`:

```bash
# Get the dashboard token
./bin/dashboard-token
```

### Service Management

Monitor your running services with:

```bash
# List all services
kubectl get services -A

# View detailed information about a service
kubectl describe service [service-name] -n [namespace]

# Check pods for a service
kubectl get pods -n [namespace] -l app=[service-name]

# View logs for a service
kubectl logs -n [namespace] -l app=[service-name]
```

## Advanced Configuration

### Scaling Services

Scale a service by adjusting its deployment's replica count:

```bash
kubectl scale deployment [service-name] --replicas=3 -n [namespace]
```

### Adding Environment Variables

Add environment variables to your service by editing the service YAML file and adding entries to the `env` section:

```yaml
env:
  - name: DATABASE_URL
    value: "postgres://user:password@postgres:5432/db"
```

### Persistent Storage

For services that need persistent storage, add a PersistentVolumeClaim to your service YAML, as sketched below.
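
A minimal sketch, assuming k3s's default `local-path` storage class (name, size, and namespace are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-data          # illustrative name
  namespace: blog
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```

Reference the claim under the deployment's `volumes` section and mount it into the container with `volumeMounts`.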

## Troubleshooting

If a service isn't working correctly:

1. Check pod status: `kubectl get pods -n [namespace]`
2. View logs: `kubectl logs [pod-name] -n [namespace]`
3. Describe the pod: `kubectl describe pod [pod-name] -n [namespace]`
4. Verify the service: `kubectl get svc [service-name] -n [namespace]`
5. Check the ingress: `kubectl get ingress [service-name] -n [namespace]`
328
docs/MAINTENANCE.md
Normal file
328
docs/MAINTENANCE.md
Normal file
@@ -0,0 +1,328 @@
|
||||
# Maintenance Guide

This guide covers essential maintenance tasks for your personal cloud infrastructure, including troubleshooting, backups, updates, and security best practices.

## Troubleshooting

### General Troubleshooting Steps

1. **Check Component Status**:
   ```bash
   # Check all pods across all namespaces
   kubectl get pods -A

   # Look for pods that aren't Running or Ready
   kubectl get pods -A | grep -v "Running\|Completed"
   ```

2. **View Detailed Pod Information**:
   ```bash
   # Get detailed info about problematic pods
   kubectl describe pod <pod-name> -n <namespace>

   # Check pod logs
   kubectl logs <pod-name> -n <namespace>
   ```

3. **Run Validation Script**:
   ```bash
   ./infrastructure_setup/validate_setup.sh
   ```

4. **Check Node Status**:
   ```bash
   kubectl get nodes
   kubectl describe node <node-name>
   ```

### Common Issues

#### Certificate Problems

If services show invalid certificates:

1. Check certificate status:
   ```bash
   kubectl get certificates -A
   ```

2. Examine certificate details:
   ```bash
   kubectl describe certificate <cert-name> -n <namespace>
   ```

3. Check for cert-manager issues:
   ```bash
   kubectl get pods -n cert-manager
   kubectl logs -l app=cert-manager -n cert-manager
   ```

4. Verify the Cloudflare API token is correctly set up:
   ```bash
   kubectl get secret cloudflare-api-token -n internal
   ```

#### DNS Issues

If DNS resolution isn't working properly:

1. Check CoreDNS status:
   ```bash
   kubectl get pods -n kube-system -l k8s-app=kube-dns
   kubectl logs -l k8s-app=kube-dns -n kube-system
   ```

2. Verify CoreDNS configuration:
   ```bash
   kubectl get configmap -n kube-system coredns -o yaml
   ```

3. Test DNS resolution from inside the cluster:
   ```bash
   kubectl run -i --tty --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
   ```

#### Service Connectivity

If services can't communicate:

1. Check network policies:
   ```bash
   kubectl get networkpolicies -A
   ```

2. Verify service endpoints:
   ```bash
   kubectl get endpoints -n <namespace>
   ```

3. Test connectivity from within the cluster:
   ```bash
   kubectl run -i --tty --rm debug --image=busybox --restart=Never -- wget -O- <service-name>.<namespace>
   ```

## Backup and Restore

### What to Back Up

1. **Persistent Data**:
   - Database volumes
   - Application storage
   - Configuration files

2. **Kubernetes Resources**:
   - Custom Resource Definitions (CRDs)
   - Deployments, Services, Ingresses
   - Secrets and ConfigMaps

### Backup Methods

#### Simple Backup Script

Create a backup script at `bin/backup.sh` (to be implemented):

```bash
#!/bin/bash
# Simple backup script for your personal cloud
# This is a placeholder for future implementation

BACKUP_DIR="/path/to/backups/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"

# Back up Kubernetes resources
kubectl get all -A -o yaml > "$BACKUP_DIR/all-resources.yaml"
kubectl get secrets -A -o yaml > "$BACKUP_DIR/secrets.yaml"
kubectl get configmaps -A -o yaml > "$BACKUP_DIR/configmaps.yaml"

# Back up persistent volumes
# TODO: Add logic to back up persistent volume data

echo "Backup completed: $BACKUP_DIR"
```

#### Using Velero (Recommended for Future)

[Velero](https://velero.io/) is a powerful backup solution for Kubernetes:

```bash
# Install Velero (future implementation)
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm install velero vmware-tanzu/velero --namespace velero --create-namespace

# Create a backup
velero backup create my-backup --include-namespaces default,internal

# Restore from backup
velero restore create --from-backup my-backup
```

### Database Backups

For database services, set up regular dumps:

```bash
# PostgreSQL backup (placeholder)
kubectl exec <postgres-pod> -n <namespace> -- pg_dump -U <username> <database> > backup.sql

# MariaDB/MySQL backup (placeholder)
kubectl exec <mariadb-pod> -n <namespace> -- mysqldump -u root -p<password> <database> > backup.sql
```
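
To make the dumps actually regular, one option is to wrap a dump command in a Kubernetes CronJob. The sketch below is not part of the repo; the image, host, secret name, and PVC are assumptions to adjust to your deployment:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: postgres
spec:
  schedule: "0 3 * * *"   # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:15
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump -h postgresql.postgres.svc.cluster.local -U postgres postgres > /backups/backup-$(date +%F).sql
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgresql            # assumed secret name
                      key: postgres-password
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: postgres-backups       # assumed pre-existing PVC
```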

## Updates

### Updating Kubernetes (K3s)

1. Check current version:
   ```bash
   k3s --version
   ```

2. Update K3s:
   ```bash
   curl -sfL https://get.k3s.io | sh -
   ```

3. Verify the update:
   ```bash
   k3s --version
   kubectl get nodes
   ```

### Updating Infrastructure Components

1. Update the repository:
   ```bash
   git pull
   ```

2. Re-run the setup script:
   ```bash
   ./infrastructure_setup/setup-all.sh
   ```

3. Or update specific components:
   ```bash
   ./infrastructure_setup/setup-cert-manager.sh
   ./infrastructure_setup/setup-dashboard.sh
   # etc.
   ```

### Updating Applications

For Helm chart applications:

```bash
# Update Helm repositories
helm repo update

# Upgrade a specific application
./bin/helm-install <chart-name> --upgrade
```

For services deployed with `deploy-service`:

```bash
# Edit the service YAML
nano services/<service-name>/service.yaml

# Apply changes
kubectl apply -f services/<service-name>/service.yaml
```

## Security

### Best Practices

1. **Keep Everything Updated**:
   - Regularly update K3s
   - Update all infrastructure components
   - Keep application images up to date

2. **Network Security**:
   - Use internal services whenever possible
   - Limit exposed services to only what's necessary
   - Configure your home router's firewall properly

3. **Access Control**:
   - Use strong passwords for all services
   - Implement a secrets management strategy
   - Rotate API tokens and keys regularly

4. **Regular Audits**:
   - Review running services periodically
   - Check for unused or outdated deployments
   - Monitor resource usage for anomalies

### Security Scanning (Future Implementation)

Tools to consider implementing:

1. **Trivy** for image scanning:
   ```bash
   # Example Trivy usage (placeholder)
   trivy image <your-image>
   ```

2. **kube-bench** for Kubernetes security checks:
   ```bash
   # Example kube-bench usage (placeholder)
   kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
   ```

3. **Falco** for runtime security monitoring:
   ```bash
   # Example Falco installation (placeholder)
   helm repo add falcosecurity https://falcosecurity.github.io/charts
   helm install falco falcosecurity/falco --namespace falco --create-namespace
   ```

## System Health Monitoring

### Basic Monitoring

Check system health with:

```bash
# Node resource usage
kubectl top nodes

# Pod resource usage
kubectl top pods -A

# Persistent volume claims
kubectl get pvc -A
```

### Advanced Monitoring (Future Implementation)

Consider implementing:

1. **Prometheus + Grafana** for comprehensive monitoring:
   ```bash
   # Placeholder for future implementation
   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
   helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
   ```

2. **Loki** for log aggregation:
   ```bash
   # Placeholder for future implementation
   helm repo add grafana https://grafana.github.io/helm-charts
   helm install loki grafana/loki-stack --namespace logging --create-namespace
   ```

## Additional Resources

This document will be expanded in the future with:

- Detailed backup and restore procedures
- Monitoring setup instructions
- Comprehensive security hardening guide
- Automated maintenance scripts

For now, refer to the following external resources:

- [K3s Documentation](https://docs.k3s.io/)
- [Kubernetes Troubleshooting Guide](https://kubernetes.io/docs/tasks/debug/)
- [Velero Backup Documentation](https://velero.io/docs/latest/)
- [Kubernetes Security Best Practices](https://kubernetes.io/docs/concepts/security/)
112
docs/SETUP.md
Normal file
112
docs/SETUP.md
Normal file
@@ -0,0 +1,112 @@
|
||||
# Setting Up Your Personal Cloud

Welcome to your journey toward digital independence! This guide will walk you through setting up your own personal cloud infrastructure using Kubernetes, providing you with privacy, control, and flexibility.

## Hardware Recommendations

For a pleasant experience, we recommend:

- A dedicated mini PC, NUC, or old laptop with at least:
  - 4 CPU cores
  - 8GB RAM (16GB recommended)
  - 128GB SSD (256GB or more recommended)
- A stable internet connection
- Optional: additional nodes for high availability

## Initial Setup

### 1. Prepare Environment Variables

First, create your environment configuration:

```bash
# Copy the example file and edit with your details
cp .env.example .env
nano .env

# Then load the environment variables
source load-env.sh
```

Important variables to set in your `.env` file:
- `DOMAIN`: Your domain name (e.g., `cloud.example.com`)
- `EMAIL`: Your email for Let's Encrypt certificates
- `CLOUDFLARE_API_TOKEN`: If using Cloudflare for DNS

### 2. Install K3s (Lightweight Kubernetes)

K3s provides a fully compliant Kubernetes distribution in a small footprint:

```bash
# Install K3s without the default load balancer (we'll use MetalLB)
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode=644 --disable servicelb

# Set up kubectl configuration
mkdir -p ~/.kube
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
chmod 600 ~/.kube/config
```

### 3. Install Infrastructure Components

One command sets up your entire cloud infrastructure:

```bash
./infrastructure_setup/setup-all.sh
```

This installs and configures:
- **MetalLB**: Provides IP addresses for services
- **Traefik**: Handles ingress (routing) with automatic HTTPS
- **cert-manager**: Manages TLS certificates automatically
- **CoreDNS**: Provides internal DNS resolution
- **ExternalDNS**: Updates DNS records automatically
- **Kubernetes Dashboard**: Web UI for managing your cluster

## Adding Additional Nodes (Optional)

For larger workloads or high availability, you can add more nodes:

```bash
# On your master node, get the node token
sudo cat /var/lib/rancher/k3s/server/node-token

# On each new node, join the cluster
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 K3S_TOKEN=NODE_TOKEN sh -
```

## Next Steps

Now that your infrastructure is set up, you can:

1. **Deploy Applications**: See the [Applications Guide](./APPS.md) for deploying services and applications
2. **Access Dashboard**: Visit `https://dashboard.internal.yourdomain.com` and use the token from `./bin/dashboard-token`
3. **Validate Setup**: Run `./infrastructure_setup/validate_setup.sh` to ensure everything is working

## Validation and Troubleshooting

Run the validation script to ensure everything is working correctly:

```bash
./infrastructure_setup/validate_setup.sh
```

This script checks:
- All infrastructure components
- DNS resolution
- Service connectivity
- Certificate issuance
- Network configuration

If issues are found, the script provides specific remediation steps.

## What's Next?

Now that your personal cloud is running, consider:

- Setting up backups with [Velero](https://velero.io/)
- Adding monitoring with Prometheus and Grafana
- Deploying applications like Nextcloud, Home Assistant, or Gitea
- Exploring the Kubernetes Dashboard to monitor your services

Welcome to your personal cloud journey! You now have the foundation for hosting your own services and taking control of your digital life.
331
docs/learning/visibility.md
Normal file
331
docs/learning/visibility.md
Normal file
@@ -0,0 +1,331 @@
|
||||
# Understanding Network Visibility in Kubernetes

This guide explains how applications deployed on our Kubernetes cluster become accessible from both internal and external networks. Whether you're deploying a public-facing website or an internal admin panel, this document will help you understand the journey from deployment to accessibility.

## The Visibility Pipeline

When you deploy an application to the cluster, making it accessible involves several coordinated components working together:

1. **Kubernetes Services** - Direct traffic to your application pods
2. **Ingress Controllers** - Route external HTTP/HTTPS traffic to services
3. **Load Balancers** - Assign external IPs to services
4. **DNS Management** - Map domain names to IPs
5. **TLS Certificates** - Secure connections with HTTPS

Let's walk through how each part works and how they interconnect.

## From Deployment to Visibility

### 1. Application Deployment

Your journey begins with deploying your application on Kubernetes. This typically involves:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myapp:latest
        ports:
        - containerPort: 80
```

This creates pods running your application, but they don't yet have a stable, discoverable endpoint.

### 2. Kubernetes Service: Internal Connectivity

A Kubernetes Service provides a stable endpoint to access your pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
```

With this `ClusterIP` service, your application is accessible within the cluster at `my-app.my-namespace.svc.cluster.local`, but not from outside.

### 3. Ingress: Defining HTTP Routes

For HTTP/HTTPS traffic, an Ingress resource defines routing rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: "traefik"
    external-dns.alpha.kubernetes.io/target: "CLOUD_DOMAIN"
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  rules:
  - host: my-app.CLOUD_DOMAIN
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
  tls:
  - hosts:
    - my-app.CLOUD_DOMAIN
    secretName: wildcard-sovereign-cloud-tls
```

This Ingress tells the cluster to route requests for `my-app.CLOUD_DOMAIN` to your service. The annotations provide hints to other systems like ExternalDNS.

### 4. Traefik: The Ingress Controller

Our cluster uses Traefik as the ingress controller. Traefik watches for Ingress resources and configures itself to handle the routing rules. It acts as a reverse proxy and edge router, handling:

- HTTP/HTTPS routing
- TLS termination
- Load balancing
- Path-based routing
- Host-based routing

Traefik runs as a service in the cluster with its own external IP (provided by MetalLB).

### 5. MetalLB: Assigning External IPs

Since we're running on-premises (not in a cloud that provides load balancers), we use MetalLB to assign external IPs to services. MetalLB manages a pool of IP addresses from our local network:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.8.240-192.168.8.250
```

This allows Traefik and any other LoadBalancer services to receive a real IP address from our network.

### 6. ExternalDNS: Automated DNS Management

ExternalDNS automatically creates and updates DNS records in our CloudFlare DNS zone:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: externaldns
spec:
  # ...
  template:
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns
        args:
        - --source=service
        - --source=ingress
        - --provider=cloudflare
        - --txt-owner-id=sovereign-cloud
```

ExternalDNS watches Kubernetes Services and Ingresses with appropriate annotations, then creates corresponding DNS records in CloudFlare, making your applications discoverable by domain name.

### 7. Cert-Manager: TLS Certificate Automation

To secure connections with HTTPS, we use cert-manager to automatically obtain and renew TLS certificates:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-sovereign-cloud-io
  namespace: default
spec:
  secretName: wildcard-sovereign-cloud-tls
  dnsNames:
  - "*.CLOUD_DOMAIN"
  - "CLOUD_DOMAIN"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```

Cert-manager handles:

- Certificate request and issuance
- DNS validation (for wildcard certificates)
- Automatic renewal
- Secret storage of certificates

## The Two Visibility Paths

In our infrastructure, we support two primary visibility paths:

### Public Services (External Access)

Public services are those meant to be accessible from the public internet:

1. **Service**: Kubernetes ClusterIP service (internal)
2. **Ingress**: Defines routing with a hostname like `service-name.CLOUD_DOMAIN`
3. **DNS**: ExternalDNS creates a CNAME record pointing to `CLOUD_DOMAIN`
4. **TLS**: Uses the wildcard certificate for `*.CLOUD_DOMAIN`
5. **IP Addressing**: Traffic reaches the MetalLB-assigned IP for Traefik
6. **Network**: Traffic flows from external internet → router → MetalLB IP → Traefik → Kubernetes Service → Application Pods

**Deploy a public service with:**

```bash
./bin/deploy-service --type public --name myservice
```

### Internal Services (Private Access)

Internal services are restricted to the internal network:

1. **Service**: Kubernetes ClusterIP service (internal)
2. **Ingress**: Defines routing with a hostname like `service-name.internal.CLOUD_DOMAIN`
3. **DNS**: ExternalDNS creates an A record pointing to the internal load balancer IP
4. **TLS**: Uses the wildcard certificate for `*.internal.CLOUD_DOMAIN`
5. **IP Addressing**: Traffic reaches the MetalLB-assigned IP for Traefik
6. **Network**: Traffic flows from internal network → MetalLB IP → Traefik → Service → Pods
7. **Security**: Traefik middleware restricts access to internal network IPs (sketched just after this list)
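
That restriction comes from a Traefik `Middleware` of roughly this shape; the name and CIDR ranges below are illustrative (the pod range shown is the k3s default):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: internal-only
  namespace: internal
spec:
  ipWhiteList:
    sourceRange:
      - 192.168.8.0/24   # LAN, matching the MetalLB pool's network
      - 10.42.0.0/16     # k3s default pod CIDR
```

An Ingress opts in via an annotation such as `traefik.ingress.kubernetes.io/router.middlewares: internal-internal-only@kubernetescrd` (the value takes the form `<namespace>-<name>@kubernetescrd`).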

**Deploy an internal service with:**

```bash
./bin/deploy-service --type internal --name adminpanel
```

## How It All Works Together

1. **You deploy** an application using our deploy-service script
2. **Kubernetes** schedules and runs your application pods
3. **Services** provide a stable endpoint for your pods
4. **Traefik** configures routing based on Ingress definitions
5. **MetalLB** assigns real network IPs to LoadBalancer services
6. **ExternalDNS** creates DNS records for your services
7. **Cert-Manager** ensures valid TLS certificates for HTTPS

### Network Flow Diagram

```mermaid
flowchart TD
    subgraph Internet["Internet"]
        User("User Browser")
        CloudDNS("CloudFlare DNS")
    end
    subgraph Cluster["Cluster"]
        Router("Router")
        MetalLB("MetalLB")
        Traefik("Traefik Ingress")
        IngSvc("Service")
        IngPods("Application Pods")
        Ingress("Ingress")
        CertManager("cert-manager")
        WildcardCert("Wildcard Certificate")
        ExtDNS("ExternalDNS")
    end
    User -- "1\. DNS Query" --> CloudDNS
    CloudDNS -- "2\. IP Address" --> User
    User -- "3\. HTTPS Request" --> Router
    Router -- "4\. Forward" --> MetalLB
    MetalLB -- "5\. Route" --> Traefik
    Traefik -- "6\. Route" --> Ingress
    Ingress -- "7\. Forward" --> IngSvc
    IngSvc -- "8\. Balance" --> IngPods
    ExtDNS -- "A. Update DNS" --> CloudDNS
    Ingress -- "B. Configure" --> ExtDNS
    CertManager -- "C. Issue Cert" --> WildcardCert
    Ingress -- "D. Use" --> WildcardCert

    User:::internet
    CloudDNS:::internet
    Router:::cluster
    MetalLB:::cluster
    Traefik:::cluster
    IngSvc:::cluster
    IngPods:::cluster
    Ingress:::cluster
    CertManager:::cluster
    WildcardCert:::cluster
    ExtDNS:::cluster
    classDef internet fill:#fcfcfc,stroke:#333
    classDef cluster fill:#a6f3ff,stroke:#333
    style User fill:#C8E6C9
    style CloudDNS fill:#C8E6C9
    style Router fill:#C8E6C9
    style MetalLB fill:#C8E6C9
    style Traefik fill:#C8E6C9
    style IngSvc fill:#C8E6C9
    style IngPods fill:#C8E6C9
    style Ingress fill:#C8E6C9
    style CertManager fill:#C8E6C9
    style WildcardCert fill:#C8E6C9
    style ExtDNS fill:#C8E6C9
```

A successful deployment creates a chain of connections:

```
Internet → DNS (domain name) → External IP → Traefik → Kubernetes Service → Application Pod
```

## Behind the Scenes: The Technical Magic

When you use our `deploy-service` script, several things happen:

1. **Template Processing**: The script processes a YAML template for your service type, using environment variables to customize it
2. **Namespace Management**: Creates or uses your service's namespace
3. **Resource Application**: Applies the generated YAML to create/update all Kubernetes resources
4. **DNS Configuration**: ExternalDNS detects the new resources and creates DNS records
5. **Certificate Management**: Cert-manager ensures TLS certificates exist or creates new ones
6. **Secret Distribution**: For internal services, certificates are copied to the appropriate namespaces

## Troubleshooting Visibility Issues

When services aren't accessible, the issue usually lies in one of these areas:

1. **DNS Resolution**: Domain not resolving to the correct IP
2. **Certificate Problems**: Invalid, expired, or missing TLS certificates
3. **Ingress Configuration**: Incorrect routing rules or annotations
4. **Network Issues**: Firewall rules or internal/external network segregation

Our [Visibility Troubleshooting Guide](/docs/troubleshooting/VISIBILITY.md) provides detailed steps for diagnosing these issues.

## Conclusion

The visibility layer in our infrastructure represents a sophisticated interplay of multiple systems working together. While complex under the hood, it provides a streamlined experience for developers to deploy applications with proper networking, DNS, and security.

By understanding these components and their relationships, you'll be better equipped to deploy applications and diagnose any visibility issues that arise.

## Further Reading

- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [ExternalDNS Project](https://github.com/kubernetes-sigs/external-dns)
- [Cert-Manager Documentation](https://cert-manager.io/docs/)
- [MetalLB Project](https://metallb.universe.tf/)
246
docs/troubleshooting/VISIBILITY.md
Normal file
246
docs/troubleshooting/VISIBILITY.md
Normal file
@@ -0,0 +1,246 @@
|
||||
# Troubleshooting Service Visibility
|
||||
|
||||
This guide covers common issues with accessing services from outside the cluster and how to diagnose and fix them.
|
||||
|
||||
## Common Issues
|
||||
|
||||
External access to your services might fail for several reasons:
|
||||
|
||||
1. **DNS Resolution Issues** - Domain names not resolving to the correct IP address
|
||||
2. **Network Connectivity Issues** - Traffic can't reach the cluster's external IP
|
||||
3. **TLS Certificate Issues** - Invalid or missing certificates
|
||||
4. **Ingress/Service Configuration Issues** - Incorrectly configured routing
|
||||
|
||||
## Diagnostic Steps
|
||||
|
||||
### 1. Check DNS Resolution
|
||||
|
||||
**Symptoms:**
|
||||
|
||||
- Browser shows "site cannot be reached" or "server IP address could not be found"
|
||||
- `ping` or `nslookup` commands fail for your domain
|
||||
- Your service DNS records don't appear in CloudFlare or your DNS provider
|
||||
|
||||
**Checks:**
|
||||
|
||||
```bash
|
||||
# Check if your domain resolves (from outside the cluster)
|
||||
nslookup yourservice.yourdomain.com
|
||||
|
||||
# Check if ExternalDNS is running
|
||||
kubectl get pods -n externaldns
|
||||
|
||||
# Check ExternalDNS logs for errors
|
||||
kubectl logs -n externaldns -l app=external-dns < /dev/null | grep -i error
|
||||
kubectl logs -n externaldns -l app=external-dns | grep -i "your-service-name"
|
||||
|
||||
# Check if CloudFlare API token is configured correctly
|
||||
kubectl get secret cloudflare-api-token -n externaldns
|
||||
```
|
||||
|
||||
**Common Issues:**
|
||||
|
||||
a) **ExternalDNS Not Running**: The ExternalDNS pod is not running or has errors.
|
||||
|
||||
b) **Cloudflare API Token Issues**: The API token is invalid, expired, or doesn't have the right permissions.
|
||||
|
||||
c) **Domain Filter Mismatch**: ExternalDNS is configured with a `--domain-filter` that doesn't match your domain.
|
||||
|
||||
d) **Annotations Missing**: Service or Ingress is missing the required ExternalDNS annotations.
|
||||
|
||||
**Solutions:**
|
||||
|
||||
```bash
|
||||
# 1. Recreate CloudFlare API token secret
|
||||
kubectl create secret generic cloudflare-api-token \
|
||||
--namespace externaldns \
|
||||
--from-literal=api-token="your-api-token" \
|
||||
--dry-run=client -o yaml | kubectl apply -f -
|
||||
|
||||
# 2. Check and set proper annotations on your Ingress:
|
||||
kubectl annotate ingress your-ingress -n your-namespace \
|
||||
external-dns.alpha.kubernetes.io/hostname=your-service.your-domain.com
|
||||
|
||||
# 3. Restart ExternalDNS
|
||||
kubectl rollout restart deployment -n externaldns external-dns
|
||||
```
|
||||
|
||||
### 2. Check Network Connectivity
|
||||
|
||||
**Symptoms:**
|
||||
|
||||
- DNS resolves to the correct IP but the service is still unreachable
|
||||
- Only some services are unreachable while others work
|
||||
- Network timeout errors
|
||||
|
||||
**Checks:**
|
||||
|
||||
```bash
|
||||
# Check if MetalLB is running
|
||||
kubectl get pods -n metallb-system
|
||||
|
||||
# Check MetalLB IP address pool
|
||||
kubectl get ipaddresspools.metallb.io -n metallb-system
|
||||
|
||||
# Verify the service has an external IP
|
||||
kubectl get svc -n your-namespace your-service
|
||||
```
|
||||
|
||||
**Common Issues:**
|
||||
|
||||
a) **MetalLB Configuration**: The IP pool doesn't match your network or is exhausted.
|
||||
|
||||
b) **Firewall Issues**: Firewall is blocking traffic to your cluster's external IP.
|
||||
|
||||
c) **Router Configuration**: NAT or port forwarding issues if using a router.
|
||||
|
||||
**Solutions:**
|
||||
|
||||
```bash
|
||||
# 1. Check and update MetalLB configuration
|
||||
kubectl apply -f infrastructure_setup/metallb/metallb-pool.yaml
|
||||
|
||||
# 2. Check service external IP assignment
|
||||
kubectl describe svc -n your-namespace your-service
|
||||
```
|
||||
|
||||
### 3. Check TLS Certificates
|
||||
|
||||
**Symptoms:**
|
||||
|
||||
- Browser shows certificate errors
|
||||
- "Your connection is not private" warnings
|
||||
- Cert-manager logs show errors
|
||||
|
||||
**Checks:**
|
||||
|
||||
```bash
|
||||
# Check certificate status
|
||||
kubectl get certificates -A
|
||||
|
||||
# Check cert-manager logs
|
||||
kubectl logs -n cert-manager -l app=cert-manager
|
||||
|
||||
# Check if your ingress is using the correct certificate
|
||||
kubectl get ingress -n your-namespace your-ingress -o yaml
|
||||
```
|
||||
|
||||
**Common Issues:**
|
||||
|
||||
a) **Certificate Issuance Failures**: DNS validation or HTTP validation failing.
|
||||
|
||||
b) **Wrong Secret Referenced**: Ingress is referencing a non-existent certificate secret.
|
||||
|
||||
c) **Expired Certificate**: Certificate has expired and wasn't renewed.
|
||||
|
||||
**Solutions:**
|
||||
|
||||
```bash
|
||||
# 1. Check and recreate certificates
|
||||
kubectl apply -f infrastructure_setup/cert-manager/wildcard-certificate.yaml
|
||||
|
||||
# 2. Update ingress to use correct secret
|
||||
kubectl patch ingress your-ingress -n your-namespace --type=json \
|
||||
-p='[{"op": "replace", "path": "/spec/tls/0/secretName", "value": "correct-secret-name"}]'
|
||||
```
|
||||
|
||||
### 4. Check Ingress Configuration
|
||||
|
||||
**Symptoms:**
|
||||
|
||||
- HTTP 404, 503, or other error codes
|
||||
- Service accessible from inside cluster but not outside
|
||||
- Traffic routed to wrong service
|
||||
|
||||
**Checks:**
|
||||
|
||||
```bash
|
||||
# Check ingress status
|
||||
kubectl get ingress -n your-namespace
|
||||
|
||||
# Check Traefik logs
|
||||
kubectl logs -n kube-system -l app.kubernetes.io/name=traefik
|
||||
|
||||
# Check ingress configuration
|
||||
kubectl describe ingress -n your-namespace your-ingress
|
||||
```
|
||||
|
||||
**Common Issues:**
|
||||
|
||||
a) **Incorrect Service Targeting**: Ingress is pointing to wrong service or port.
|
||||
|
||||
b) **Traefik Configuration**: IngressClass or middleware issues.
|
||||
|
||||
c) **Path Configuration**: Incorrect path prefixes or regex.
|
||||
|
||||
**Solutions:**
|
||||
|
||||
```bash
|
||||
# 1. Verify ingress configuration
|
||||
kubectl edit ingress -n your-namespace your-ingress
|
||||
|
||||
# 2. Check that the referenced service exists
|
||||
kubectl get svc -n your-namespace
|
||||
|
||||
# 3. Restart Traefik if needed
|
||||
kubectl rollout restart deployment -n kube-system traefik
|
||||
```
|
||||
|
||||
## Advanced Diagnostics
|
||||
|
||||
For more complex issues, you can use port-forwarding to test services directly:
|
||||
|
||||
```bash
|
||||
# Port-forward the service directly
|
||||
kubectl port-forward -n your-namespace svc/your-service 8080:80
|
||||
|
||||
# Then test locally
|
||||
curl http://localhost:8080
|
||||
```
|
||||
|
||||
You can also deploy a debug pod to test connectivity from inside the cluster:
|
||||
|
||||
```bash
|
||||
# Start a debug pod
|
||||
kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh
|
||||
|
||||
# Inside the pod, test DNS and connectivity
|
||||
nslookup your-service.your-namespace.svc.cluster.local
|
||||
wget -O- http://your-service.your-namespace.svc.cluster.local
|
||||
```

## ExternalDNS Specifics

ExternalDNS can be particularly troublesome. Here are specific debugging steps:

1. **Check Log Level**: Set `--log-level=debug` for more detailed logs
2. **Check Domain Filter**: Ensure `--domain-filter` includes your domain
3. **Check Provider**: Ensure `--provider=cloudflare` (or your DNS provider)
4. **Verify API Permissions**: The Cloudflare token needs Zone.Zone and Zone.DNS permissions
5. **Check TXT Records**: ExternalDNS uses TXT records for ownership tracking

```bash
# Enable verbose logging by appending --log-level=debug to the container args
kubectl patch deployment external-dns -n externaldns --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--log-level=debug"}]'

# Check for specific domain errors
kubectl logs -n externaldns -l app=external-dns | grep -i yourservice.yourdomain.com
```
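
If a record never appears, confirm ExternalDNS is watching the right object at all. Setting the hostname annotation explicitly on the service is a quick test (service name and namespace are placeholders):

```bash
kubectl annotate service my-app -n my-namespace \
  "external-dns.alpha.kubernetes.io/hostname=my-app.${DOMAIN}" --overwrite
```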
|
||||
|
||||
## CloudFlare Specific Issues
|
||||
|
||||
When using CloudFlare, additional issues may arise:
|
||||
|
||||
1. **API Rate Limiting**: CloudFlare may rate limit frequent API calls
|
||||
2. **DNS Propagation**: Changes may take time to propagate through CloudFlare's CDN
|
||||
3. **Proxied Records**: The `external-dns.alpha.kubernetes.io/cloudflare-proxied` annotation controls whether CloudFlare proxies traffic
|
||||
4. **Access Restrictions**: CloudFlare Access or Page Rules may restrict access
|
||||
5. **API Token Permissions**: The token must have Zone:Zone:Read and Zone:DNS:Edit permissions
|
||||
6. **Zone Detection**: If using subdomains, ensure the parent domain is included in the domain filter
|
||||
|
||||
Check CloudFlare dashboard for:
|
||||
|
||||
- DNS record existence
|
||||
- API access logs
|
||||
- DNS settings including proxy status
|
||||
- Any error messages or rate limit warnings
|
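
From the command line, `dig` quickly distinguishes a missing record from a proxied one (hostnames are placeholders):

```bash
# Proxied records resolve to Cloudflare edge IPs; DNS-only records return your LoadBalancer IP
dig +short yourservice.yourdomain.com

# ExternalDNS ownership markers (exact name depends on your --txt-prefix setting)
dig +short TXT yourservice.yourdomain.com
```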
46
infrastructure_setup/README.md
Normal file
@@ -0,0 +1,46 @@
# Infrastructure setup scripts

Creates a fully functional personal cloud infrastructure on a bare metal Kubernetes (k3s) cluster that provides:

1. **External access** to services via configured domain names (using ${DOMAIN})
2. **Internal-only access** to admin interfaces (via internal.${DOMAIN} subdomains)
3. **Secure traffic routing** with automatic TLS
4. **Reliable networking** with proper load balancing

## Architecture

```
Internet → External DNS → MetalLB LoadBalancer → Traefik → Kubernetes Services
                                                    ↑
                                              Internal DNS
                                                    ↑
                                            Internal Network
```

## Key Components

- **MetalLB** - Provides load balancing for bare metal clusters
- **Traefik** - Handles ingress traffic, TLS termination, and routing
- **cert-manager** - Manages TLS certificates
- **CoreDNS** - Provides DNS resolution for services
- **Kubernetes Dashboard** - Web UI for cluster management (accessible via https://dashboard.internal.${DOMAIN})

## Configuration Approach

All infrastructure components use a consistent configuration approach:

1. **Environment Variables** - All configuration settings are managed using environment variables loaded by running `source load-env.sh`
2. **Template Files** - Configuration files use templates with `${VARIABLE}` syntax, expanded with `envsubst` as shown below
3. **Setup Scripts** - Each component has a dedicated script in `infrastructure_setup/` for installation and configuration
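
Concretely, every setup script renders its templates the same way; from the repository root:

```bash
source load-env.sh
envsubst < infrastructure_setup/cert-manager/wildcard-certificate.yaml | kubectl apply -f -
```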

## Idempotent Design

All setup scripts are designed to be idempotent:

- Scripts can be run multiple times without causing harm
- Each script checks for existing resources before creating new ones
- Configuration updates are applied cleanly without duplication
- Failed or interrupted setups can be safely retried
- Changes to configuration will be properly applied on subsequent runs

This idempotent approach ensures consistent, reliable infrastructure setup and allows for incremental changes without requiring a complete teardown and rebuild.
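
The recurring pattern that makes this work is kubectl's client-side dry-run, which turns `create` into create-or-update:

```bash
# Renders the namespace manifest locally, then applies it; safe to re-run
kubectl create namespace cert-manager --dry-run=client -o yaml | kubectl apply -f -
```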
19
infrastructure_setup/cert-manager/internal-wildcard-certificate.yaml
Normal file
@@ -0,0 +1,19 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-internal-sovereign-cloud
  namespace: internal
spec:
  secretName: wildcard-internal-sovereign-cloud-tls
  dnsNames:
    - "*.internal.${DOMAIN}"
    - "internal.${DOMAIN}"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
26
infrastructure_setup/cert-manager/letsencrypt-prod-dns01.yaml
Normal file
@@ -0,0 +1,26 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ${EMAIL}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: ${EMAIL}
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "${DOMAIN}" # This will cover all subdomains
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
            class: traefik
26
infrastructure_setup/cert-manager/letsencrypt-staging-dns01.yaml
Normal file
@@ -0,0 +1,26 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: ${EMAIL}
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: ${EMAIL}
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "${DOMAIN}" # This will cover all subdomains
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
            class: traefik
19
infrastructure_setup/cert-manager/wildcard-certificate.yaml
Normal file
@@ -0,0 +1,19 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-sovereign-cloud
  namespace: default
spec:
  secretName: wildcard-sovereign-cloud-tls
  dnsNames:
    - "*.${DOMAIN}"
    - "${DOMAIN}"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
48
infrastructure_setup/coredns/coredns-config.yaml
Normal file
@@ -0,0 +1,48 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts {
          192.168.8.218 box-01
          192.168.8.222 civil
          192.168.8.240 traefik.${DOMAIN}
          192.168.8.241 dns.internal.${DOMAIN}

          # Test records
          192.168.8.240 test.${DOMAIN}
          192.168.8.240 example-app.${DOMAIN}
          192.168.8.240 civilsociety.${DOMAIN}
          192.168.8.241 test.internal.${DOMAIN}
          192.168.8.240 example-admin.internal.${DOMAIN}
          192.168.8.240 dashboard.internal.${DOMAIN}
          192.168.8.240 kubernetes-dashboard.internal.${DOMAIN}

          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4 {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
        import /etc/coredns/custom/*.override
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    # This field needs to remain for compatibility, even if empty
    # Host entries are now in the Corefile hosts section
25
infrastructure_setup/coredns/coredns-service.yaml
Normal file
@@ -0,0 +1,25 @@
---
apiVersion: v1
kind: Service
metadata:
  name: coredns-lb
  namespace: kube-system
  annotations:
    metallb.universe.tf/loadBalancerIPs: "192.168.8.241"
spec:
  type: LoadBalancer
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: metrics
      port: 9153
      protocol: TCP
      targetPort: 9153
  selector:
    k8s-app: kube-dns
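
With MetalLB handing this service its pinned address, resolution can be spot-checked from any machine on the internal network (the hostname is illustrative; substitute your ${DOMAIN}):

```bash
# Should return 192.168.8.240 per the Corefile hosts block
dig +short @192.168.8.241 dashboard.internal.example.com
```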
41
infrastructure_setup/coredns/split-horizon.yaml
Normal file
@@ -0,0 +1,41 @@
---
# Split-horizon DNS configuration for CoreDNS
# This allows different DNS responses for internal vs external domains
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  internal-zones.server: |
    # Internal zone configuration for *.internal.${DOMAIN}
    internal.${DOMAIN} {
        errors
        log
        hosts {
          192.168.8.240 example-admin.internal.${DOMAIN}
          192.168.8.240 dashboard.internal.${DOMAIN}
          192.168.8.241 test.internal.${DOMAIN}
          fallthrough
        }
        cache 30
        # Use kubernetes service discovery for internal services
        kubernetes cluster.local {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        # Forward to Google DNS if not found locally
        forward . 8.8.8.8 8.8.4.4
    }

  external-zones.server: |
    # External zone configuration for *.${DOMAIN}
    ${DOMAIN} {
        errors
        log
        cache 30
        # For external services, forward to Cloudflare for correct public resolution
        forward . 1.1.1.1 8.8.8.8 {
          max_concurrent 1000
        }
    }
69
infrastructure_setup/externaldns/externaldns.yaml
Normal file
@@ -0,0 +1,69 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: externaldns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: externaldns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: externaldns
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.4
          args:
            - --source=service
            - --source=ingress
            - --provider=cloudflare
            - --txt-owner-id=${CLUSTER_ID}
            - --log-level=debug
            - --publish-internal-services # Also publish internal services
            - --no-cloudflare-proxied
          env:
            - name: CF_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflare-api-token
                  key: api-token
347
infrastructure_setup/get_helm.sh
Executable file
@@ -0,0 +1,347 @@
#!/usr/bin/env bash

# Copyright The Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The install script is based off of the MIT-licensed script from glide,
# the package manager for Go: https://github.com/Masterminds/glide.sh/blob/master/get

: ${BINARY_NAME:="helm"}
: ${USE_SUDO:="true"}
: ${DEBUG:="false"}
: ${VERIFY_CHECKSUM:="true"}
: ${VERIFY_SIGNATURES:="false"}
: ${HELM_INSTALL_DIR:="/usr/local/bin"}
: ${GPG_PUBRING:="pubring.kbx"}

HAS_CURL="$(type "curl" &> /dev/null && echo true || echo false)"
HAS_WGET="$(type "wget" &> /dev/null && echo true || echo false)"
HAS_OPENSSL="$(type "openssl" &> /dev/null && echo true || echo false)"
HAS_GPG="$(type "gpg" &> /dev/null && echo true || echo false)"
HAS_GIT="$(type "git" &> /dev/null && echo true || echo false)"
HAS_TAR="$(type "tar" &> /dev/null && echo true || echo false)"

# initArch discovers the architecture for this system.
initArch() {
  ARCH=$(uname -m)
  case $ARCH in
    armv5*) ARCH="armv5";;
    armv6*) ARCH="armv6";;
    armv7*) ARCH="arm";;
    aarch64) ARCH="arm64";;
    x86) ARCH="386";;
    x86_64) ARCH="amd64";;
    i686) ARCH="386";;
    i386) ARCH="386";;
  esac
}

# initOS discovers the operating system for this system.
initOS() {
  OS=$(echo `uname`|tr '[:upper:]' '[:lower:]')

  case "$OS" in
    # Minimalist GNU for Windows
    mingw*|cygwin*) OS='windows';;
  esac
}

# runs the given command as root (detects if we are root already)
runAsRoot() {
  if [ $EUID -ne 0 -a "$USE_SUDO" = "true" ]; then
    sudo "${@}"
  else
    "${@}"
  fi
}

# verifySupported checks that the os/arch combination is supported for
# binary builds, as well whether or not necessary tools are present.
verifySupported() {
  local supported="darwin-amd64\ndarwin-arm64\nlinux-386\nlinux-amd64\nlinux-arm\nlinux-arm64\nlinux-ppc64le\nlinux-s390x\nlinux-riscv64\nwindows-amd64\nwindows-arm64"
  if ! echo "${supported}" | grep -q "${OS}-${ARCH}"; then
    echo "No prebuilt binary for ${OS}-${ARCH}."
    echo "To build from source, go to https://github.com/helm/helm"
    exit 1
  fi

  if [ "${HAS_CURL}" != "true" ] && [ "${HAS_WGET}" != "true" ]; then
    echo "Either curl or wget is required"
    exit 1
  fi

  if [ "${VERIFY_CHECKSUM}" == "true" ] && [ "${HAS_OPENSSL}" != "true" ]; then
    echo "In order to verify checksum, openssl must first be installed."
    echo "Please install openssl or set VERIFY_CHECKSUM=false in your environment."
    exit 1
  fi

  if [ "${VERIFY_SIGNATURES}" == "true" ]; then
    if [ "${HAS_GPG}" != "true" ]; then
      echo "In order to verify signatures, gpg must first be installed."
      echo "Please install gpg or set VERIFY_SIGNATURES=false in your environment."
      exit 1
    fi
    if [ "${OS}" != "linux" ]; then
      echo "Signature verification is currently only supported on Linux."
      echo "Please set VERIFY_SIGNATURES=false or verify the signatures manually."
      exit 1
    fi
  fi

  if [ "${HAS_GIT}" != "true" ]; then
    echo "[WARNING] Could not find git. It is required for plugin installation."
  fi

  if [ "${HAS_TAR}" != "true" ]; then
    echo "[ERROR] Could not find tar. It is required to extract the helm binary archive."
    exit 1
  fi
}

# checkDesiredVersion checks if the desired version is available.
checkDesiredVersion() {
  if [ "x$DESIRED_VERSION" == "x" ]; then
    # Get tag from release URL
    local latest_release_url="https://get.helm.sh/helm-latest-version"
    local latest_release_response=""
    if [ "${HAS_CURL}" == "true" ]; then
      latest_release_response=$( curl -L --silent --show-error --fail "$latest_release_url" 2>&1 || true )
    elif [ "${HAS_WGET}" == "true" ]; then
      latest_release_response=$( wget "$latest_release_url" -q -O - 2>&1 || true )
    fi
    TAG=$( echo "$latest_release_response" | grep '^v[0-9]' )
    if [ "x$TAG" == "x" ]; then
      printf "Could not retrieve the latest release tag information from %s: %s\n" "${latest_release_url}" "${latest_release_response}"
      exit 1
    fi
  else
    TAG=$DESIRED_VERSION
  fi
}

# checkHelmInstalledVersion checks which version of helm is installed and
# if it needs to be changed.
checkHelmInstalledVersion() {
  if [[ -f "${HELM_INSTALL_DIR}/${BINARY_NAME}" ]]; then
    local version=$("${HELM_INSTALL_DIR}/${BINARY_NAME}" version --template="{{ .Version }}")
    if [[ "$version" == "$TAG" ]]; then
      echo "Helm ${version} is already ${DESIRED_VERSION:-latest}"
      return 0
    else
      echo "Helm ${TAG} is available. Changing from version ${version}."
      return 1
    fi
  else
    return 1
  fi
}

# downloadFile downloads the latest binary package and also the checksum
# for that binary.
downloadFile() {
  HELM_DIST="helm-$TAG-$OS-$ARCH.tar.gz"
  DOWNLOAD_URL="https://get.helm.sh/$HELM_DIST"
  CHECKSUM_URL="$DOWNLOAD_URL.sha256"
  HELM_TMP_ROOT="$(mktemp -dt helm-installer-XXXXXX)"
  HELM_TMP_FILE="$HELM_TMP_ROOT/$HELM_DIST"
  HELM_SUM_FILE="$HELM_TMP_ROOT/$HELM_DIST.sha256"
  echo "Downloading $DOWNLOAD_URL"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "$CHECKSUM_URL" -o "$HELM_SUM_FILE"
    curl -SsL "$DOWNLOAD_URL" -o "$HELM_TMP_FILE"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "$HELM_SUM_FILE" "$CHECKSUM_URL"
    wget -q -O "$HELM_TMP_FILE" "$DOWNLOAD_URL"
  fi
}

# verifyFile verifies the SHA256 checksum of the binary package
# and the GPG signatures for both the package and checksum file
# (depending on settings in environment).
verifyFile() {
  if [ "${VERIFY_CHECKSUM}" == "true" ]; then
    verifyChecksum
  fi
  if [ "${VERIFY_SIGNATURES}" == "true" ]; then
    verifySignatures
  fi
}

# installFile installs the Helm binary.
installFile() {
  HELM_TMP="$HELM_TMP_ROOT/$BINARY_NAME"
  mkdir -p "$HELM_TMP"
  tar xf "$HELM_TMP_FILE" -C "$HELM_TMP"
  HELM_TMP_BIN="$HELM_TMP/$OS-$ARCH/helm"
  echo "Preparing to install $BINARY_NAME into ${HELM_INSTALL_DIR}"
  runAsRoot cp "$HELM_TMP_BIN" "$HELM_INSTALL_DIR/$BINARY_NAME"
  echo "$BINARY_NAME installed into $HELM_INSTALL_DIR/$BINARY_NAME"
}

# verifyChecksum verifies the SHA256 checksum of the binary package.
verifyChecksum() {
  printf "Verifying checksum... "
  local sum=$(openssl sha1 -sha256 ${HELM_TMP_FILE} | awk '{print $2}')
  local expected_sum=$(cat ${HELM_SUM_FILE})
  if [ "$sum" != "$expected_sum" ]; then
    echo "SHA sum of ${HELM_TMP_FILE} does not match. Aborting."
    exit 1
  fi
  echo "Done."
}

# verifySignatures obtains the latest KEYS file from GitHub main branch
# as well as the signature .asc files from the specific GitHub release,
# then verifies that the release artifacts were signed by a maintainer's key.
verifySignatures() {
  printf "Verifying signatures... "
  local keys_filename="KEYS"
  local github_keys_url="https://raw.githubusercontent.com/helm/helm/main/${keys_filename}"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "${github_keys_url}" -o "${HELM_TMP_ROOT}/${keys_filename}"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "${HELM_TMP_ROOT}/${keys_filename}" "${github_keys_url}"
  fi
  local gpg_keyring="${HELM_TMP_ROOT}/keyring.gpg"
  local gpg_homedir="${HELM_TMP_ROOT}/gnupg"
  mkdir -p -m 0700 "${gpg_homedir}"
  local gpg_stderr_device="/dev/null"
  if [ "${DEBUG}" == "true" ]; then
    gpg_stderr_device="/dev/stderr"
  fi
  gpg --batch --quiet --homedir="${gpg_homedir}" --import "${HELM_TMP_ROOT}/${keys_filename}" 2> "${gpg_stderr_device}"
  gpg --batch --no-default-keyring --keyring "${gpg_homedir}/${GPG_PUBRING}" --export > "${gpg_keyring}"
  local github_release_url="https://github.com/helm/helm/releases/download/${TAG}"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc"
    curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc"
    wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc"
  fi
  local error_text="If you think this might be a potential security issue,"
  error_text="${error_text}\nplease see here: https://github.com/helm/community/blob/master/SECURITY.md"
  local num_goodlines_sha=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)')
  if [[ ${num_goodlines_sha} -lt 2 ]]; then
    echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256!"
    echo -e "${error_text}"
    exit 1
  fi
  local num_goodlines_tar=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)')
  if [[ ${num_goodlines_tar} -lt 2 ]]; then
    echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz!"
    echo -e "${error_text}"
    exit 1
  fi
  echo "Done."
}

# fail_trap is executed if an error occurs.
fail_trap() {
  result=$?
  if [ "$result" != "0" ]; then
    if [[ -n "$INPUT_ARGUMENTS" ]]; then
      echo "Failed to install $BINARY_NAME with the arguments provided: $INPUT_ARGUMENTS"
      help
    else
      echo "Failed to install $BINARY_NAME"
    fi
    echo -e "\tFor support, go to https://github.com/helm/helm."
  fi
  cleanup
  exit $result
}

# testVersion tests the installed client to make sure it is working.
testVersion() {
  set +e
  HELM="$(command -v $BINARY_NAME)"
  if [ "$?" = "1" ]; then
    echo "$BINARY_NAME not found. Is $HELM_INSTALL_DIR on your "'$PATH?'
    exit 1
  fi
  set -e
}

# help provides possible cli installation arguments
help () {
  echo "Accepted cli arguments are:"
  echo -e "\t[--help|-h ] ->> prints this help"
  echo -e "\t[--version|-v <desired_version>] . When not defined it fetches the latest release tag from the Helm CDN"
  echo -e "\te.g. --version v3.0.0 or -v canary"
  echo -e "\t[--no-sudo]  ->> install without sudo"
}

# cleanup temporary files to avoid https://github.com/helm/helm/issues/2977
cleanup() {
  if [[ -d "${HELM_TMP_ROOT:-}" ]]; then
    rm -rf "$HELM_TMP_ROOT"
  fi
}

# Execution

# Stop execution on any error
trap "fail_trap" EXIT
set -e

# Set debug if desired
if [ "${DEBUG}" == "true" ]; then
  set -x
fi

# Parsing input arguments (if any)
export INPUT_ARGUMENTS="${@}"
set -u
while [[ $# -gt 0 ]]; do
  case $1 in
    '--version'|-v)
      shift
      if [[ $# -ne 0 ]]; then
        export DESIRED_VERSION="${1}"
        if [[ "$1" != "v"* ]]; then
          echo "Expected version arg ('${DESIRED_VERSION}') to begin with 'v', fixing..."
          export DESIRED_VERSION="v${1}"
        fi
      else
        echo -e "Please provide the desired version. e.g. --version v3.0.0 or -v canary"
        exit 0
      fi
      ;;
    '--no-sudo')
      USE_SUDO="false"
      ;;
    '--help'|-h)
      help
      exit 0
      ;;
    *) exit 1
      ;;
  esac
  shift
done
set +u

initArch
initOS
verifySupported
checkDesiredVersion
if ! checkHelmInstalledVersion; then
  downloadFile
  verifyFile
  installFile
fi
testVersion
cleanup
103
infrastructure_setup/kubernetes-dashboard/dashboard-kube-system.yaml
Normal file
@@ -0,0 +1,103 @@
---
# Certificate for the dashboard
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kubernetes-dashboard-tls
  namespace: kubernetes-dashboard
spec:
  secretName: kubernetes-dashboard-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "dashboard.internal.${DOMAIN}"
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048

---
# Internal-only middleware
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: internal-only
  namespace: kubernetes-dashboard
spec:
  ipWhiteList:
    # Restrict to local private network ranges
    sourceRange:
      - 127.0.0.1/32    # localhost
      - 10.0.0.0/8      # Private network
      - 172.16.0.0/12   # Private network
      - 192.168.0.0/16  # Private network

---
# HTTPS redirect middleware
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-redirect-scheme
  namespace: kubernetes-dashboard
spec:
  redirectScheme:
    scheme: https
    permanent: true

---
# IngressRoute for Dashboard
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-https
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`dashboard.internal.${DOMAIN}`)
      kind: Rule
      middlewares:
        - name: internal-only
          namespace: kubernetes-dashboard
      services:
        - name: kubernetes-dashboard
          port: 443
          serversTransport: dashboard-transport
  tls:
    secretName: kubernetes-dashboard-tls

---
# HTTP to HTTPS redirect
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-http
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`dashboard.internal.${DOMAIN}`)
      kind: Rule
      middlewares:
        - name: dashboard-redirect-scheme
          namespace: kubernetes-dashboard
      services:
        - name: kubernetes-dashboard
          port: 443
          serversTransport: dashboard-transport

---
# ServersTransport for HTTPS backend with skip verify
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: dashboard-transport
  namespace: kubernetes-dashboard
spec:
  insecureSkipVerify: true
  serverName: dashboard.internal.${DOMAIN}
21
infrastructure_setup/metallb/metallb-config.yaml
Normal file
@@ -0,0 +1,21 @@
---
# Define IP address pool for MetalLB
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
    - ${CLUSTER_LOAD_BALANCER_RANGE}

---
# Define Layer 2 advertisement for the IP pool
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - production
16
infrastructure_setup/metallb/metallb-helm-config.yaml
Normal file
@@ -0,0 +1,16 @@
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: metallb
  namespace: kube-system
spec:
  valuesContent: |-
    # The new configuration format for MetalLB v0.13.0+
    apiVersion: v1
    # We'll use IPAddressPool and L2Advertisement CRs instead of the deprecated configInline
    # Need to install the CRDs separately
    crds:
      enabled: true
    # Disable controller.configInline since it's deprecated
    controller:
      configInline: null
21
infrastructure_setup/metallb/metallb-pool.yaml
Normal file
@@ -0,0 +1,21 @@
---
# Define IP address pool for MetalLB using the new format
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
    - 192.168.8.240-192.168.8.250

---
# Define Layer 2 advertisement for the IP pool using the new format
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - production
46
infrastructure_setup/setup-all.sh
Executable file
@@ -0,0 +1,46 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables (needed for the ${DOMAIN} hints below)
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up infrastructure components for k3s..."

# Make all script files executable
chmod +x *.sh

# Utils
./setup-utils.sh

# Setup MetalLB (must be first for IP allocation)
./setup-metallb.sh

# Setup Traefik
./setup-traefik.sh

# Setup CoreDNS
./setup-coredns.sh

# Setup cert-manager
./setup-cert-manager.sh

# Setup ExternalDNS
./setup-externaldns.sh

# Setup Kubernetes Dashboard
./setup-dashboard.sh

echo "Infrastructure setup complete!"
echo
echo "Next steps:"
echo "1. Install Helm charts for non-infrastructure components"
echo "2. Access the dashboard at: https://dashboard.internal.${DOMAIN}"
echo "3. Get the dashboard token with: ./bin/dashboard-token"
echo
echo "To verify components, run:"
echo "- kubectl get pods -n cert-manager"
echo "- kubectl get pods -n externaldns"
echo "- kubectl get pods -n kubernetes-dashboard"
echo "- kubectl get clusterissuers"
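
A typical first run, from the repository root (assuming a `.env` has been created from `.env.example`):

```bash
cp .env.example .env        # then edit the values for your domain and network
source load-env.sh
./infrastructure_setup/setup-all.sh
```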
102
infrastructure_setup/setup-cert-manager.sh
Executable file
@@ -0,0 +1,102 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up cert-manager..."

# Create cert-manager namespace
kubectl create namespace cert-manager --dry-run=client -o yaml | kubectl apply -f -

# Install cert-manager using the official installation method
# This installs CRDs, controllers, and webhook components
echo "Installing cert-manager components..."
# Using stable URL for cert-manager installation
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.1/cert-manager.yaml || \
  kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.13.1/cert-manager.yaml

# Wait for cert-manager to be ready
echo "Waiting for cert-manager to be ready..."
kubectl wait --for=condition=Available deployment/cert-manager -n cert-manager --timeout=120s
kubectl wait --for=condition=Available deployment/cert-manager-cainjector -n cert-manager --timeout=120s
kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-manager --timeout=120s

# Add delay to allow webhook to be fully ready
echo "Waiting additional time for cert-manager webhook to be fully operational..."
sleep 30

# Setup Cloudflare API token for DNS01 challenges
if [[ -n "${CLOUDFLARE_API_TOKEN}" ]]; then
  echo "Creating Cloudflare API token secret in cert-manager namespace..."
  kubectl create secret generic cloudflare-api-token \
    --namespace cert-manager \
    --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -

  # Create internal namespace if it doesn't exist
  echo "Creating internal namespace if it doesn't exist..."
  kubectl create namespace internal --dry-run=client -o yaml | kubectl apply -f -

  # Create the same secret in the internal namespace
  echo "Creating Cloudflare API token secret in internal namespace..."
  kubectl create secret generic cloudflare-api-token \
    --namespace internal \
    --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  echo "Warning: CLOUDFLARE_API_TOKEN not set. DNS01 challenges will not work."
fi

# Apply Let's Encrypt issuers
echo "Creating Let's Encrypt issuers..."
cat "${SCRIPT_DIR}/cert-manager/letsencrypt-staging-dns01.yaml" | envsubst | kubectl apply -f -
cat "${SCRIPT_DIR}/cert-manager/letsencrypt-prod-dns01.yaml" | envsubst | kubectl apply -f -

# Wait for issuers to be ready
echo "Waiting for Let's Encrypt issuers to be ready..."
sleep 10

# Apply wildcard certificates
echo "Creating wildcard certificates..."
cat "${SCRIPT_DIR}/cert-manager/internal-wildcard-certificate.yaml" | envsubst | kubectl apply -f -
cat "${SCRIPT_DIR}/cert-manager/wildcard-certificate.yaml" | envsubst | kubectl apply -f -
echo "Wildcard certificate creation initiated. This may take some time to complete depending on DNS propagation."

# Wait for the certificates to be issued (with a timeout)
echo "Waiting for wildcard certificates to be ready (this may take several minutes)..."
kubectl wait --for=condition=Ready certificate wildcard-sovereign-cloud -n default --timeout=300s || true
kubectl wait --for=condition=Ready certificate wildcard-internal-sovereign-cloud -n internal --timeout=300s || true

# Copy the internal wildcard certificate to the example-admin namespace,
# but only if that namespace already exists (i.e., the app is deployed)
echo "Copying internal wildcard certificate to example-admin namespace..."
if kubectl get namespace example-admin &>/dev/null; then
  # Get the internal wildcard certificate secret and copy it to the example-admin namespace
  if kubectl get secret wildcard-internal-sovereign-cloud-tls -n internal &>/dev/null; then
    kubectl get secret wildcard-internal-sovereign-cloud-tls -n internal -o yaml | \
      sed 's/namespace: internal/namespace: example-admin/' | \
      kubectl apply -f -
    echo "Certificate copied to example-admin namespace"
  else
    echo "Internal wildcard certificate not ready yet. Please manually copy it later with:"
    echo "  kubectl get secret wildcard-internal-sovereign-cloud-tls -n internal -o yaml | \\"
    echo "    sed 's/namespace: internal/namespace: example-admin/' | \\"
    echo "    kubectl apply -f -"
  fi
fi

echo "cert-manager setup complete!"
echo ""
echo "To verify the installation:"
echo "  kubectl get pods -n cert-manager"
echo "  kubectl get clusterissuers"
35
infrastructure_setup/setup-coredns.sh
Executable file
@@ -0,0 +1,35 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up CoreDNS for k3s..."
echo "Script directory: ${SCRIPT_DIR}"
echo "Current directory: $(pwd)"

# Apply the custom config for the k3s-provided CoreDNS
echo "Applying CoreDNS configuration..."
echo "Looking for file: ${SCRIPT_DIR}/coredns/coredns-config.yaml"
# Simply use envsubst for variable expansion and apply
cat "${SCRIPT_DIR}/coredns/coredns-config.yaml" | envsubst | kubectl apply -f -

# Apply the split-horizon configuration
echo "Applying split-horizon DNS configuration..."
cat "${SCRIPT_DIR}/coredns/split-horizon.yaml" | envsubst | kubectl apply -f -

# Apply the LoadBalancer service for external access to CoreDNS
echo "Applying CoreDNS service configuration..."
cat "${SCRIPT_DIR}/coredns/coredns-service.yaml" | envsubst | kubectl apply -f -

# Restart CoreDNS pods to apply the changes
echo "Restarting CoreDNS pods to apply changes..."
kubectl delete pod -n kube-system -l k8s-app=kube-dns

echo "CoreDNS setup complete!"
94
infrastructure_setup/setup-dashboard.sh
Executable file
@@ -0,0 +1,94 @@
#!/bin/bash
set -e

# Store the script directory path for later use
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up Kubernetes Dashboard..."

# Apply the official dashboard installation
echo "Installing Kubernetes Dashboard core components..."
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Create admin service account and token
cat << EOF | kubectl apply -f -
---
# Service Account and RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

---
# Token for dashboard-admin
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-admin
type: kubernetes.io/service-account-token
EOF

# Clean up any existing IngressRoute resources that might conflict
echo "Cleaning up any existing dashboard resources to prevent conflicts..."
# Clean up all IngressRoutes related to dashboard in both namespaces
kubectl delete ingressroute -n kubernetes-dashboard --all --ignore-not-found
kubectl delete ingressroute -n kube-system kubernetes-dashboard --ignore-not-found
kubectl delete ingressroute -n kube-system kubernetes-dashboard-alt --ignore-not-found
kubectl delete ingressroute -n kube-system kubernetes-dashboard-http --ignore-not-found
kubectl delete ingressroute -n kube-system kubernetes-dashboard-alt-http --ignore-not-found

# Clean up middleware in both namespaces
kubectl delete middleware -n kubernetes-dashboard --all --ignore-not-found
kubectl delete middleware -n kube-system dashboard-internal-only --ignore-not-found
kubectl delete middleware -n kube-system dashboard-redirect-scheme --ignore-not-found

# Clean up ServersTransport in both namespaces
kubectl delete serverstransport -n kubernetes-dashboard dashboard-transport --ignore-not-found
kubectl delete serverstransport -n kube-system dashboard-transport --ignore-not-found

# Apply the dashboard configuration
echo "Applying dashboard configuration..."
# Use just the kube-system version since it works better with Traefik
cat "${SCRIPT_DIR}/kubernetes-dashboard/dashboard-kube-system.yaml" | envsubst | kubectl apply -f -

# No need to manually update the CoreDNS ConfigMap anymore
# The setup-coredns.sh script now handles variable substitution correctly

# Restart CoreDNS to pick up the changes
kubectl delete pods -n kube-system -l k8s-app=kube-dns
echo "Restarted CoreDNS to pick up DNS changes"

# Wait for dashboard to be ready
echo "Waiting for Kubernetes Dashboard to be ready..."
kubectl rollout status deployment/kubernetes-dashboard -n kubernetes-dashboard --timeout=60s

echo "Kubernetes Dashboard setup complete!"
echo "Access the dashboard at: https://dashboard.internal.${DOMAIN}"
echo ""
echo "To get the authentication token, run:"
echo "./bin/dashboard-token"
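
If the `bin/dashboard-token` helper is not available, the token can be read straight from the `dashboard-admin-token` secret the script creates:

```bash
kubectl get secret dashboard-admin-token -n kubernetes-dashboard \
  -o jsonpath='{.data.token}' | base64 -d; echo
```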
55
infrastructure_setup/setup-externaldns.sh
Executable file
@@ -0,0 +1,55 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up ExternalDNS..."

# Create externaldns namespace
kubectl create namespace externaldns --dry-run=client -o yaml | kubectl apply -f -

# Setup Cloudflare API token secret for ExternalDNS
if [[ -n "${CLOUDFLARE_API_TOKEN}" ]]; then
  echo "Creating Cloudflare API token secret..."
  kubectl create secret generic cloudflare-api-token \
    --namespace externaldns \
    --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  echo "Error: CLOUDFLARE_API_TOKEN not set. ExternalDNS will not work correctly."
  exit 1
fi

# Apply ExternalDNS manifests with environment variables
echo "Deploying ExternalDNS..."
cat "${SCRIPT_DIR}/externaldns/externaldns.yaml" | envsubst | kubectl apply -f -

# Wait for ExternalDNS to be ready
echo "Waiting for ExternalDNS to be ready..."
kubectl rollout status deployment/external-dns -n externaldns --timeout=60s

# Deploy test services if --test flag is provided
if [[ "$1" == "--test" ]]; then
  echo "Deploying test services to verify ExternalDNS..."
  cat "${SCRIPT_DIR}/externaldns/test-service.yaml" | envsubst | kubectl apply -f -
  cat "${SCRIPT_DIR}/externaldns/test-cname-service.yaml" | envsubst | kubectl apply -f -

  echo "Test services deployed at:"
  echo "- test.${DOMAIN}"
  echo "- test-cname.${DOMAIN} (CNAME record)"
  echo "DNS records should be automatically created in Cloudflare within a few minutes."
fi

echo "ExternalDNS setup complete!"
echo ""
echo "To verify the installation:"
echo "  kubectl get pods -n externaldns"
echo "  kubectl logs -n externaldns -l app=external-dns -f"
36
infrastructure_setup/setup-metallb.sh
Executable file
@@ -0,0 +1,36 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up MetalLB..."

# TODO: Remove the helm config in preference to a native config.

echo "Deploying MetalLB..."
cat "${SCRIPT_DIR}/metallb/metallb-helm-config.yaml" | envsubst | kubectl apply -f -

echo "Waiting for MetalLB to be deployed..."
kubectl wait --for=condition=complete job -l helm.sh/chart=metallb -n kube-system --timeout=120s || echo "Warning: Timeout waiting for MetalLB Helm job"

echo "Waiting for MetalLB controller to be ready..."
kubectl get namespace metallb-system &>/dev/null || (echo "Waiting for metallb-system namespace to be created..." && sleep 30)
kubectl wait --for=condition=Available deployment -l app.kubernetes.io/instance=metallb -n metallb-system --timeout=60s || echo "Warning: Timeout waiting for controller deployment"

echo "Configuring MetalLB IP address pool..."
kubectl get namespace metallb-system &>/dev/null && \
  kubectl apply -f "${SCRIPT_DIR}/metallb/metallb-pool.yaml" || \
  echo "Warning: metallb-system namespace not ready yet. Pool configuration will be skipped. Run this script again in a few minutes."

echo "✅ MetalLB installed and configured"
echo ""
echo "To verify the installation:"
echo "  kubectl get pods -n metallb-system"
echo "  kubectl get ipaddresspools.metallb.io -n metallb-system"
18
infrastructure_setup/setup-traefik.sh
Executable file
@@ -0,0 +1,18 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
  source ../load-env.sh
fi

echo "Setting up Traefik service and middleware for k3s..."

cat "${SCRIPT_DIR}/traefik/traefik-service.yaml" | envsubst | kubectl apply -f -
cat "${SCRIPT_DIR}/traefik/internal-middleware.yaml" | envsubst | kubectl apply -f -

echo "Traefik setup complete!"
15
infrastructure_setup/setup-utils.sh
Executable file
@@ -0,0 +1,15 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Install gomplate
if command -v gomplate &> /dev/null; then
  echo "gomplate is already installed."
  exit 0
fi
mkdir -p "$HOME/.local/bin"
curl -sSL https://github.com/hairyhenderson/gomplate/releases/latest/download/gomplate_linux-amd64 -o "$HOME/.local/bin/gomplate"
chmod +x "$HOME/.local/bin/gomplate"
echo "gomplate installed successfully."
13
infrastructure_setup/traefik/internal-middleware.yaml
Normal file
@@ -0,0 +1,13 @@
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: internal-only
  namespace: kube-system
spec:
  ipWhiteList:
    # Restrict to local private network ranges - adjust these to match your network
    sourceRange:
      - 127.0.0.1/32    # localhost
      - 10.0.0.0/8      # Private network
      - 172.16.0.0/12   # Private network
      - 192.168.0.0/16  # Private network
27
infrastructure_setup/traefik/traefik-service.yaml
Normal file
@@ -0,0 +1,27 @@
---
# Traefik service configuration with static LoadBalancer IP
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  annotations:
    metallb.universe.tf/address-pool: production
    metallb.universe.tf/allow-shared-ip: traefik-lb
  labels:
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/name: traefik
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.8.240
  selector:
    app.kubernetes.io/instance: traefik-kube-system
    app.kubernetes.io/name: traefik
  ports:
    - name: web
      port: 80
      targetPort: web
    - name: websecure
      port: 443
      targetPort: websecure
  externalTrafficPolicy: Local
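
Once applied, MetalLB should bind the pinned address; a quick sanity check is that the service's EXTERNAL-IP column shows 192.168.8.240:

```bash
kubectl get svc traefik -n kube-system
```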
1073
infrastructure_setup/validate_setup.sh
Executable file
File diff suppressed because it is too large
16
load-env.sh
Executable file
@@ -0,0 +1,16 @@
#!/usr/bin/env bash

PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENV_FILE="$PROJECT_DIR/.env"
BIN_DIR="$PROJECT_DIR/bin"

if [ ! -f "$ENV_FILE" ]; then
  echo "Error: Environment file not found: $ENV_FILE"
  # Use return when sourced so we don't kill the caller's shell
  return 1 2>/dev/null || exit 1
fi

set -a
source "$ENV_FILE"
set +a

export PATH="$BIN_DIR:$PATH"