Settle on the v1 setup method. A test run completed successfully, from bootstrap through service setup.

- Refactor dnsmasq configuration and scripts for improved variable handling and clarity.
- Update dnsmasq configuration files to use direct variable references instead of data source functions for better readability.
- Modify setup scripts to ensure they are run from the correct environment and directory, checking for the WC_HOME variable.
- Change paths in the README and scripts to reflect the new directory structure.
- Enhance error handling in setup scripts to provide clearer guidance on required configurations.
- Adjust kernel and initramfs URLs in boot.ipxe to use the updated variable references.
@@ -19,22 +19,16 @@ Internet → External DNS → MetalLB LoadBalancer → Traefik → Kubernetes Se

## Key Components

- **MetalLB** - Provides load balancing for bare metal clusters
- **Traefik** - Handles ingress traffic, TLS termination, and routing
- **cert-manager** - Manages TLS certificates
- **CoreDNS** - Provides DNS resolution for services
- **Longhorn** - Distributed storage system for persistent volumes
- **NFS** - Network file system for shared media storage (optional)
- **Kubernetes Dashboard** - Web UI for cluster management (accessible via https://dashboard.internal.${DOMAIN})
- **Docker Registry** - Private container registry for custom images

## Configuration Approach

All infrastructure components use a consistent configuration approach:

1. **Environment Variables** - All configuration settings are managed using environment variables loaded by running `source load-env.sh`
2. **Template Files** - Configuration files use templates with `${VARIABLE}` syntax
3. **Setup Scripts** - Each component has a dedicated script in `infrastructure_setup/` for installation and configuration
- **[MetalLB](metallb/README.md)** - Provides load balancing for bare metal clusters
- **[Traefik](traefik/README.md)** - Handles ingress traffic, TLS termination, and routing
- **[cert-manager](cert-manager/README.md)** - Manages TLS certificates
- **[CoreDNS](coredns/README.md)** - Provides DNS resolution for services
- **[ExternalDNS](externaldns/README.md)** - Automatic DNS record management
- **[Longhorn](longhorn/README.md)** - Distributed storage system for persistent volumes
- **[NFS](nfs/README.md)** - Network file system for shared media storage (optional)
- **[Kubernetes Dashboard](kubernetes-dashboard/README.md)** - Web UI for cluster management (accessible via https://dashboard.internal.${DOMAIN})
- **[Docker Registry](docker-registry/README.md)** - Private container registry for custom images
- **[Utils](utils/README.md)** - Cluster utilities and debugging tools

## Idempotent Design
@@ -47,55 +41,3 @@ All setup scripts are designed to be idempotent:

- Changes to configuration will be properly applied on subsequent runs

This idempotent approach ensures consistent, reliable infrastructure setup and allows for incremental changes without requiring a complete teardown and rebuild.
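The same property in miniature, as a generic shell sketch (no kubectl involved): an idempotent command converges on the desired state instead of failing when that state already exists.

```shell
# Non-idempotent: plain `mkdir` errors if the directory already exists.
# Idempotent: `mkdir -p` succeeds either way, just as `kubectl apply`
# succeeds whether or not the resource already exists.
mkdir -p /tmp/wild-cloud-demo
mkdir -p /tmp/wild-cloud-demo   # re-running is a harmless no-op
test -d /tmp/wild-cloud-demo && echo "converged"
# → converged
```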
## NFS Setup (Optional)

The infrastructure supports optional NFS (Network File System) for shared media storage across the cluster:

### Host Setup

First, set up the NFS server on your chosen host:

```bash
# Set required environment variables
export NFS_HOST=box-01              # Hostname or IP of NFS server
export NFS_MEDIA_PATH=/data/media   # Path to media directory
export NFS_STORAGE_CAPACITY=1Ti     # Optional: PV size (default: 250Gi)

# Run host setup script on the NFS server
./infrastructure_setup/setup-nfs-host.sh
```

### Cluster Integration

Then integrate NFS with your Kubernetes cluster:

```bash
# Run cluster setup (part of setup-all.sh or standalone)
./infrastructure_setup/setup-nfs.sh
```

### Features

- **Automatic IP detection** - Uses network IP even when hostname resolves to localhost
- **Cluster-wide access** - Any pod can mount the NFS share regardless of node placement
- **Configurable capacity** - Set PersistentVolume size via `NFS_STORAGE_CAPACITY`
- **ReadWriteMany** - Multiple pods can simultaneously access the same storage

### Usage

Applications can use NFS storage by setting `storageClassName: nfs` in their PVCs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 100Gi
```
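A pod can then mount the claim. A sketch, where the pod name, image, and mount path are hypothetical; only `media-pvc` comes from the claim above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: media-reader          # hypothetical example pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: media
          mountPath: /media   # hypothetical mount path
  volumes:
    - name: media
      persistentVolumeClaim:
        claimName: media-pvc  # the ReadWriteMany claim defined above
```

Because the claim is `ReadWriteMany`, several such pods can mount it at once, regardless of which node they land on.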
0	setup/cluster/cert-manager/README.md	Normal file
@@ -1,20 +1,19 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

echo "Setting up cert-manager..."
CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
CERT_MANAGER_DIR="${CLUSTER_SETUP_DIR}/cert-manager"

# Create cert-manager namespace
kubectl create namespace cert-manager --dry-run=client -o yaml | kubectl apply -f -
# Process templates with wild-compile-template-dir
echo "Processing cert-manager templates..."
wild-compile-template-dir --clean ${CERT_MANAGER_DIR}/kustomize.template ${CERT_MANAGER_DIR}/kustomize

echo "Setting up cert-manager..."

# Install cert-manager using the official installation method
# This installs CRDs, controllers, and webhook components
@@ -34,23 +33,12 @@ echo "Waiting additional time for cert-manager webhook to be fully operational..
sleep 30

# Setup Cloudflare API token for DNS01 challenges
if [[ -n "${CLOUDFLARE_API_TOKEN}" ]]; then
    echo "Creating Cloudflare API token secret in cert-manager namespace..."
    kubectl create secret generic cloudflare-api-token \
        --namespace cert-manager \
        --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
        --dry-run=client -o yaml | kubectl apply -f -
else
    echo "Warning: CLOUDFLARE_API_TOKEN not set. DNS01 challenges will not work."
fi

echo "Creating Let's Encrypt issuers..."
cat ${SCRIPT_DIR}/cert-manager/letsencrypt-staging-dns01.yaml | envsubst | kubectl apply -f -
cat ${SCRIPT_DIR}/cert-manager/letsencrypt-prod-dns01.yaml | envsubst | kubectl apply -f -

# Wait for issuers to be ready
echo "Waiting for Let's Encrypt issuers to be ready..."
sleep 10
echo "Creating Cloudflare API token secret..."
CLOUDFLARE_API_TOKEN=$(wild-secret cluster.certManager.cloudflare.apiToken) || exit 1
kubectl create secret generic cloudflare-api-token \
    --namespace cert-manager \
    --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -

# Configure cert-manager to use external DNS for challenge verification
echo "Configuring cert-manager to use external DNS servers..."
@@ -75,10 +63,13 @@ spec:
echo "Waiting for cert-manager to restart with new DNS configuration..."
kubectl rollout status deployment/cert-manager -n cert-manager --timeout=120s

# Apply wildcard certificates
echo "Creating wildcard certificates..."
cat ${SCRIPT_DIR}/cert-manager/internal-wildcard-certificate.yaml | envsubst | kubectl apply -f -
cat ${SCRIPT_DIR}/cert-manager/wildcard-certificate.yaml | envsubst | kubectl apply -f -
# Apply Let's Encrypt issuers and certificates using kustomize
echo "Creating Let's Encrypt issuers and certificates..."
kubectl apply -k ${CERT_MANAGER_DIR}/kustomize

# Wait for issuers to be ready
echo "Waiting for Let's Encrypt issuers to be ready..."
sleep 10
echo "Wildcard certificate creation initiated. This may take some time to complete depending on DNS propagation."

# Wait for the certificates to be issued (with a timeout)
@@ -91,3 +82,4 @@ echo ""
echo "To verify the installation:"
echo " kubectl get pods -n cert-manager"
echo " kubectl get clusterissuers"
echo " kubectl get certificates -n cert-manager"
@@ -0,0 +1,19 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-internal-wild-cloud
  namespace: cert-manager
spec:
  secretName: wildcard-internal-wild-cloud-tls
  dnsNames:
    - "*.{{ .cloud.internalDomain }}"
    - "{{ .cloud.internalDomain }}"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
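An Ingress can then terminate TLS with the resulting secret, following the same pattern the registry ingress in this commit uses. A hedged sketch; the `whoami` service name and host are hypothetical, only the secret name comes from the Certificate above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami                              # hypothetical example app
spec:
  rules:
    - host: whoami.internal.example.com     # any name under the internal domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
  tls:
    - hosts:
        - whoami.internal.example.com
      secretName: wildcard-internal-wild-cloud-tls   # issued by the wildcard Certificate
```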
@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - letsencrypt-staging-dns01.yaml
  - letsencrypt-prod-dns01.yaml
  - internal-wildcard-certificate.yaml
  - wildcard-certificate.yaml

# Note: cert-manager.yaml contains the main installation manifests
# but is applied separately via URL in the install script
@@ -0,0 +1,26 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: {{ .operator.email }}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: {{ .operator.email }}
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "{{ .cluster.certManager.cloudflare.domain }}"
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
            class: traefik
@@ -0,0 +1,26 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: {{ .operator.email }}
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: {{ .operator.email }}
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "{{ .cluster.certManager.cloudflare.domain }}"
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
            class: traefik
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
@@ -0,0 +1,19 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-wild-cloud
  namespace: cert-manager
spec:
  secretName: wildcard-wild-cloud-tls
  dnsNames:
    - "*.{{ .cloud.domain }}"
    - "{{ .cloud.domain }}"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  duration: 2160h # 90 days
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
5623	setup/cluster/cert-manager/kustomize/cert-manager.yaml	Normal file
File diff suppressed because it is too large
@@ -7,8 +7,8 @@ metadata:
spec:
  secretName: wildcard-internal-wild-cloud-tls
  dnsNames:
    - "*.internal.${DOMAIN}"
    - "internal.${DOMAIN}"
    - "*.internal.cloud2.payne.io"
    - "internal.cloud2.payne.io"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
12	setup/cluster/cert-manager/kustomize/kustomization.yaml	Normal file
@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - letsencrypt-staging-dns01.yaml
  - letsencrypt-prod-dns01.yaml
  - internal-wildcard-certificate.yaml
  - wildcard-certificate.yaml

# Note: cert-manager.yaml contains the main installation manifests
# but is applied separately via URL in the install script
@@ -5,7 +5,7 @@ metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ${EMAIL}
    email: paul@payne.io
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
@@ -13,13 +13,13 @@ spec:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: ${EMAIL}
            email: paul@payne.io
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "${CLOUDFLARE_DOMAIN}"
            - "payne.io"
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
@@ -5,7 +5,7 @@ metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: ${EMAIL}
    email: paul@payne.io
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
@@ -13,13 +13,13 @@ spec:
      # DNS-01 solver for wildcard certificates
      - dns01:
          cloudflare:
            email: ${EMAIL}
            email: paul@payne.io
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "${CLOUDFLARE_DOMAIN}"
            - "payne.io"
      # Keep the HTTP-01 solver for non-wildcard certificates
      - http01:
          ingress:
4	setup/cluster/cert-manager/kustomize/namespace.yaml	Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager
@@ -7,8 +7,8 @@ metadata:
spec:
  secretName: wildcard-wild-cloud-tls
  dnsNames:
    - "*.${DOMAIN}"
    - "${DOMAIN}"
    - "*.cloud2.payne.io"
    - "cloud2.payne.io"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
@@ -19,31 +19,27 @@ Any query for a resource in the `internal.$DOMAIN` domain will be given the IP o

## Default CoreDNS Configuration

Found at: https://github.com/k3s-io/k3s/blob/master/manifests/coredns.yaml

This is k3s default CoreDNS configuration, for reference:
This is the default CoreDNS configuration, for reference:

```txt
.:53 {
    errors
    health
    health { lameduck 5s }
    ready
    kubernetes %{CLUSTER_DOMAIN}% in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
        ttl 60
        reload 15s
        fallthrough
    }
    log . { class error }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    forward . /etc/resolv.conf { max_concurrent 1000 }
    cache 30 {
        disable success cluster.local
        disable denial cluster.local
    }
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server
```
37	setup/cluster/coredns/install.sh	Executable file
@@ -0,0 +1,37 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
COREDNS_DIR="${CLUSTER_SETUP_DIR}/coredns"

echo "Setting up CoreDNS for k3s..."

# Process templates with wild-compile-template-dir
echo "Processing CoreDNS templates..."
wild-compile-template-dir --clean ${COREDNS_DIR}/kustomize.template ${COREDNS_DIR}/kustomize

# Apply the k3s-compatible custom DNS override (k3s will preserve this)
echo "Applying CoreDNS custom override configuration..."
kubectl apply -f "${COREDNS_DIR}/kustomize/coredns-custom-config.yaml"

# Apply the LoadBalancer service for external access to CoreDNS
echo "Applying CoreDNS service configuration..."
kubectl apply -f "${COREDNS_DIR}/kustomize/coredns-lb-service.yaml"

# Restart CoreDNS pods to apply the changes
echo "Restarting CoreDNS pods to apply changes..."
kubectl rollout restart deployment/coredns -n kube-system
kubectl rollout status deployment/coredns -n kube-system

echo "CoreDNS setup complete!"
echo
echo "To verify the installation:"
echo " kubectl get pods -n kube-system"
echo " kubectl get svc -n kube-system coredns"
echo " kubectl describe svc -n kube-system coredns"
echo " kubectl logs -n kube-system -l k8s-app=kube-dns -f"
@@ -0,0 +1,28 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # Custom server block for internal domains. All internal domains should
  # resolve to the cluster proxy.
  internal.server: |
    {{ .cloud.internalDomain }} {
        errors
        cache 30
        reload
        template IN A {
            match (.*)\.{{ .cloud.internalDomain | strings.ReplaceAll "." "\\." }}\.
            answer "{{`{{ .Name }}`}} 60 IN A {{ .cluster.loadBalancerIp }}"
        }
        template IN AAAA {
            match (.*)\.{{ .cloud.internalDomain | strings.ReplaceAll "." "\\." }}\.
            rcode NXDOMAIN
        }
    }
  # Custom override to set external resolvers.
  external.override: |
    forward . {{ .cloud.dns.externalResolver }} {
        max_concurrent 1000
    }
@@ -0,0 +1,25 @@
---
apiVersion: v1
kind: Service
metadata:
  name: coredns-lb
  namespace: kube-system
  annotations:
    metallb.universe.tf/loadBalancerIPs: "{{ .cluster.loadBalancerIp }}"
spec:
  type: LoadBalancer
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
    - name: metrics
      port: 9153
      protocol: TCP
      targetPort: 9153
  selector:
    k8s-app: kube-dns
@@ -8,21 +8,21 @@ data:
  # Custom server block for internal domains. All internal domains should
  # resolve to the cluster proxy.
  internal.server: |
    internal.cloud.payne.io {
    internal.cloud2.payne.io {
        errors
        cache 30
        reload
        template IN A {
            match (.*)\.internal\.cloud\.payne\.io\.
            answer "{{ .Name }} 60 IN A 192.168.8.240"
            match (.*)\.internal\.cloud2\.payne\.io\.
            answer "{{ .Name }} 60 IN A 192.168.8.20"
        }
        template IN AAAA {
            match (.*)\.internal\.cloud\.payne\.io\.
            match (.*)\.internal\.cloud2\.payne\.io\.
            rcode NXDOMAIN
        }
    }
  # Custom override to set external resolvers.
  external.override: |
    forward . 1.1.1.1 8.8.8.8 {
    forward . 1.1.1.1 {
        max_concurrent 1000
    }
@@ -5,7 +5,7 @@ metadata:
  name: coredns-lb
  namespace: kube-system
  annotations:
    metallb.universe.tf/loadBalancerIPs: "192.168.8.241"
    metallb.universe.tf/loadBalancerIPs: "192.168.8.20"
spec:
  type: LoadBalancer
  ports:
0	setup/cluster/docker-registry/README.md	Normal file
@@ -1,2 +0,0 @@
DOCKER_REGISTRY_STORAGE=10Gi
DOCKER_REGISTRY_HOST=docker-registry.$INTERNAL_DOMAIN
28	setup/cluster/docker-registry/install.sh	Executable file
@@ -0,0 +1,28 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
DOCKER_REGISTRY_DIR="${CLUSTER_SETUP_DIR}/docker-registry"

echo "Setting up Docker Registry..."

# Process templates with wild-compile-template-dir
echo "Processing Docker Registry templates..."
wild-compile-template-dir --clean ${DOCKER_REGISTRY_DIR}/kustomize.template ${DOCKER_REGISTRY_DIR}/kustomize

# Apply the docker registry manifests using kustomize
kubectl apply -k "${DOCKER_REGISTRY_DIR}/kustomize"

echo "Waiting for Docker Registry to be ready..."
kubectl wait --for=condition=available --timeout=300s deployment/docker-registry -n docker-registry

echo "Docker Registry setup complete!"

# Show deployment status
kubectl get pods -n docker-registry
kubectl get services -n docker-registry
@@ -1,40 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: docker-registry
labels:
  - includeSelectors: true
    pairs:
      app: docker-registry
      managedBy: wild-cloud
resources:
  - deployment.yaml
  - ingress.yaml
  - service.yaml
  - namespace.yaml
  - pvc.yaml
configMapGenerator:
  - name: docker-registry-config
    envs:
      - config/config.env
replacements:
  - source:
      kind: ConfigMap
      name: docker-registry-config
      fieldPath: data.DOCKER_REGISTRY_STORAGE
    targets:
      - select:
          kind: PersistentVolumeClaim
          name: docker-registry-pvc
        fieldPaths:
          - spec.resources.requests.storage
  - source:
      kind: ConfigMap
      name: docker-registry-config
      fieldPath: data.DOCKER_REGISTRY_HOST
    targets:
      - select:
          kind: Ingress
          name: docker-registry
        fieldPaths:
          - spec.rules.0.host
          - spec.tls.0.hosts.0
@@ -4,7 +4,7 @@ metadata:
  name: docker-registry
spec:
  rules:
    - host: docker-registry.internal.${DOMAIN}
    - host: {{ .cloud.dockerRegistryHost }}
      http:
        paths:
          - path: /
@@ -16,5 +16,5 @@ spec:
              number: 5000
  tls:
    - hosts:
        - docker-registry.internal.${DOMAIN}
        - {{ .cloud.dockerRegistryHost }}
      secretName: wildcard-internal-wild-cloud-tls
@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: docker-registry
labels:
  - includeSelectors: true
    pairs:
      app: docker-registry
      managedBy: wild-cloud
resources:
  - deployment.yaml
  - ingress.yaml
  - service.yaml
  - namespace.yaml
  - pvc.yaml
12	setup/cluster/docker-registry/kustomize.template/pvc.yaml	Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-pvc
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .cluster.dockerRegistry.storage }}
36	setup/cluster/docker-registry/kustomize/deployment.yaml	Normal file
@@ -0,0 +1,36 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - image: registry:3.0.0
          name: docker-registry
          ports:
            - containerPort: 5000
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/registry
              name: docker-registry-storage
              readOnly: false
      volumes:
        - name: docker-registry-storage
          persistentVolumeClaim:
            claimName: docker-registry-pvc
20	setup/cluster/docker-registry/kustomize/ingress.yaml	Normal file
@@ -0,0 +1,20 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry
spec:
  rules:
    - host: docker-registry.internal.cloud2.payne.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000
  tls:
    - hosts:
        - docker-registry.internal.cloud2.payne.io
      secretName: wildcard-internal-wild-cloud-tls
14	setup/cluster/docker-registry/kustomize/kustomization.yaml	Normal file
@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: docker-registry
labels:
  - includeSelectors: true
    pairs:
      app: docker-registry
      managedBy: wild-cloud
resources:
  - deployment.yaml
  - ingress.yaml
  - service.yaml
  - namespace.yaml
  - pvc.yaml
4	setup/cluster/docker-registry/kustomize/namespace.yaml	Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: docker-registry
13	setup/cluster/docker-registry/kustomize/service.yaml	Normal file
@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  ports:
    - port: 5000
      targetPort: 5000
  selector:
    app: docker-registry
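Once the registry is reachable through its ingress host, workloads can pull custom images from it. A sketch; the pod name and the `myapp` image are hypothetical, only the registry host comes from the ingress above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp   # hypothetical example workload
spec:
  containers:
    - name: myapp
      # image previously pushed with `docker push` to the private registry
      image: docker-registry.internal.cloud2.payne.io/myapp:latest
```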
42	setup/cluster/externaldns/install.sh	Executable file
@@ -0,0 +1,42 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
EXTERNALDNS_DIR="${CLUSTER_SETUP_DIR}/externaldns"

# Process templates with wild-compile-template-dir
echo "Processing ExternalDNS templates..."
wild-compile-template-dir --clean ${EXTERNALDNS_DIR}/kustomize.template ${EXTERNALDNS_DIR}/kustomize

echo "Setting up ExternalDNS..."

# Apply ExternalDNS manifests using kustomize
echo "Deploying ExternalDNS..."
kubectl apply -k ${EXTERNALDNS_DIR}/kustomize

# Setup Cloudflare API token secret
echo "Creating Cloudflare API token secret..."
CLOUDFLARE_API_TOKEN=$(wild-secret cluster.certManager.cloudflare.apiToken) || exit 1
kubectl create secret generic cloudflare-api-token \
    --namespace externaldns \
    --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -

# Wait for ExternalDNS to be ready
echo "Waiting for Cloudflare ExternalDNS to be ready..."
kubectl rollout status deployment/external-dns -n externaldns --timeout=60s

# echo "Waiting for CoreDNS ExternalDNS to be ready..."
# kubectl rollout status deployment/external-dns-coredns -n externaldns --timeout=60s

echo "ExternalDNS setup complete!"
echo ""
echo "To verify the installation:"
echo " kubectl get pods -n externaldns"
echo " kubectl logs -n externaldns -l app=external-dns -f"
echo " kubectl logs -n externaldns -l app=external-dns-coredns -f"
@@ -0,0 +1,39 @@
---
# CloudFlare provider for ExternalDNS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: externaldns
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.4
          args:
            - --source=service
            - --source=ingress
            - --txt-owner-id={{ .cluster.externalDns.ownerId }}
            - --provider=cloudflare
            - --domain-filter=payne.io
            #- --exclude-domains=internal.${DOMAIN}
            - --cloudflare-dns-records-per-page=5000
            - --publish-internal-services
            - --no-cloudflare-proxied
            - --log-level=debug
          env:
            - name: CF_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflare-api-token
                  key: api-token
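With `--source=service` and `--source=ingress`, ExternalDNS derives records from Ingress hosts automatically; a Service can also request one explicitly via ExternalDNS's standard hostname annotation. A sketch, where the service name and hostname are hypothetical (the hostname must match the `--domain-filter` above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo   # hypothetical example service
  annotations:
    # ExternalDNS creates an A record pointing at the LoadBalancer IP
    external-dns.alpha.kubernetes.io/hostname: demo.payne.io
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
```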
@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - externaldns-rbac.yaml
  - externaldns-cloudflare.yaml
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: externaldns
@@ -23,7 +23,7 @@ spec:
          args:
            - --source=service
            - --source=ingress
            - --txt-owner-id=${OWNER_ID}
            - --txt-owner-id=cloud-payne-io-cluster
            - --provider=cloudflare
            - --domain-filter=payne.io
            #- --exclude-domains=internal.${DOMAIN}
35	setup/cluster/externaldns/kustomize/externaldns-rbac.yaml	Normal file
@@ -0,0 +1,35 @@
---
# Common RBAC resources for all ExternalDNS deployments
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: externaldns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: externaldns
7
setup/cluster/externaldns/kustomize/kustomization.yaml
Normal file
@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- namespace.yaml
- externaldns-rbac.yaml
- externaldns-cloudflare.yaml
4
setup/cluster/externaldns/kustomize/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: externaldns
34
setup/cluster/install-all.sh
Executable file
@@ -0,0 +1,34 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

echo "Setting up your wild-cloud cluster services..."
echo

./metallb/install.sh
./longhorn/install.sh
./traefik/install.sh
./coredns/install.sh
./cert-manager/install.sh
./externaldns/install.sh
./kubernetes-dashboard/install.sh
./nfs/install.sh
./docker-registry/install.sh

echo "Infrastructure setup complete!"
echo
echo "Next steps:"
echo "1. Install Helm charts for non-infrastructure components"
INTERNAL_DOMAIN=$(wild-config cloud.internalDomain)
echo "2. Access the dashboard at: https://dashboard.${INTERNAL_DOMAIN}"
echo "3. Get the dashboard token with: ./bin/dashboard-token"
echo
echo "To verify components, run:"
echo "- kubectl get pods -n cert-manager"
echo "- kubectl get pods -n externaldns"
echo "- kubectl get pods -n kubernetes-dashboard"
echo "- kubectl get clusterissuers"
0
setup/cluster/kubernetes-dashboard/README.md
Normal file
60
setup/cluster/kubernetes-dashboard/install.sh
Executable file
@@ -0,0 +1,60 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
KUBERNETES_DASHBOARD_DIR="${CLUSTER_SETUP_DIR}/kubernetes-dashboard"

echo "Setting up Kubernetes Dashboard..."

# Process templates with wild-compile-template-dir
echo "Processing Dashboard templates..."
wild-compile-template-dir --clean ${KUBERNETES_DASHBOARD_DIR}/kustomize.template ${KUBERNETES_DASHBOARD_DIR}/kustomize

NAMESPACE="kubernetes-dashboard"

# Apply the official dashboard installation
echo "Installing Kubernetes Dashboard core components..."
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Wait for cert-manager certificates to be ready
echo "Waiting for cert-manager certificates to be ready..."
kubectl wait --for=condition=Ready certificate wildcard-internal-wild-cloud -n cert-manager --timeout=300s || echo "Warning: Internal wildcard certificate not ready yet"
kubectl wait --for=condition=Ready certificate wildcard-wild-cloud -n cert-manager --timeout=300s || echo "Warning: Wildcard certificate not ready yet"

# Copying cert-manager secrets to the dashboard namespace (if available)
echo "Copying cert-manager secrets to dashboard namespace..."
if kubectl get secret wildcard-internal-wild-cloud-tls -n cert-manager >/dev/null 2>&1; then
    copy-secret cert-manager:wildcard-internal-wild-cloud-tls $NAMESPACE
else
    echo "Warning: wildcard-internal-wild-cloud-tls secret not yet available"
fi

if kubectl get secret wildcard-wild-cloud-tls -n cert-manager >/dev/null 2>&1; then
    copy-secret cert-manager:wildcard-wild-cloud-tls $NAMESPACE
else
    echo "Warning: wildcard-wild-cloud-tls secret not yet available"
fi

# Apply dashboard customizations using kustomize
echo "Applying dashboard customizations..."
kubectl apply -k "${KUBERNETES_DASHBOARD_DIR}/kustomize"

# Restart CoreDNS to pick up the changes
kubectl delete pods -n kube-system -l k8s-app=kube-dns
echo "Restarted CoreDNS to pick up DNS changes"

# Wait for dashboard to be ready
echo "Waiting for Kubernetes Dashboard to be ready..."
kubectl rollout status deployment/kubernetes-dashboard -n $NAMESPACE --timeout=60s

echo "Kubernetes Dashboard setup complete!"
INTERNAL_DOMAIN=$(wild-config cloud.internalDomain) || exit 1
echo "Access the dashboard at: https://dashboard.${INTERNAL_DOMAIN}"
echo ""
echo "To get the authentication token, run:"
echo "wild-dashboard-token"
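Note that `copy-secret` above is a project helper, not a standard kubectl subcommand. A minimal sketch of an equivalent, assuming all it needs to do is strip namespace-scoped metadata from the exported secret before re-applying it in the target namespace:

```shell
# Hypothetical stand-in for the project's copy-secret helper.
# Removes metadata fields that are specific to the source namespace
# so the secret manifest can be applied into another namespace.
strip_ns_metadata() {
    sed -e '/^  namespace:/d' \
        -e '/^  resourceVersion:/d' \
        -e '/^  uid:/d' \
        -e '/^  creationTimestamp:/d'
}

# Against a live cluster this would be used as:
#   kubectl get secret wildcard-wild-cloud-tls -n cert-manager -o yaml \
#     | strip_ns_metadata | kubectl apply -n kubernetes-dashboard -f -
```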
@@ -1,6 +1,6 @@
---
# Internal-only middleware
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: internal-only
@@ -16,7 +16,7 @@ spec:

---
# HTTPS redirect middleware
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: dashboard-redirect-scheme
@@ -28,7 +28,7 @@ spec:

---
# IngressRoute for Dashboard
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-https
@@ -37,7 +37,7 @@ spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`dashboard.internal.${DOMAIN}`)
    - match: Host(`dashboard.{{ .cloud.internalDomain }}`)
      kind: Rule
      middlewares:
        - name: internal-only
@@ -52,7 +52,7 @@ spec:
---
# HTTP to HTTPS redirect.
# FIXME: Is this needed?
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-http
@@ -61,7 +61,7 @@ spec:
  entryPoints:
    - web
  routes:
    - match: Host(`dashboard.internal.${DOMAIN}`)
    - match: Host(`dashboard.{{ .cloud.internalDomain }}`)
      kind: Rule
      middlewares:
        - name: dashboard-redirect-scheme
@@ -74,11 +74,11 @@ spec:
---
# ServersTransport for HTTPS backend with skip verify.
# FIXME: Is this needed?
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: dashboard-transport
  namespace: kubernetes-dashboard
spec:
  insecureSkipVerify: true
  serverName: dashboard.internal.${DOMAIN}
  serverName: dashboard.{{ .cloud.internalDomain }}
@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- dashboard-admin-rbac.yaml
- dashboard-kube-system.yaml
@@ -0,0 +1,32 @@
---
# Service Account and RBAC for Dashboard admin access
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

---
# Token for dashboard-admin
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-admin
type: kubernetes.io/service-account-token
@@ -0,0 +1,84 @@
---
# Internal-only middleware
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: internal-only
  namespace: kubernetes-dashboard
spec:
  ipWhiteList:
    # Restrict to local private network ranges
    sourceRange:
      - 127.0.0.1/32 # localhost
      - 10.0.0.0/8 # Private network
      - 172.16.0.0/12 # Private network
      - 192.168.0.0/16 # Private network

---
# HTTPS redirect middleware
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: dashboard-redirect-scheme
  namespace: kubernetes-dashboard
spec:
  redirectScheme:
    scheme: https
    permanent: true

---
# IngressRoute for Dashboard
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-https
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`dashboard.internal.cloud2.payne.io`)
      kind: Rule
      middlewares:
        - name: internal-only
          namespace: kubernetes-dashboard
      services:
        - name: kubernetes-dashboard
          port: 443
          serversTransport: dashboard-transport
  tls:
    secretName: wildcard-internal-wild-cloud-tls

---
# HTTP to HTTPS redirect.
# FIXME: Is this needed?
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-http
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`dashboard.internal.cloud2.payne.io`)
      kind: Rule
      middlewares:
        - name: dashboard-redirect-scheme
          namespace: kubernetes-dashboard
      services:
        - name: kubernetes-dashboard
          port: 443
          serversTransport: dashboard-transport

---
# ServersTransport for HTTPS backend with skip verify.
# FIXME: Is this needed?
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: dashboard-transport
  namespace: kubernetes-dashboard
spec:
  insecureSkipVerify: true
  serverName: dashboard.internal.cloud2.payne.io
@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- dashboard-admin-rbac.yaml
- dashboard-kube-system.yaml
21
setup/cluster/longhorn/install.sh
Executable file
@@ -0,0 +1,21 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
LONGHORN_DIR="${CLUSTER_SETUP_DIR}/longhorn"

echo "Setting up Longhorn..."

# Process templates with wild-compile-template-dir
echo "Processing Longhorn templates..."
wild-compile-template-dir --clean ${LONGHORN_DIR}/kustomize.template ${LONGHORN_DIR}/kustomize

# Apply Longhorn with kustomize to apply our customizations
kubectl apply -k ${LONGHORN_DIR}/kustomize/

echo "Longhorn setup complete!"
@@ -4,6 +4,10 @@ apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
---
# Source: longhorn/templates/priorityclass.yaml
apiVersion: scheduling.k8s.io/v1
5
setup/cluster/longhorn/kustomize/kustomization.yaml
Normal file
@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- longhorn.yaml
5189
setup/cluster/longhorn/kustomize/longhorn.yaml
Normal file
File diff suppressed because it is too large
0
setup/cluster/metallb/README.md
Normal file
@@ -1,18 +0,0 @@
namespace: metallb-system
resources:
- pool.yaml
configMapGenerator:
- name: metallb-config
  envs:
  - config/config.env
replacements:
- source:
    kind: ConfigMap
    name: metallb-config
    fieldPath: data.CLUSTER_LOAD_BALANCER_RANGE
  targets:
  - select:
      kind: IPAddressPool
      name: first-pool
    fieldPaths:
    - spec.addresses.0
@@ -1,27 +1,29 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
METALLB_DIR="${CLUSTER_SETUP_DIR}/metallb"

echo "Setting up MetalLB..."

# Process templates with gomplate
echo "Processing MetalLB templates..."
wild-compile-template-dir --clean ${METALLB_DIR}/kustomize.template ${METALLB_DIR}/kustomize

echo "Deploying MetalLB..."
# cat ${SCRIPT_DIR}/metallb/metallb-helm-config.yaml | envsubst | kubectl apply -f -
kubectl apply -k metallb/installation
kubectl apply -k ${METALLB_DIR}/kustomize/installation

echo "Waiting for MetalLB to be deployed..."
kubectl wait --for=condition=Available deployment/controller -n metallb-system --timeout=60s
sleep 10 # Extra buffer for webhook initialization

echo "Customizing MetalLB..."
kubectl apply -k metallb/configuration
kubectl apply -k ${METALLB_DIR}/kustomize/configuration

echo "✅ MetalLB installed and configured"
echo ""
@@ -0,0 +1,3 @@
namespace: metallb-system
resources:
- pool.yaml
@@ -6,7 +6,7 @@ metadata:
  namespace: metallb-system
spec:
  addresses:
  - PLACEHOLDER_CLUSTER_LOAD_BALANCER_RANGE
  - {{ .cluster.ipAddressPool }}

---
apiVersion: metallb.io/v1beta1
@@ -0,0 +1,3 @@
namespace: metallb-system
resources:
- pool.yaml
19
setup/cluster/metallb/kustomize/configuration/pool.yaml
Normal file
@@ -0,0 +1,19 @@
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.8.20-192.168.8.29

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
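MetalLB accepts a pool entry either as a dash-separated range (as above) or in CIDR notation. A quick shell sanity check for a configured entry might look like this (illustrative only, not part of the repo; it checks the shape of the value, not octet bounds):

```shell
# Rough format check for a MetalLB address pool entry:
# either a.b.c.d-e.f.g.h or a.b.c.d/nn.
valid_pool_entry() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}(-([0-9]{1,3}\.){3}[0-9]{1,3}|/[0-9]{1,2})$'
}

valid_pool_entry "192.168.8.20-192.168.8.29" && echo "range ok"
valid_pool_entry "192.168.8.0/24" && echo "cidr ok"
```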
@@ -0,0 +1,3 @@
namespace: metallb-system
resources:
- github.com/metallb/metallb/config/native?ref=v0.15.0
54
setup/cluster/nfs/README.md
Normal file
@@ -0,0 +1,54 @@
# NFS Setup (Optional)

The infrastructure supports optional NFS (Network File System) for shared media storage across the cluster.

## Host Setup

First, set up the NFS server on your chosen host.

```bash
./setup-nfs-host.sh
```

## Cluster Integration

Add to your `config.yaml`:

```yaml
cloud:
  nfs:
    host: box-01
    mediaPath: /data/media
    storageCapacity: 250Gi
```

Now you can run the NFS cluster setup:

```bash
setup/setup-nfs-host.sh
```

## Features

- Automatic IP detection - Uses the network IP even when the hostname resolves to localhost
- Cluster-wide access - Any pod can mount the NFS share regardless of node placement
- Configurable capacity - Set the PersistentVolume size via `cloud.nfs.storageCapacity`
- ReadWriteMany - Multiple pods can simultaneously access the same storage

## Usage

Applications can use NFS storage by setting `storageClassName: nfs` in their PVCs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 100Gi
```
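On the host side, the setup script's job ultimately comes down to exporting the media path. A small illustrative check that a path appears in `exports(5)`-style text (a sketch only; on a real host, `exportfs -v` is the authoritative view):

```shell
# Check whether a given path is listed as an export in exports(5)-style
# text read from stdin. Illustrative; assumes one export per line.
is_exported() {
    grep -Eq "^${1}[[:space:]]"
}

printf '/data/media 192.168.0.0/16(rw,sync,no_subtree_check)\n' \
  | is_exported /data/media && echo "exported"
```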
@@ -2,31 +2,24 @@
set -e
set -o pipefail

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

# Source environment variables
source "${PROJECT_DIR}/load-env.sh"
CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
NFS_DIR="${CLUSTER_SETUP_DIR}/nfs"

echo "Registering NFS server with Kubernetes cluster..."

# Check if NFS_HOST is configured
if [[ -z "${NFS_HOST}" ]]; then
    echo "NFS_HOST not set. Skipping NFS Kubernetes setup."
    echo "To enable NFS media sharing:"
    echo "1. Set NFS_HOST=<hostname> in your environment"
    echo "2. Run setup-nfs-host.sh on the NFS host"
    echo "3. Re-run this script"
    exit 0
fi
# Process templates with wild-compile-template-dir
echo "Processing NFS templates..."
wild-compile-template-dir --clean ${NFS_DIR}/kustomize.template ${NFS_DIR}/kustomize

# Set default for NFS_STORAGE_CAPACITY if not already set
if [[ -z "${NFS_STORAGE_CAPACITY}" ]]; then
    export NFS_STORAGE_CAPACITY="250Gi"
    echo "Using default NFS_STORAGE_CAPACITY: ${NFS_STORAGE_CAPACITY}"
fi
# Get NFS configuration from config.yaml
NFS_HOST=$(wild-config cloud.nfs.host) || exit 1
NFS_MEDIA_PATH=$(wild-config cloud.nfs.mediaPath) || exit 1
NFS_STORAGE_CAPACITY=$(wild-config cloud.nfs.storageCapacity) || exit 1

echo "NFS host: ${NFS_HOST}"
echo "Media path: ${NFS_MEDIA_PATH}"
@@ -151,20 +144,9 @@ test_nfs_mount() {
create_k8s_resources() {
    echo "Creating Kubernetes NFS resources..."

    # Generate config file with resolved variables
    local nfs_dir="${SCRIPT_DIR}/nfs"
    local env_file="${nfs_dir}/config/.env"
    local config_file="${nfs_dir}/config/config.env"

    echo "Generating NFS configuration..."
    export NFS_HOST_IP
    export NFS_MEDIA_PATH
    export NFS_STORAGE_CAPACITY
    envsubst < "${env_file}" > "${config_file}"

    # Apply the NFS Kubernetes manifests using kustomize
    echo "Applying NFS manifests from: ${nfs_dir}"
    kubectl apply -k "${nfs_dir}"
    # Apply the NFS Kubernetes manifests using kustomize (templates already processed)
    echo "Applying NFS manifests..."
    kubectl apply -k "${NFS_DIR}/kustomize"

    echo "✓ NFS PersistentVolume and StorageClass created"
@@ -1,53 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- persistent-volume.yaml
- storage-class.yaml

replacements:
- source:
    kind: ConfigMap
    name: nfs-config
    fieldPath: data.NFS_HOST_IP
  targets:
  - select:
      kind: PersistentVolume
      name: nfs-media-pv
    fieldPaths:
    - spec.nfs.server
  - select:
      kind: StorageClass
      name: nfs
    fieldPaths:
    - parameters.server
- source:
    kind: ConfigMap
    name: nfs-config
    fieldPath: data.NFS_MEDIA_PATH
  targets:
  - select:
      kind: PersistentVolume
      name: nfs-media-pv
    fieldPaths:
    - spec.nfs.path
  - select:
      kind: StorageClass
      name: nfs
    fieldPaths:
    - parameters.path
- source:
    kind: ConfigMap
    name: nfs-config
    fieldPath: data.NFS_STORAGE_CAPACITY
  targets:
  - select:
      kind: PersistentVolume
      name: nfs-media-pv
    fieldPaths:
    - spec.capacity.storage

configMapGenerator:
- name: nfs-config
  envs:
  - config/config.env
6
setup/cluster/nfs/kustomize.template/kustomization.yaml
Normal file
@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- persistent-volume.yaml
- storage-class.yaml
23
setup/cluster/nfs/kustomize.template/persistent-volume.yaml
Normal file
@@ -0,0 +1,23 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-media-pv
  labels:
    storage: nfs-media
spec:
  capacity:
    storage: {{ .cloud.nfs.storageCapacity }}
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: {{ .cloud.nfs.host }}
    path: {{ .cloud.nfs.mediaPath }}
  mountOptions:
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - hard
    - intr
    - timeo=600
10
setup/cluster/nfs/kustomize.template/storage-class.yaml
Normal file
@@ -0,0 +1,10 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs
parameters:
  server: {{ .cloud.nfs.host }}
  path: {{ .cloud.nfs.mediaPath }}
reclaimPolicy: Retain
allowVolumeExpansion: true
6
setup/cluster/nfs/kustomize/kustomization.yaml
Normal file
@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- persistent-volume.yaml
- storage-class.yaml
@@ -6,14 +6,14 @@ metadata:
    storage: nfs-media
spec:
  capacity:
    storage: REPLACE_ME
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: REPLACE_ME
    path: REPLACE_ME
    server: box-01
    path: /data/media
  mountOptions:
    - nfsvers=4.1
    - rsize=1048576
@@ -4,7 +4,7 @@ metadata:
  name: nfs
provisioner: nfs
parameters:
  server: REPLACE_ME
  path: REPLACE_ME
  server: box-01
  path: /data/media
reclaimPolicy: Retain
allowVolumeExpansion: true
@@ -1,55 +0,0 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

echo "Setting up infrastructure components for k3s..."

# Make all script files executable
chmod +x *.sh

# Utils
./setup-utils.sh

# Setup MetalLB (must be first for IP allocation)
./setup-metallb.sh

# Setup Longhorn
./setup-longhorn.sh

# Setup Traefik
./setup-traefik.sh

# Setup CoreDNS
./setup-coredns.sh

# Setup cert-manager
./setup-cert-manager.sh

# Setup ExternalDNS
./setup-externaldns.sh

# Setup Kubernetes Dashboard
./setup-dashboard.sh

# Setup NFS Kubernetes integration (optional)
./setup-nfs.sh

# Setup Docker Registry
./setup-registry.sh

echo "Infrastructure setup complete!"
echo
echo "Next steps:"
echo "1. Install Helm charts for non-infrastructure components"
echo "2. Access the dashboard at: https://dashboard.internal.${DOMAIN}"
echo "3. Get the dashboard token with: ./bin/dashboard-token"
echo
echo "To verify components, run:"
echo "- kubectl get pods -n cert-manager"
echo "- kubectl get pods -n externaldns"
echo "- kubectl get pods -n kubernetes-dashboard"
echo "- kubectl get clusterissuers"
@@ -1,30 +0,0 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
fi

echo "Setting up CoreDNS for k3s..."
echo "Script directory: ${SCRIPT_DIR}"
echo "Current directory: $(pwd)"

# Apply the k3s-compatible custom DNS override (k3s will preserve this)
echo "Applying CoreDNS custom override configuration..."
cat "${SCRIPT_DIR}/coredns/coredns-custom-config.yaml" | envsubst | kubectl apply -f -

# Apply the LoadBalancer service for external access to CoreDNS
echo "Applying CoreDNS service configuration..."
cat "${SCRIPT_DIR}/coredns/coredns-lb-service.yaml" | envsubst | kubectl apply -f -

# Restart CoreDNS pods to apply the changes
echo "Restarting CoreDNS pods to apply changes..."
kubectl rollout restart deployment/coredns -n kube-system
kubectl rollout status deployment/coredns -n kube-system

echo "CoreDNS setup complete!"
@@ -1,46 +0,0 @@
#!/bin/bash
set -e

# Store the script directory path for later use
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
fi

echo "Setting up Kubernetes Dashboard..."

NAMESPACE="kubernetes-dashboard"

# Apply the official dashboard installation
echo "Installing Kubernetes Dashboard core components..."
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Copying cert-manager secrets to the dashboard namespace
copy-secret cert-manager:wildcard-internal-wild-cloud-tls $NAMESPACE
copy-secret cert-manager:wildcard-wild-cloud-tls $NAMESPACE

# Create admin service account and token
echo "Creating dashboard admin service account and token..."
cat "${SCRIPT_DIR}/kubernetes-dashboard/dashboard-admin-rbac.yaml" | kubectl apply -f -

# Apply the dashboard configuration
echo "Applying dashboard configuration..."
cat "${SCRIPT_DIR}/kubernetes-dashboard/dashboard-kube-system.yaml" | envsubst | kubectl apply -f -

# Restart CoreDNS to pick up the changes
kubectl delete pods -n kube-system -l k8s-app=kube-dns
echo "Restarted CoreDNS to pick up DNS changes"

# Wait for dashboard to be ready
echo "Waiting for Kubernetes Dashboard to be ready..."
kubectl rollout status deployment/kubernetes-dashboard -n $NAMESPACE --timeout=60s

echo "Kubernetes Dashboard setup complete!"
echo "Access the dashboard at: https://dashboard.internal.${DOMAIN}"
echo ""
echo "To get the authentication token, run:"
echo "./bin/dashboard-token"
@@ -1,51 +0,0 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
fi

echo "Setting up ExternalDNS..."

# Create externaldns namespace
kubectl create namespace externaldns --dry-run=client -o yaml | kubectl apply -f -

# Setup Cloudflare API token secret for ExternalDNS
if [[ -n "${CLOUDFLARE_API_TOKEN}" ]]; then
    echo "Creating Cloudflare API token secret..."
    kubectl create secret generic cloudflare-api-token \
        --namespace externaldns \
        --from-literal=api-token="${CLOUDFLARE_API_TOKEN}" \
        --dry-run=client -o yaml | kubectl apply -f -
else
    echo "Error: CLOUDFLARE_API_TOKEN not set. ExternalDNS will not work correctly."
    exit 1
fi

# Apply common RBAC resources
echo "Deploying ExternalDNS RBAC resources..."
cat ${SCRIPT_DIR}/externaldns/externaldns-rbac.yaml | envsubst | kubectl apply -f -

# Apply ExternalDNS manifests with environment variables
echo "Deploying ExternalDNS for external DNS (Cloudflare)..."
cat ${SCRIPT_DIR}/externaldns/externaldns-cloudflare.yaml | envsubst | kubectl apply -f -

# Wait for ExternalDNS to be ready
echo "Waiting for Cloudflare ExternalDNS to be ready..."
kubectl rollout status deployment/external-dns -n externaldns --timeout=60s

# echo "Waiting for CoreDNS ExternalDNS to be ready..."
# kubectl rollout status deployment/external-dns-coredns -n externaldns --timeout=60s

echo "ExternalDNS setup complete!"
echo ""
echo "To verify the installation:"
echo "  kubectl get pods -n externaldns"
echo "  kubectl logs -n externaldns -l app=external-dns -f"
echo "  kubectl logs -n externaldns -l app=external-dns-coredns -f"
@@ -1,16 +0,0 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
fi

echo "Setting up Longhorn..."

# Apply Longhorn with kustomize to apply our customizations
kubectl apply -k ${SCRIPT_DIR}/longhorn/

echo "Longhorn setup complete!"
@@ -1,20 +0,0 @@
#!/bin/bash
set -e

# Navigate to script directory
SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"

echo "Setting up Docker Registry..."

# Apply the docker registry manifests using kustomize
kubectl apply -k "${SCRIPT_DIR}/docker-registry"

echo "Waiting for Docker Registry to be ready..."
kubectl wait --for=condition=available --timeout=300s deployment/docker-registry -n docker-registry

echo "Docker Registry setup complete!"

# Show deployment status
kubectl get pods -n docker-registry
kubectl get services -n docker-registry
@@ -1,18 +0,0 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Source environment variables
if [[ -f "../load-env.sh" ]]; then
    source ../load-env.sh
fi

echo "Setting up Traefik service and middleware for k3s..."

cat ${SCRIPT_DIR}/traefik/traefik-service.yaml | envsubst | kubectl apply -f -
cat ${SCRIPT_DIR}/traefik/internal-middleware.yaml | envsubst | kubectl apply -f -

echo "Traefik setup complete!"
@@ -1,37 +0,0 @@
#!/bin/bash
set -e

SCRIPT_PATH="$(realpath "${BASH_SOURCE[0]}")"
SCRIPT_DIR="$(dirname "$SCRIPT_PATH")"
cd "$SCRIPT_DIR"

# Install gomplate
if command -v gomplate &> /dev/null; then
    echo "gomplate is already installed."
else
    curl -sSL https://github.com/hairyhenderson/gomplate/releases/latest/download/gomplate_linux-amd64 -o $HOME/.local/bin/gomplate
    chmod +x $HOME/.local/bin/gomplate
    echo "gomplate installed successfully."
fi

# Install kustomize
if command -v kustomize &> /dev/null; then
    echo "kustomize is already installed."
else
    curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
    mv kustomize $HOME/.local/bin/
    echo "kustomize installed successfully."
fi

# Install yq
if command -v yq &> /dev/null; then
    echo "yq is already installed."
else
    VERSION=v4.45.4
    BINARY=yq_linux_amd64
    wget https://github.com/mikefarah/yq/releases/download/${VERSION}/${BINARY}.tar.gz -O - | tar xz
    mv ${BINARY} $HOME/.local/bin/yq
    chmod +x $HOME/.local/bin/yq
    rm yq.1
    echo "yq installed successfully."
fi
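The three install blocks in the removed script above all follow the same check-then-install pattern. That pattern can be factored into one function; `ensure_installed` and the example URL below are illustrative, not part of the repo:

```shell
#!/bin/bash
# Hypothetical helper capturing the repeated pattern above:
# install a binary into ~/.local/bin only if it is not already on the PATH.
ensure_installed() {
  local name="$1" url="$2"
  if command -v "$name" &> /dev/null; then
    echo "$name is already installed."
  else
    mkdir -p "$HOME/.local/bin"
    curl -sSL "$url" -o "$HOME/.local/bin/$name"
    chmod +x "$HOME/.local/bin/$name"
    echo "$name installed successfully."
  fi
}

# `ls` already exists, so nothing is downloaded here.
ensure_installed ls "https://example.com/not-used"
```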
@@ -5,3 +5,27 @@

Ingress RDs (resource definitions) can be created for any service. The routes specified in the Ingress are added automatically to the Traefik proxy.

Traefik routes all incoming network traffic on ports 80 and 443 to the appropriate services based on the route.

## Notes

These kustomize templates were created with:

```bash
helm-chart-to-kustomize traefik/traefik traefik traefik values.yaml
```

With values.yaml being:

```yaml
ingressRoute:
  dashboard:
    enabled: true
    matchRule: Host(`dashboard.localhost`)
    entryPoints:
      - web
providers:
  kubernetesGateway:
    enabled: true
gateway:
  namespacePolicy: All
```
setup/cluster/traefik/install.sh (new executable file, 44 lines)
@@ -0,0 +1,44 @@
#!/bin/bash
set -e

if [ -z "${WC_HOME}" ]; then
    echo "Please source the wildcloud environment first. (e.g., \`source ./env.sh\`)"
    exit 1
fi

CLUSTER_SETUP_DIR="${WC_HOME}/setup/cluster"
TRAEFIK_DIR="${CLUSTER_SETUP_DIR}/traefik"

echo "Setting up Traefik ingress controller..."

# Install required CRDs first
echo "Installing Gateway API CRDs..."
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

echo "Installing Traefik CRDs..."
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.4/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml

echo "Waiting for CRDs to be established..."
kubectl wait --for condition=established crd/gateways.gateway.networking.k8s.io --timeout=60s
kubectl wait --for condition=established crd/gatewayclasses.gateway.networking.k8s.io --timeout=60s
kubectl wait --for condition=established crd/ingressroutes.traefik.io --timeout=60s
kubectl wait --for condition=established crd/middlewares.traefik.io --timeout=60s

# Process templates with wild-compile-template-dir
echo "Processing Traefik templates..."
wild-compile-template-dir --clean ${TRAEFIK_DIR}/kustomize.template ${TRAEFIK_DIR}/kustomize

# Apply Traefik using kustomize
echo "Deploying Traefik..."
kubectl apply -k ${TRAEFIK_DIR}/kustomize

# Wait for Traefik to be ready
echo "Waiting for Traefik to be ready..."
kubectl wait --for=condition=Available deployment/traefik -n traefik --timeout=120s

echo "✅ Traefik setup complete!"
echo ""
echo "To verify the installation:"
echo "  kubectl get pods -n traefik"
echo "  kubectl get svc -n traefik"
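`wild-compile-template-dir` is a wildcloud-specific helper; judging from the `${VARIABLE}` template convention used elsewhere in the repo, its effect amounts to expanding placeholders from the environment into the `kustomize/` output directory. A minimal stand-in for a single placeholder, using `sed` and an illustrative `DOMAIN` value:

```shell
#!/bin/bash
# Sketch only: expand a ${DOMAIN} placeholder the way the template step
# presumably does. DOMAIN=example.com is an illustrative value.
DOMAIN=example.com
echo 'matchRule: Host(`dashboard.internal.${DOMAIN}`)' |
  sed 's/\${DOMAIN}/'"$DOMAIN"'/'
# → matchRule: Host(`dashboard.internal.example.com`)
```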
setup/cluster/traefik/kustomize.template/kustomization.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - templates/deployment.yaml
  - templates/gatewayclass.yaml
  - templates/gateway.yaml
  - templates/ingressclass.yaml
  - templates/ingressroute.yaml
  - templates/rbac/clusterrolebinding.yaml
  - templates/rbac/clusterrole.yaml
  - templates/rbac/serviceaccount.yaml
  - templates/service.yaml
setup/cluster/traefik/kustomize.template/namespace.yaml (new file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
@@ -0,0 +1,130 @@
---
# Source: traefik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik-traefik
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 0
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9100"
      labels:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik-traefik
        helm.sh/chart: traefik-36.1.0
        app.kubernetes.io/managed-by: Helm
    spec:
      serviceAccountName: traefik
      automountServiceAccountToken: true
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
        - image: docker.io/traefik:v3.4.1
          imagePullPolicy: IfNotPresent
          name: traefik
          resources:
          readinessProbe:
            httpGet:
              path: /ping
              port: 8080
              scheme: HTTP
            failureThreshold: 1
            initialDelaySeconds: 2
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            httpGet:
              path: /ping
              port: 8080
              scheme: HTTP
            failureThreshold: 3
            initialDelaySeconds: 2
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          lifecycle:
          ports:
            - name: metrics
              containerPort: 9100
              protocol: TCP
            - name: traefik
              containerPort: 8080
              protocol: TCP
            - name: web
              containerPort: 8000
              protocol: TCP
            - name: websecure
              containerPort: 8443
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: data
              mountPath: /data
            - name: tmp
              mountPath: /tmp
          args:
            - "--global.checkNewVersion"
            - "--entryPoints.metrics.address=:9100/tcp"
            - "--entryPoints.traefik.address=:8080/tcp"
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--metrics.prometheus=true"
            - "--metrics.prometheus.entrypoint=metrics"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowEmptyServices=true"
            - "--providers.kubernetesingress"
            - "--providers.kubernetesingress.allowEmptyServices=true"
            - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik"
            - "--providers.kubernetesgateway"
            - "--providers.kubernetesgateway.statusaddress.service.name=traefik"
            - "--providers.kubernetesgateway.statusaddress.service.namespace=traefik"
            - "--entryPoints.websecure.http.tls=true"
            - "--log.level=INFO"

          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: data
          emptyDir: {}
        - name: tmp
          emptyDir: {}
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
@@ -0,0 +1,18 @@
---
# Source: traefik/templates/gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: traefik-gateway
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
spec:
  gatewayClassName: traefik
  listeners:
    - name: web
      port: 8000
      protocol: HTTP
@@ -0,0 +1,13 @@
---
# Source: traefik/templates/gatewayclass.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
spec:
  controllerName: traefik.io/gateway-controller
@@ -0,0 +1,15 @@
---
# Source: traefik/templates/ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
  name: traefik
spec:
  controller: traefik.io/ingress-controller
@@ -0,0 +1,21 @@
---
# Source: traefik/templates/ingressroute.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`dashboard.localhost`)
      kind: Rule
      services:
        - kind: TraefikService
          name: api@internal
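The dashboard route above has the same shape as a route for any ordinary workload; a hypothetical IngressRoute for an application Service might look like this (the `whoami` name, namespace, and host are illustrative only, not from this repo):

```yaml
# Hypothetical example: route a hostname to a regular Service.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami   # ordinary Service instead of TraefikService
          port: 80
```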
@@ -0,0 +1,108 @@
---
# Source: traefik/templates/rbac/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
    helm.sh/chart: traefik-36.1.0
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingressclasses
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - ingressroutes
      - ingressroutetcps
      - ingressrouteudps
      - middlewares
      - middlewaretcps
      - serverstransports
      - serverstransporttcps
      - tlsoptions
      - tlsstores
      - traefikservices
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
      - secrets
      - configmaps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - backendtlspolicies
      - gatewayclasses
      - gateways
      - grpcroutes
      - httproutes
      - referencegrants
      - tcproutes
      - tlsroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - gateway.networking.k8s.io
    resources:
      - backendtlspolicies/status
      - gatewayclasses/status
      - gateways/status
      - grpcroutes/status
      - httproutes/status
      - tcproutes/status
      - tlsroutes/status
    verbs:
      - update
Some files were not shown because too many files have changed in this diff.