Compare commits

...

7 Commits

Author SHA1 Message Date
Paul Payne
1d11996cd6 Better support for Talos ISO downloads. 2025-10-12 20:15:43 +00:00
Paul Payne
3a8488eaff Adds CORS. 2025-10-12 16:52:36 +00:00
Paul Payne
9d1abc3e90 Update env var name. 2025-10-12 00:41:04 +00:00
Paul Payne
9f2d5fc7fb Add dnsmasq endpoints. 2025-10-12 00:35:03 +00:00
Paul Payne
47c3b10be9 Dnsmasq management endpoints. 2025-10-11 23:01:26 +00:00
Paul Payne
ff97f14229 Get templates from embedded package. 2025-10-11 22:20:26 +00:00
Paul Payne
89c6a7aa80 Moves setup files into embedded package. 2025-10-11 22:06:39 +00:00
131 changed files with 1930 additions and 428 deletions

44
BUILDING_WILD_API.md Normal file
View File

@@ -0,0 +1,44 @@
# Building the Wild Cloud Central API
## Dev Environment Requirements
- Go 1.21+
- GNU Make (for build automation)
## Principles
- The API enables all of the functionality needed by the CLI and the webapp. These clients should conform to the API; the API should not be designed around the needs of the CLI or webapp.
- A wild cloud instance is primarily data (YAML files for config, secrets, and manifests).
- Because a wild cloud instance is primarily data, it can be managed by non-technical users through the webapp or by technical users who SSH into the device (e.g. VSCode Remote SSH).
- Like v.PoC, we should only use gomplate templates for distinguishing between cloud instances. However, **within** a cloud instance, there should be no templating. Templates are compiled as they are copied into an instance, which keeps instances transparent and simple for the user to manage.
- Manage state and infrastructure idempotently.
- Cluster state should be the k8s cluster itself, not local files. It should be accessed via kubectl and talosctl.
- All wild cloud state should be stored on the filesystem in easy-to-read YAML files that can be edited directly or through the webapp (see the sketch after this list).
- All code should be simple and easy to understand.
- Avoid unnecessary complexity.
- Avoid unnecessary dependencies.
- Avoid unnecessary features.
- Avoid unnecessary abstractions.
- Avoid unnecessary comments.
- Avoid unnecessary configuration options.
- Avoid Helm. Use Kustomize.
- The API should be able to run on low-resource devices like a Raspberry Pi 4 (4GB RAM).
- The API should be able to manage multiple Wild Cloud instances on the LAN.
- The API should include functionality to manage a dnsmasq server on the same device. Currently, this is only used to resolve wild cloud domain names to private addresses within the LAN. The LAN router should be configured to use the Wild Central IP as its DNS server.
- The API is configurable to use various providers for:
- Wild Cloud Apps Directory provider (local FS, git repo, etc)
- DNS (built-in dnsmasq, external DNS server, etc)
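To make the "instance is primarily data" principle concrete, here is a minimal sketch of reading one value out of an instance's `config.yaml` with `gopkg.in/yaml.v3` (already a dependency in `go.mod`). The file path is a hypothetical example; only the `baseDomain` and `cluster.name` keys are taken from examples elsewhere in this changeset.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// instanceConfig models only the keys this sketch cares about;
// any other keys in config.yaml are ignored by yaml.Unmarshal.
type instanceConfig struct {
	BaseDomain string `yaml:"baseDomain"`
	Cluster    struct {
		Name string `yaml:"name"`
	} `yaml:"cluster"`
}

func main() {
	// Hypothetical path: instances live under WILD_API_DATA_DIR/instances/<name>.
	data, err := os.ReadFile("/var/lib/wild-central/instances/my-cloud/config.yaml")
	if err != nil {
		panic(err)
	}
	var cfg instanceConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.BaseDomain, cfg.Cluster.Name)
}
```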
### Coding Standards
- Use a standard Go project structure.
- Use Go modules.
- Use standard Go libraries wherever possible.
- Use popular, well-maintained libraries for common tasks (e.g. gorilla/mux for HTTP routing).
- Write unit tests for all functions and methods.
- Make and use common modules. For example, one module should handle all interactions with talosctl and another should handle all interactions with kubectl (see the sketch after this list).
- If the code is getting long and complex, break it into smaller modules.
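A rough sketch of the kind of common module described above: one package that owns every `talosctl` invocation so handlers never shell out directly. The package and method names are assumptions; only the `talosctl` flags themselves are standard.

```go
// Package talos wraps all talosctl invocations behind one small API.
package talos

import (
	"context"
	"fmt"
	"os/exec"
)

// Client runs talosctl against nodes using a fixed talosconfig file.
type Client struct {
	TalosconfigPath string
}

// Version returns the raw output of `talosctl version` for one node.
func (c *Client) Version(ctx context.Context, nodeIP string) (string, error) {
	cmd := exec.CommandContext(ctx, "talosctl",
		"--talosconfig", c.TalosconfigPath,
		"--nodes", nodeIP,
		"version")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("talosctl version: %w: %s", err, out)
	}
	return string(out), nil
}
```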
### Features
- If the `WILD_CENTRAL_ENV` environment variable is set to `development`, the API should run in development mode (see the sketch below).
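A minimal sketch of honoring that flag; the helper name and what development mode actually toggles are assumptions, only the `WILD_CENTRAL_ENV` check itself comes from the requirement above.

```go
package main

import (
	"log"
	"os"
)

// isDevelopment reports whether the API should run in development mode,
// following the WILD_CENTRAL_ENV convention above.
func isDevelopment() bool {
	return os.Getenv("WILD_CENTRAL_ENV") == "development"
}

func main() {
	if isDevelopment() {
		// What development mode enables (verbose logging, relaxed CORS, ...)
		// is left open here; the requirement only says the mode exists.
		log.Println("wild-central API running in development mode")
	}
}
```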

178
README.md
View File

@@ -1,168 +1,38 @@
# Wild Central Daemon
# Wild Central API
The Wild Central Daemon is a lightweight service that runs on a local machine (e.g., a Raspberry Pi) to manage Wild Cloud instances on the local network. It provides an interface for users to interact with and manage their Wild Cloud environments.
The Wild Central API is a lightweight service that runs on a local machine (e.g., a Raspberry Pi) to manage Wild Cloud instances on the local network. It provides an interface for users to interact with and manage their Wild Cloud environments.
## Development
Start the development server:
```bash
make dev
```
## Usage
The API will be available at `http://localhost:5055`.
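A quick way to confirm the server is responding is the health endpoint registered at `/api/v1/utilities/health` (see the route table later in this changeset). A minimal Go client sketch, assuming the default dev port above:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:5055/api/v1/utilities/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```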
### Batch Configuration Update Endpoint
### Environment Variables
#### Overview
- `WILD_API_DATA_DIR` - Directory for instance data (default: `/var/lib/wild-central`)
- `WILD_DIRECTORY` - Path to Wild Cloud apps directory (default: `/opt/wild-cloud/apps`)
- `WILD_API_DNSMASQ_CONFIG_PATH` - Path to dnsmasq config file (default: `/etc/dnsmasq.d/wild-cloud.conf`)
- `WILD_CORS_ORIGINS` - Comma-separated list of allowed CORS origins for production (default: localhost development origins); see the CORS sketch below
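This changeset also adds `github.com/rs/cors` to `go.mod`. A hedged sketch of how `WILD_CORS_ORIGINS` could feed that middleware; the actual wiring in `main.go` is not shown in this diff, and the default origins below are illustrative only.

```go
package main

import (
	"net/http"
	"os"
	"strings"

	"github.com/gorilla/mux"
	"github.com/rs/cors"
)

func main() {
	r := mux.NewRouter()

	// Illustrative localhost defaults; override with a comma-separated
	// WILD_CORS_ORIGINS value in production.
	origins := []string{"http://localhost:5173", "http://localhost:3000"}
	if v := os.Getenv("WILD_CORS_ORIGINS"); v != "" {
		origins = strings.Split(v, ",")
	}

	c := cors.New(cors.Options{
		AllowedOrigins: origins,
		AllowedMethods: []string{"GET", "POST", "PUT", "PATCH", "DELETE"},
		AllowedHeaders: []string{"Content-Type"},
	})

	// Wrap the router so every API route gets the CORS headers.
	http.ListenAndServe(":5055", c.Handler(r))
}
```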
The batch configuration update endpoint allows updating multiple configuration values in a single atomic request.
## API Endpoints
#### Endpoint
The API provides the following endpoint categories:
```
PATCH /api/v1/instances/{name}/config
```
- **Instances** - Create, list, get, and delete Wild Cloud instances (example after this list)
- **Configuration** - Manage instance config.yaml
- **Secrets** - Manage instance secrets.yaml (redacted by default)
- **Nodes** - Discover, configure, and manage cluster nodes
- **Cluster** - Bootstrap and manage Talos/Kubernetes clusters
- **Services** - Install and manage base infrastructure services
- **Apps** - Deploy and manage Wild Cloud applications
- **PXE** - Manage PXE boot assets for network installation
- **Operations** - Track and stream long-running operations
- **Utilities** - Helper functions and status endpoints
- **dnsmasq** - Configure and manage dnsmasq for network services
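For example, creating an instance is a single POST; with this changeset the response may also carry a `warning` field when the automatic dnsmasq update fails (see `CreateInstance` below). The `name` request field is assumed from `req.Name` in the handler.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{"name": "my-cloud"}`)
	resp, err := http.Post("http://localhost:5055/api/v1/instances",
		"application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	// Expect 201 Created with {"name": ..., "message": ...} and an
	// optional "warning" if the dnsmasq config could not be updated.
	fmt.Println(resp.Status, string(out))
}
```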
#### Request Format
```json
{
"updates": [
{"path": "string", "value": "any"},
{"path": "string", "value": "any"}
]
}
```
#### Response Format
Success (200 OK):
```json
{
"message": "Configuration updated successfully",
"updated": 3
}
```
Error (400 Bad Request / 404 Not Found / 500 Internal Server Error):
```json
{
"error": "error message"
}
```
#### Usage Examples
##### Example 1: Update Basic Configuration Values
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/my-cloud/config \
-H "Content-Type: application/json" \
-d '{
"updates": [
{"path": "baseDomain", "value": "example.com"},
{"path": "domain", "value": "wild.example.com"},
{"path": "internalDomain", "value": "int.wild.example.com"}
]
}'
```
Response:
```json
{
"message": "Configuration updated successfully",
"updated": 3
}
```
##### Example 2: Update Nested Configuration Values
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/my-cloud/config \
-H "Content-Type: application/json" \
-d '{
"updates": [
{"path": "cluster.name", "value": "prod-cluster"},
{"path": "cluster.loadBalancerIp", "value": "192.168.1.100"},
{"path": "cluster.ipAddressPool", "value": "192.168.1.100-192.168.1.200"}
]
}'
```
##### Example 3: Update Array Values
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/my-cloud/config \
-H "Content-Type: application/json" \
-d '{
"updates": [
{"path": "cluster.nodes.activeNodes[0]", "value": "node-1"},
{"path": "cluster.nodes.activeNodes[1]", "value": "node-2"}
]
}'
```
##### Example 4: Error Handling - Invalid Instance
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/nonexistent/config \
-H "Content-Type: application/json" \
-d '{
"updates": [
{"path": "baseDomain", "value": "example.com"}
]
}'
```
Response (404):
```json
{
"error": "Instance not found: instance nonexistent does not exist"
}
```
##### Example 5: Error Handling - Empty Updates
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/my-cloud/config \
-H "Content-Type: application/json" \
-d '{
"updates": []
}'
```
Response (400):
```json
{
"error": "updates array is required and cannot be empty"
}
```
##### Example 6: Error Handling - Missing Path
```bash
curl -X PATCH http://localhost:8080/api/v1/instances/my-cloud/config \
-H "Content-Type: application/json" \
-d '{
"updates": [
{"path": "", "value": "example.com"}
]
}'
```
Response (400):
```json
{
"error": "update[0]: path is required"
}
```
#### Configuration Path Syntax
The `path` field uses YAML path syntax as implemented by the `yq` tool:
- Simple fields: `baseDomain`
- Nested fields: `cluster.name`
- Array elements: `cluster.nodes.activeNodes[0]`
- Array append: `cluster.nodes.activeNodes[+]`
Refer to the yq documentation for advanced path syntax.
See the API handler files in `internal/api/v1/` for detailed endpoint documentation.

2
go.mod
View File

@@ -6,3 +6,5 @@ require (
github.com/gorilla/mux v1.8.1
gopkg.in/yaml.v3 v3.0.1
)
require github.com/rs/cors v1.11.1 // indirect

2
go.sum
View File

@@ -1,5 +1,7 @@
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

View File

@@ -4,6 +4,7 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"path/filepath"
@@ -14,6 +15,7 @@ import (
"github.com/wild-cloud/wild-central/daemon/internal/config"
"github.com/wild-cloud/wild-central/daemon/internal/context"
"github.com/wild-cloud/wild-central/daemon/internal/dnsmasq"
"github.com/wild-cloud/wild-central/daemon/internal/instance"
"github.com/wild-cloud/wild-central/daemon/internal/operations"
"github.com/wild-cloud/wild-central/daemon/internal/secrets"
@@ -21,61 +23,65 @@ import (
// API holds all dependencies for API handlers
type API struct {
dataDir string
directoryPath string // Path to Wild Cloud Directory
appsDir string
config *config.Manager
secrets *secrets.Manager
context *context.Manager
instance *instance.Manager
broadcaster *operations.Broadcaster // SSE broadcaster for operation output
dataDir string
appsDir string // Path to external apps directory
config *config.Manager
secrets *secrets.Manager
context *context.Manager
instance *instance.Manager
dnsmasq *dnsmasq.ConfigGenerator
broadcaster *operations.Broadcaster // SSE broadcaster for operation output
}
// NewAPI creates a new API handler with all dependencies
func NewAPI(dataDir, directoryPath string) (*API, error) {
// Note: Setup files (cluster-services, cluster-nodes, etc.) are now embedded in the binary
func NewAPI(dataDir, appsDir string) (*API, error) {
// Ensure base directories exist
instancesDir := filepath.Join(dataDir, "instances")
if err := os.MkdirAll(instancesDir, 0755); err != nil {
return nil, fmt.Errorf("failed to create instances directory: %w", err)
}
// Apps directory is now in Wild Cloud Directory
appsDir := filepath.Join(directoryPath, "apps")
// Determine dnsmasq config path
dnsmasqConfigPath := "/etc/dnsmasq.d/wild-cloud.conf"
if os.Getenv("WILD_API_DNSMASQ_CONFIG_PATH") != "" {
dnsmasqConfigPath = os.Getenv("WILD_API_DNSMASQ_CONFIG_PATH")
log.Printf("Using custom dnsmasq config path: %s", dnsmasqConfigPath)
}
return &API{
dataDir: dataDir,
directoryPath: directoryPath,
appsDir: appsDir,
config: config.NewManager(),
secrets: secrets.NewManager(),
context: context.NewManager(dataDir),
instance: instance.NewManager(dataDir),
broadcaster: operations.NewBroadcaster(),
dataDir: dataDir,
appsDir: appsDir,
config: config.NewManager(),
secrets: secrets.NewManager(),
context: context.NewManager(dataDir),
instance: instance.NewManager(dataDir),
dnsmasq: dnsmasq.NewConfigGenerator(dnsmasqConfigPath),
broadcaster: operations.NewBroadcaster(),
}, nil
}
// RegisterRoutes registers all API routes (Phase 1 + Phase 2)
func (api *API) RegisterRoutes(r *mux.Router) {
// Phase 1: Instance management
// Instance management
r.HandleFunc("/api/v1/instances", api.CreateInstance).Methods("POST")
r.HandleFunc("/api/v1/instances", api.ListInstances).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}", api.GetInstance).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}", api.DeleteInstance).Methods("DELETE")
// Phase 1: Config management
// Config management
r.HandleFunc("/api/v1/instances/{name}/config", api.GetConfig).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/config", api.UpdateConfig).Methods("PUT")
r.HandleFunc("/api/v1/instances/{name}/config", api.ConfigUpdateBatch).Methods("PATCH")
// Phase 1: Secrets management
// Secrets management
r.HandleFunc("/api/v1/instances/{name}/secrets", api.GetSecrets).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/secrets", api.UpdateSecrets).Methods("PUT")
// Phase 1: Context management
// Context management
r.HandleFunc("/api/v1/context", api.GetContext).Methods("GET")
r.HandleFunc("/api/v1/context", api.SetContext).Methods("POST")
// Phase 2: Node management
// Node management
r.HandleFunc("/api/v1/instances/{name}/nodes/discover", api.NodeDiscover).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/nodes/detect", api.NodeDetect).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/discovery", api.NodeDiscoveryStatus).Methods("GET")
@@ -88,19 +94,25 @@ func (api *API) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/v1/instances/{name}/nodes/{node}/apply", api.NodeApply).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/nodes/{node}", api.NodeDelete).Methods("DELETE")
// Phase 2: PXE asset management
r.HandleFunc("/api/v1/instances/{name}/pxe/assets", api.PXEListAssets).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/pxe/assets/download", api.PXEDownloadAsset).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/pxe/assets/{type}", api.PXEGetAsset).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/pxe/assets/{type}", api.PXEDeleteAsset).Methods("DELETE")
// Asset management
r.HandleFunc("/api/v1/assets", api.AssetsListSchematics).Methods("GET")
r.HandleFunc("/api/v1/assets/{schematicId}", api.AssetsGetSchematic).Methods("GET")
r.HandleFunc("/api/v1/assets/{schematicId}/download", api.AssetsDownload).Methods("POST")
r.HandleFunc("/api/v1/assets/{schematicId}/pxe/{assetType}", api.AssetsServePXE).Methods("GET")
r.HandleFunc("/api/v1/assets/{schematicId}/status", api.AssetsGetStatus).Methods("GET")
r.HandleFunc("/api/v1/assets/{schematicId}", api.AssetsDeleteSchematic).Methods("DELETE")
// Phase 2: Operations
// Instance-schematic relationship
r.HandleFunc("/api/v1/instances/{name}/schematic", api.SchematicGetInstanceSchematic).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/schematic", api.SchematicUpdateInstanceSchematic).Methods("PUT")
// Operations
r.HandleFunc("/api/v1/instances/{name}/operations", api.OperationList).Methods("GET")
r.HandleFunc("/api/v1/operations/{id}", api.OperationGet).Methods("GET")
r.HandleFunc("/api/v1/operations/{id}/stream", api.OperationStream).Methods("GET")
r.HandleFunc("/api/v1/operations/{id}/cancel", api.OperationCancel).Methods("POST")
// Phase 3: Cluster operations
// Cluster operations
r.HandleFunc("/api/v1/instances/{name}/cluster/config/generate", api.ClusterGenerateConfig).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/cluster/bootstrap", api.ClusterBootstrap).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/cluster/endpoints", api.ClusterConfigureEndpoints).Methods("POST")
@@ -111,7 +123,7 @@ func (api *API) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/v1/instances/{name}/cluster/talosconfig", api.ClusterGetTalosconfig).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/cluster/reset", api.ClusterReset).Methods("POST")
// Phase 4: Services
// Services
r.HandleFunc("/api/v1/instances/{name}/services", api.ServicesList).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/services", api.ServicesInstall).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/services/install-all", api.ServicesInstallAll).Methods("POST")
@@ -127,7 +139,7 @@ func (api *API) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/v1/instances/{name}/services/{service}/compile", api.ServicesCompile).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/services/{service}/deploy", api.ServicesDeploy).Methods("POST")
// Phase 4: Apps
// Apps
r.HandleFunc("/api/v1/apps", api.AppsListAvailable).Methods("GET")
r.HandleFunc("/api/v1/apps/{app}", api.AppsGetAvailable).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/apps", api.AppsListDeployed).Methods("GET")
@@ -136,12 +148,12 @@ func (api *API) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/v1/instances/{name}/apps/{app}", api.AppsDelete).Methods("DELETE")
r.HandleFunc("/api/v1/instances/{name}/apps/{app}/status", api.AppsGetStatus).Methods("GET")
// Phase 5: Backup & Restore
// Backup & Restore
r.HandleFunc("/api/v1/instances/{name}/apps/{app}/backup", api.BackupAppStart).Methods("POST")
r.HandleFunc("/api/v1/instances/{name}/apps/{app}/backup", api.BackupAppList).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/apps/{app}/restore", api.BackupAppRestore).Methods("POST")
// Phase 5: Utilities
// Utilities
r.HandleFunc("/api/v1/utilities/health", api.UtilitiesHealth).Methods("GET")
r.HandleFunc("/api/v1/instances/{name}/utilities/health", api.InstanceUtilitiesHealth).Methods("GET")
r.HandleFunc("/api/v1/utilities/dashboard/token", api.UtilitiesDashboardToken).Methods("GET")
@@ -149,6 +161,13 @@ func (api *API) RegisterRoutes(r *mux.Router) {
r.HandleFunc("/api/v1/utilities/controlplane/ip", api.UtilitiesControlPlaneIP).Methods("GET")
r.HandleFunc("/api/v1/utilities/secrets/{secret}/copy", api.UtilitiesSecretCopy).Methods("POST")
r.HandleFunc("/api/v1/utilities/version", api.UtilitiesVersion).Methods("GET")
// dnsmasq management
r.HandleFunc("/api/v1/dnsmasq/status", api.DnsmasqStatus).Methods("GET")
r.HandleFunc("/api/v1/dnsmasq/config", api.DnsmasqGetConfig).Methods("GET")
r.HandleFunc("/api/v1/dnsmasq/restart", api.DnsmasqRestart).Methods("POST")
r.HandleFunc("/api/v1/dnsmasq/generate", api.DnsmasqGenerate).Methods("POST")
r.HandleFunc("/api/v1/dnsmasq/update", api.DnsmasqUpdate).Methods("POST")
}
// CreateInstance creates a new instance
@@ -172,10 +191,19 @@ func (api *API) CreateInstance(w http.ResponseWriter, r *http.Request) {
return
}
respondJSON(w, http.StatusCreated, map[string]string{
// Attempt to update dnsmasq configuration with all instances
// This is non-critical - include warning in response if it fails
response := map[string]interface{}{
"name": req.Name,
"message": "Instance created successfully",
})
}
if err := api.updateDnsmasqForAllInstances(); err != nil {
log.Printf("Warning: Could not update dnsmasq configuration: %v", err)
response["warning"] = fmt.Sprintf("dnsmasq update failed: %v. Use POST /api/v1/dnsmasq/update to retry.", err)
}
respondJSON(w, http.StatusCreated, response)
}
// ListInstances lists all instances
@@ -427,7 +455,7 @@ func (api *API) SetContext(w http.ResponseWriter, r *http.Request) {
}
// StatusHandler returns daemon status information
func (api *API) StatusHandler(w http.ResponseWriter, r *http.Request, startTime time.Time, dataDir, directoryPath string) {
func (api *API) StatusHandler(w http.ResponseWriter, r *http.Request, startTime time.Time, dataDir, appsDir string) {
// Get list of instances
instances, err := api.instance.ListInstances()
if err != nil {
@@ -443,7 +471,8 @@ func (api *API) StatusHandler(w http.ResponseWriter, r *http.Request, startTime
"uptime": uptime.String(),
"uptimeSeconds": int(uptime.Seconds()),
"dataDir": dataDir,
"directoryPath": directoryPath,
"appsDir": appsDir,
"setupFiles": "embedded", // Indicate that setup files are now embedded
"instances": map[string]interface{}{
"count": len(instances),
"names": instances,

View File

@@ -0,0 +1,172 @@
package v1
import (
"encoding/json"
"fmt"
"net/http"
"os"
"github.com/gorilla/mux"
"github.com/wild-cloud/wild-central/daemon/internal/assets"
)
// AssetsListSchematics lists all available schematics
func (api *API) AssetsListSchematics(w http.ResponseWriter, r *http.Request) {
assetsMgr := assets.NewManager(api.dataDir)
schematics, err := assetsMgr.ListSchematics()
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to list schematics: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"schematics": schematics,
})
}
// AssetsGetSchematic returns details for a specific schematic
func (api *API) AssetsGetSchematic(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
schematicID := vars["schematicId"]
assetsMgr := assets.NewManager(api.dataDir)
schematic, err := assetsMgr.GetSchematic(schematicID)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Schematic not found: %v", err))
return
}
respondJSON(w, http.StatusOK, schematic)
}
// AssetsDownload downloads assets for a schematic
func (api *API) AssetsDownload(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
schematicID := vars["schematicId"]
// Parse request body
var req struct {
Version string `json:"version"`
Platform string `json:"platform,omitempty"`
AssetTypes []string `json:"asset_types,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
respondError(w, http.StatusBadRequest, "Invalid request body")
return
}
if req.Version == "" {
respondError(w, http.StatusBadRequest, "version is required")
return
}
// Default platform to amd64 if not specified
if req.Platform == "" {
req.Platform = "amd64"
}
// Download assets
assetsMgr := assets.NewManager(api.dataDir)
if err := assetsMgr.DownloadAssets(schematicID, req.Version, req.Platform, req.AssetTypes); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to download assets: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"message": "Assets downloaded successfully",
"schematic_id": schematicID,
"version": req.Version,
"platform": req.Platform,
})
}
// AssetsServePXE serves a PXE asset file
func (api *API) AssetsServePXE(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
schematicID := vars["schematicId"]
assetType := vars["assetType"]
assetsMgr := assets.NewManager(api.dataDir)
// Get asset path
assetPath, err := assetsMgr.GetAssetPath(schematicID, assetType)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Asset not found: %v", err))
return
}
// Open file
file, err := os.Open(assetPath)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to open asset: %v", err))
return
}
defer file.Close()
// Get file info for size
info, err := file.Stat()
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to stat asset: %v", err))
return
}
// Set appropriate content type
var contentType string
switch assetType {
case "kernel":
contentType = "application/octet-stream"
case "initramfs":
contentType = "application/x-xz"
case "iso":
contentType = "application/x-iso9660-image"
default:
contentType = "application/octet-stream"
}
// Set headers
w.Header().Set("Content-Type", contentType)
w.Header().Set("Content-Length", fmt.Sprintf("%d", info.Size()))
// Set Content-Disposition to suggest filename for download
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", info.Name()))
// Serve file
http.ServeContent(w, r, info.Name(), info.ModTime(), file)
}
// AssetsGetStatus returns download status for a schematic
func (api *API) AssetsGetStatus(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
schematicID := vars["schematicId"]
assetsMgr := assets.NewManager(api.dataDir)
status, err := assetsMgr.GetAssetStatus(schematicID)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Schematic not found: %v", err))
return
}
respondJSON(w, http.StatusOK, status)
}
// AssetsDeleteSchematic deletes a schematic and all its assets
func (api *API) AssetsDeleteSchematic(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
schematicID := vars["schematicId"]
assetsMgr := assets.NewManager(api.dataDir)
if err := assetsMgr.DeleteSchematic(schematicID); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to delete schematic: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "Schematic deleted successfully",
"schematic_id": schematicID,
})
}

View File

@@ -0,0 +1,137 @@
package v1
import (
"fmt"
"log"
"net/http"
"github.com/wild-cloud/wild-central/daemon/internal/config"
)
// DnsmasqStatus returns the status of the dnsmasq service
func (api *API) DnsmasqStatus(w http.ResponseWriter, r *http.Request) {
status, err := api.dnsmasq.GetStatus()
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to get dnsmasq status: %v", err))
return
}
if status.Status != "active" {
w.WriteHeader(http.StatusServiceUnavailable)
}
respondJSON(w, http.StatusOK, status)
}
// DnsmasqGetConfig returns the current dnsmasq configuration
func (api *API) DnsmasqGetConfig(w http.ResponseWriter, r *http.Request) {
configContent, err := api.dnsmasq.ReadConfig()
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to read dnsmasq config: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"config_file": api.dnsmasq.GetConfigPath(),
"content": configContent,
})
}
// DnsmasqRestart restarts the dnsmasq service
func (api *API) DnsmasqRestart(w http.ResponseWriter, r *http.Request) {
if err := api.dnsmasq.RestartService(); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to restart dnsmasq: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "dnsmasq service restarted successfully",
})
}
// DnsmasqGenerate generates the dnsmasq configuration without applying it (dry-run)
func (api *API) DnsmasqGenerate(w http.ResponseWriter, r *http.Request) {
// Get all instances
instanceNames, err := api.instance.ListInstances()
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to list instances: %v", err))
return
}
// Load global config
globalConfigPath := api.getGlobalConfigPath()
globalCfg, err := config.LoadGlobalConfig(globalConfigPath)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to load global config: %v", err))
return
}
// Load all instance configs
var instanceConfigs []config.InstanceConfig
for _, name := range instanceNames {
instanceConfigPath := api.instance.GetInstanceConfigPath(name)
instanceCfg, err := config.LoadCloudConfig(instanceConfigPath)
if err != nil {
log.Printf("Warning: Could not load instance config for %s: %v", name, err)
continue
}
instanceConfigs = append(instanceConfigs, *instanceCfg)
}
// Generate config without writing or restarting
configContent := api.dnsmasq.Generate(globalCfg, instanceConfigs)
respondJSON(w, http.StatusOK, map[string]interface{}{
"message": "dnsmasq configuration generated (dry-run mode)",
"config": configContent,
})
}
// DnsmasqUpdate regenerates and updates the dnsmasq configuration with all instances
func (api *API) DnsmasqUpdate(w http.ResponseWriter, r *http.Request) {
if err := api.updateDnsmasqForAllInstances(); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to update dnsmasq: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "dnsmasq configuration updated successfully",
})
}
// updateDnsmasqForAllInstances helper regenerates dnsmasq config from all instances
func (api *API) updateDnsmasqForAllInstances() error {
// Get all instances
instanceNames, err := api.instance.ListInstances()
if err != nil {
return fmt.Errorf("listing instances: %w", err)
}
// Load global config
globalConfigPath := api.getGlobalConfigPath()
globalCfg, err := config.LoadGlobalConfig(globalConfigPath)
if err != nil {
return fmt.Errorf("loading global config: %w", err)
}
// Load all instance configs
var instanceConfigs []config.InstanceConfig
for _, name := range instanceNames {
instanceConfigPath := api.instance.GetInstanceConfigPath(name)
instanceCfg, err := config.LoadCloudConfig(instanceConfigPath)
if err != nil {
log.Printf("Warning: Could not load instance config for %s: %v", name, err)
continue
}
instanceConfigs = append(instanceConfigs, *instanceCfg)
}
// Regenerate and write dnsmasq config
return api.dnsmasq.UpdateConfig(globalCfg, instanceConfigs)
}
// getGlobalConfigPath returns the path to the global config file
func (api *API) getGlobalConfigPath() string {
// This should match the structure from data.Paths
return api.dataDir + "/config.yaml"
}

View File

@@ -22,8 +22,9 @@ func (api *API) NodeDiscover(w http.ResponseWriter, r *http.Request) {
return
}
// Parse request body
// Parse request body - support both subnet and ip_list formats
var req struct {
Subnet string `json:"subnet"`
IPList []string `json:"ip_list"`
}
@@ -32,14 +33,21 @@ func (api *API) NodeDiscover(w http.ResponseWriter, r *http.Request) {
return
}
if len(req.IPList) == 0 {
respondError(w, http.StatusBadRequest, "ip_list is required")
// If subnet provided, use it as a single "IP" for discovery
// The discovery manager will scan this subnet
var ipList []string
if req.Subnet != "" {
ipList = []string{req.Subnet}
} else if len(req.IPList) > 0 {
ipList = req.IPList
} else {
respondError(w, http.StatusBadRequest, "subnet or ip_list is required")
return
}
// Start discovery
discoveryMgr := discovery.NewManager(api.dataDir, instanceName)
if err := discoveryMgr.StartDiscovery(instanceName, req.IPList); err != nil {
if err := discoveryMgr.StartDiscovery(instanceName, ipList); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to start discovery: %v", err))
return
}

View File

@@ -3,42 +3,72 @@ package v1
import (
"encoding/json"
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
"github.com/wild-cloud/wild-central/daemon/internal/assets"
"github.com/wild-cloud/wild-central/daemon/internal/pxe"
)
// PXEListAssets lists all PXE assets for an instance
// DEPRECATED: This endpoint is deprecated. Use GET /api/v1/assets/{schematicId} instead.
func (api *API) PXEListAssets(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
// Add deprecation warning header
w.Header().Set("X-Deprecated", "This endpoint is deprecated. Use GET /api/v1/assets/{schematicId} instead.")
log.Printf("Warning: Deprecated endpoint /api/v1/instances/%s/pxe/assets called", instanceName)
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
return
}
// List assets
pxeMgr := pxe.NewManager(api.dataDir)
assets, err := pxeMgr.ListAssets(instanceName)
// Get schematic ID from instance config
configPath := api.instance.GetInstanceConfigPath(instanceName)
schematicID, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.schematicId")
if err != nil || schematicID == "" || schematicID == "null" {
// Fall back to old PXE manager if no schematic configured
pxeMgr := pxe.NewManager(api.dataDir)
assets, err := pxeMgr.ListAssets(instanceName)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to list assets: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"assets": assets,
})
return
}
// Proxy to new asset system
assetsMgr := assets.NewManager(api.dataDir)
schematic, err := assetsMgr.GetSchematic(schematicID)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to list assets: %v", err))
respondError(w, http.StatusNotFound, fmt.Sprintf("Schematic not found: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"assets": assets,
"assets": schematic.Assets,
})
}
// PXEDownloadAsset downloads a PXE asset
// DEPRECATED: This endpoint is deprecated. Use POST /api/v1/assets/{schematicId}/download instead.
func (api *API) PXEDownloadAsset(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
// Add deprecation warning header
w.Header().Set("X-Deprecated", "This endpoint is deprecated. Use POST /api/v1/assets/{schematicId}/download instead.")
log.Printf("Warning: Deprecated endpoint /api/v1/instances/%s/pxe/assets/download called", instanceName)
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
@@ -62,80 +92,154 @@ func (api *API) PXEDownloadAsset(w http.ResponseWriter, r *http.Request) {
return
}
if req.URL == "" {
respondError(w, http.StatusBadRequest, "url is required")
// Get schematic ID from instance config
configPath := api.instance.GetInstanceConfigPath(instanceName)
schematicID, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.schematicId")
// If no schematic configured or URL provided (old behavior), fall back to old PXE manager
if (err != nil || schematicID == "" || schematicID == "null") || req.URL != "" {
if req.URL == "" {
respondError(w, http.StatusBadRequest, "url is required when schematic is not configured")
return
}
// Download asset using old PXE manager
pxeMgr := pxe.NewManager(api.dataDir)
if err := pxeMgr.DownloadAsset(instanceName, req.AssetType, req.Version, req.URL); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to download asset: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "Asset downloaded successfully",
"asset_type": req.AssetType,
"version": req.Version,
})
return
}
// Download asset
pxeMgr := pxe.NewManager(api.dataDir)
if err := pxeMgr.DownloadAsset(instanceName, req.AssetType, req.Version, req.URL); err != nil {
// Proxy to new asset system
if req.Version == "" {
respondError(w, http.StatusBadRequest, "version is required")
return
}
assetsMgr := assets.NewManager(api.dataDir)
assetTypes := []string{req.AssetType}
platform := "amd64" // Default platform for backward compatibility
if err := assetsMgr.DownloadAssets(schematicID, req.Version, platform, assetTypes); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to download asset: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "Asset downloaded successfully",
"asset_type": req.AssetType,
"version": req.Version,
"message": "Asset downloaded successfully",
"asset_type": req.AssetType,
"version": req.Version,
"schematic_id": schematicID,
})
}
// PXEGetAsset returns information about a specific asset
// DEPRECATED: This endpoint is deprecated. Use GET /api/v1/assets/{schematicId}/pxe/{assetType} instead.
func (api *API) PXEGetAsset(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
assetType := vars["type"]
// Add deprecation warning header
w.Header().Set("X-Deprecated", "This endpoint is deprecated. Use GET /api/v1/assets/{schematicId}/pxe/{assetType} instead.")
log.Printf("Warning: Deprecated endpoint /api/v1/instances/%s/pxe/assets/%s called", instanceName, assetType)
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
return
}
// Get asset path
pxeMgr := pxe.NewManager(api.dataDir)
assetPath, err := pxeMgr.GetAssetPath(instanceName, assetType)
// Get schematic ID from instance config
configPath := api.instance.GetInstanceConfigPath(instanceName)
schematicID, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.schematicId")
if err != nil || schematicID == "" || schematicID == "null" {
// Fall back to old PXE manager if no schematic configured
pxeMgr := pxe.NewManager(api.dataDir)
assetPath, err := pxeMgr.GetAssetPath(instanceName, assetType)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Asset not found: %v", err))
return
}
// Verify asset
valid, err := pxeMgr.VerifyAsset(instanceName, assetType)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to verify asset: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"type": assetType,
"path": assetPath,
"valid": valid,
})
return
}
// Proxy to new asset system - serve the file directly
assetsMgr := assets.NewManager(api.dataDir)
assetPath, err := assetsMgr.GetAssetPath(schematicID, assetType)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Asset not found: %v", err))
return
}
// Verify asset
valid, err := pxeMgr.VerifyAsset(instanceName, assetType)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to verify asset: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"type": assetType,
"path": assetPath,
"valid": valid,
"type": assetType,
"path": assetPath,
"valid": true,
"schematic_id": schematicID,
})
}
// PXEDeleteAsset deletes a PXE asset
// DEPRECATED: This endpoint is deprecated. Use DELETE /api/v1/assets/{schematicId} instead.
func (api *API) PXEDeleteAsset(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
assetType := vars["type"]
// Add deprecation warning header
w.Header().Set("X-Deprecated", "This endpoint is deprecated. Use DELETE /api/v1/assets/{schematicId} instead.")
log.Printf("Warning: Deprecated endpoint DELETE /api/v1/instances/%s/pxe/assets/%s called", instanceName, assetType)
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
return
}
// Delete asset
pxeMgr := pxe.NewManager(api.dataDir)
if err := pxeMgr.DeleteAsset(instanceName, assetType); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to delete asset: %v", err))
// Get schematic ID from instance config
configPath := api.instance.GetInstanceConfigPath(instanceName)
schematicID, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.schematicId")
if err != nil || schematicID == "" || schematicID == "null" {
// Fall back to old PXE manager if no schematic configured
pxeMgr := pxe.NewManager(api.dataDir)
if err := pxeMgr.DeleteAsset(instanceName, assetType); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to delete asset: %v", err))
return
}
respondJSON(w, http.StatusOK, map[string]string{
"message": "Asset deleted successfully",
"type": assetType,
})
return
}
// Note: In the new system, we don't delete individual assets, only entire schematics
// For backward compatibility, we'll just report success without doing anything
respondJSON(w, http.StatusOK, map[string]string{
"message": "Asset deleted successfully",
"type": assetType,
"message": "Individual asset deletion not supported in schematic mode. Use DELETE /api/v1/assets/{schematicId} to delete all assets.",
"type": assetType,
"schematic_id": schematicID,
})
}

View File

@@ -0,0 +1,122 @@
package v1
import (
"encoding/json"
"fmt"
"net/http"
"github.com/gorilla/mux"
"github.com/wild-cloud/wild-central/daemon/internal/assets"
)
// SchematicGetInstanceSchematic returns the schematic configuration for an instance
func (api *API) SchematicGetInstanceSchematic(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
return
}
configPath := api.instance.GetInstanceConfigPath(instanceName)
// Get schematic ID from config
schematicID, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.schematicId")
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to get schematic ID: %v", err))
return
}
// Get version from config
version, err := api.config.GetConfigValue(configPath, "cluster.nodes.talos.version")
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to get version: %v", err))
return
}
// If schematic is configured, get asset status
var assetStatus interface{}
if schematicID != "" && schematicID != "null" {
assetsMgr := assets.NewManager(api.dataDir)
status, err := assetsMgr.GetAssetStatus(schematicID)
if err == nil {
assetStatus = status
}
}
respondJSON(w, http.StatusOK, map[string]interface{}{
"schematic_id": schematicID,
"version": version,
"assets": assetStatus,
})
}
// SchematicUpdateInstanceSchematic updates the schematic configuration for an instance
func (api *API) SchematicUpdateInstanceSchematic(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
instanceName := vars["name"]
// Validate instance exists
if err := api.instance.ValidateInstance(instanceName); err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Instance not found: %v", err))
return
}
// Parse request body
var req struct {
SchematicID string `json:"schematic_id"`
Version string `json:"version"`
Download bool `json:"download,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
respondError(w, http.StatusBadRequest, "Invalid request body")
return
}
if req.SchematicID == "" {
respondError(w, http.StatusBadRequest, "schematic_id is required")
return
}
if req.Version == "" {
respondError(w, http.StatusBadRequest, "version is required")
return
}
configPath := api.instance.GetInstanceConfigPath(instanceName)
// Update schematic ID in config
if err := api.config.SetConfigValue(configPath, "cluster.nodes.talos.schematicId", req.SchematicID); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to set schematic ID: %v", err))
return
}
// Update version in config
if err := api.config.SetConfigValue(configPath, "cluster.nodes.talos.version", req.Version); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to set version: %v", err))
return
}
response := map[string]interface{}{
"message": "Schematic configuration updated successfully",
"schematic_id": req.SchematicID,
"version": req.Version,
}
// Optionally download assets
if req.Download {
assetsMgr := assets.NewManager(api.dataDir)
platform := "amd64" // Default platform
if err := assetsMgr.DownloadAssets(req.SchematicID, req.Version, platform, nil); err != nil {
response["download_warning"] = fmt.Sprintf("Failed to download assets: %v", err)
} else {
response["download_status"] = "Assets downloaded successfully"
}
}
respondJSON(w, http.StatusOK, response)
}

View File

@@ -27,7 +27,7 @@ func (api *API) ServicesList(w http.ResponseWriter, r *http.Request) {
}
// List services
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
svcList, err := servicesMgr.List(instanceName)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to list services: %v", err))
@@ -52,7 +52,7 @@ func (api *API) ServicesGet(w http.ResponseWriter, r *http.Request) {
}
// Get service
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
service, err := servicesMgr.Get(instanceName, serviceName)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Service not found: %v", err))
@@ -109,7 +109,7 @@ func (api *API) ServicesInstall(w http.ResponseWriter, r *http.Request) {
}()
fmt.Printf("[DEBUG] Service install goroutine started: service=%s instance=%s opID=%s\n", req.Name, instanceName, opID)
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
opsMgr.UpdateStatus(instanceName, opID, "running")
if err := servicesMgr.Install(instanceName, req.Name, req.Fetch, req.Deploy, opID, api.broadcaster); err != nil {
@@ -159,7 +159,7 @@ func (api *API) ServicesInstallAll(w http.ResponseWriter, r *http.Request) {
// Install in background
go func() {
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
opsMgr.UpdateStatus(instanceName, opID, "running")
if err := servicesMgr.InstallAll(instanceName, req.Fetch, req.Deploy, opID, api.broadcaster); err != nil {
@@ -197,7 +197,7 @@ func (api *API) ServicesDelete(w http.ResponseWriter, r *http.Request) {
// Delete in background
go func() {
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
opsMgr.UpdateStatus(instanceName, opID, "running")
if err := servicesMgr.Delete(instanceName, serviceName); err != nil {
@@ -226,7 +226,7 @@ func (api *API) ServicesGetStatus(w http.ResponseWriter, r *http.Request) {
}
// Get status
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
status, err := servicesMgr.GetStatus(instanceName, serviceName)
if err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to get status: %v", err))
@@ -241,7 +241,7 @@ func (api *API) ServicesGetManifest(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
serviceName := vars["service"]
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
manifest, err := servicesMgr.GetManifest(serviceName)
if err != nil {
respondError(w, http.StatusNotFound, fmt.Sprintf("Service not found: %v", err))
@@ -256,7 +256,7 @@ func (api *API) ServicesGetConfig(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
serviceName := vars["service"]
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
// Get manifest
manifest, err := servicesMgr.GetManifest(serviceName)
@@ -286,7 +286,7 @@ func (api *API) ServicesGetInstanceConfig(w http.ResponseWriter, r *http.Request
return
}
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
// Get manifest to know which config paths to read
manifest, err := servicesMgr.GetManifest(serviceName)
@@ -364,7 +364,7 @@ func (api *API) ServicesFetch(w http.ResponseWriter, r *http.Request) {
}
// Fetch service files
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
if err := servicesMgr.Fetch(instanceName, serviceName); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to fetch service: %v", err))
return
@@ -388,7 +388,7 @@ func (api *API) ServicesCompile(w http.ResponseWriter, r *http.Request) {
}
// Compile templates
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
if err := servicesMgr.Compile(instanceName, serviceName); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to compile templates: %v", err))
return
@@ -412,7 +412,7 @@ func (api *API) ServicesDeploy(w http.ResponseWriter, r *http.Request) {
}
// Deploy service (without operation tracking for standalone deploy)
servicesMgr := services.NewManager(api.dataDir, filepath.Join(api.directoryPath, "setup", "cluster-services"))
servicesMgr := services.NewManager(api.dataDir)
if err := servicesMgr.Deploy(instanceName, serviceName, "", nil); err != nil {
respondError(w, http.StatusInternalServerError, fmt.Sprintf("Failed to deploy service: %v", err))
return

View File

@@ -139,44 +139,44 @@ func (m *Manager) ListDeployed(instanceName string) ([]DeployedApp, error) {
appName := entry.Name()
// Initialize app with basic info
app := DeployedApp{
Name: appName,
Namespace: appName,
Status: "added", // Default status: added but not deployed
}
// Try to get version from manifest
manifestPath := filepath.Join(appsDir, appName, "manifest.yaml")
if storage.FileExists(manifestPath) {
manifestData, _ := os.ReadFile(manifestPath)
var manifest struct {
Version string `yaml:"version"`
}
if yaml.Unmarshal(manifestData, &manifest) == nil {
app.Version = manifest.Version
}
}
// Check if namespace exists in cluster
checkCmd := exec.Command("kubectl", "get", "namespace", appName, "-o", "json")
tools.WithKubeconfig(checkCmd, kubeconfigPath)
output, err := checkCmd.CombinedOutput()
if err != nil {
// Namespace doesn't exist - app not deployed
continue
}
// Parse namespace status
var ns struct {
Status struct {
Phase string `json:"phase"`
} `json:"status"`
}
if err := yaml.Unmarshal(output, &ns); err == nil && ns.Status.Phase == "Active" {
// App is deployed - get more details
app := DeployedApp{
Name: appName,
Namespace: appName,
Status: "deployed",
if err == nil {
// Namespace exists - parse status
var ns struct {
Status struct {
Phase string `json:"phase"`
} `json:"status"`
}
// Try to get version from manifest
manifestPath := filepath.Join(appsDir, appName, "manifest.yaml")
if storage.FileExists(manifestPath) {
manifestData, _ := os.ReadFile(manifestPath)
var manifest struct {
Version string `yaml:"version"`
}
if yaml.Unmarshal(manifestData, &manifest) == nil {
app.Version = manifest.Version
}
if yaml.Unmarshal(output, &ns) == nil && ns.Status.Phase == "Active" {
// Namespace is active - app is deployed
app.Status = "deployed"
}
apps = append(apps, app)
}
apps = append(apps, app)
}
return apps, nil

448
internal/assets/assets.go Normal file
View File

@@ -0,0 +1,448 @@
package assets
import (
"crypto/sha256"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/wild-cloud/wild-central/daemon/internal/storage"
)
// Manager handles centralized Talos asset management
type Manager struct {
dataDir string
}
// NewManager creates a new asset manager
func NewManager(dataDir string) *Manager {
return &Manager{
dataDir: dataDir,
}
}
// Asset represents a Talos boot asset
type Asset struct {
Type string `json:"type"` // kernel, initramfs, iso
Path string `json:"path"` // Full path to asset file
Size int64 `json:"size"` // File size in bytes
SHA256 string `json:"sha256"` // SHA256 hash
Downloaded bool `json:"downloaded"` // Whether asset exists
}
// Schematic represents a Talos schematic and its assets
type Schematic struct {
SchematicID string `json:"schematic_id"`
Version string `json:"version"`
Path string `json:"path"`
Assets []Asset `json:"assets"`
}
// AssetStatus represents download status for a schematic
type AssetStatus struct {
SchematicID string `json:"schematic_id"`
Version string `json:"version"`
Assets map[string]Asset `json:"assets"`
Complete bool `json:"complete"`
}
// GetAssetDir returns the asset directory for a schematic
func (m *Manager) GetAssetDir(schematicID string) string {
return filepath.Join(m.dataDir, "assets", schematicID)
}
// GetAssetsRootDir returns the root assets directory
func (m *Manager) GetAssetsRootDir() string {
return filepath.Join(m.dataDir, "assets")
}
// ListSchematics returns all available schematics
func (m *Manager) ListSchematics() ([]Schematic, error) {
assetsDir := m.GetAssetsRootDir()
// Ensure assets directory exists
if err := storage.EnsureDir(assetsDir, 0755); err != nil {
return nil, fmt.Errorf("ensuring assets directory: %w", err)
}
entries, err := os.ReadDir(assetsDir)
if err != nil {
return nil, fmt.Errorf("reading assets directory: %w", err)
}
var schematics []Schematic
for _, entry := range entries {
if entry.IsDir() {
schematicID := entry.Name()
schematic, err := m.GetSchematic(schematicID)
if err != nil {
// Skip invalid schematics
continue
}
schematics = append(schematics, *schematic)
}
}
return schematics, nil
}
// GetSchematic returns details for a specific schematic
func (m *Manager) GetSchematic(schematicID string) (*Schematic, error) {
if schematicID == "" {
return nil, fmt.Errorf("schematic ID cannot be empty")
}
assetDir := m.GetAssetDir(schematicID)
// Check if schematic directory exists
if !storage.FileExists(assetDir) {
return nil, fmt.Errorf("schematic %s not found", schematicID)
}
// List assets for this schematic
assets, err := m.listSchematicAssets(schematicID)
if err != nil {
return nil, fmt.Errorf("listing schematic assets: %w", err)
}
// Try to determine version from version file
version := ""
versionPath := filepath.Join(assetDir, "version.txt")
if storage.FileExists(versionPath) {
data, err := os.ReadFile(versionPath)
if err == nil {
version = strings.TrimSpace(string(data))
}
}
return &Schematic{
SchematicID: schematicID,
Version: version,
Path: assetDir,
Assets: assets,
}, nil
}
// listSchematicAssets lists all assets for a schematic
func (m *Manager) listSchematicAssets(schematicID string) ([]Asset, error) {
assetDir := m.GetAssetDir(schematicID)
var assets []Asset
// Check for PXE assets (kernel and initramfs for both platforms)
pxeDir := filepath.Join(assetDir, "pxe")
pxePatterns := []string{
"kernel-amd64",
"kernel-arm64",
"initramfs-amd64.xz",
"initramfs-arm64.xz",
}
for _, pattern := range pxePatterns {
assetPath := filepath.Join(pxeDir, pattern)
info, err := os.Stat(assetPath)
var assetType string
if strings.HasPrefix(pattern, "kernel-") {
assetType = "kernel"
} else {
assetType = "initramfs"
}
asset := Asset{
Type: assetType,
Path: assetPath,
Downloaded: err == nil,
}
if err == nil && info != nil {
asset.Size = info.Size()
// Calculate SHA256 if file exists
if hash, err := calculateSHA256(assetPath); err == nil {
asset.SHA256 = hash
}
}
assets = append(assets, asset)
}
// Check for ISO assets (glob pattern to find all ISOs)
isoDir := filepath.Join(assetDir, "iso")
isoMatches, err := filepath.Glob(filepath.Join(isoDir, "talos-*.iso"))
if err == nil {
for _, isoPath := range isoMatches {
info, err := os.Stat(isoPath)
asset := Asset{
Type: "iso",
Path: isoPath,
Downloaded: err == nil,
}
if err == nil && info != nil {
asset.Size = info.Size()
// Calculate SHA256 if file exists
if hash, err := calculateSHA256(isoPath); err == nil {
asset.SHA256 = hash
}
}
assets = append(assets, asset)
}
}
return assets, nil
}
// DownloadAssets downloads specified assets for a schematic
func (m *Manager) DownloadAssets(schematicID, version, platform string, assetTypes []string) error {
if schematicID == "" {
return fmt.Errorf("schematic ID cannot be empty")
}
if version == "" {
return fmt.Errorf("version cannot be empty")
}
if platform == "" {
platform = "amd64" // Default to amd64
}
// Validate platform
if platform != "amd64" && platform != "arm64" {
return fmt.Errorf("invalid platform: %s (must be amd64 or arm64)", platform)
}
if len(assetTypes) == 0 {
// Default to all asset types
assetTypes = []string{"kernel", "initramfs", "iso"}
}
assetDir := m.GetAssetDir(schematicID)
// Ensure asset directory exists
if err := storage.EnsureDir(assetDir, 0755); err != nil {
return fmt.Errorf("creating asset directory: %w", err)
}
// Save version info
versionPath := filepath.Join(assetDir, "version.txt")
if err := os.WriteFile(versionPath, []byte(version), 0644); err != nil {
return fmt.Errorf("saving version info: %w", err)
}
// Download each requested asset
for _, assetType := range assetTypes {
if err := m.downloadAsset(schematicID, assetType, version, platform); err != nil {
return fmt.Errorf("downloading %s: %w", assetType, err)
}
}
return nil
}
// downloadAsset downloads a single asset
func (m *Manager) downloadAsset(schematicID, assetType, version, platform string) error {
assetDir := m.GetAssetDir(schematicID)
// Determine subdirectory, filename, and URL based on asset type and platform
var subdir, filename, urlPath string
switch assetType {
case "kernel":
subdir = "pxe"
filename = fmt.Sprintf("kernel-%s", platform)
urlPath = fmt.Sprintf("kernel-%s", platform)
case "initramfs":
subdir = "pxe"
filename = fmt.Sprintf("initramfs-%s.xz", platform)
urlPath = fmt.Sprintf("initramfs-%s.xz", platform)
case "iso":
subdir = "iso"
// Preserve version and platform in filename for clarity
filename = fmt.Sprintf("talos-%s-metal-%s.iso", version, platform)
urlPath = fmt.Sprintf("metal-%s.iso", platform)
default:
return fmt.Errorf("unknown asset type: %s", assetType)
}
// Create subdirectory structure
assetTypeDir := filepath.Join(assetDir, subdir)
if err := storage.EnsureDir(assetTypeDir, 0755); err != nil {
return fmt.Errorf("creating %s directory: %w", subdir, err)
}
assetPath := filepath.Join(assetTypeDir, filename)
// Skip if asset already exists (idempotency)
if storage.FileExists(assetPath) {
return nil
}
// Construct download URL from Image Factory
url := fmt.Sprintf("https://factory.talos.dev/image/%s/%s/%s", schematicID, version, urlPath)
// Download file
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("downloading from %s: %w", url, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("download failed with status %d from %s", resp.StatusCode, url)
}
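// Download to a temporary .tmp file and rename into place only after the copy
// succeeds, so an interrupted download never leaves a partial file at assetPath.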
// Create temporary file
tmpFile := assetPath + ".tmp"
out, err := os.Create(tmpFile)
if err != nil {
return fmt.Errorf("creating temporary file: %w", err)
}
defer out.Close()
// Copy data
_, err = io.Copy(out, resp.Body)
if err != nil {
os.Remove(tmpFile)
return fmt.Errorf("writing file: %w", err)
}
// Close file before rename
out.Close()
// Move to final location
if err := os.Rename(tmpFile, assetPath); err != nil {
os.Remove(tmpFile)
return fmt.Errorf("moving file to final location: %w", err)
}
return nil
}
// GetAssetStatus returns the download status for a schematic
func (m *Manager) GetAssetStatus(schematicID string) (*AssetStatus, error) {
if schematicID == "" {
return nil, fmt.Errorf("schematic ID cannot be empty")
}
assetDir := m.GetAssetDir(schematicID)
// Check if schematic directory exists
if !storage.FileExists(assetDir) {
return nil, fmt.Errorf("schematic %s not found", schematicID)
}
// Get version
version := ""
versionPath := filepath.Join(assetDir, "version.txt")
if storage.FileExists(versionPath) {
data, err := os.ReadFile(versionPath)
if err == nil {
version = strings.TrimSpace(string(data))
}
}
// List assets
assets, err := m.listSchematicAssets(schematicID)
if err != nil {
return nil, fmt.Errorf("listing assets: %w", err)
}
// Build asset map and check completion
assetMap := make(map[string]Asset)
complete := true
for _, asset := range assets {
assetMap[asset.Type] = asset
if !asset.Downloaded {
complete = false
}
}
return &AssetStatus{
SchematicID: schematicID,
Version: version,
Assets: assetMap,
Complete: complete,
}, nil
}
// GetAssetPath returns the path to a specific asset file
func (m *Manager) GetAssetPath(schematicID, assetType string) (string, error) {
if schematicID == "" {
return "", fmt.Errorf("schematic ID cannot be empty")
}
assetDir := m.GetAssetDir(schematicID)
var subdir, pattern string
switch assetType {
case "kernel":
subdir = "pxe"
pattern = "kernel-amd64"
case "initramfs":
subdir = "pxe"
pattern = "initramfs-amd64.xz"
case "iso":
subdir = "iso"
pattern = "talos-*.iso" // Glob pattern for version-specific filename
default:
return "", fmt.Errorf("unknown asset type: %s", assetType)
}
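// Note: kernel and initramfs lookups currently assume the amd64 platform,
// while the ISO glob matches whatever version/platform was downloaded for this schematic.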
assetTypeDir := filepath.Join(assetDir, subdir)
// Find matching file (supports glob pattern for ISO)
var assetPath string
if strings.Contains(pattern, "*") {
matches, err := filepath.Glob(filepath.Join(assetTypeDir, pattern))
if err != nil {
return "", fmt.Errorf("searching for asset: %w", err)
}
if len(matches) == 0 {
return "", fmt.Errorf("asset %s not found for schematic %s", assetType, schematicID)
}
assetPath = matches[0] // Use first match
} else {
assetPath = filepath.Join(assetTypeDir, pattern)
}
if !storage.FileExists(assetPath) {
return "", fmt.Errorf("asset %s not found for schematic %s", assetType, schematicID)
}
return assetPath, nil
}
// DeleteSchematic removes a schematic and all its assets
func (m *Manager) DeleteSchematic(schematicID string) error {
if schematicID == "" {
return fmt.Errorf("schematic ID cannot be empty")
}
assetDir := m.GetAssetDir(schematicID)
if !storage.FileExists(assetDir) {
return nil // Already deleted, idempotent
}
return os.RemoveAll(assetDir)
}
// calculateSHA256 computes the SHA256 hash of a file
func calculateSHA256(filePath string) (string, error) {
file, err := os.Open(filePath)
if err != nil {
return "", err
}
defer file.Close()
hash := sha256.New()
if _, err := io.Copy(hash, file); err != nil {
return "", err
}
return fmt.Sprintf("%x", hash.Sum(nil)), nil
}


@@ -101,17 +101,31 @@ type NodeConfig struct {
}
type InstanceConfig struct {
BaseDomain string `yaml:"baseDomain" json:"baseDomain"`
Domain string `yaml:"domain" json:"domain"`
InternalDomain string `yaml:"internalDomain" json:"internalDomain"`
Backup struct {
Root string `yaml:"root" json:"root"`
} `yaml:"backup" json:"backup"`
DHCPRange string `yaml:"dhcpRange" json:"dhcpRange"`
NFS struct {
Host string `yaml:"host" json:"host"`
MediaPath string `yaml:"mediaPath" json:"mediaPath"`
} `yaml:"nfs" json:"nfs"`
Cloud struct {
Router struct {
IP string `yaml:"ip" json:"ip"`
} `yaml:"router" json:"router"`
DNS struct {
IP string `yaml:"ip" json:"ip"`
ExternalResolver string `yaml:"externalResolver" json:"externalResolver"`
} `yaml:"dns" json:"dns"`
DHCPRange string `yaml:"dhcpRange" json:"dhcpRange"`
Dnsmasq struct {
Interface string `yaml:"interface" json:"interface"`
} `yaml:"dnsmasq" json:"dnsmasq"`
BaseDomain string `yaml:"baseDomain" json:"baseDomain"`
Domain string `yaml:"domain" json:"domain"`
InternalDomain string `yaml:"internalDomain" json:"internalDomain"`
NFS struct {
MediaPath string `yaml:"mediaPath" json:"mediaPath"`
Host string `yaml:"host" json:"host"`
StorageCapacity string `yaml:"storageCapacity" json:"storageCapacity"`
} `yaml:"nfs" json:"nfs"`
DockerRegistryHost string `yaml:"dockerRegistryHost" json:"dockerRegistryHost"`
Backup struct {
Root string `yaml:"root" json:"root"`
} `yaml:"backup" json:"backup"`
} `yaml:"cloud" json:"cloud"`
Cluster struct {
Name string `yaml:"name" json:"name"`
LoadBalancerIp string `yaml:"loadBalancerIp" json:"loadBalancerIp"`


@@ -37,8 +37,8 @@ func (m *Manager) Initialize() error {
if err != nil {
return fmt.Errorf("failed to get current directory: %w", err)
}
if os.Getenv("WILD_CENTRAL_DATA") != "" {
dataDir = os.Getenv("WILD_CENTRAL_DATA")
if os.Getenv("WILD_API_DATA_DIR") != "" {
dataDir = os.Getenv("WILD_API_DATA_DIR")
} else {
dataDir = filepath.Join(cwd, "data")
}


@@ -5,16 +5,31 @@ import (
"log"
"os"
"os/exec"
"strconv"
"strings"
"time"
"github.com/wild-cloud/wild-central/daemon/internal/config"
)
// ConfigGenerator handles dnsmasq configuration generation
type ConfigGenerator struct{}
type ConfigGenerator struct {
configPath string
}
// NewConfigGenerator creates a new dnsmasq config generator
func NewConfigGenerator() *ConfigGenerator {
return &ConfigGenerator{}
func NewConfigGenerator(configPath string) *ConfigGenerator {
if configPath == "" {
configPath = "/etc/dnsmasq.d/wild-cloud.conf"
}
return &ConfigGenerator{
configPath: configPath,
}
}
// GetConfigPath returns the dnsmasq config file path
func (g *ConfigGenerator) GetConfigPath() string {
return g.configPath
}
// Generate creates a dnsmasq configuration from the app config
@@ -22,8 +37,8 @@ func (g *ConfigGenerator) Generate(cfg *config.GlobalConfig, clouds []config.Ins
resolution_section := ""
for _, cloud := range clouds {
resolution_section += fmt.Sprintf("local=/%s/\naddress=/%s/%s\n", cloud.Domain, cloud.Domain, cfg.Cluster.EndpointIP)
resolution_section += fmt.Sprintf("local=/%s/\naddress=/%s/%s\n", cloud.InternalDomain, cloud.InternalDomain, cfg.Cluster.EndpointIP)
resolution_section += fmt.Sprintf("local=/%s/\naddress=/%s/%s\n", cloud.Cloud.Domain, cloud.Cloud.Domain, cfg.Cluster.EndpointIP)
resolution_section += fmt.Sprintf("local=/%s/\naddress=/%s/%s\n", cloud.Cloud.InternalDomain, cloud.Cloud.InternalDomain, cfg.Cluster.EndpointIP)
}
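// Each domain yields two lines in the generated config, e.g. (illustrative values):
//   local=/cloud.example.com/
//   address=/cloud.example.com/192.168.8.240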
template := `# Configuration file for dnsmasq.
@@ -71,3 +86,91 @@ func (g *ConfigGenerator) RestartService() error {
}
return nil
}
// ServiceStatus represents the status of the dnsmasq service
type ServiceStatus struct {
Status string `json:"status"`
PID int `json:"pid"`
ConfigFile string `json:"config_file"`
InstancesConfigured int `json:"instances_configured"`
LastRestart time.Time `json:"last_restart"`
}
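// Example JSON response (illustrative values):
//   {"status":"active","pid":512,"config_file":"/etc/dnsmasq.d/wild-cloud.conf",
//    "instances_configured":1,"last_restart":"2025-10-12T16:52:36Z"}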
// GetStatus checks the status of the dnsmasq service
func (g *ConfigGenerator) GetStatus() (*ServiceStatus, error) {
status := &ServiceStatus{
ConfigFile: g.configPath,
}
// Check if service is active
cmd := exec.Command("systemctl", "is-active", "dnsmasq.service")
output, err := cmd.Output()
if err != nil {
status.Status = "inactive"
return status, nil
}
statusStr := strings.TrimSpace(string(output))
status.Status = statusStr
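// `systemctl show --property=MainPID` prints a line like "MainPID=1234";
// split on "=" and parse the numeric value.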
// Get PID if running
if statusStr == "active" {
cmd = exec.Command("systemctl", "show", "dnsmasq.service", "--property=MainPID")
output, err := cmd.Output()
if err == nil {
parts := strings.Split(strings.TrimSpace(string(output)), "=")
if len(parts) == 2 {
if pid, err := strconv.Atoi(parts[1]); err == nil {
status.PID = pid
}
}
}
// Get last restart time
cmd = exec.Command("systemctl", "show", "dnsmasq.service", "--property=ActiveEnterTimestamp")
output, err = cmd.Output()
if err == nil {
parts := strings.Split(strings.TrimSpace(string(output)), "=")
if len(parts) == 2 {
// Parse systemd timestamp format
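// e.g. "Sun 2025-10-12 16:52:36 UTC" (illustrative)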
if t, err := time.Parse("Mon 2006-01-02 15:04:05 MST", parts[1]); err == nil {
status.LastRestart = t
}
}
}
}
// Count instances in config
if data, err := os.ReadFile(g.configPath); err == nil {
// Count "local=/" occurrences (each instance has multiple)
count := strings.Count(string(data), "local=/")
// Each instance creates 2 "local=/" entries (domain and internal domain)
status.InstancesConfigured = count / 2
}
return status, nil
}
// ReadConfig reads the current dnsmasq configuration
func (g *ConfigGenerator) ReadConfig() (string, error) {
data, err := os.ReadFile(g.configPath)
if err != nil {
return "", fmt.Errorf("reading dnsmasq config: %w", err)
}
return string(data), nil
}
// UpdateConfig regenerates and writes the dnsmasq configuration for all instances
func (g *ConfigGenerator) UpdateConfig(cfg *config.GlobalConfig, instances []config.InstanceConfig) error {
// Generate fresh config from scratch
configContent := g.Generate(cfg, instances)
// Write config
log.Printf("Writing dnsmasq config to: %s", g.configPath)
if err := os.WriteFile(g.configPath, []byte(configContent), 0644); err != nil {
return fmt.Errorf("writing dnsmasq config: %w", err)
}
// Restart service to apply changes
return g.RestartService()
}


@@ -8,6 +8,7 @@ import (
"strings"
"github.com/wild-cloud/wild-central/daemon/internal/config"
"github.com/wild-cloud/wild-central/daemon/internal/setup"
"github.com/wild-cloud/wild-central/daemon/internal/tools"
)
@@ -335,11 +336,11 @@ func (m *Manager) Apply(instanceName, nodeIdentifier string, opts ApplyOptions)
}
}
// Always auto-fetch templates if they don't exist
// Always auto-extract templates from embedded files if they don't exist
templatesDir := filepath.Join(setupDir, "patch.templates")
if !m.templatesExist(templatesDir) {
if err := m.copyTemplatesFromDirectory(templatesDir); err != nil {
return fmt.Errorf("failed to copy templates: %w", err)
if err := m.extractEmbeddedTemplates(templatesDir); err != nil {
return fmt.Errorf("failed to extract templates: %w", err)
}
}
@@ -504,36 +505,31 @@ func (m *Manager) templatesExist(templatesDir string) bool {
return err1 == nil && err2 == nil
}
// copyTemplatesFromDirectory copies patch templates from directory/ to instance
func (m *Manager) copyTemplatesFromDirectory(destDir string) error {
// Find the directory/setup/cluster-nodes/patch.templates directory
// It should be in the same parent as the data directory
sourceDir := filepath.Join(filepath.Dir(m.dataDir), "directory", "setup", "cluster-nodes", "patch.templates")
// Check if source directory exists
if _, err := os.Stat(sourceDir); err != nil {
return fmt.Errorf("source templates directory not found: %s", sourceDir)
}
// extractEmbeddedTemplates extracts patch templates from embedded files to instance directory
func (m *Manager) extractEmbeddedTemplates(destDir string) error {
// Create destination directory
if err := os.MkdirAll(destDir, 0755); err != nil {
return fmt.Errorf("failed to create templates directory: %w", err)
}
// Copy controlplane.yaml
if err := m.copyFile(
filepath.Join(sourceDir, "controlplane.yaml"),
filepath.Join(destDir, "controlplane.yaml"),
); err != nil {
return fmt.Errorf("failed to copy controlplane template: %w", err)
// Get embedded template files
controlplaneData, err := setup.GetClusterNodesFile("patch.templates/controlplane.yaml")
if err != nil {
return fmt.Errorf("failed to get controlplane template: %w", err)
}
// Copy worker.yaml
if err := m.copyFile(
filepath.Join(sourceDir, "worker.yaml"),
filepath.Join(destDir, "worker.yaml"),
); err != nil {
return fmt.Errorf("failed to copy worker template: %w", err)
workerData, err := setup.GetClusterNodesFile("patch.templates/worker.yaml")
if err != nil {
return fmt.Errorf("failed to get worker template: %w", err)
}
// Write templates
if err := os.WriteFile(filepath.Join(destDir, "controlplane.yaml"), controlplaneData, 0644); err != nil {
return fmt.Errorf("failed to write controlplane template: %w", err)
}
if err := os.WriteFile(filepath.Join(destDir, "worker.yaml"), workerData, 0644); err != nil {
return fmt.Errorf("failed to write worker template: %w", err)
}
return nil
@@ -660,9 +656,9 @@ func (m *Manager) Update(instanceName string, hostname string, updates map[strin
return nil
}
// FetchTemplates copies patch templates from directory/ to instance
// FetchTemplates extracts patch templates from embedded files to instance
func (m *Manager) FetchTemplates(instanceName string) error {
instancePath := m.GetInstancePath(instanceName)
destDir := filepath.Join(instancePath, "setup", "cluster-nodes", "patch.templates")
return m.copyTemplatesFromDirectory(destDir)
return m.extractEmbeddedTemplates(destDir)
}


@@ -2,6 +2,7 @@ package services
import (
"fmt"
"io/fs"
"os"
"os/exec"
"path/filepath"
@@ -10,30 +11,41 @@ import (
"gopkg.in/yaml.v3"
"github.com/wild-cloud/wild-central/daemon/internal/operations"
"github.com/wild-cloud/wild-central/daemon/internal/setup"
"github.com/wild-cloud/wild-central/daemon/internal/storage"
"github.com/wild-cloud/wild-central/daemon/internal/tools"
)
// Manager handles base service operations
type Manager struct {
dataDir string
servicesDir string // Path to services directory
manifests map[string]*ServiceManifest // Cached service manifests
dataDir string
manifests map[string]*ServiceManifest // Cached service manifests
}
// NewManager creates a new services manager
func NewManager(dataDir, servicesDir string) *Manager {
// Note: Service definitions are now loaded from embedded setup files
func NewManager(dataDir string) *Manager {
m := &Manager{
dataDir: dataDir,
servicesDir: servicesDir,
dataDir: dataDir,
}
// Load all service manifests
manifests, err := LoadAllManifests(servicesDir)
if err != nil {
// Log error but continue - services without manifests will fall back to hardcoded map
fmt.Printf("Warning: failed to load service manifests: %v\n", err)
manifests = make(map[string]*ServiceManifest)
// Load all service manifests from embedded files
manifests := make(map[string]*ServiceManifest)
services, err := setup.ListServices()
if err == nil {
for _, serviceName := range services {
manifest, err := setup.GetManifest(serviceName)
if err == nil {
// Convert setup.ServiceManifest to services.ServiceManifest
manifests[serviceName] = &ServiceManifest{
Name: manifest.Name,
Description: manifest.Description,
Category: manifest.Category,
}
}
}
} else {
fmt.Printf("Warning: failed to load service manifests from embedded files: %v\n", err)
}
m.manifests = manifests
@@ -124,18 +136,13 @@ func (m *Manager) checkServiceStatus(instanceName, serviceName string) string {
func (m *Manager) List(instanceName string) ([]Service, error) {
services := []Service{}
// Discover services from the services directory
entries, err := os.ReadDir(m.servicesDir)
// Discover services from embedded setup files
serviceNames, err := setup.ListServices()
if err != nil {
return nil, fmt.Errorf("failed to read services directory: %w", err)
return nil, fmt.Errorf("failed to list services from embedded files: %w", err)
}
for _, entry := range entries {
if !entry.IsDir() {
continue // Skip non-directories like README.md
}
name := entry.Name()
for _, name := range serviceNames {
// Get service info from manifest if available
var namespace, description, version string
@@ -232,11 +239,17 @@ func (m *Manager) InstallAll(instanceName string, fetch, deploy bool, opID strin
func (m *Manager) Delete(instanceName, serviceName string) error {
kubeconfigPath := tools.GetKubeconfigPath(m.dataDir, instanceName)
serviceDir := filepath.Join(m.servicesDir, serviceName)
manifestsFile := filepath.Join(serviceDir, "manifests.yaml")
// Check if service exists in embedded files
if !setup.ServiceExists(serviceName) {
return fmt.Errorf("service %s not found", serviceName)
}
// Get manifests file from embedded setup or instance directory
instanceServiceDir := filepath.Join(m.dataDir, "instances", instanceName, "setup", "cluster-services", serviceName)
manifestsFile := filepath.Join(instanceServiceDir, "manifests.yaml")
if !storage.FileExists(manifestsFile) {
return fmt.Errorf("service %s not found", serviceName)
return fmt.Errorf("service manifests not found - service may not be installed")
}
cmd := exec.Command("kubectl", "delete", "-f", manifestsFile)
@@ -292,12 +305,11 @@ func (m *Manager) GetConfigReferences(serviceName string) ([]string, error) {
return manifest.ConfigReferences, nil
}
// Fetch copies service files from directory to instance
// Fetch extracts service files from embedded setup to instance
func (m *Manager) Fetch(instanceName, serviceName string) error {
// 1. Validate service exists in directory
sourceDir := filepath.Join(m.servicesDir, serviceName)
if !dirExists(sourceDir) {
return fmt.Errorf("service %s not found in directory", serviceName)
// 1. Validate service exists in embedded files
if !setup.ServiceExists(serviceName) {
return fmt.Errorf("service %s not found in embedded files", serviceName)
}
// 2. Create instance service directory
@@ -307,31 +319,36 @@ func (m *Manager) Fetch(instanceName, serviceName string) error {
return fmt.Errorf("failed to create service directory: %w", err)
}
// 3. Copy files:
// 3. Extract files from embedded setup:
// - README.md (if exists, optional)
// - install.sh (if exists, optional)
// - wild-manifest.yaml
// - kustomize.template/* (if exists, optional)
// Copy README.md
copyFileIfExists(filepath.Join(sourceDir, "README.md"),
filepath.Join(instanceDir, "README.md"))
// Copy install.sh (optional)
installSh := filepath.Join(sourceDir, "install.sh")
if fileExists(installSh) {
if err := copyFile(installSh, filepath.Join(instanceDir, "install.sh")); err != nil {
return fmt.Errorf("failed to copy install.sh: %w", err)
}
// Make install.sh executable
os.Chmod(filepath.Join(instanceDir, "install.sh"), 0755)
// Extract README.md if it exists
if readmeData, err := setup.GetServiceFile(serviceName, "README.md"); err == nil {
os.WriteFile(filepath.Join(instanceDir, "README.md"), readmeData, 0644)
}
// Copy kustomize.template directory if it exists
templateDir := filepath.Join(sourceDir, "kustomize.template")
if dirExists(templateDir) {
// Extract install.sh if it exists
if installData, err := setup.GetServiceFile(serviceName, "install.sh"); err == nil {
installPath := filepath.Join(instanceDir, "install.sh")
if err := os.WriteFile(installPath, installData, 0755); err != nil {
return fmt.Errorf("failed to write install.sh: %w", err)
}
}
// Extract wild-manifest.yaml
if manifestData, err := setup.GetServiceFile(serviceName, "wild-manifest.yaml"); err == nil {
os.WriteFile(filepath.Join(instanceDir, "wild-manifest.yaml"), manifestData, 0644)
}
// Extract kustomize.template directory
templateFS, err := setup.GetKustomizeTemplate(serviceName)
if err == nil {
destTemplateDir := filepath.Join(instanceDir, "kustomize.template")
if err := copyDir(templateDir, destTemplateDir); err != nil {
return fmt.Errorf("failed to copy templates: %w", err)
if err := extractFS(templateFS, destTemplateDir); err != nil {
return fmt.Errorf("failed to extract templates: %w", err)
}
}
@@ -404,6 +421,32 @@ func copyDir(src, dst string) error {
return nil
}
// extractFS extracts files from an fs.FS to a destination directory
func extractFS(fsys fs.FS, dst string) error {
return fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
// Create destination path
dstPath := filepath.Join(dst, path)
if d.IsDir() {
// Create directory
return os.MkdirAll(dstPath, 0755)
}
// Read file from embedded FS
data, err := fs.ReadFile(fsys, path)
if err != nil {
return err
}
// Write file to destination
return os.WriteFile(dstPath, data, 0644)
})
}
// Compile processes gomplate templates into final Kubernetes manifests
func (m *Manager) Compile(instanceName, serviceName string) error {
instanceDir := filepath.Join(m.dataDir, "instances", instanceName)
@@ -511,7 +554,7 @@ func (m *Manager) Deploy(instanceName, serviceName, opID string, broadcaster *op
env := os.Environ()
env = append(env,
fmt.Sprintf("WILD_INSTANCE=%s", instanceName),
fmt.Sprintf("WILD_CENTRAL_DATA=%s", m.dataDir),
fmt.Sprintf("WILD_API_DATA_DIR=%s", m.dataDir),
fmt.Sprintf("KUBECONFIG=%s", kubeconfigPath),
)
fmt.Printf("[DEBUG] Environment configured: WILD_INSTANCE=%s, KUBECONFIG=%s\n", instanceName, kubeconfigPath)


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
CERT_MANAGER_DIR="${CLUSTER_SETUP_DIR}/cert-manager"
@@ -65,7 +65,7 @@ kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-m
# Create Cloudflare API token secret
# Read token from Wild Central secrets file
echo "🔐 Creating Cloudflare API token secret..."
SECRETS_FILE="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}/secrets.yaml"
SECRETS_FILE="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}/secrets.yaml"
CLOUDFLARE_API_TOKEN=$(yq '.cloudflare.token' "$SECRETS_FILE" 2>/dev/null)
CLOUDFLARE_API_TOKEN=$(echo "$CLOUDFLARE_API_TOKEN")
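# secrets.yaml is expected to hold the token under .cloudflare.token, e.g. (illustrative):
#   cloudflare:
#     token: "<cloudflare-api-token>"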


@@ -9,14 +9,14 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA environment variable is not set"
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR environment variable is not set"
exit 1
fi
# Get the instance directory path
get_instance_dir() {
echo "${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
echo "${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
}
# Get the secrets file path


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
COREDNS_DIR="${CLUSTER_SETUP_DIR}/coredns"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
DOCKER_REGISTRY_DIR="${CLUSTER_SETUP_DIR}/docker-registry"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
EXTERNALDNS_DIR="${CLUSTER_SETUP_DIR}/externaldns"
@@ -49,7 +49,7 @@ kubectl apply -k ${EXTERNALDNS_DIR}/kustomize
# Setup Cloudflare API token secret
echo "🔐 Creating Cloudflare API token secret..."
SECRETS_FILE="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}/secrets.yaml"
SECRETS_FILE="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}/secrets.yaml"
CLOUDFLARE_API_TOKEN=$(yq '.cloudflare.token' "$SECRETS_FILE" 2>/dev/null | tr -d '"')
if [ -z "$CLOUDFLARE_API_TOKEN" ] || [ "$CLOUDFLARE_API_TOKEN" = "null" ]; then


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
KUBERNETES_DASHBOARD_DIR="${CLUSTER_SETUP_DIR}/kubernetes-dashboard"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
LONGHORN_DIR="${CLUSTER_SETUP_DIR}/longhorn"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
METALLB_DIR="${CLUSTER_SETUP_DIR}/metallb"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CONFIG_FILE="${INSTANCE_DIR}/config.yaml"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
NFS_DIR="${CLUSTER_SETUP_DIR}/nfs"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
NFD_DIR="${CLUSTER_SETUP_DIR}/node-feature-discovery"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
NVIDIA_PLUGIN_DIR="${CLUSTER_SETUP_DIR}/nvidia-device-plugin"


@@ -8,9 +8,9 @@ if [ -z "${WILD_INSTANCE}" ]; then
exit 1
fi
# Ensure WILD_CENTRAL_DATA is set
if [ -z "${WILD_CENTRAL_DATA}" ]; then
echo "❌ ERROR: WILD_CENTRAL_DATA is not set"
# Ensure WILD_API_DATA_DIR is set
if [ -z "${WILD_API_DATA_DIR}" ]; then
echo "❌ ERROR: WILD_API_DATA_DIR is not set"
exit 1
fi
@@ -20,7 +20,7 @@ if [ -z "${KUBECONFIG}" ]; then
exit 1
fi
INSTANCE_DIR="${WILD_CENTRAL_DATA}/instances/${WILD_INSTANCE}"
INSTANCE_DIR="${WILD_API_DATA_DIR}/instances/${WILD_INSTANCE}"
CLUSTER_SETUP_DIR="${INSTANCE_DIR}/setup/cluster-services"
TRAEFIK_DIR="${CLUSTER_SETUP_DIR}/traefik"

Some files were not shown because too many files have changed in this diff.