refactor: update gitea-webhook-ambassador Dockerfile and configuration

- Changed the build process to include a web UI build stage using Node.js.
- Updated Go build stage to copy web UI files to the correct location.
- Removed the main.go file as it is no longer needed.
- Added SQLite database configuration to example config.
- Updated dependencies in go.mod and go.sum, including new packages for JWT and SQLite.
- Modified .gitignore to include new database and configuration files.

Signed-off-by: zhenyus <zhenyus@mathmast.com>
Author: zhenyus
Date: 2025-06-10 16:00:52 +08:00
Parent: bbce9ea9c3
Commit: db590f3f27
49 changed files with 3656 additions and 2458 deletions

.gitignore (vendored, 4 changes)

@@ -2,4 +2,6 @@ cluster/ansible/venv
cluster/ansible/manifests/inventory.ini
.idea/*
apps/gitea-webhook-ambassador/gitea-webhook-ambassador
apps/gitea-webhook-ambassador/config.yaml
apps/gitea-webhook-ambassador/data/gitea-webhook-ambassador.db
.cursorrules


@@ -0,0 +1,114 @@
---
description:
globs:
alwaysApply: true
---
You are an expert in Go, DevOps, Jenkins, Gitea, and clean backend development
practices. Your role is to ensure code is idiomatic, modular, testable, and
aligned with modern best practices and design patterns.
### Project Goals
This project, `gitea-webhook-ambassador`, aims to provide a reliable and secure webhook service
that bridges the gap between Gitea and Jenkins.
The application is a simple HTTP server that listens for Gitea webhook events and transforms them into
Jenkins job trigger calls.
It is deployed to Kubernetes as a Deployment with a Service and an Ingress.
### Project Structure
This project is a Go application that is organized into the following directories:
- `cmd/gitea-webhook-ambassador/main.go`: The main entry point for the application.
- `internal/`: Contains the internal logic for the application.
- `pkg/`: Contains the shared code for the application.
- `test/`: Contains the test code for the application.
- `vendor/`: Contains the dependencies for the application.
### Project Dependencies
- Go 1.23.4
- Go modules
- fsnotify
- ants
- yaml.v2
### Architecture Patterns:
- Apply **Clean Architecture** by structuring code into handlers/controllers, services/use cases, repositories/data access, and domain models.
- Use **domain-driven design** principles where applicable.
- Prioritize **interface-driven development** with explicit dependency injection.
- Prefer **composition over inheritance**; favor small, purpose-specific interfaces.
- Ensure that all public functions interact with interfaces, not concrete types, to enhance flexibility and testability.
### Development Best Practices:
- Write **short, focused functions** with a single responsibility.
- Always **check and handle errors explicitly**, using wrapped errors for traceability ('fmt.Errorf("context: %w", err)').
- Avoid **global state**; use constructor functions to inject dependencies.
- Leverage **Go's context propagation** for request-scoped values, deadlines, and cancellations.
- Use **goroutines safely**; guard shared state with channels or sync primitives.
- **Defer closing resources** and handle them carefully to avoid leaks.
### Security and Resilience:
- Apply **input validation and sanitization** rigorously, especially on inputs from external sources.
- Use secure defaults for **JWT, cookies**, and configuration settings.
- Isolate sensitive operations with clear **permission boundaries**.
- Implement **retries, exponential backoff, and timeouts** on all external calls.
- Use **circuit breakers and rate limiting** for service protection.
- Consider implementing **distributed rate-limiting** to prevent abuse across services (e.g., using Redis).
### Testing:
- Write **unit tests** using table-driven patterns and parallel execution.
- **Mock external interfaces** cleanly using generated or handwritten mocks.
- Separate **fast unit tests** from slower integration and E2E tests.
- Ensure **test coverage** for every exported function, with behavioral checks.
- Use tools like 'go test -cover' to ensure adequate test coverage.
### Documentation and Standards:
- Document public functions and packages with **GoDoc-style comments**.
- Provide concise **READMEs** for services and libraries.
- Maintain a 'CONTRIBUTING.md' and 'ARCHITECTURE.md' to guide team practices.
- Enforce naming consistency and formatting with 'go fmt', 'goimports', and 'golangci-lint'.
### Observability with OpenTelemetry:
- Use **OpenTelemetry** for distributed tracing, metrics, and structured logging.
- Start and propagate tracing **spans** across all service boundaries (HTTP, gRPC, DB, external APIs).
- Always attach 'context.Context' to spans, logs, and metric exports.
- Use **otel.Tracer** for creating spans and **otel.Meter** for collecting metrics.
- Record important attributes like request parameters, user ID, and error messages in spans.
- Use **log correlation** by injecting trace IDs into structured logs.
- Export data to **OpenTelemetry Collector**, **Jaeger**, or **Prometheus**.
### Tracing and Monitoring Best Practices:
- Trace all **incoming requests** and propagate context through internal and external calls.
- Use **middleware** to instrument HTTP and gRPC endpoints automatically.
- Annotate slow, critical, or error-prone paths with **custom spans**.
- Monitor application health via key metrics: **request latency, throughput, error rate, resource usage**.
- Define **SLIs** (e.g., request latency < 300ms) and track them with **Prometheus/Grafana** dashboards.
- Alert on key conditions (e.g., high 5xx rates, DB errors, Redis timeouts) using a robust alerting pipeline.
- Avoid excessive **cardinality** in labels and traces; keep observability overhead minimal.
- Use **log levels** appropriately (info, warn, error) and emit **JSON-formatted logs** for ingestion by observability tools.
- Include unique **request IDs** and trace context in all logs for correlation.
### Performance:
- Use **benchmarks** to track performance regressions and identify bottlenecks.
- Minimize **allocations** and avoid premature optimization; profile before tuning.
- Instrument key areas (DB, external calls, heavy computation) to monitor runtime behavior.
### Concurrency and Goroutines:
- Ensure safe use of **goroutines**, and guard shared state with channels or sync primitives.
- Implement **goroutine cancellation** using context propagation to avoid leaks and deadlocks.
### Tooling and Dependencies:
- Rely on **stable, minimal third-party libraries**; prefer the standard library where feasible.
- Use **Go modules** for dependency management and reproducibility.
- Version-lock dependencies for deterministic builds.
- Integrate **linting, testing, and security checks** in CI pipelines.
### Key Conventions:
1. Prioritize **readability, simplicity, and maintainability**.
2. Design for **change**: isolate business logic and minimize framework lock-in.
3. Emphasize clear **boundaries** and **dependency inversion**.
4. Ensure all behavior is **observable, testable, and documented**.
5. **Automate workflows** for testing, building, and deployment.


@@ -1,5 +1,18 @@
# Build stage
FROM golang:1.24-alpine AS builder
# Build stage for web UI
FROM node:20-alpine AS web-builder
# Set working directory for web UI
WORKDIR /web
# Copy web UI files
COPY web/package*.json ./
RUN npm ci
COPY web/ ./
RUN npm run build
# Go build stage
FROM golang:1.24-alpine AS go-builder
# Set working directory
WORKDIR /app
@@ -7,7 +20,7 @@ WORKDIR /app
# Install build dependencies
RUN apk add --no-cache git make
# Copy go.mod and go.sum (if present)
# Copy go.mod and go.sum
COPY go.mod .
COPY go.sum* .
@@ -17,8 +30,12 @@ RUN go mod download
# Copy source code
COPY . .
# Copy web UI files to the correct location
RUN mkdir -p cmd/server/web/out
COPY --from=web-builder /web/out/* cmd/server/web/out/
# Build the application with version information
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o gitea-webhook-ambassador .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o gitea-webhook-ambassador ./cmd/server
# Runtime stage
FROM alpine:3.19
@@ -36,7 +53,7 @@ RUN mkdir -p /app/config && \
WORKDIR /app
# Copy the binary from builder stage
COPY --from=builder /app/gitea-webhook-ambassador .
COPY --from=go-builder /app/gitea-webhook-ambassador .
# Copy default config (will be overridden by volume mount in production)
COPY config.yaml /app/config/


@@ -19,9 +19,9 @@ GOBUILD := $(GO) build
.DEFAULT_GOAL := help
# Build executable
build: $(GO_FILES)
build: $(GO_FILES)
	@echo "Building $(APP_NAME)..."
	$(GOBUILD) $(LDFLAGS) -o $(APP_NAME) .
	$(GOBUILD) $(LDFLAGS) -o $(APP_NAME) ./cmd/server
# Clean build artifacts
clean:


@@ -0,0 +1,177 @@
# Gitea Webhook Ambassador
A service that receives Gitea webhooks and triggers corresponding Jenkins jobs based on repository and branch configurations.
## Features
- Receives Gitea webhooks and triggers Jenkins jobs
- Configurable repository to Jenkins job mappings
- Branch-specific job mappings with regex pattern support
- API key management for secure access
- SQLite persistence for configurations and logs
- Configurable worker pool for job processing
- Automatic retry with exponential backoff
- Webhook event deduplication
- Comprehensive logging and monitoring
## Configuration
The service is configured using a YAML file. Here's an example configuration:
```yaml
server:
  port: 8080
  webhookPath: "/webhook"
  secretHeader: "X-Gitea-Signature"
  secretKey: "custom-secret-key"

jenkins:
  url: "http://jenkins.example.com"
  username: "jenkins-user"
  token: "jenkins-api-token"
  timeout: 30

admin:
  token: "admin-api-token" # Token for admin API access

database:
  path: "data/gitea-webhook-ambassador.db" # Path to SQLite database file

logging:
  level: "info"
  format: "json"
  file: ""

worker:
  poolSize: 10
  queueSize: 100
  maxRetries: 3
  retryBackoff: 1

eventCleanup:
  interval: 3600
  expireAfter: 7200
```
## API Endpoints
### Admin API
All admin API endpoints require the `X-Admin-Token` header with the configured admin token.
#### Create API Key
```
POST /admin/api-keys
Content-Type: application/json
{
  "key": "api-key-value",
  "description": "Key description"
}
```
#### List API Keys
```
GET /admin/api-keys
```
#### Delete API Key
```
DELETE /admin/api-keys/delete?key=api-key-value
```
### Project Mapping API
All project mapping API endpoints require the `X-API-Key` header with a valid API key.
#### Create Project Mapping
```
POST /api/projects
Content-Type: application/json
{
  "repository_name": "owner/repo",
  "default_job": "default-jenkins-job",
  "branch_jobs": [
    {
      "branch_name": "main",
      "job_name": "main-job"
    }
  ],
  "branch_patterns": [
    {
      "pattern": "^feature/.*$",
      "job_name": "feature-job"
    }
  ]
}
```
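The mapping semantics above (exact branch match first, then regex patterns, then the default job) can be sketched like this; `resolveJob` is an illustrative helper, not the service's internal implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// resolveJob picks a Jenkins job for a branch: exact branch match
// first, then regex patterns, falling back to the default job.
func resolveJob(branch, defaultJob string, branchJobs, patterns map[string]string) string {
	if job, ok := branchJobs[branch]; ok {
		return job
	}
	for pattern, job := range patterns {
		if regexp.MustCompile(pattern).MatchString(branch) {
			return job
		}
	}
	return defaultJob
}

func main() {
	branchJobs := map[string]string{"main": "main-job"}
	patterns := map[string]string{`^feature/.*$`: "feature-job"}
	fmt.Println(resolveJob("main", "default-jenkins-job", branchJobs, patterns))          // main-job
	fmt.Println(resolveJob("feature/login", "default-jenkins-job", branchJobs, patterns)) // feature-job
	fmt.Println(resolveJob("hotfix/x", "default-jenkins-job", branchJobs, patterns))      // default-jenkins-job
}
```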
#### Get Project Mapping
```
GET /api/projects?repository=owner/repo
```
### Trigger Logs API
Requires the `X-API-Key` header with a valid API key.
#### Get Trigger Logs
```
GET /api/logs?repository=owner/repo&branch=main&since=2024-01-01T00:00:00Z&limit=100
```
Query parameters:
- `repository`: Filter by repository name (optional)
- `branch`: Filter by branch name (optional)
- `since`: Filter by timestamp (RFC3339 format, optional)
- `limit`: Maximum number of logs to return (default: 100, max: 1000)
### Webhook Endpoint
```
POST /webhook
X-Gitea-Signature: custom-secret-key
{
  "ref": "refs/heads/main",
  "after": "commit-sha",
  "repository": {
    "full_name": "owner/repo",
    "clone_url": "https://gitea.example.com/owner/repo.git"
  },
  "pusher": {
    "login": "username",
    "email": "user@example.com"
  }
}
```
## Building and Running
### Prerequisites
- Go 1.24 or later
- SQLite3
### Build
```bash
make build
```
### Run
```bash
make run
```
### Docker
```bash
# Build Docker image
make docker-build
# Run with Docker
docker run -p 8080:8080 -v /path/to/config.yaml:/app/config/config.yaml freeleaps/gitea-webhook-ambassador
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.


@@ -0,0 +1,265 @@
package main

import (
	"flag"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"path/filepath"
	"syscall"
	"time"

	"freeleaps.com/gitea-webhook-ambassador/internal/auth"
	"freeleaps.com/gitea-webhook-ambassador/internal/config"
	"freeleaps.com/gitea-webhook-ambassador/internal/database"
	"freeleaps.com/gitea-webhook-ambassador/internal/handler"
	"freeleaps.com/gitea-webhook-ambassador/internal/jenkins"
	"freeleaps.com/gitea-webhook-ambassador/internal/logger"
	"freeleaps.com/gitea-webhook-ambassador/internal/web"
	webhandler "freeleaps.com/gitea-webhook-ambassador/internal/web/handler"
	"freeleaps.com/gitea-webhook-ambassador/internal/worker"
)

var (
	configFile = flag.String("config", "config.yaml", "Path to configuration file")
)

func main() {
	flag.Parse()

	// Initialize logger with default configuration
	logger.Configure(logger.Config{
		Level:  "info",
		Format: "text",
	})

	// Load initial configuration
	if err := config.Load(*configFile); err != nil {
		logger.Error("Failed to load configuration: %v", err)
		os.Exit(1)
	}

	// Setup application
	app, err := setupApplication()
	if err != nil {
		logger.Error("Failed to setup application: %v", err)
		os.Exit(1)
	}
	defer app.cleanup()

	// Start HTTP server
	go app.startServer()

	// Handle graceful shutdown
	app.handleShutdown()
}

type application struct {
	server     *http.Server
	workerPool *worker.Pool
	db         *database.DB
	watcher    *config.Watcher
}

func setupApplication() (*application, error) {
	cfg := config.Get()

	// Configure logger based on configuration
	logger.Configure(logger.Config{
		Level:  cfg.Logging.Level,
		Format: cfg.Logging.Format,
		File:   cfg.Logging.File,
	})

	// Ensure database directory exists
	dbDir := filepath.Dir(cfg.Database.Path)
	if err := os.MkdirAll(dbDir, 0755); err != nil {
		return nil, fmt.Errorf("failed to create database directory: %v", err)
	}

	// Initialize database
	db, err := setupDatabase(cfg)
	if err != nil {
		return nil, fmt.Errorf("failed to setup database: %v", err)
	}

	// Create Jenkins client
	jenkinsClient := jenkins.New(jenkins.Config{
		URL:      cfg.Jenkins.URL,
		Username: cfg.Jenkins.Username,
		Token:    cfg.Jenkins.Token,
		Timeout:  time.Duration(cfg.Jenkins.Timeout) * time.Second,
	})

	// Create worker pool
	workerPool, err := setupWorkerPool(cfg, jenkinsClient, db)
	if err != nil {
		return nil, fmt.Errorf("failed to setup worker pool: %v", err)
	}

	// Setup config watcher
	watcher, err := setupConfigWatcher(*configFile)
	if err != nil {
		return nil, fmt.Errorf("failed to setup config watcher: %v", err)
	}
	if err := watcher.Start(); err != nil {
		return nil, fmt.Errorf("failed to start config watcher: %v", err)
	}

	// Create HTTP server
	server := setupHTTPServer(cfg, workerPool, db)

	return &application{
		server:     server,
		workerPool: workerPool,
		db:         db,
		watcher:    watcher,
	}, nil
}

func setupDatabase(cfg config.Configuration) (*database.DB, error) {
	return database.New(database.Config{
		Path: cfg.Database.Path,
	})
}

func setupWorkerPool(cfg config.Configuration, jenkinsClient *jenkins.Client, db *database.DB) (*worker.Pool, error) {
	pool, err := worker.New(worker.Config{
		PoolSize:     cfg.Worker.PoolSize,
		QueueSize:    cfg.Worker.QueueSize,
		MaxRetries:   cfg.Worker.MaxRetries,
		RetryBackoff: time.Duration(cfg.Worker.RetryBackoff) * time.Second,
		Client:       jenkinsClient,
		DB:           db,
	})
	if err != nil {
		return nil, err
	}

	// Start event cleanup
	go worker.CleanupEvents(time.Duration(cfg.EventCleanup.ExpireAfter) * time.Second)

	return pool, nil
}

func setupConfigWatcher(configPath string) (*config.Watcher, error) {
	return config.NewWatcher(configPath, func() error {
		if err := config.Load(configPath); err != nil {
			return err
		}
		newCfg := config.Get()

		// Update logger configuration
		logger.Configure(logger.Config{
			Level:  newCfg.Logging.Level,
			Format: newCfg.Logging.Format,
			File:   newCfg.Logging.File,
		})

		logger.Info("Configuration reloaded successfully")
		return nil
	})
}

func setupHTTPServer(cfg config.Configuration, workerPool *worker.Pool, db *database.DB) *http.Server {
	// Create handlers
	webhookHandler := handler.NewWebhookHandler(workerPool, db, &cfg)
	healthHandler := handler.NewHealthHandler(workerPool, &cfg)
	adminHandler := handler.NewAdminHandler(db, &cfg)
	projectHandler := handler.NewProjectHandler(db, &cfg)
	logsHandler := handler.NewLogsHandler(db, &cfg)

	// Create auth middleware
	authMiddleware := auth.NewMiddleware(cfg.Server.SecretKey)

	// Create dashboard handler
	dashboardHandler, err := webhandler.NewDashboardHandler(
		web.WebAssets,
		projectHandler,
		adminHandler,
		logsHandler,
		healthHandler,
	)
	if err != nil {
		logger.Error("Failed to create dashboard handler: %v", err)
		os.Exit(1)
	}

	// Setup HTTP routes
	mux := http.NewServeMux()

	// Static file handlers (not protected by auth)
	mux.HandleFunc("/css/", dashboardHandler.ServeHTTP)
	mux.HandleFunc("/js/", dashboardHandler.ServeHTTP)
	mux.HandleFunc("/img/", dashboardHandler.ServeHTTP)

	// Webhook endpoint (not protected by auth, uses its own validation)
	mux.HandleFunc(cfg.Server.WebhookPath, webhookHandler.HandleWebhook)

	// Login routes - must be defined before protected routes
	mux.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet {
			dashboardHandler.ServeHTTP(w, r)
		} else {
			http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		}
	})
	mux.HandleFunc("/api/auth/login", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodPost {
			authMiddleware.HandleLogin(w, r)
		} else {
			http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		}
	})

	// Protected routes
	mux.Handle("/", authMiddleware.Authenticate(dashboardHandler))
	mux.Handle("/dashboard", authMiddleware.Authenticate(dashboardHandler))

	// Protected API routes
	mux.Handle("/api/projects", authMiddleware.Authenticate(http.HandlerFunc(projectHandler.HandleGetProjectMapping)))
	mux.Handle("/api/admin/api-keys", authMiddleware.Authenticate(http.HandlerFunc(adminHandler.HandleListAPIKeys)))
	mux.Handle("/api/admin/api-keys/delete", authMiddleware.Authenticate(http.HandlerFunc(adminHandler.HandleDeleteAPIKey)))
	mux.Handle("/api/logs", authMiddleware.Authenticate(http.HandlerFunc(logsHandler.HandleGetTriggerLogs)))
	mux.Handle("/api/health", authMiddleware.Authenticate(http.HandlerFunc(healthHandler.HandleHealth)))

	return &http.Server{
		Addr:         fmt.Sprintf(":%d", cfg.Server.Port),
		Handler:      mux,
		ReadTimeout:  30 * time.Second,
		WriteTimeout: 30 * time.Second,
		IdleTimeout:  60 * time.Second,
	}
}

func (app *application) startServer() {
	logger.Info("Server listening on %s", app.server.Addr)
	if err := app.server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		logger.Error("HTTP server error: %v", err)
		os.Exit(1)
	}
}

func (app *application) handleShutdown() {
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
	<-stop

	logger.Info("Shutting down server...")
	app.cleanup()
	logger.Info("Server shutdown complete")
}

func (app *application) cleanup() {
	if app.workerPool != nil {
		app.workerPool.Release()
	}
	if app.db != nil {
		app.db.Close()
	}
	if app.watcher != nil {
		app.watcher.Stop()
	}
}


@@ -10,45 +10,23 @@ jenkins:
  token: "jenkins-api-token"
  timeout: 30
gitea:
  secretToken: "your-gitea-webhook-secret"
projects:
  # Simple configuration with different jobs for different branches
  "owner/repo1":
    defaultJob: "repo1-default-job" # Used when no specific branch match is found
    branchJobs:
      "main": "repo1-main-job"       # Specific job for the main branch
      "develop": "repo1-dev-job"     # Specific job for the develop branch
      "release": "repo1-release-job" # Specific job for the release branch
admin:
  token: "admin-api-token" # Token for admin API access
  # Advanced configuration with regex pattern matching
  "owner/repo2":
    defaultJob: "repo2-default-job"
    branchJobs:
      "main": "repo2-main-job"
    branchPatterns:
      - pattern: "^feature/.*$" # All feature branches
        job: "repo2-feature-job"
      - pattern: "^release/v[0-9]+\\.[0-9]+$" # Release branches like release/v1.0
        job: "repo2-release-job"
      - pattern: "^hotfix/.*$" # All hotfix branches
        job: "repo2-hotfix-job"
  # Simple configuration with just a default job
  "owner/repo3":
    defaultJob: "repo3-job" # This job is triggered for all branches
database:
  path: "data/gitea-webhook-ambassador.db" # Path to SQLite database file
logging:
  level: "info"
  format: "json"
  file: ""
  level: "info"  # debug, info, warn, error
  format: "json" # text, json
  file: ""       # stdout if empty, or path to log file
worker:
  poolSize: 10
  queueSize: 100
  maxRetries: 3
  retryBackoff: 1
  poolSize: 10    # Number of concurrent workers
  queueSize: 100  # Size of job queue
  maxRetries: 3   # Maximum number of retry attempts
  retryBackoff: 1 # Initial retry backoff in seconds (exponential)
eventCleanup:
  interval: 3600
  expireAfter: 7200
  interval: 3600    # Cleanup interval in seconds
  expireAfter: 7200 # Event expiration time in seconds


@@ -1,10 +1,11 @@
module freeleaps.com/gitea-webhook-ambassador

go 1.24.0
go 1.24

require (
	github.com/fsnotify/fsnotify v1.8.0
	github.com/go-playground/validator/v10 v10.26.0
	github.com/mattn/go-sqlite3 v1.14.22
	github.com/panjf2000/ants/v2 v2.11.2
	gopkg.in/yaml.v2 v2.4.0
)

@@ -13,10 +14,15 @@ require (
	github.com/gabriel-vasile/mimetype v1.4.8 // indirect
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/gorilla/mux v1.8.1 // indirect
	github.com/gorilla/securecookie v1.1.2 // indirect
	github.com/gorilla/sessions v1.4.0 // indirect
	github.com/leodido/go-urn v1.4.0 // indirect
	golang.org/x/crypto v0.33.0 // indirect
	golang.org/x/crypto v0.38.0 // indirect
	golang.org/x/net v0.34.0 // indirect
	golang.org/x/sync v0.11.0 // indirect
	golang.org/x/sys v0.30.0 // indirect
	golang.org/x/text v0.22.0 // indirect
	golang.org/x/sync v0.14.0 // indirect
	golang.org/x/sys v0.33.0 // indirect
	golang.org/x/text v0.25.0 // indirect
)


@@ -12,24 +12,36 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.26.0 h1:SP05Nqhjcvz81uJaRfEV0YBSSSGMc/iMaVtFbr3Sw2k=
github.com/go-playground/validator/v10 v10.26.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
github.com/gorilla/sessions v1.4.0 h1:kpIYOp/oi6MG/p5PgxApU8srsSw9tuFbt46Lt7auzqQ=
github.com/gorilla/sessions v1.4.0/go.mod h1:FLWm50oby91+hl7p/wRxDth9bWSuk0qVL2emc7lT5ik=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/panjf2000/ants/v2 v2.11.2 h1:AVGpMSePxUNpcLaBO34xuIgM1ZdKOiGnpxLXixLi5Jo=
github.com/panjf2000/ants/v2 v2.11.2/go.mod h1:8u92CYMUc6gyvTIw8Ru7Mt7+/ESnJahz5EVtqfrilek=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=


@@ -0,0 +1,137 @@
package auth

import (
	"crypto/subtle"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"

	"freeleaps.com/gitea-webhook-ambassador/internal/logger"
	"github.com/golang-jwt/jwt/v5"
)

type Middleware struct {
	secretKey string
}

func NewMiddleware(secretKey string) *Middleware {
	logger.Debug("Creating auth middleware with secret key length: %d", len(secretKey))
	return &Middleware{
		secretKey: secretKey,
	}
}

// VerifyToken verifies a JWT token and returns an error if invalid
func (m *Middleware) VerifyToken(r *http.Request) error {
	// Get token from Authorization header
	authHeader := r.Header.Get("Authorization")
	if authHeader == "" {
		return fmt.Errorf("no authorization header")
	}

	// Remove 'Bearer ' prefix
	tokenString := strings.TrimPrefix(authHeader, "Bearer ")

	// Parse and validate token
	token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
		}
		return []byte(m.secretKey), nil
	})
	if err != nil {
		return fmt.Errorf("invalid token: %w", err)
	}
	if !token.Valid {
		return fmt.Errorf("token is not valid")
	}
	return nil
}

// LoginRequest represents the login request body
type LoginRequest struct {
	SecretKey string `json:"secret_key"`
}

// LoginResponse represents the login response
type LoginResponse struct {
	Token string `json:"token"`
}

// HandleLogin handles the login API request
func (m *Middleware) HandleLogin(w http.ResponseWriter, r *http.Request) {
	// Only accept POST requests
	if r.Method != http.MethodPost {
		http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		return
	}

	// Parse JSON request
	var req LoginRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "Invalid request body", http.StatusBadRequest)
		return
	}

	// Validate secret key
	if subtle.ConstantTimeCompare([]byte(req.SecretKey), []byte(m.secretKey)) != 1 {
		w.WriteHeader(http.StatusUnauthorized)
		json.NewEncoder(w).Encode(map[string]string{
			"error": "Invalid secret key",
		})
		return
	}

	// Generate JWT token
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"exp": time.Now().Add(24 * time.Hour).Unix(),
		"iat": time.Now().Unix(),
	})

	// Sign the token
	tokenString, err := token.SignedString([]byte(m.secretKey))
	if err != nil {
		logger.Error("Failed to generate token: %v", err)
		http.Error(w, "Internal server error", http.StatusInternalServerError)
		return
	}

	// Return the token
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(LoginResponse{
		Token: tokenString,
	})
}

// Authenticate middleware for protecting routes
func (m *Middleware) Authenticate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Skip authentication for login page and static assets
		if r.URL.Path == "/login" || strings.HasPrefix(r.URL.Path, "/css/") ||
			strings.HasPrefix(r.URL.Path, "/js/") || strings.HasPrefix(r.URL.Path, "/img/") {
			next.ServeHTTP(w, r)
			return
		}

		if err := m.VerifyToken(r); err != nil {
			logger.Debug("Token verification failed: %v", err)
			if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
				w.WriteHeader(http.StatusUnauthorized)
				json.NewEncoder(w).Encode(map[string]string{
					"error": "Invalid or expired token",
				})
			} else {
				http.Redirect(w, r, "/login", http.StatusSeeOther)
			}
			return
		}

		// Token is valid, proceed
		next.ServeHTTP(w, r)
	})
}


@ -0,0 +1,146 @@
package config
import (
"fmt"
"os"
"sync"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
"github.com/go-playground/validator/v10"
"gopkg.in/yaml.v2"
)
// Configuration holds application configuration
type Configuration struct {
Server struct {
Port int `yaml:"port" validate:"required,gt=0"`
WebhookPath string `yaml:"webhookPath" validate:"required"`
SecretHeader string `yaml:"secretHeader" default:"Authorization"`
SecretKey string `yaml:"secretKey"`
} `yaml:"server"`
Jenkins struct {
URL string `yaml:"url" validate:"required,url"`
Username string `yaml:"username"`
Token string `yaml:"token"`
Timeout int `yaml:"timeout" default:"30"`
} `yaml:"jenkins"`
Database struct {
Path string `yaml:"path" validate:"required"` // Path to SQLite database file
} `yaml:"database"`
Logging struct {
Level string `yaml:"level" default:"info" validate:"oneof=debug info warn error"`
Format string `yaml:"format" default:"text" validate:"oneof=text json"`
File string `yaml:"file"`
} `yaml:"logging"`
Worker struct {
PoolSize int `yaml:"poolSize" default:"10" validate:"gt=0"`
QueueSize int `yaml:"queueSize" default:"100" validate:"gt=0"`
MaxRetries int `yaml:"maxRetries" default:"3" validate:"gte=0"`
RetryBackoff int `yaml:"retryBackoff" default:"1" validate:"gt=0"`
} `yaml:"worker"`
EventCleanup struct {
Interval int `yaml:"interval" default:"3600"`
ExpireAfter int `yaml:"expireAfter" default:"7200"`
} `yaml:"eventCleanup"`
}
// ProjectConfig represents the configuration for a specific repository
type ProjectConfig struct {
DefaultJob string `yaml:"defaultJob"`
BranchJobs map[string]string `yaml:"branchJobs,omitempty"`
BranchPatterns []BranchPattern `yaml:"branchPatterns,omitempty"`
}
// BranchPattern defines a pattern-based branch to job mapping
type BranchPattern struct {
Pattern string `yaml:"pattern"`
Job string `yaml:"job"`
}
var (
config Configuration
configMutex sync.RWMutex
validate = validator.New()
)
// Load reads and parses the configuration file
func Load(file string) error {
logger.Debug("Loading configuration from file: %s", file)
f, err := os.Open(file)
if err != nil {
return fmt.Errorf("cannot open config file: %v", err)
}
defer f.Close()
var newConfig Configuration
decoder := yaml.NewDecoder(f)
if err := decoder.Decode(&newConfig); err != nil {
return fmt.Errorf("cannot decode config: %v", err)
}
setDefaults(&newConfig)
if err := validate.Struct(newConfig); err != nil {
return fmt.Errorf("invalid configuration: %v", err)
}
logger.Debug("Configuration loaded successfully - Server.SecretKey length: %d", len(newConfig.Server.SecretKey))
configMutex.Lock()
config = newConfig
configMutex.Unlock()
return nil
}
// Get returns a copy of the current configuration
func Get() Configuration {
configMutex.RLock()
defer configMutex.RUnlock()
return config
}
// Update atomically updates the configuration
func Update(newConfig Configuration) {
configMutex.Lock()
config = newConfig
configMutex.Unlock()
}
func setDefaults(config *Configuration) {
if config.Server.SecretHeader == "" {
config.Server.SecretHeader = "X-Gitea-Signature"
}
if config.Jenkins.Timeout == 0 {
config.Jenkins.Timeout = 30
}
if config.Worker.PoolSize == 0 {
config.Worker.PoolSize = 10
}
if config.Worker.QueueSize == 0 {
config.Worker.QueueSize = 100
}
if config.Worker.MaxRetries == 0 {
config.Worker.MaxRetries = 3
}
if config.Worker.RetryBackoff == 0 {
config.Worker.RetryBackoff = 1
}
if config.EventCleanup.Interval == 0 {
config.EventCleanup.Interval = 3600
}
if config.EventCleanup.ExpireAfter == 0 {
config.EventCleanup.ExpireAfter = 7200
}
if config.Logging.Level == "" {
config.Logging.Level = "info"
}
if config.Logging.Format == "" {
config.Logging.Format = "text"
}
}
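One caveat in `setDefaults` above: because it keys off zero values, a user cannot explicitly configure `maxRetries: 0` — the value is silently rewritten to 3. The usual workaround is a pointer field, so "unset" and "explicitly zero" are distinguishable. A minimal, self-contained sketch (the names here are hypothetical, not part of this package):

```go
package main

import "fmt"

// workerConfig mirrors the Worker section of the configuration; MaxRetries
// is a pointer so an explicit zero survives defaulting.
type workerConfig struct {
	PoolSize   int
	MaxRetries *int
}

// applyDefaults fills only genuinely unset fields.
func applyDefaults(c *workerConfig) {
	if c.PoolSize == 0 {
		c.PoolSize = 10
	}
	if c.MaxRetries == nil {
		def := 3
		c.MaxRetries = &def
	}
}

func main() {
	zero := 0
	c := workerConfig{MaxRetries: &zero} // retries explicitly disabled
	applyDefaults(&c)
	fmt.Println(c.PoolSize, *c.MaxRetries) // → 10 0
}
```

The YAML decoder leaves an absent key as a nil pointer, so the pointer also survives round-tripping through the config file.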


@ -0,0 +1,71 @@
package config
import (
"fmt"
"path/filepath"
"github.com/fsnotify/fsnotify"
)
// Watcher represents a configuration file watcher
type Watcher struct {
watcher *fsnotify.Watcher
configPath string
onReload func() error
}
// NewWatcher creates a new configuration watcher
func NewWatcher(configPath string, onReload func() error) (*Watcher, error) {
fsWatcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, fmt.Errorf("failed to create file watcher: %v", err)
}
w := &Watcher{
watcher: fsWatcher,
configPath: configPath,
onReload: onReload,
}
return w, nil
}
// Start begins watching the configuration file for changes
func (w *Watcher) Start() error {
// Watch the directory containing the config file
configDir := filepath.Dir(w.configPath)
if err := w.watcher.Add(configDir); err != nil {
return fmt.Errorf("failed to watch config directory: %v", err)
}
go w.watch()
return nil
}
// Stop stops watching for configuration changes
func (w *Watcher) Stop() error {
return w.watcher.Close()
}
func (w *Watcher) watch() {
for {
select {
case event, ok := <-w.watcher.Events:
if !ok {
return
}
			// Reload on Write or Create: editors that save atomically
			// replace the file (write temp + rename), which surfaces as a
			// Create rather than a Write on the watched path.
			if event.Op&(fsnotify.Write|fsnotify.Create) != 0 &&
				filepath.Base(event.Name) == filepath.Base(w.configPath) {
				if err := w.onReload(); err != nil {
					fmt.Printf("Error reloading config: %v\n", err)
				}
			}
case err, ok := <-w.watcher.Errors:
if !ok {
return
}
fmt.Printf("Error watching config file: %v\n", err)
}
}
}


@ -0,0 +1,102 @@
package database
import (
"database/sql"
"fmt"
_ "github.com/mattn/go-sqlite3"
)
// DB represents the database connection
type DB struct {
*sql.DB
}
// Config holds database configuration
type Config struct {
Path string
}
// New creates a new database connection
func New(config Config) (*DB, error) {
	// Enable foreign-key enforcement: SQLite leaves it off by default, and
	// the schema relies on ON DELETE CASCADE.
	db, err := sql.Open("sqlite3", config.Path+"?_foreign_keys=on")
	if err != nil {
		return nil, fmt.Errorf("failed to open database: %v", err)
	}
	if err := db.Ping(); err != nil {
		db.Close()
		return nil, fmt.Errorf("failed to ping database: %v", err)
	}
	if err := initSchema(db); err != nil {
		db.Close()
		return nil, fmt.Errorf("failed to initialize schema: %v", err)
	}
	return &DB{db}, nil
}
// initSchema creates the database schema if it doesn't exist
func initSchema(db *sql.DB) error {
schema := `
CREATE TABLE IF NOT EXISTS api_keys (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT NOT NULL UNIQUE,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS project_mappings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
repository_name TEXT NOT NULL,
default_job TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
UNIQUE(repository_name)
);
CREATE TABLE IF NOT EXISTS branch_jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id INTEGER NOT NULL,
branch_name TEXT NOT NULL,
job_name TEXT NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (project_id) REFERENCES project_mappings(id) ON DELETE CASCADE,
UNIQUE(project_id, branch_name)
);
CREATE TABLE IF NOT EXISTS branch_patterns (
id INTEGER PRIMARY KEY AUTOINCREMENT,
project_id INTEGER NOT NULL,
pattern TEXT NOT NULL,
job_name TEXT NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (project_id) REFERENCES project_mappings(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS trigger_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
repository_name TEXT NOT NULL,
branch_name TEXT NOT NULL,
commit_sha TEXT NOT NULL,
job_name TEXT NOT NULL,
status TEXT NOT NULL,
error_message TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_trigger_logs_repo ON trigger_logs(repository_name);
CREATE INDEX IF NOT EXISTS idx_trigger_logs_branch ON trigger_logs(branch_name);
CREATE INDEX IF NOT EXISTS idx_trigger_logs_created ON trigger_logs(created_at);
`
_, err := db.Exec(schema)
return err
}
// Close closes the database connection
func (db *DB) Close() error {
return db.DB.Close()
}


@ -0,0 +1,358 @@
package database
import (
"database/sql"
"time"
)
// APIKey represents an API key record
type APIKey struct {
ID int64
Key string
Description string
CreatedAt time.Time
UpdatedAt time.Time
}
// ProjectMapping represents a project to Jenkins job mapping
type ProjectMapping struct {
ID int64
RepositoryName string
DefaultJob string
BranchJobs []BranchJob
BranchPatterns []BranchPattern
CreatedAt time.Time
UpdatedAt time.Time
}
// BranchJob represents a branch to job mapping
type BranchJob struct {
ID int64
ProjectID int64
BranchName string
JobName string
CreatedAt time.Time
UpdatedAt time.Time
}
// BranchPattern represents a branch pattern to job mapping
type BranchPattern struct {
ID int64
ProjectID int64
Pattern string
JobName string
CreatedAt time.Time
UpdatedAt time.Time
}
// TriggerLog represents a job trigger log entry
type TriggerLog struct {
ID int64
RepositoryName string
BranchName string
CommitSHA string
JobName string
Status string
ErrorMessage string
CreatedAt time.Time
}
// CreateAPIKey creates a new API key
func (db *DB) CreateAPIKey(key *APIKey) error {
query := `
INSERT INTO api_keys (key, description)
VALUES (?, ?)`
result, err := db.Exec(query, key.Key, key.Description)
if err != nil {
return err
}
key.ID, _ = result.LastInsertId()
return nil
}
// GetAPIKey retrieves an API key by its value
func (db *DB) GetAPIKey(key string) (*APIKey, error) {
var apiKey APIKey
query := `
SELECT id, key, description, created_at, updated_at
FROM api_keys
WHERE key = ?`
err := db.QueryRow(query, key).Scan(
&apiKey.ID,
&apiKey.Key,
&apiKey.Description,
&apiKey.CreatedAt,
&apiKey.UpdatedAt,
)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
return &apiKey, nil
}
// DeleteAPIKey deletes an API key by its value
func (db *DB) DeleteAPIKey(key string) error {
query := `DELETE FROM api_keys WHERE key = ?`
result, err := db.Exec(query, key)
if err != nil {
return err
}
affected, err := result.RowsAffected()
if err != nil {
return err
}
if affected == 0 {
return sql.ErrNoRows
}
return nil
}
// GetAPIKeys retrieves all API keys
func (db *DB) GetAPIKeys() ([]APIKey, error) {
query := `
SELECT id, key, description, created_at, updated_at
FROM api_keys
ORDER BY created_at DESC`
rows, err := db.Query(query)
if err != nil {
return nil, err
}
defer rows.Close()
var keys []APIKey
for rows.Next() {
var key APIKey
err := rows.Scan(
&key.ID,
&key.Key,
&key.Description,
&key.CreatedAt,
&key.UpdatedAt,
)
if err != nil {
return nil, err
}
keys = append(keys, key)
}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return keys, nil
}
// CreateProjectMapping creates a new project mapping
func (db *DB) CreateProjectMapping(project *ProjectMapping) error {
tx, err := db.Begin()
if err != nil {
return err
}
defer tx.Rollback()
// Insert project mapping
query := `
INSERT INTO project_mappings (repository_name, default_job)
VALUES (?, ?)`
result, err := tx.Exec(query, project.RepositoryName, project.DefaultJob)
if err != nil {
return err
}
projectID, _ := result.LastInsertId()
project.ID = projectID
// Insert branch jobs
for _, job := range project.BranchJobs {
query = `
INSERT INTO branch_jobs (project_id, branch_name, job_name)
VALUES (?, ?, ?)`
_, err = tx.Exec(query, projectID, job.BranchName, job.JobName)
if err != nil {
return err
}
}
// Insert branch patterns
for _, pattern := range project.BranchPatterns {
query = `
INSERT INTO branch_patterns (project_id, pattern, job_name)
VALUES (?, ?, ?)`
_, err = tx.Exec(query, projectID, pattern.Pattern, pattern.JobName)
if err != nil {
return err
}
}
return tx.Commit()
}
// GetProjectMapping retrieves a project mapping by repository name
func (db *DB) GetProjectMapping(repoName string) (*ProjectMapping, error) {
var project ProjectMapping
// Get project mapping
query := `
SELECT id, repository_name, default_job, created_at, updated_at
FROM project_mappings
WHERE repository_name = ?`
err := db.QueryRow(query, repoName).Scan(
&project.ID,
&project.RepositoryName,
&project.DefaultJob,
&project.CreatedAt,
&project.UpdatedAt,
)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
// Get branch jobs
query = `
SELECT id, branch_name, job_name, created_at, updated_at
FROM branch_jobs
WHERE project_id = ?`
rows, err := db.Query(query, project.ID)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var job BranchJob
err := rows.Scan(
&job.ID,
&job.BranchName,
&job.JobName,
&job.CreatedAt,
&job.UpdatedAt,
)
if err != nil {
return nil, err
}
project.BranchJobs = append(project.BranchJobs, job)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
// Get branch patterns
query = `
SELECT id, pattern, job_name, created_at, updated_at
FROM branch_patterns
WHERE project_id = ?`
rows, err = db.Query(query, project.ID)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var pattern BranchPattern
err := rows.Scan(
&pattern.ID,
&pattern.Pattern,
&pattern.JobName,
&pattern.CreatedAt,
&pattern.UpdatedAt,
)
if err != nil {
return nil, err
}
project.BranchPatterns = append(project.BranchPatterns, pattern)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
return &project, nil
}
// LogTrigger logs a job trigger event
func (db *DB) LogTrigger(log *TriggerLog) error {
query := `
INSERT INTO trigger_logs (repository_name, branch_name, commit_sha, job_name, status, error_message)
VALUES (?, ?, ?, ?, ?, ?)`
result, err := db.Exec(query,
log.RepositoryName,
log.BranchName,
log.CommitSHA,
log.JobName,
log.Status,
log.ErrorMessage,
)
if err != nil {
return err
}
log.ID, _ = result.LastInsertId()
return nil
}
// GetTriggerLogs retrieves trigger logs with optional filters
func (db *DB) GetTriggerLogs(repoName, branchName string, since time.Time, limit int) ([]TriggerLog, error) {
query := `
SELECT id, repository_name, branch_name, commit_sha, job_name, status, error_message, created_at
FROM trigger_logs
WHERE 1=1`
args := []interface{}{}
if repoName != "" {
query += " AND repository_name = ?"
args = append(args, repoName)
}
if branchName != "" {
query += " AND branch_name = ?"
args = append(args, branchName)
}
if !since.IsZero() {
query += " AND created_at >= ?"
args = append(args, since)
}
query += " ORDER BY created_at DESC"
if limit > 0 {
query += " LIMIT ?"
args = append(args, limit)
}
rows, err := db.Query(query, args...)
if err != nil {
return nil, err
}
defer rows.Close()
var logs []TriggerLog
for rows.Next() {
var log TriggerLog
err := rows.Scan(
&log.ID,
&log.RepositoryName,
&log.BranchName,
&log.CommitSHA,
&log.JobName,
&log.Status,
&log.ErrorMessage,
&log.CreatedAt,
)
if err != nil {
return nil, err
}
logs = append(logs, log)
}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return logs, nil
}
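`GetTriggerLogs` grows the SQL text and the placeholder args in lockstep using the `WHERE 1=1` idiom. Factoring that pairing into a helper keeps it unit-testable; a sketch with hypothetical names, not part of this package:

```go
package main

import (
	"fmt"
	"strings"
)

// buildFilter appends "AND col = ?" for each non-empty filter, keeping the
// SQL text and the placeholder args in lockstep. cols fixes the iteration
// order, since ranging over a map is non-deterministic.
func buildFilter(base string, filters map[string]string, cols []string) (string, []interface{}) {
	var b strings.Builder
	b.WriteString(base)
	args := []interface{}{}
	for _, col := range cols {
		if v := filters[col]; v != "" {
			fmt.Fprintf(&b, " AND %s = ?", col)
			args = append(args, v)
		}
	}
	return b.String(), args
}

func main() {
	q, args := buildFilter(
		"SELECT * FROM trigger_logs WHERE 1=1",
		map[string]string{"repository_name": "org/repo", "branch_name": ""},
		[]string{"repository_name", "branch_name"},
	)
	fmt.Println(q)         // → SELECT * FROM trigger_logs WHERE 1=1 AND repository_name = ?
	fmt.Println(len(args)) // → 1
}
```

Note the column names come from the fixed `cols` slice, never from user input, so the string concatenation cannot inject SQL; values always travel as `?` placeholders.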


@ -0,0 +1,150 @@
package handler
import (
"encoding/json"
"net/http"
"freeleaps.com/gitea-webhook-ambassador/internal/auth"
"freeleaps.com/gitea-webhook-ambassador/internal/config"
"freeleaps.com/gitea-webhook-ambassador/internal/database"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
)
// AdminHandler handles administrative API endpoints
type AdminHandler struct {
db *database.DB
config *config.Configuration
auth *auth.Middleware
}
// NewAdminHandler creates a new admin handler
func NewAdminHandler(db *database.DB, config *config.Configuration) *AdminHandler {
return &AdminHandler{
db: db,
config: config,
auth: auth.NewMiddleware(config.Server.SecretKey),
}
}
// CreateAPIKeyRequest represents a request to create a new API key
type CreateAPIKeyRequest struct {
Key string `json:"key"`
Description string `json:"description"`
}
// APIKeyResponse represents an API key response
type APIKeyResponse struct {
ID int64 `json:"id"`
Key string `json:"key"`
Description string `json:"description"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
}
// verifyAuth verifies the JWT token in the request
func (h *AdminHandler) verifyAuth(r *http.Request) error {
return h.auth.VerifyToken(r)
}
// HandleCreateAPIKey handles the creation of new API keys
func (h *AdminHandler) HandleCreateAPIKey(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Parse request
var req CreateAPIKeyRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
// Validate request
if req.Key == "" {
http.Error(w, "API key is required", http.StatusBadRequest)
return
}
// Create API key
apiKey := &database.APIKey{
Key: req.Key,
Description: req.Description,
}
	if err := h.db.CreateAPIKey(apiKey); err != nil {
		logger.Error("Failed to create API key: %v", err)
		http.Error(w, "Internal server error", http.StatusInternalServerError)
		return
	}
	// Re-read the record so the response carries the DB-assigned timestamps
	// instead of zero values.
	if stored, err := h.db.GetAPIKey(apiKey.Key); err == nil && stored != nil {
		apiKey = stored
	}
	// Return response
	response := APIKeyResponse{
		ID:          apiKey.ID,
		Key:         apiKey.Key,
		Description: apiKey.Description,
		CreatedAt:   apiKey.CreatedAt.Format("2006-01-02T15:04:05Z07:00"),
		UpdatedAt:   apiKey.UpdatedAt.Format("2006-01-02T15:04:05Z07:00"),
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(response)
}
// HandleDeleteAPIKey handles the deletion of API keys
func (h *AdminHandler) HandleDeleteAPIKey(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Get key from URL
key := r.URL.Query().Get("key")
if key == "" {
http.Error(w, "API key is required", http.StatusBadRequest)
return
}
// Delete API key
if err := h.db.DeleteAPIKey(key); err != nil {
logger.Error("Failed to delete API key: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusNoContent)
}
// HandleListAPIKeys handles listing all API keys
func (h *AdminHandler) HandleListAPIKeys(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Get API keys
apiKeys, err := h.db.GetAPIKeys()
if err != nil {
logger.Error("Failed to get API keys: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Convert to response format
response := make([]APIKeyResponse, len(apiKeys))
for i, key := range apiKeys {
response[i] = APIKeyResponse{
ID: key.ID,
Key: key.Key,
Description: key.Description,
CreatedAt: key.CreatedAt.Format("2006-01-02T15:04:05Z07:00"),
UpdatedAt: key.UpdatedAt.Format("2006-01-02T15:04:05Z07:00"),
}
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}


@ -0,0 +1,85 @@
package handler
import (
"encoding/json"
"net/http"
"freeleaps.com/gitea-webhook-ambassador/internal/auth"
"freeleaps.com/gitea-webhook-ambassador/internal/config"
"freeleaps.com/gitea-webhook-ambassador/internal/worker"
)
// HealthHandler handles health check requests
type HealthHandler struct {
workerPool *worker.Pool
config *config.Configuration
auth *auth.Middleware
}
// NewHealthHandler creates a new health check handler
func NewHealthHandler(workerPool *worker.Pool, config *config.Configuration) *HealthHandler {
return &HealthHandler{
workerPool: workerPool,
config: config,
auth: auth.NewMiddleware(config.Server.SecretKey),
}
}
// HealthResponse represents the health check response
type HealthResponse struct {
Status string `json:"status"`
Jenkins struct {
Status string `json:"status"`
Message string `json:"message,omitempty"`
} `json:"jenkins"`
WorkerPool struct {
ActiveWorkers int `json:"active_workers"`
QueueSize int `json:"queue_size"`
} `json:"worker_pool"`
}
// verifyAuth verifies the JWT token in the request
func (h *HealthHandler) verifyAuth(r *http.Request) error {
return h.auth.VerifyToken(r)
}
// HandleHealth handles health check requests
func (h *HealthHandler) HandleHealth(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
response := HealthResponse{}
// Check Jenkins connection
if h.workerPool.IsJenkinsConnected() {
response.Jenkins.Status = "connected"
} else {
response.Jenkins.Status = "disconnected"
response.Jenkins.Message = "Unable to connect to Jenkins server"
}
// Get worker pool stats
stats := h.workerPool.GetStats()
response.WorkerPool.ActiveWorkers = stats.ActiveWorkers
response.WorkerPool.QueueSize = stats.QueueSize
	// Set overall status
	if response.Jenkins.Status == "connected" {
		response.Status = "healthy"
	} else {
		response.Status = "unhealthy"
	}
	w.Header().Set("Content-Type", "application/json")
	// Return 503 when a dependency is down so orchestrators and load
	// balancers can act on the health check.
	if response.Status == "healthy" {
		w.WriteHeader(http.StatusOK)
	} else {
		w.WriteHeader(http.StatusServiceUnavailable)
	}
	// The status line is already written, so an encode failure here cannot
	// be reported via http.Error; ignore it.
	_ = json.NewEncoder(w).Encode(response)
}


@ -0,0 +1,115 @@
package handler
import (
"encoding/json"
"net/http"
"strconv"
"time"
"freeleaps.com/gitea-webhook-ambassador/internal/auth"
"freeleaps.com/gitea-webhook-ambassador/internal/config"
"freeleaps.com/gitea-webhook-ambassador/internal/database"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
)
// LogsHandler handles trigger logs API endpoints
type LogsHandler struct {
db *database.DB
config *config.Configuration
auth *auth.Middleware
}
// NewLogsHandler creates a new logs handler
func NewLogsHandler(db *database.DB, config *config.Configuration) *LogsHandler {
return &LogsHandler{
db: db,
config: config,
auth: auth.NewMiddleware(config.Server.SecretKey),
}
}
// TriggerLogResponse represents a trigger log response
type TriggerLogResponse struct {
ID int64 `json:"id"`
RepositoryName string `json:"repository_name"`
BranchName string `json:"branch_name"`
CommitSHA string `json:"commit_sha"`
JobName string `json:"job_name"`
Status string `json:"status"`
ErrorMessage string `json:"error_message,omitempty"`
CreatedAt time.Time `json:"created_at"`
}
// verifyAuth verifies the JWT token in the request
func (h *LogsHandler) verifyAuth(r *http.Request) error {
return h.auth.VerifyToken(r)
}
// HandleGetTriggerLogs handles retrieving trigger logs
func (h *LogsHandler) HandleGetTriggerLogs(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Get query parameters
repoName := r.URL.Query().Get("repository")
branchName := r.URL.Query().Get("branch")
sinceStr := r.URL.Query().Get("since")
limitStr := r.URL.Query().Get("limit")
// Parse since parameter
var since time.Time
var err error
if sinceStr != "" {
since, err = time.Parse(time.RFC3339, sinceStr)
if err != nil {
http.Error(w, "Invalid since parameter format (use RFC3339)", http.StatusBadRequest)
return
}
}
// Parse limit parameter
limit := 100 // default limit
if limitStr != "" {
limit, err = strconv.Atoi(limitStr)
if err != nil {
http.Error(w, "Invalid limit parameter", http.StatusBadRequest)
return
}
if limit <= 0 {
http.Error(w, "Limit must be greater than 0", http.StatusBadRequest)
return
}
if limit > 1000 {
limit = 1000 // maximum limit
}
}
// Get trigger logs
logs, err := h.db.GetTriggerLogs(repoName, branchName, since, limit)
if err != nil {
logger.Error("Failed to get trigger logs: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Convert to response format
response := make([]TriggerLogResponse, len(logs))
for i, log := range logs {
response[i] = TriggerLogResponse{
ID: log.ID,
RepositoryName: log.RepositoryName,
BranchName: log.BranchName,
CommitSHA: log.CommitSHA,
JobName: log.JobName,
Status: log.Status,
ErrorMessage: log.ErrorMessage,
CreatedAt: log.CreatedAt,
}
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
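The limit handling above (default 100, reject non-positive values, clamp at 1000) is a small policy worth isolating so it can be tested without an HTTP request. A sketch with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseLimit mirrors the handler's rules: empty means the default,
// non-numeric or non-positive values are rejected, and large values are
// clamped to a ceiling.
func parseLimit(s string, def, max int) (int, error) {
	if s == "" {
		return def, nil
	}
	n, err := strconv.Atoi(s)
	if err != nil || n <= 0 {
		return 0, fmt.Errorf("invalid limit %q", s)
	}
	if n > max {
		return max, nil
	}
	return n, nil
}

func main() {
	for _, s := range []string{"", "50", "5000", "-1"} {
		n, err := parseLimit(s, 100, 1000)
		fmt.Println(n, err != nil)
	}
	// → 100 false
	// → 50 false
	// → 1000 false
	// → 0 true
}
```

The handler can then reduce to a single call plus one `http.Error` branch.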


@ -0,0 +1,194 @@
package handler
import (
"encoding/json"
"net/http"
"time"
"freeleaps.com/gitea-webhook-ambassador/internal/auth"
"freeleaps.com/gitea-webhook-ambassador/internal/config"
"freeleaps.com/gitea-webhook-ambassador/internal/database"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
)
// ProjectHandler handles project mapping API endpoints
type ProjectHandler struct {
db *database.DB
config *config.Configuration
auth *auth.Middleware
}
// NewProjectHandler creates a new project handler
func NewProjectHandler(db *database.DB, config *config.Configuration) *ProjectHandler {
return &ProjectHandler{
db: db,
config: config,
auth: auth.NewMiddleware(config.Server.SecretKey),
}
}
// ProjectMappingRequest represents a request to create/update a project mapping
type ProjectMappingRequest struct {
RepositoryName string `json:"repository_name"`
DefaultJob string `json:"default_job"`
BranchJobs []struct {
BranchName string `json:"branch_name"`
JobName string `json:"job_name"`
} `json:"branch_jobs"`
BranchPatterns []struct {
Pattern string `json:"pattern"`
JobName string `json:"job_name"`
} `json:"branch_patterns"`
}
// ProjectMappingResponse represents a project mapping response
type ProjectMappingResponse struct {
ID int64 `json:"id"`
RepositoryName string `json:"repository_name"`
DefaultJob string `json:"default_job"`
BranchJobs []struct {
ID int64 `json:"id"`
BranchName string `json:"branch_name"`
JobName string `json:"job_name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
} `json:"branch_jobs"`
BranchPatterns []struct {
ID int64 `json:"id"`
Pattern string `json:"pattern"`
JobName string `json:"job_name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
} `json:"branch_patterns"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// verifyAuth verifies the JWT token in the request
func (h *ProjectHandler) verifyAuth(r *http.Request) error {
return h.auth.VerifyToken(r)
}
// HandleCreateProjectMapping handles the creation of project mappings
func (h *ProjectHandler) HandleCreateProjectMapping(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Parse request
var req ProjectMappingRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
// Validate request
if req.RepositoryName == "" {
http.Error(w, "Repository name is required", http.StatusBadRequest)
return
}
// Create project mapping
project := &database.ProjectMapping{
RepositoryName: req.RepositoryName,
DefaultJob: req.DefaultJob,
}
// Add branch jobs
for _, job := range req.BranchJobs {
project.BranchJobs = append(project.BranchJobs, database.BranchJob{
BranchName: job.BranchName,
JobName: job.JobName,
})
}
// Add branch patterns
for _, pattern := range req.BranchPatterns {
project.BranchPatterns = append(project.BranchPatterns, database.BranchPattern{
Pattern: pattern.Pattern,
JobName: pattern.JobName,
})
}
if err := h.db.CreateProjectMapping(project); err != nil {
logger.Error("Failed to create project mapping: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusCreated)
}
// HandleGetProjectMapping handles retrieving project mappings
func (h *ProjectHandler) HandleGetProjectMapping(w http.ResponseWriter, r *http.Request) {
// Verify JWT token
if err := h.verifyAuth(r); err != nil {
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
// Get repository name from URL
repoName := r.URL.Query().Get("repository")
if repoName == "" {
http.Error(w, "Repository name is required", http.StatusBadRequest)
return
}
// Get project mapping
project, err := h.db.GetProjectMapping(repoName)
if err != nil {
logger.Error("Failed to get project mapping: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
if project == nil {
http.Error(w, "Project mapping not found", http.StatusNotFound)
return
}
// Convert to response format
response := ProjectMappingResponse{
ID: project.ID,
RepositoryName: project.RepositoryName,
DefaultJob: project.DefaultJob,
CreatedAt: project.CreatedAt,
UpdatedAt: project.UpdatedAt,
}
for _, job := range project.BranchJobs {
response.BranchJobs = append(response.BranchJobs, struct {
ID int64 `json:"id"`
BranchName string `json:"branch_name"`
JobName string `json:"job_name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}{
ID: job.ID,
BranchName: job.BranchName,
JobName: job.JobName,
CreatedAt: job.CreatedAt,
UpdatedAt: job.UpdatedAt,
})
}
for _, pattern := range project.BranchPatterns {
response.BranchPatterns = append(response.BranchPatterns, struct {
ID int64 `json:"id"`
Pattern string `json:"pattern"`
JobName string `json:"job_name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}{
ID: pattern.ID,
Pattern: pattern.Pattern,
JobName: pattern.JobName,
CreatedAt: pattern.CreatedAt,
UpdatedAt: pattern.UpdatedAt,
})
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}


@ -0,0 +1,157 @@
package handler
import (
"encoding/json"
"io"
"net/http"
"regexp"
"freeleaps.com/gitea-webhook-ambassador/internal/config"
"freeleaps.com/gitea-webhook-ambassador/internal/database"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
"freeleaps.com/gitea-webhook-ambassador/internal/model"
"freeleaps.com/gitea-webhook-ambassador/internal/worker"
)
// WebhookHandler handles incoming Gitea webhooks
type WebhookHandler struct {
workerPool *worker.Pool
db *database.DB
config *config.Configuration
}
// NewWebhookHandler creates a new webhook handler
func NewWebhookHandler(workerPool *worker.Pool, db *database.DB, config *config.Configuration) *WebhookHandler {
return &WebhookHandler{
workerPool: workerPool,
db: db,
config: config,
}
}
// HandleWebhook processes incoming webhook requests
func (h *WebhookHandler) HandleWebhook(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
	// Compare the shared secret Gitea sends in the configured header against
	// the configured value. Note this is a plain token comparison, not an
	// HMAC signature check of the payload body.
	secretHeader := h.config.Server.SecretHeader
	serverSecretKey := h.config.Server.SecretKey
	receivedSecretKey := r.Header.Get(secretHeader)
if receivedSecretKey == "" {
http.Error(w, "No secret key provided", http.StatusUnauthorized)
logger.Warn("No secret key provided in header")
return
}
if receivedSecretKey != serverSecretKey {
http.Error(w, "Invalid secret key", http.StatusUnauthorized)
logger.Warn("Invalid secret key provided")
return
}
// Read and parse the webhook payload
body, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, "Failed to read request body", http.StatusInternalServerError)
logger.Error("Failed to read webhook body: %v", err)
return
}
defer r.Body.Close()
var webhook model.GiteaWebhook
if err := json.Unmarshal(body, &webhook); err != nil {
http.Error(w, "Failed to parse webhook payload", http.StatusBadRequest)
logger.Error("Failed to parse webhook payload: %v", err)
return
}
// Get project mapping from database
project, err := h.db.GetProjectMapping(webhook.Repository.FullName)
if err != nil {
logger.Error("Failed to get project mapping: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
if project == nil {
logger.Info("No Jenkins job mapping for repository: %s", webhook.Repository.FullName)
w.WriteHeader(http.StatusOK) // Still return OK to not alarm Gitea
return
}
// Extract branch name from ref
branchName := webhook.GetBranchName()
// Determine which job to trigger based on branch name
jobName := h.determineJobName(project, branchName)
if jobName == "" {
logger.Info("No job configured to trigger for repository %s, branch %s",
webhook.Repository.FullName, branchName)
w.WriteHeader(http.StatusOK)
return
}
// Prepare parameters for Jenkins job
params := map[string]string{
"BRANCH_NAME": branchName,
"COMMIT_SHA": webhook.After,
"REPOSITORY_URL": webhook.Repository.CloneURL,
"REPOSITORY_NAME": webhook.Repository.FullName,
"PUSHER_NAME": webhook.Pusher.Login,
"PUSHER_EMAIL": webhook.Pusher.Email,
}
// Submit the job to the worker pool
job := worker.Job{
Name: jobName,
Parameters: params,
EventID: webhook.GetEventID(),
RepositoryName: webhook.Repository.FullName,
BranchName: branchName,
CommitSHA: webhook.After,
Attempts: 0,
}
if h.workerPool.Submit(job) {
logger.Info("Webhook received and queued for repository %s, branch %s, commit %s, job %s",
webhook.Repository.FullName, branchName, webhook.After, jobName)
w.WriteHeader(http.StatusAccepted)
} else {
logger.Warn("Failed to queue webhook: queue full")
http.Error(w, "Server busy, try again later", http.StatusServiceUnavailable)
}
}
// determineJobName selects the appropriate Jenkins job to trigger based on branch name
func (h *WebhookHandler) determineJobName(project *database.ProjectMapping, branchName string) string {
// First check for exact branch match
for _, job := range project.BranchJobs {
if job.BranchName == branchName {
logger.Debug("Found exact branch match for %s: job %s", branchName, job.JobName)
return job.JobName
}
}
// Then check for pattern-based matches
for _, pattern := range project.BranchPatterns {
matched, err := regexp.MatchString(pattern.Pattern, branchName)
if err != nil {
logger.Error("Error matching branch pattern %s: %v", pattern.Pattern, err)
continue
}
if matched {
logger.Debug("Branch %s matched pattern %s: job %s", branchName, pattern.Pattern, pattern.JobName)
return pattern.JobName
}
}
// Fall back to default job if available
if project.DefaultJob != "" {
logger.Debug("Using default job for branch %s: job %s", branchName, project.DefaultJob)
return project.DefaultJob
}
return ""
}
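The resolution order above (exact branch match, then first matching pattern, then the default job) can be condensed into a standalone function for testing. Note that, like the handler, this uses `regexp.MatchString`, which matches anywhere in the string unless the pattern is anchored with `^`/`$`. A sketch with hypothetical job names:

```go
package main

import (
	"fmt"
	"regexp"
)

// resolveJob mirrors the three-tier lookup: exact branch match, then regex
// patterns in order, then the default job ("" when nothing applies).
func resolveJob(exact map[string]string, patterns [][2]string, def, branch string) string {
	if job, ok := exact[branch]; ok {
		return job
	}
	for _, p := range patterns {
		if ok, err := regexp.MatchString(p[0], branch); err == nil && ok {
			return p[1]
		}
	}
	return def
}

func main() {
	exact := map[string]string{"main": "deploy-prod"}
	patterns := [][2]string{{`^release/`, "deploy-staging"}}
	fmt.Println(resolveJob(exact, patterns, "build-dev", "main"))        // → deploy-prod
	fmt.Println(resolveJob(exact, patterns, "build-dev", "release/1.2")) // → deploy-staging
	fmt.Println(resolveJob(exact, patterns, "build-dev", "feature/x"))   // → build-dev
}
```

If the pattern list grows, pre-compiling the regexes once at config load avoids recompiling on every webhook.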


@ -0,0 +1,101 @@
package jenkins
import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
	"freeleaps.com/gitea-webhook-ambassador/internal/logger"
)
// Client represents a Jenkins API client
type Client struct {
	url      string
	username string
	token    string
	client   *http.Client
}
// Config represents Jenkins client configuration
type Config struct {
	URL      string
	Username string
	Token    string
	Timeout  time.Duration
}
// JobParameters represents parameters to pass to a Jenkins job
type JobParameters map[string]string
// New creates a new Jenkins client
func New(config Config) *Client {
	return &Client{
		url:      config.URL,
		username: config.Username,
		token:    config.Token,
		client: &http.Client{
			Timeout: config.Timeout,
		},
	}
}
// TriggerJob triggers a Jenkins job with the given parameters.
// Jenkins's buildWithParameters endpoint reads parameters from the query
// string or a form body, not from a JSON payload.
func (c *Client) TriggerJob(jobName string, parameters map[string]string) error {
	// Encode parameters into the query string
	form := url.Values{}
	for name, value := range parameters {
		form.Set(name, value)
	}
	endpoint := fmt.Sprintf("%s/job/%s/buildWithParameters?%s", c.url, jobName, form.Encode())
	// Create request
	req, err := http.NewRequest(http.MethodPost, endpoint, nil)
	if err != nil {
		return fmt.Errorf("failed to create request: %v", err)
	}
	req.SetBasicAuth(c.username, c.token)
	// Send request
	resp, err := c.client.Do(req)
	if err != nil {
		return fmt.Errorf("failed to send request: %v", err)
	}
	defer resp.Body.Close()
	// Jenkins answers 201 Created (with a queue-item Location header) on
	// success; accept 200 as well for proxied setups.
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("unexpected status code: %d, body: %s", resp.StatusCode, string(body))
	}
	return nil
}
// IsConnected checks if Jenkins is accessible
func (c *Client) IsConnected() bool {
// Try to access Jenkins API
url := fmt.Sprintf("%s/api/json", c.url)
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
logger.Error("Failed to create Jenkins request: %v", err)
return false
}
req.SetBasicAuth(c.username, c.token)
resp, err := c.client.Do(req)
if err != nil {
logger.Error("Failed to connect to Jenkins: %v", err)
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusOK
}

View File

@@ -0,0 +1,163 @@
package logger
import (
"encoding/json"
"fmt"
"io"
"log"
"os"
"strings"
"sync"
"time"
)
type Level int
const (
DEBUG Level = iota
INFO
WARN
ERROR
)
var levelStrings = map[Level]string{
DEBUG: "DEBUG",
INFO: "INFO",
WARN: "WARN",
ERROR: "ERROR",
}
var stringToLevel = map[string]Level{
"debug": DEBUG,
"info": INFO,
"warn": WARN,
"error": ERROR,
}
type Logger struct {
logger *log.Logger
level Level
format string
mu sync.RWMutex
jsonWriter *jsonLogWriter
}
type jsonLogWriter struct {
out io.Writer
}
type Config struct {
Level string
Format string
File string
}
var defaultLogger *Logger
func init() {
defaultLogger = New(Config{
Level: "info",
Format: "text",
})
}
func New(config Config) *Logger {
	var output io.Writer = os.Stdout
	if config.File != "" {
		file, err := os.OpenFile(config.File, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
		if err == nil {
			output = io.MultiWriter(file, os.Stdout)
		} else {
			fmt.Fprintf(os.Stderr, "logger: failed to open %s, falling back to stdout: %v\n", config.File, err)
		}
	}
	level, ok := stringToLevel[strings.ToLower(config.Level)]
	if !ok {
		level = INFO // unknown levels fall back to INFO rather than DEBUG
	}
	format := strings.ToLower(config.Format)
	l := &Logger{
		level:  level,
		format: format,
	}
	if format == "json" {
		// log() already emits complete JSON entries; routing them through
		// jsonLogWriter would wrap each entry in a second JSON envelope.
		l.logger = log.New(output, "", 0)
	} else {
		// log.Lshortfile would report this wrapper's file and line,
		// not the caller's, so only timestamps are added here.
		l.logger = log.New(output, "", log.LstdFlags)
	}
	return l
}
func (w *jsonLogWriter) Write(p []byte) (n int, err error) {
entry := map[string]interface{}{
"timestamp": time.Now().Format(time.RFC3339),
"message": strings.TrimSpace(string(p)),
}
jsonData, err := json.Marshal(entry)
if err != nil {
return 0, err
}
return w.out.Write(append(jsonData, '\n'))
}
func (l *Logger) log(level Level, format string, v ...interface{}) {
l.mu.RLock()
defer l.mu.RUnlock()
if level < l.level {
return
}
msg := fmt.Sprintf(format, v...)
if l.format == "json" {
entry := map[string]interface{}{
"timestamp": time.Now().Format(time.RFC3339),
"level": levelStrings[level],
"message": msg,
}
jsonData, _ := json.Marshal(entry)
l.logger.Print(string(jsonData))
} else {
l.logger.Printf("[%s] %s", levelStrings[level], msg)
}
}
func (l *Logger) Debug(format string, v ...interface{}) {
l.log(DEBUG, format, v...)
}
func (l *Logger) Info(format string, v ...interface{}) {
l.log(INFO, format, v...)
}
func (l *Logger) Warn(format string, v ...interface{}) {
l.log(WARN, format, v...)
}
func (l *Logger) Error(format string, v ...interface{}) {
l.log(ERROR, format, v...)
}
// Global logger functions
func Debug(format string, v ...interface{}) {
defaultLogger.Debug(format, v...)
}
func Info(format string, v ...interface{}) {
defaultLogger.Info(format, v...)
}
func Warn(format string, v ...interface{}) {
defaultLogger.Warn(format, v...)
}
func Error(format string, v ...interface{}) {
defaultLogger.Error(format, v...)
}
func Configure(config Config) {
defaultLogger = New(config)
}

View File

@@ -0,0 +1,57 @@
package model
// GiteaWebhook represents the webhook payload from Gitea
type GiteaWebhook struct {
Secret string `json:"secret"`
Ref string `json:"ref"`
Before string `json:"before"`
After string `json:"after"`
CompareURL string `json:"compare_url"`
Commits []Commit `json:"commits"`
Repository Repository `json:"repository"`
Pusher User `json:"pusher"`
}
// Commit represents a Git commit in the webhook payload
type Commit struct {
ID string `json:"id"`
Message string `json:"message"`
URL string `json:"url"`
Author User `json:"author"`
}
// Repository represents a Git repository in the webhook payload
type Repository struct {
ID int `json:"id"`
Name string `json:"name"`
Owner User `json:"owner"`
FullName string `json:"full_name"`
Private bool `json:"private"`
CloneURL string `json:"clone_url"`
SSHURL string `json:"ssh_url"`
HTMLURL string `json:"html_url"`
DefaultBranch string `json:"default_branch"`
}
// User represents a Gitea user in the webhook payload
type User struct {
ID int `json:"id"`
Login string `json:"login"`
FullName string `json:"full_name"`
Email string `json:"email,omitempty"`
Username string `json:"username,omitempty"`
}
// GetBranchName extracts the branch name from the ref
func (w *GiteaWebhook) GetBranchName() string {
	const prefix = "refs/heads/"
	// Strip the prefix only when the ref is actually a branch ref;
	// tag refs such as refs/tags/v1.0.0 are returned unchanged.
	if len(w.Ref) >= len(prefix) && w.Ref[:len(prefix)] == prefix {
		return w.Ref[len(prefix):]
	}
	return w.Ref
}
// GetEventID generates a unique event ID for the webhook
func (w *GiteaWebhook) GetEventID() string {
return w.Repository.FullName + "-" + w.After
}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,83 @@
.login-container {
height: 100vh;
display: flex;
align-items: center;
justify-content: center;
background-color: #f8f9fa;
}
.login-form {
width: 100%;
max-width: 330px;
padding: 15px;
margin: auto;
}
.sidebar {
position: fixed;
top: 0;
bottom: 0;
left: 0;
z-index: 100;
padding: 48px 0 0;
box-shadow: inset -1px 0 0 rgba(0, 0, 0, .1);
}
.sidebar-sticky {
position: relative;
top: 0;
height: calc(100vh - 48px);
padding-top: .5rem;
overflow-x: hidden;
overflow-y: auto;
}
.navbar-brand {
padding-top: .75rem;
padding-bottom: .75rem;
font-size: 1rem;
background-color: rgba(0, 0, 0, .25);
box-shadow: inset -1px 0 0 rgba(0, 0, 0, .25);
}
.navbar .navbar-toggler {
top: .25rem;
right: 1rem;
}
.main-content {
padding-top: 48px;
}
.card {
margin-bottom: 1rem;
}
.health-indicator {
width: 10px;
height: 10px;
border-radius: 50%;
display: inline-block;
margin-right: 5px;
}
.health-indicator.healthy {
background-color: #28a745;
}
.health-indicator.unhealthy {
background-color: #dc3545;
}
.log-entry {
font-family: monospace;
white-space: pre-wrap;
font-size: 0.9rem;
}
.api-key {
font-family: monospace;
background-color: #f8f9fa;
padding: 0.5rem;
border-radius: 0.25rem;
}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,267 @@
// Global variable to store the JWT token
let authToken = localStorage.getItem("auth_token");
$(document).ready(function () {
// Initialize tooltips
$('[data-bs-toggle="tooltip"]').tooltip();
// Set up AJAX defaults to include auth token
$.ajaxSetup({
beforeSend: function (xhr, settings) {
// Don't add auth header for login request
if (settings.url === "/api/auth/login") {
return;
}
if (authToken) {
xhr.setRequestHeader("Authorization", "Bearer " + authToken);
}
},
error: function (xhr, status, error) {
// If we get a 401, redirect to login
if (xhr.status === 401) {
localStorage.removeItem("auth_token");
window.location.href = "/login";
return;
}
handleAjaxError(xhr, status, error);
},
});
// Handle login form submission
$("#loginForm").on("submit", function (e) {
e.preventDefault();
const secretKey = $("#secret_key").val();
$("#loginError").hide();
$.ajax({
url: "/api/auth/login",
method: "POST",
contentType: "application/json",
data: JSON.stringify({ secret_key: secretKey }),
success: function (response) {
if (response && response.token) {
// Store token and redirect
localStorage.setItem("auth_token", response.token);
authToken = response.token;
window.location.href = "/dashboard";
} else {
$("#loginError").text("Invalid response from server").show();
}
},
error: function (xhr) {
console.error("Login error:", xhr);
if (xhr.responseJSON && xhr.responseJSON.error) {
$("#loginError").text(xhr.responseJSON.error).show();
} else {
$("#loginError").text("Login failed. Please try again.").show();
}
$("#secret_key").val("").focus();
},
});
});
// Only load dashboard data if we're on the dashboard page
if (window.location.pathname === "/dashboard") {
if (!authToken) {
window.location.href = "/login";
return;
}
// Load initial data
loadProjects();
loadAPIKeys();
loadLogs();
checkHealth();
// Set up periodic health check
setInterval(checkHealth, 30000);
}
// Project management
$("#addProjectForm").on("submit", function (e) {
e.preventDefault();
const projectData = {
name: $("#projectName").val(),
jenkinsJob: $("#jenkinsJob").val(),
giteaRepo: $("#giteaRepo").val(),
};
$.ajax({
url: "/api/projects",
method: "POST",
contentType: "application/json",
data: JSON.stringify(projectData),
success: function () {
$("#addProjectModal").modal("hide");
loadProjects();
},
error: handleAjaxError,
});
});
// API key management
$("#generateKeyForm").on("submit", function (e) {
e.preventDefault();
$.ajax({
url: "/api/keys",
method: "POST",
contentType: "application/json",
data: JSON.stringify({ description: $("#keyDescription").val() }),
success: function () {
$("#generateKeyModal").modal("hide");
loadAPIKeys();
},
error: handleAjaxError,
});
});
// Log querying
$("#logQueryForm").on("submit", function (e) {
e.preventDefault();
loadLogs({
startTime: $("#startTime").val(),
endTime: $("#endTime").val(),
level: $("#logLevel").val(),
query: $("#logQuery").val(),
});
});
});
function loadProjects() {
$.get("/api/projects")
.done(function (data) {
const tbody = $("#projectsTable tbody");
tbody.empty();
data.projects.forEach(function (project) {
tbody.append(`
<tr>
<td>${escapeHtml(project.name)}</td>
<td>${escapeHtml(project.jenkinsJob)}</td>
<td>${escapeHtml(project.giteaRepo)}</td>
</tr>
`);
});
})
.fail(handleAjaxError);
}
function loadAPIKeys() {
$.get("/api/keys")
.done(function (data) {
const tbody = $("#apiKeysTable tbody");
tbody.empty();
data.keys.forEach(function (key) {
tbody.append(`
<tr>
<td>${escapeHtml(key.description)}</td>
<td><code class="api-key">${escapeHtml(
key.value
)}</code></td>
<td>${new Date(key.created).toLocaleString()}</td>
<td>
<button class="btn btn-sm btn-danger" onclick="revokeKey('${
key.id
}')">
Revoke
</button>
</td>
</tr>
`);
});
})
.fail(handleAjaxError);
}
function loadLogs(query = {}) {
$.get("/api/logs", query)
.done(function (data) {
const logContainer = $("#logEntries");
logContainer.empty();
data.logs.forEach(function (log) {
const levelClass =
{
error: "text-danger",
warn: "text-warning",
info: "text-info",
debug: "text-secondary",
}[log.level] || "";
logContainer.append(`
<div class="log-entry ${levelClass}">
<small>${new Date(log.timestamp).toISOString()}</small>
[${escapeHtml(log.level)}] ${escapeHtml(log.message)}
</div>
`);
});
})
.fail(handleAjaxError);
}
function checkHealth() {
$.get("/api/health")
.done(function (data) {
const indicator = $(".health-indicator");
indicator
.removeClass("healthy unhealthy")
.addClass(data.status === "healthy" ? "healthy" : "unhealthy");
$("#healthStatus").text(data.status);
})
.fail(function () {
const indicator = $(".health-indicator");
indicator.removeClass("healthy").addClass("unhealthy");
$("#healthStatus").text("unhealthy");
});
}
function deleteProject(id) {
if (!confirm("Are you sure you want to delete this project?")) return;
$.ajax({
url: `/api/projects/${id}`,
method: "DELETE",
success: loadProjects,
error: handleAjaxError,
});
}
function revokeKey(id) {
if (!confirm("Are you sure you want to revoke this API key?")) return;
$.ajax({
url: `/api/keys/${id}`,
method: "DELETE",
success: loadAPIKeys,
error: handleAjaxError,
});
}
function handleAjaxError(jqXHR, textStatus, errorThrown) {
const message =
jqXHR.responseJSON?.error || errorThrown || "An error occurred";
alert(`Error: ${message}`);
}
function escapeHtml(unsafe) {
return unsafe
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;")
.replace(/'/g, "&#039;");
}
function getCookie(name) {
const cookies = document.cookie.split(";");
for (let cookie of cookies) {
const [cookieName, cookieValue] = cookie.split("=").map((c) => c.trim());
if (cookieName === name) {
console.debug(`Found cookie ${name}`);
return cookieValue;
}
}
console.debug(`Cookie ${name} not found`);
return null;
}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,6 @@
package web
import "embed"
//go:embed templates/* assets/*
var WebAssets embed.FS

View File

@@ -0,0 +1,126 @@
package handler
import (
"crypto/subtle"
"encoding/json"
"fmt"
"net/http"
"strings"
"time"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
"github.com/golang-jwt/jwt/v5"
)
type AuthMiddleware struct {
secretKey string
}
func NewAuthMiddleware(secretKey string) *AuthMiddleware {
logger.Debug("Creating auth middleware with secret key length: %d", len(secretKey))
return &AuthMiddleware{
secretKey: secretKey,
}
}
// LoginRequest represents the login request body
type LoginRequest struct {
SecretKey string `json:"secret_key"`
}
// LoginResponse represents the login response
type LoginResponse struct {
Token string `json:"token"`
}
// HandleLogin handles the login API request
func (a *AuthMiddleware) HandleLogin(w http.ResponseWriter, r *http.Request) {
// Only accept POST requests
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Parse JSON request
var req LoginRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
	// Validate the secret key in constant time
	if subtle.ConstantTimeCompare([]byte(req.SecretKey), []byte(a.secretKey)) != 1 {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusUnauthorized)
		json.NewEncoder(w).Encode(map[string]string{
			"error": "Invalid secret key",
		})
		return
	}
// Generate JWT token
token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
"exp": time.Now().Add(24 * time.Hour).Unix(),
"iat": time.Now().Unix(),
})
// Sign the token
tokenString, err := token.SignedString([]byte(a.secretKey))
if err != nil {
logger.Error("Failed to generate token: %v", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Return the token
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(LoginResponse{
Token: tokenString,
})
}
// Authenticate middleware for protecting routes
func (a *AuthMiddleware) Authenticate(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Skip authentication for the login page, the login API
		// (which must stay reachable without a token), and static assets
		if r.URL.Path == "/login" || r.URL.Path == "/api/auth/login" ||
			strings.HasPrefix(r.URL.Path, "/css/") ||
			strings.HasPrefix(r.URL.Path, "/js/") || strings.HasPrefix(r.URL.Path, "/img/") {
			next.ServeHTTP(w, r)
			return
		}
// Get token from Authorization header
authHeader := r.Header.Get("Authorization")
if authHeader == "" {
logger.Debug("No Authorization header found")
http.Redirect(w, r, "/login", http.StatusSeeOther)
return
}
// Remove 'Bearer ' prefix
tokenString := strings.TrimPrefix(authHeader, "Bearer ")
// Parse and validate token
token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
}
return []byte(a.secretKey), nil
})
		if err != nil || !token.Valid {
			logger.Debug("Invalid token: %v", err)
			if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
				w.Header().Set("Content-Type", "application/json")
				w.WriteHeader(http.StatusUnauthorized)
				json.NewEncoder(w).Encode(map[string]string{
					"error": "Invalid or expired token",
				})
			} else {
				http.Redirect(w, r, "/login", http.StatusSeeOther)
			}
			return
		}
// Token is valid, proceed
next.ServeHTTP(w, r)
})
}

View File

@@ -0,0 +1,211 @@
package handler
import (
"embed"
"encoding/json"
"html/template"
"net/http"
"path"
"freeleaps.com/gitea-webhook-ambassador/internal/handler"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
)
type DashboardHandler struct {
templates *template.Template
fs embed.FS
projectHandler *handler.ProjectHandler
adminHandler *handler.AdminHandler
logsHandler *handler.LogsHandler
healthHandler *handler.HealthHandler
}
func NewDashboardHandler(fs embed.FS, projectHandler *handler.ProjectHandler, adminHandler *handler.AdminHandler, logsHandler *handler.LogsHandler, healthHandler *handler.HealthHandler) (*DashboardHandler, error) {
templates, err := template.ParseFS(fs, "templates/*.html")
if err != nil {
return nil, err
}
return &DashboardHandler{
templates: templates,
fs: fs,
projectHandler: projectHandler,
adminHandler: adminHandler,
logsHandler: logsHandler,
healthHandler: healthHandler,
}, nil
}
func (h *DashboardHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/login":
h.handleLogin(w, r)
case "/dashboard":
h.handleDashboard(w, r)
case "/api/projects":
h.handleProjects(w, r)
case "/api/keys":
h.handleAPIKeys(w, r)
case "/api/logs":
h.handleLogs(w, r)
case "/api/health":
h.handleHealth(w, r)
default:
// Serve static files
if path.Ext(r.URL.Path) != "" {
h.serveStaticFile(w, r)
return
}
http.NotFound(w, r)
}
}
func (h *DashboardHandler) handleLogin(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
		return
	}
	logger.Debug("Serving login page")
	if err := h.templates.ExecuteTemplate(w, "login.html", nil); err != nil {
		logger.Error("Failed to render login template: %v", err)
	}
}
func (h *DashboardHandler) handleDashboard(w http.ResponseWriter, r *http.Request) {
	if err := h.templates.ExecuteTemplate(w, "dashboard.html", nil); err != nil {
		logger.Error("Failed to render dashboard template: %v", err)
	}
}
func (h *DashboardHandler) handleProjects(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
h.projectHandler.HandleGetProjectMapping(w, r)
case http.MethodPost:
h.projectHandler.HandleCreateProjectMapping(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
func (h *DashboardHandler) handleAPIKeys(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
h.adminHandler.HandleListAPIKeys(w, r)
case http.MethodPost:
h.adminHandler.HandleCreateAPIKey(w, r)
case http.MethodDelete:
h.adminHandler.HandleDeleteAPIKey(w, r)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
func (h *DashboardHandler) handleLogs(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
h.logsHandler.HandleGetTriggerLogs(w, r)
}
func (h *DashboardHandler) handleHealth(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Capture the health handler response
recorder := newResponseRecorder(w)
h.healthHandler.HandleHealth(recorder, r)
// If it's not JSON or there was an error, just copy the response
if recorder.Header().Get("Content-Type") != "application/json" {
recorder.copyToResponseWriter(w)
return
}
// Parse the health check response and format it for the dashboard
var healthData map[string]interface{}
if err := json.Unmarshal(recorder.Body(), &healthData); err != nil {
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Format the response for the dashboard
response := map[string]string{
"status": "healthy",
}
if healthData["status"] != "ok" {
response["status"] = "unhealthy"
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
func (h *DashboardHandler) serveStaticFile(w http.ResponseWriter, r *http.Request) {
// Remove leading slash and join with assets directory
filePath := path.Join("assets", r.URL.Path)
data, err := h.fs.ReadFile(filePath)
if err != nil {
http.NotFound(w, r)
return
}
// Set MIME type based on file extension
ext := path.Ext(r.URL.Path)
switch ext {
case ".css":
w.Header().Set("Content-Type", "text/css; charset=utf-8")
case ".js":
w.Header().Set("Content-Type", "application/javascript; charset=utf-8")
case ".png":
w.Header().Set("Content-Type", "image/png")
case ".jpg", ".jpeg":
w.Header().Set("Content-Type", "image/jpeg")
default:
w.Header().Set("Content-Type", "application/octet-stream")
}
// Set caching headers
w.Header().Set("Cache-Control", "public, max-age=31536000")
w.Write(data)
}
// responseRecorder is a custom ResponseWriter that records its mutations
type responseRecorder struct {
headers http.Header
body []byte
statusCode int
original http.ResponseWriter
}
func newResponseRecorder(w http.ResponseWriter) *responseRecorder {
return &responseRecorder{
headers: make(http.Header),
statusCode: http.StatusOK,
original: w,
}
}
func (r *responseRecorder) Header() http.Header {
return r.headers
}
func (r *responseRecorder) Write(body []byte) (int, error) {
r.body = append(r.body, body...)
return len(body), nil
}
func (r *responseRecorder) WriteHeader(statusCode int) {
r.statusCode = statusCode
}
func (r *responseRecorder) Body() []byte {
return r.body
}
func (r *responseRecorder) copyToResponseWriter(w http.ResponseWriter) {
for k, v := range r.headers {
w.Header()[k] = v
}
w.WriteHeader(r.statusCode)
w.Write(r.body)
}

View File

@@ -0,0 +1,194 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Dashboard - Gitea Webhook Ambassador</title>
<link rel="stylesheet" href="/css/bootstrap.min.css">
<link rel="stylesheet" href="/css/dashboard.css">
</head>
<body>
<header class="navbar navbar-dark sticky-top bg-dark flex-md-nowrap p-0 shadow">
<a class="navbar-brand col-md-3 col-lg-2 me-0 px-3" href="#">Gitea Webhook Ambassador</a>
<div class="navbar-nav">
<div class="nav-item text-nowrap">
<span class="px-3 text-white">
<span class="health-indicator"></span>
<span id="healthStatus">checking...</span>
</span>
</div>
</div>
</header>
<div class="container-fluid">
<div class="row">
<nav id="sidebarMenu" class="col-md-3 col-lg-2 d-md-block bg-light sidebar collapse">
<div class="position-sticky pt-3">
<ul class="nav flex-column">
<li class="nav-item">
<a class="nav-link active" href="#projects" data-bs-toggle="tab">
Projects
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#api-keys" data-bs-toggle="tab">
API Keys
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#logs" data-bs-toggle="tab">
Logs
</a>
</li>
</ul>
</div>
</nav>
<main class="col-md-9 ms-sm-auto col-lg-10 px-md-4">
<div class="tab-content" id="myTabContent">
<!-- Projects Tab -->
<div class="tab-pane fade show active" id="projects">
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<h1 class="h2">Projects</h1>
<button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#addProjectModal">
Add Project
</button>
</div>
<div class="table-responsive">
<table class="table table-striped" id="projectsTable">
<thead>
<tr>
<th>Name</th>
<th>Jenkins Job</th>
<th>Gitea Repository</th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
</div>
<!-- API Keys Tab -->
<div class="tab-pane fade" id="api-keys">
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<h1 class="h2">API Keys</h1>
<button class="btn btn-primary" data-bs-toggle="modal" data-bs-target="#generateKeyModal">
Generate New Key
</button>
</div>
<div class="table-responsive">
<table class="table table-striped" id="apiKeysTable">
<thead>
<tr>
<th>Description</th>
<th>Key</th>
<th>Created</th>
<th>Actions</th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
</div>
<!-- Logs Tab -->
<div class="tab-pane fade" id="logs">
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<h1 class="h2">Logs</h1>
</div>
<form id="logQueryForm" class="row g-3 mb-3">
<div class="col-md-3">
<label for="startTime" class="form-label">Start Time</label>
<input type="datetime-local" class="form-control" id="startTime">
</div>
<div class="col-md-3">
<label for="endTime" class="form-label">End Time</label>
<input type="datetime-local" class="form-control" id="endTime">
</div>
<div class="col-md-2">
<label for="logLevel" class="form-label">Log Level</label>
<select class="form-select" id="logLevel">
<option value="">All</option>
<option value="error">Error</option>
<option value="warn">Warning</option>
<option value="info">Info</option>
<option value="debug">Debug</option>
</select>
</div>
<div class="col-md-3">
<label for="logQuery" class="form-label">Search Query</label>
<input type="text" class="form-control" id="logQuery" placeholder="Search logs...">
</div>
<div class="col-md-1">
<label class="form-label">&nbsp;</label>
<button type="submit" class="btn btn-primary w-100">Search</button>
</div>
</form>
<div id="logEntries" class="border rounded p-3 bg-light"></div>
</div>
</div>
</main>
</div>
</div>
<!-- Add Project Modal -->
<div class="modal fade" id="addProjectModal" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">Add New Project</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<form id="addProjectForm">
<div class="modal-body">
<div class="mb-3">
<label for="projectName" class="form-label">Project Name</label>
<input type="text" class="form-control" id="projectName" required>
</div>
<div class="mb-3">
<label for="jenkinsJob" class="form-label">Jenkins Job</label>
<input type="text" class="form-control" id="jenkinsJob" required>
</div>
<div class="mb-3">
<label for="giteaRepo" class="form-label">Gitea Repository</label>
<input type="text" class="form-control" id="giteaRepo" required>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="submit" class="btn btn-primary">Add Project</button>
</div>
</form>
</div>
</div>
</div>
<!-- Generate API Key Modal -->
<div class="modal fade" id="generateKeyModal" tabindex="-1">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title">Generate New API Key</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal"></button>
</div>
<form id="generateKeyForm">
<div class="modal-body">
<div class="mb-3">
<label for="keyDescription" class="form-label">Key Description</label>
<input type="text" class="form-control" id="keyDescription" required>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
<button type="submit" class="btn btn-primary">Generate Key</button>
</div>
</form>
</div>
</div>
</div>
<script src="/js/jquery-3.7.1.min.js"></script>
<script src="/js/bootstrap.bundle.min.js"></script>
<script src="/js/dashboard.js"></script>
</body>
</html>

View File

@@ -0,0 +1,46 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Login - Gitea Webhook Ambassador</title>
<link rel="stylesheet" href="/css/bootstrap.min.css">
<link rel="stylesheet" href="/css/dashboard.css">
<style>
.login-container {
display: flex;
align-items: center;
justify-content: center;
min-height: 100vh;
background-color: #f5f5f5;
}
.login-form {
width: 100%;
max-width: 330px;
padding: 15px;
margin: auto;
background: white;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
</style>
</head>
<body>
<div class="login-container">
<form class="login-form" id="loginForm">
<h1 class="h3 mb-3 fw-normal text-center">Gitea Webhook Ambassador</h1>
<div class="alert alert-danger" role="alert" id="loginError" style="display: none;">
</div>
<div class="form-floating mb-3">
<input type="password" class="form-control" id="secret_key" name="secret_key" placeholder="Secret Key" required>
<label for="secret_key">Secret Key</label>
</div>
<button class="w-100 btn btn-lg btn-primary" type="submit">Sign in</button>
</form>
</div>
<script src="/js/jquery-3.7.1.min.js"></script>
<script src="/js/bootstrap.bundle.min.js"></script>
<script src="/js/dashboard.js"></script>
</body>
</html>

View File

@@ -0,0 +1,181 @@
package worker
import (
"sync"
"time"
"freeleaps.com/gitea-webhook-ambassador/internal/database"
"freeleaps.com/gitea-webhook-ambassador/internal/jenkins"
"freeleaps.com/gitea-webhook-ambassador/internal/logger"
"github.com/panjf2000/ants/v2"
)
// Pool represents a worker pool for processing Jenkins jobs
type Pool struct {
pool *ants.Pool
jobQueue chan Job
client *jenkins.Client
db *database.DB
maxRetries int
retryDelay time.Duration
}
// Job represents a Jenkins job to be processed
type Job struct {
Name string
Parameters jenkins.JobParameters
EventID string
RepositoryName string
BranchName string
CommitSHA string
Attempts int
}
// Config holds the worker pool configuration
type Config struct {
PoolSize int
QueueSize int
MaxRetries int
RetryBackoff time.Duration
Client *jenkins.Client
DB *database.DB
}
// Stats represents worker pool statistics
type Stats struct {
ActiveWorkers int
QueueSize int
}
// processedEvents tracks processed webhook events for idempotency
var processedEvents sync.Map
// New creates a new worker pool and starts its queue dispatcher
func New(config Config) (*Pool, error) {
	pool, err := ants.NewPool(config.PoolSize, ants.WithNonblocking(true))
	if err != nil {
		return nil, err
	}
	p := &Pool{
		pool:       pool,
		jobQueue:   make(chan Job, config.QueueSize),
		client:     config.Client,
		db:         config.DB,
		maxRetries: config.MaxRetries,
		retryDelay: config.RetryBackoff,
	}
	// processQueue is the only consumer of jobQueue; without starting it
	// here, submitted jobs would never reach the ants pool.
	go p.processQueue()
	return p, nil
}
// Submit adds a job to the queue
func (p *Pool) Submit(job Job) bool {
	// Skip events that have already been processed
	if _, exists := processedEvents.Load(job.EventID); exists {
		logger.Info("Skipping already processed event: %s", job.EventID)
		return true
	}
	select {
	case p.jobQueue <- job:
		// Mark the event as processed only after it is actually queued,
		// so a full queue does not permanently swallow the event.
		processedEvents.Store(job.EventID, time.Now())
		return true
	default:
		logger.Warn("Failed to queue job: queue full")
		return false
	}
}
// processQueue handles the job queue
func (p *Pool) processQueue() {
for job := range p.jobQueue {
if err := p.pool.Submit(func() {
p.processJob(job)
}); err != nil {
logger.Error("Failed to submit job: %v", err)
}
}
}
// processJob is the worker function that processes each job
func (p *Pool) processJob(job Job) {
err := p.client.TriggerJob(job.Name, job.Parameters)
// Log the trigger attempt
triggerLog := &database.TriggerLog{
RepositoryName: job.RepositoryName,
BranchName: job.BranchName,
CommitSHA: job.CommitSHA,
JobName: job.Name,
Status: "SUCCESS",
}
if err != nil {
triggerLog.Status = "FAILED"
triggerLog.ErrorMessage = err.Error()
if job.Attempts < p.maxRetries {
job.Attempts++
// Exponential backoff
backoff := p.retryDelay << uint(job.Attempts-1)
time.Sleep(backoff)
select {
case p.jobQueue <- job:
logger.Info("Retrying job %s (attempt %d/%d) after %v",
job.Name, job.Attempts, p.maxRetries, backoff)
default:
logger.Error("Failed to queue retry for job %s: queue full", job.Name)
}
} else {
logger.Error("Job %s failed after %d attempts: %v",
job.Name, job.Attempts, err)
}
} else {
logger.Info("Successfully processed job %s for event %s",
job.Name, job.EventID)
}
// Save trigger log to database
if err := p.db.LogTrigger(triggerLog); err != nil {
logger.Error("Failed to log trigger: %v", err)
}
}
// Release releases the worker pool resources
func (p *Pool) Release() {
close(p.jobQueue)
p.pool.Release()
}
// IsJenkinsConnected checks if Jenkins connection is working
func (p *Pool) IsJenkinsConnected() bool {
return p.client.IsConnected()
}
// GetStats returns the current worker pool statistics
func (p *Pool) GetStats() Stats {
running := p.pool.Running()
return Stats{
ActiveWorkers: running,
QueueSize: p.pool.Waiting(),
}
}
// CleanupEvents removes expired events from the processedEvents map
func CleanupEvents(expireAfter time.Duration) {
for {
time.Sleep(time.Hour) // Run cleanup every hour
now := time.Now()
processedEvents.Range(func(key, value interface{}) bool {
if timestamp, ok := value.(time.Time); ok {
if now.Sub(timestamp) > expireAfter {
processedEvents.Delete(key)
logger.Debug("Cleaned up expired event: %v", key)
}
}
return true
})
}
}

View File

@@ -1,776 +0,0 @@
package main
import (
"encoding/json"
"flag"
"fmt"
"io"
"log"
"net/http"
"os"
"path/filepath"
"regexp"
"strings"
"sync"
"time"
"github.com/fsnotify/fsnotify"
"github.com/go-playground/validator/v10"
"github.com/panjf2000/ants/v2"
"gopkg.in/yaml.v2"
)
// Configuration holds application configuration
type Configuration struct {
Server struct {
Port int `yaml:"port" validate:"required,gt=0"`
WebhookPath string `yaml:"webhookPath" validate:"required"`
SecretHeader string `yaml:"secretHeader" default:"X-Gitea-Signature"`
SecretKey string `yaml:"secretKey"`
} `yaml:"server"`
Jenkins struct {
URL string `yaml:"url" validate:"required,url"`
Username string `yaml:"username"`
Token string `yaml:"token"`
Timeout int `yaml:"timeout" default:"30"`
} `yaml:"jenkins"`
Gitea struct {
SecretToken string `yaml:"secretToken"`
Projects map[string]ProjectConfig `yaml:"projects" validate:"required"` // repo name -> project config
} `yaml:"gitea"`
Logging struct {
Level string `yaml:"level" default:"info" validate:"oneof=debug info warn error"`
Format string `yaml:"format" default:"text" validate:"oneof=text json"`
File string `yaml:"file"`
} `yaml:"logging"`
Worker struct {
PoolSize int `yaml:"poolSize" default:"10" validate:"gt=0"`
QueueSize int `yaml:"queueSize" default:"100" validate:"gt=0"`
MaxRetries int `yaml:"maxRetries" default:"3" validate:"gte=0"`
RetryBackoff int `yaml:"retryBackoff" default:"1" validate:"gt=0"` // seconds
} `yaml:"worker"`
EventCleanup struct {
Interval int `yaml:"interval" default:"3600"` // seconds
ExpireAfter int `yaml:"expireAfter" default:"7200"` // seconds
} `yaml:"eventCleanup"`
}
// ProjectConfig represents the configuration for a specific repository
type ProjectConfig struct {
DefaultJob string `yaml:"defaultJob"` // Default Jenkins job to trigger
BranchJobs map[string]string `yaml:"branchJobs,omitempty"` // Branch-specific jobs
BranchPatterns []BranchPattern `yaml:"branchPatterns,omitempty"`
}
// BranchPattern defines a pattern-based branch to job mapping
type BranchPattern struct {
Pattern string `yaml:"pattern"` // Regex pattern for branch name
Job string `yaml:"job"` // Jenkins job to trigger
}
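The structs above map directly onto YAML. A hypothetical config fragment (repository and job names are illustrative) showing all three resolution mechanisms side by side:

```yaml
gitea:
  projects:
    myorg/myrepo:
      defaultJob: myrepo-build        # fallback when no branch rule matches
      branchJobs:
        main: myrepo-deploy           # exact branch match, checked first
      branchPatterns:
        - pattern: "^release/.*"      # regex match, checked second
          job: myrepo-release
```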
// GiteaWebhook represents the webhook payload from Gitea
type GiteaWebhook struct {
Secret string `json:"secret"`
Ref string `json:"ref"`
Before string `json:"before"`
After string `json:"after"`
CompareURL string `json:"compare_url"`
Commits []struct {
ID string `json:"id"`
Message string `json:"message"`
URL string `json:"url"`
Author struct {
Name string `json:"name"`
Email string `json:"email"`
Username string `json:"username"`
} `json:"author"`
} `json:"commits"`
Repository struct {
ID int `json:"id"`
Name string `json:"name"`
Owner struct {
ID int `json:"id"`
Login string `json:"login"`
FullName string `json:"full_name"`
} `json:"owner"`
FullName string `json:"full_name"`
Private bool `json:"private"`
CloneURL string `json:"clone_url"`
SSHURL string `json:"ssh_url"`
HTMLURL string `json:"html_url"`
DefaultBranch string `json:"default_branch"`
} `json:"repository"`
Pusher struct {
ID int `json:"id"`
Login string `json:"login"`
FullName string `json:"full_name"`
Email string `json:"email"`
} `json:"pusher"`
}
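For reference, an abridged, hypothetical push payload matching the fields above (all values are illustrative, not from a real instance):

```json
{
  "ref": "refs/heads/main",
  "before": "1111111111111111111111111111111111111111",
  "after": "2222222222222222222222222222222222222222",
  "repository": {
    "name": "myrepo",
    "full_name": "myorg/myrepo",
    "clone_url": "https://gitea.example.com/myorg/myrepo.git",
    "default_branch": "main"
  },
  "pusher": {
    "login": "jdoe",
    "email": "jdoe@example.com"
  }
}
```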
type jobRequest struct {
jobName string
parameters map[string]string
eventID string
attempts int
}
var (
configFile = flag.String("config", "config.yaml", "Path to configuration file")
config Configuration
configMutex sync.RWMutex
validate = validator.New()
jobQueue chan jobRequest
httpClient *http.Client
logger *log.Logger
workerPool *ants.PoolWithFunc
// For idempotency
processedEvents sync.Map
// For config reloading
watcher *fsnotify.Watcher
)
func main() {
flag.Parse()
// Initialize basic logger temporarily
logger = log.New(os.Stdout, "", log.LstdFlags)
logger.Println("Starting Gitea Webhook Ambassador...")
// Load initial configuration
if err := loadConfig(*configFile); err != nil {
logger.Fatalf("Failed to load configuration: %v", err)
}
// Configure proper logger based on configuration
setupLogger()
// Setup config file watcher for auto-reload
setupConfigWatcher(*configFile)
// Start event cleanup goroutine
go cleanupEvents()
// Configure HTTP client with timeout
configMutex.RLock()
httpClient = &http.Client{
Timeout: time.Duration(config.Jenkins.Timeout) * time.Second,
}
// Initialize job queue
jobQueue = make(chan jobRequest, config.Worker.QueueSize)
configMutex.RUnlock()
// Initialize worker pool
initWorkerPool()
// Configure webhook handler
http.HandleFunc(config.Server.WebhookPath, handleWebhook)
http.HandleFunc("/health", handleHealthCheck)
// Start HTTP server
serverAddr := fmt.Sprintf(":%d", config.Server.Port)
logger.Printf("Server listening on %s", serverAddr)
if err := http.ListenAndServe(serverAddr, nil); err != nil {
logger.Fatalf("HTTP server error: %v", err)
}
}
// setupLogger configures the logger based on application settings
func setupLogger() {
configMutex.RLock()
defer configMutex.RUnlock()
// Determine log output
var logOutput io.Writer = os.Stdout
if config.Logging.File != "" {
file, err := os.OpenFile(config.Logging.File, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
logger.Printf("Failed to open log file %s: %v, using stdout instead", config.Logging.File, err)
} else {
logOutput = file
// Create a multiwriter to also log to stdout for important messages
logOutput = io.MultiWriter(file, os.Stdout)
}
}
// Create new logger with proper format
var prefix string
var flags int
// Set log format based on configuration
if config.Logging.Format == "json" {
// For JSON logging, we'll handle formatting in the custom writer
prefix = ""
flags = 0
logOutput = &jsonLogWriter{out: logOutput}
} else {
// Text format with timestamp
prefix = ""
flags = log.LstdFlags | log.Lshortfile
}
// Create the new logger
logger = log.New(logOutput, prefix, flags)
// Log level will be checked in our custom log functions (not implemented here)
logger.Printf("Logger configured with level=%s, format=%s, output=%s",
config.Logging.Level,
config.Logging.Format,
func() string {
if config.Logging.File == "" {
return "stdout"
}
return config.Logging.File
}())
}
func setupConfigWatcher(configPath string) {
var err error
watcher, err = fsnotify.NewWatcher()
if err != nil {
logger.Fatalf("Failed to create file watcher: %v", err)
}
// Extract directory containing the config file
configDir := filepath.Dir(configPath)
go func() {
for {
select {
case event, ok := <-watcher.Events:
if !ok {
return
}
// Check if the config file was modified
if event.Op&fsnotify.Write == fsnotify.Write &&
filepath.Base(event.Name) == filepath.Base(configPath) {
logger.Printf("Config file modified, reloading configuration")
if err := reloadConfig(configPath); err != nil {
logger.Printf("Error reloading config: %v", err)
}
}
case err, ok := <-watcher.Errors:
if !ok {
return
}
logger.Printf("Error watching config file: %v", err)
}
}
}()
// Start watching the directory containing the config file
err = watcher.Add(configDir)
if err != nil {
logger.Fatalf("Failed to watch config directory: %v", err)
}
logger.Printf("Watching config file for changes: %s", configPath)
}
func loadConfig(file string) error {
f, err := os.Open(file)
if err != nil {
return fmt.Errorf("cannot open config file: %v", err)
}
defer f.Close()
var newConfig Configuration
decoder := yaml.NewDecoder(f)
if err := decoder.Decode(&newConfig); err != nil {
return fmt.Errorf("cannot decode config: %v", err)
}
// Set defaults
if newConfig.Server.SecretHeader == "" {
newConfig.Server.SecretHeader = "X-Gitea-Signature"
}
if newConfig.Jenkins.Timeout == 0 {
newConfig.Jenkins.Timeout = 30
}
if newConfig.Worker.PoolSize == 0 {
newConfig.Worker.PoolSize = 10
}
if newConfig.Worker.QueueSize == 0 {
newConfig.Worker.QueueSize = 100
}
if newConfig.Worker.MaxRetries == 0 {
newConfig.Worker.MaxRetries = 3
}
if newConfig.Worker.RetryBackoff == 0 {
newConfig.Worker.RetryBackoff = 1
}
if newConfig.EventCleanup.Interval == 0 {
newConfig.EventCleanup.Interval = 3600
}
if newConfig.EventCleanup.ExpireAfter == 0 {
newConfig.EventCleanup.ExpireAfter = 7200
}
// Handle legacy configuration format (where Projects is map[string]string)
// This is to maintain backward compatibility with existing configs
if len(newConfig.Gitea.Projects) == 0 {
// Check if we're dealing with a legacy config
var legacyConfig struct {
Gitea struct {
Projects map[string]string `yaml:"projects"`
} `yaml:"gitea"`
}
// Reopen and reparse the file for legacy config
f.Seek(0, 0)
decoder = yaml.NewDecoder(f)
if err := decoder.Decode(&legacyConfig); err == nil && len(legacyConfig.Gitea.Projects) > 0 {
// Convert legacy config to new format
newConfig.Gitea.Projects = make(map[string]ProjectConfig)
for repo, jobName := range legacyConfig.Gitea.Projects {
newConfig.Gitea.Projects[repo] = ProjectConfig{
DefaultJob: jobName,
}
}
logWarn("Using legacy configuration format. Consider updating to new format.")
}
}
// Validate configuration
if err := validate.Struct(newConfig); err != nil {
return fmt.Errorf("invalid configuration: %v", err)
}
configMutex.Lock()
config = newConfig
configMutex.Unlock()
return nil
}
func reloadConfig(file string) error {
if err := loadConfig(file); err != nil {
return err
}
// Update logger configuration
setupLogger()
configMutex.RLock()
defer configMutex.RUnlock()
// Update HTTP client timeout
httpClient.Timeout = time.Duration(config.Jenkins.Timeout) * time.Second
// If worker pool size has changed, reinitialize worker pool
poolSize := workerPool.Cap()
if poolSize != config.Worker.PoolSize {
logger.Printf("Worker pool size changed from %d to %d, reinitializing",
poolSize, config.Worker.PoolSize)
// Must release the read lock before calling initWorkerPool which acquires a write lock
configMutex.RUnlock()
initWorkerPool()
configMutex.RLock()
}
// If queue size has changed, create a new channel and migrate items
if cap(jobQueue) != config.Worker.QueueSize {
logger.Printf("Job queue size changed from %d to %d, recreating",
cap(jobQueue), config.Worker.QueueSize)
// Create the new queue and swap it in before closing the old one,
// so producers holding the read lock never send on a closed channel
newQueue := make(chan jobRequest, config.Worker.QueueSize)
oldQueue := jobQueue
configMutex.RUnlock()
configMutex.Lock()
jobQueue = newQueue
configMutex.Unlock()
configMutex.RLock()
close(oldQueue)
// Consume the new queue, then forward any items left in the old one
go processJobQueue()
go func(oldQueue, newQueue chan jobRequest) {
for job := range oldQueue {
newQueue <- job
}
}(oldQueue, newQueue)
}
logger.Printf("Configuration reloaded successfully")
return nil
}
func initWorkerPool() {
configMutex.Lock()
defer configMutex.Unlock()
// Release existing pool if any
if workerPool != nil {
workerPool.Release()
}
var err error
workerPool, err = ants.NewPoolWithFunc(config.Worker.PoolSize, func(i interface{}) {
job := i.(jobRequest)
success := triggerJenkinsJob(job)
configMutex.RLock()
maxRetries := config.Worker.MaxRetries
retryBackoff := config.Worker.RetryBackoff
configMutex.RUnlock()
// If job failed but we haven't reached max retries
if !success && job.attempts < maxRetries {
job.attempts++
// Exponential backoff
backoff := time.Duration(retryBackoff<<uint(job.attempts-1)) * time.Second
time.Sleep(backoff)
configMutex.RLock()
select {
case jobQueue <- job:
logger.Printf("Retrying job %s (attempt %d/%d) after %v",
job.jobName, job.attempts, maxRetries, backoff)
default:
logger.Printf("Failed to queue retry for job %s: queue full", job.jobName)
}
configMutex.RUnlock()
}
})
if err != nil {
logger.Fatalf("Failed to initialize worker pool: %v", err)
}
logger.Printf("Worker pool initialized with %d workers", config.Worker.PoolSize)
// Start job queue processing
go processJobQueue()
}
func processJobQueue() {
for job := range jobQueue {
err := workerPool.Invoke(job)
if err != nil {
logger.Printf("Failed to process job: %v", err)
}
}
}
func handleHealthCheck(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
configMutex.RLock()
poolRunning := workerPool != nil
runningWorkers, poolCap := 0, 0
if poolRunning {
runningWorkers = workerPool.Running()
poolCap = workerPool.Cap()
}
queueSize := len(jobQueue)
queueCap := cap(jobQueue)
configMutex.RUnlock()
health := map[string]interface{}{
"status": "UP",
"time": time.Now().Format(time.RFC3339),
"workers": map[string]interface{}{
"running": poolRunning,
"active": runningWorkers,
"capacity": poolCap,
},
"queue": map[string]interface{}{
"size": queueSize,
"capacity": queueCap,
},
}
json.NewEncoder(w).Encode(health)
}
func handleWebhook(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Verify signature if secret token is set
configMutex.RLock()
secretHeader := config.Server.SecretHeader
serverSecretKey := config.Server.SecretKey
configMutex.RUnlock()
// If server secret key is set, use it as the secret token
receivedSecretKey := r.Header.Get(secretHeader)
if receivedSecretKey == "" {
http.Error(w, "Missing server secret key", http.StatusUnauthorized)
logWarn("No secret key provided in header")
return
} else if receivedSecretKey != serverSecretKey {
http.Error(w, "Invalid server secret key", http.StatusUnauthorized)
logWarn("Invalid server secret key provided")
return
}
// Read and parse the webhook payload
body, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, "Failed to read request body", http.StatusInternalServerError)
logError("Failed to read webhook body: %v", err)
return
}
r.Body.Close()
var webhook GiteaWebhook
if err := json.Unmarshal(body, &webhook); err != nil {
http.Error(w, "Failed to parse webhook payload", http.StatusBadRequest)
logError("Failed to parse webhook payload: %v", err)
return
}
// Generate event ID for idempotency
eventID := webhook.Repository.FullName + "-" + webhook.After
// Check if we've already processed this event
if _, exists := processedEvents.Load(eventID); exists {
logInfo("Skipping already processed event: %s", eventID)
w.WriteHeader(http.StatusOK)
return
}
// Store in processed events with a TTL (we'll use a goroutine to remove after 1 hour)
processedEvents.Store(eventID, time.Now())
// Check if we have a Jenkins job mapping for this repository
configMutex.RLock()
projectConfig, exists := config.Gitea.Projects[webhook.Repository.FullName]
configMutex.RUnlock()
if !exists {
logInfo("No Jenkins job mapping for repository: %s", webhook.Repository.FullName)
w.WriteHeader(http.StatusOK) // Still return OK to not alarm Gitea
return
}
// Extract branch name from ref
branchName := strings.TrimPrefix(webhook.Ref, "refs/heads/")
// Determine which job to trigger based on branch name
jobName := determineJobName(projectConfig, branchName)
if jobName == "" {
logInfo("No job configured to trigger for repository %s, branch %s",
webhook.Repository.FullName, branchName)
w.WriteHeader(http.StatusOK)
return
}
// Prepare parameters for Jenkins job
params := map[string]string{
"BRANCH_NAME": branchName,
"COMMIT_SHA": webhook.After,
"REPOSITORY_URL": webhook.Repository.CloneURL,
"REPOSITORY_NAME": webhook.Repository.FullName,
"PUSHER_NAME": webhook.Pusher.Login,
"PUSHER_EMAIL": webhook.Pusher.Email,
}
// Queue the job for processing
configMutex.RLock()
select {
case jobQueue <- jobRequest{
jobName: jobName,
parameters: params,
eventID: eventID,
attempts: 0,
}:
logInfo("Webhook received and queued for repository %s, branch %s, commit %s, job %s",
webhook.Repository.FullName, branchName, webhook.After, jobName)
default:
logWarn("Failed to queue webhook: queue full")
http.Error(w, "Server busy, try again later", http.StatusServiceUnavailable)
configMutex.RUnlock()
return
}
configMutex.RUnlock()
w.WriteHeader(http.StatusAccepted)
}
// determineJobName selects the appropriate Jenkins job to trigger based on branch name
func determineJobName(config ProjectConfig, branchName string) string {
// First check for exact branch match
if jobName, ok := config.BranchJobs[branchName]; ok {
logDebug("Found exact branch match for %s: job %s", branchName, jobName)
return jobName
}
// Then check for pattern-based matches
for _, pattern := range config.BranchPatterns {
matched, err := regexp.MatchString(pattern.Pattern, branchName)
if err != nil {
logError("Error matching branch pattern %s: %v", pattern.Pattern, err)
continue
}
if matched {
logDebug("Branch %s matched pattern %s: job %s", branchName, pattern.Pattern, pattern.Job)
return pattern.Job
}
}
// Fall back to default job if available
if config.DefaultJob != "" {
logDebug("Using default job for branch %s: job %s", branchName, config.DefaultJob)
return config.DefaultJob
}
// No job found
logDebug("No job configured for branch %s", branchName)
return ""
}
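The three-step resolution order implemented by determineJobName can be exercised in isolation. A standalone sketch (job and branch names are illustrative; the real function takes a ProjectConfig and logs each step):

```go
package main

import (
	"fmt"
	"regexp"
)

// resolveJob mirrors determineJobName's precedence: exact branch match,
// then regex patterns, then the default job as a fallback.
func resolveJob(branchJobs, patterns map[string]string, defaultJob, branch string) string {
	if job, ok := branchJobs[branch]; ok {
		return job
	}
	for pattern, job := range patterns {
		if matched, err := regexp.MatchString(pattern, branch); err == nil && matched {
			return job
		}
	}
	return defaultJob
}

func main() {
	branchJobs := map[string]string{"main": "deploy"}
	patterns := map[string]string{"^release/": "release-build"}
	fmt.Println(resolveJob(branchJobs, patterns, "ci-build", "main"))        // deploy
	fmt.Println(resolveJob(branchJobs, patterns, "ci-build", "release/1.2")) // release-build
	fmt.Println(resolveJob(branchJobs, patterns, "ci-build", "feature/x"))   // ci-build
}
```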
func triggerJenkinsJob(job jobRequest) bool {
configMutex.RLock()
jenkinsBaseURL := strings.TrimSuffix(config.Jenkins.URL, "/")
jenkinsUser := config.Jenkins.Username
jenkinsToken := config.Jenkins.Token
configMutex.RUnlock()
// Handle Jenkins job paths correctly
// Jenkins jobs can be organized in folders, with proper URL format:
// /job/folder1/job/folder2/job/jobname
jobPath := job.jobName
// If job name contains slashes, format it properly for Jenkins URL
if strings.Contains(jobPath, "/") {
// Replace regular slashes with "/job/" for Jenkins URL format
parts := strings.Split(jobPath, "/")
jobPath = "job/" + strings.Join(parts, "/job/")
} else {
jobPath = "job/" + jobPath
}
jenkinsURL := fmt.Sprintf("%s/%s/build", jenkinsBaseURL, jobPath)
logDebug("Triggering Jenkins job URL: %s", jenkinsURL)
req, err := http.NewRequest("POST", jenkinsURL, nil)
if err != nil {
logError("Error creating Jenkins request for job %s: %v", job.jobName, err)
return false
}
// Add auth if credentials are provided
if jenkinsUser != "" && jenkinsToken != "" {
req.SetBasicAuth(jenkinsUser, jenkinsToken)
}
// Add parameters to URL query
q := req.URL.Query()
for key, value := range job.parameters {
q.Add(key, value)
}
req.URL.RawQuery = q.Encode()
// Execute request
resp, err := httpClient.Do(req)
if err != nil {
logError("Error triggering Jenkins job %s: %v", job.jobName, err)
return false
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
logError("Jenkins returned error for job %s: status=%d, URL=%s, body=%s",
job.jobName, resp.StatusCode, jenkinsURL, string(bodyBytes))
return false
}
logInfo("Successfully triggered Jenkins job %s for event %s",
job.jobName, job.eventID)
return true
}
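The folder-aware URL construction above turns each path segment of the job name into a /job/&lt;segment&gt; element. A standalone sketch of just that transformation (the jenkins.example.com host is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// jenkinsBuildURL mirrors triggerJenkinsJob's path logic: every segment
// of a folder-qualified job name becomes a /job/<segment> URL element.
func jenkinsBuildURL(baseURL, jobName string) string {
	parts := strings.Split(jobName, "/")
	return strings.TrimSuffix(baseURL, "/") + "/job/" + strings.Join(parts, "/job/") + "/build"
}

func main() {
	fmt.Println(jenkinsBuildURL("https://jenkins.example.com/", "team/frontend/build"))
	// https://jenkins.example.com/job/team/job/frontend/job/build/build
}
```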
// Custom JSON log writer
type jsonLogWriter struct {
out io.Writer
}
func (w *jsonLogWriter) Write(p []byte) (n int, err error) {
// Parse the log message
message := string(p)
// Create JSON structure
entry := map[string]interface{}{
"timestamp": time.Now().Format(time.RFC3339),
"message": strings.TrimSpace(message),
"level": "info", // Default level, in a real implementation you'd parse this
}
// Convert to JSON
jsonData, err := json.Marshal(entry)
if err != nil {
return 0, err
}
// Write JSON with newline
return w.out.Write(append(jsonData, '\n'))
}
// Add these utility functions for level-based logging
func logDebug(format string, v ...interface{}) {
configMutex.RLock()
level := config.Logging.Level
configMutex.RUnlock()
if level == "debug" {
logger.Printf("[DEBUG] "+format, v...)
}
}
func logInfo(format string, v ...interface{}) {
configMutex.RLock()
level := config.Logging.Level
configMutex.RUnlock()
if level == "debug" || level == "info" {
logger.Printf("[INFO] "+format, v...)
}
}
func logWarn(format string, v ...interface{}) {
configMutex.RLock()
level := config.Logging.Level
configMutex.RUnlock()
if level == "debug" || level == "info" || level == "warn" {
logger.Printf("[WARN] "+format, v...)
}
}
func logError(format string, v ...interface{}) {
// Error level logs are always shown
logger.Printf("[ERROR] "+format, v...)
}
func cleanupEvents() {
for {
configMutex.RLock()
interval := time.Duration(config.EventCleanup.Interval) * time.Second
expireAfter := time.Duration(config.EventCleanup.ExpireAfter) * time.Second
configMutex.RUnlock()
time.Sleep(interval)
now := time.Now()
processedEvents.Range(func(key, value interface{}) bool {
if timestamp, ok := value.(time.Time); ok {
if now.Sub(timestamp) > expireAfter {
processedEvents.Delete(key)
logDebug("Cleaned up expired event: %v", key)
}
}
return true
})
}
}

@ -0,0 +1 @@
Subproject commit 69eeed3cb2fdde4028307bd61341d9111af07c34
@ -1,61 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vpa-admission-controller
namespace: freeleaps-infra-system
spec:
replicas: 1
selector:
matchLabels:
app: vpa-admission-controller
template:
metadata:
labels:
app: vpa-admission-controller
spec:
serviceAccountName: vpa-admission-controller
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: admission-controller
image: registry.k8s.io/autoscaling/vpa-admission-controller:1.3.0
imagePullPolicy: IfNotPresent
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args: ["--v=4", "--stderrthreshold=info", "--reload-cert"]
volumeMounts:
- name: tls-certs
mountPath: "/etc/tls-certs"
readOnly: true
resources:
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 50m
memory: 200Mi
ports:
- containerPort: 8000
- name: prometheus
containerPort: 8944
volumes:
- name: tls-certs
secret:
secretName: vpa-tls-certs
---
apiVersion: v1
kind: Service
metadata:
name: vpa-webhook
namespace: freeleaps-infra-system
spec:
ports:
- port: 443
targetPort: 8000
selector:
app: vpa-admission-controller
@ -1,37 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vpa-recommender
namespace: freeleaps-infra-system
spec:
replicas: 1
selector:
matchLabels:
app: vpa-recommender
template:
metadata:
labels:
app: vpa-recommender
spec:
serviceAccountName: vpa-recommender
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: recommender
image: registry.k8s.io/autoscaling/vpa-recommender:1.3.0
command: ["/recommender"]
args:
- --recommender-name=vpa-recommender
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 200m
memory: 1000Mi
requests:
cpu: 50m
memory: 500Mi
ports:
- name: prometheus
containerPort: 8942
@ -1,39 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vpa-updater
namespace: freeleaps-infra-system
spec:
replicas: 1
selector:
matchLabels:
app: vpa-updater
template:
metadata:
labels:
app: vpa-updater
spec:
serviceAccountName: vpa-updater
securityContext:
runAsNonRoot: true
runAsUser: 65534 # nobody
containers:
- name: updater
image: registry.k8s.io/autoscaling/vpa-updater:1.3.0
imagePullPolicy: IfNotPresent
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
limits:
cpu: 200m
memory: 1000Mi
requests:
cpu: 50m
memory: 500Mi
ports:
- name: prometheus
containerPort: 8943
@ -1,435 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-reader
rules:
- apiGroups:
- "metrics.k8s.io"
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-actor
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- limitranges
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- get
- list
- watch
- create
- apiGroups:
- "poc.autoscaling.k8s.io"
resources:
- verticalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups:
- "autoscaling.k8s.io"
resources:
- verticalpodautoscalers
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-status-actor
rules:
- apiGroups:
- "autoscaling.k8s.io"
resources:
- verticalpodautoscalers/status
verbs:
- get
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-checkpoint-actor
rules:
- apiGroups:
- "poc.autoscaling.k8s.io"
resources:
- verticalpodautoscalercheckpoints
verbs:
- get
- list
- watch
- create
- patch
- delete
- apiGroups:
- "autoscaling.k8s.io"
resources:
- verticalpodautoscalercheckpoints
verbs:
- get
- list
- watch
- create
- patch
- delete
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:evictioner
rules:
- apiGroups:
- "apps"
- "extensions"
resources:
- replicasets
verbs:
- get
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-reader
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-actor
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-actor
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
- kind: ServiceAccount
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-status-actor
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-status-actor
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-checkpoint-actor
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-checkpoint-actor
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-target-reader
rules:
- apiGroups:
- '*'
resources:
- '*/scale'
verbs:
- get
- watch
- apiGroups:
- ""
resources:
- replicationcontrollers
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- batch
resources:
- jobs
- cronjobs
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-target-reader-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-target-reader
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
- kind: ServiceAccount
name: vpa-admission-controller
namespace: freeleaps-infra-system
- kind: ServiceAccount
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-evictioner-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:evictioner
subjects:
- kind: ServiceAccount
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vpa-admission-controller
namespace: freeleaps-infra-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vpa-recommender
namespace: freeleaps-infra-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-admission-controller
rules:
- apiGroups:
- ""
resources:
- pods
- configmaps
- nodes
- limitranges
verbs:
- get
- list
- watch
- apiGroups:
- "admissionregistration.k8s.io"
resources:
- mutatingwebhookconfigurations
verbs:
- create
- delete
- get
- list
- apiGroups:
- "poc.autoscaling.k8s.io"
resources:
- verticalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups:
- "autoscaling.k8s.io"
resources:
- verticalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- create
- update
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-admission-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-admission-controller
subjects:
- kind: ServiceAccount
name: vpa-admission-controller
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:vpa-status-reader
rules:
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:vpa-status-reader-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:vpa-status-reader
subjects:
- kind: ServiceAccount
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: system:leader-locking-vpa-updater
namespace: freeleaps-infra-system
rules:
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- create
- apiGroups:
- "coordination.k8s.io"
resourceNames:
- vpa-updater
resources:
- leases
verbs:
- get
- watch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: system:leader-locking-vpa-updater
namespace: freeleaps-infra-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: system:leader-locking-vpa-updater
subjects:
- kind: ServiceAccount
name: vpa-updater
namespace: freeleaps-infra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: system:leader-locking-vpa-recommender
namespace: freeleaps-infra-system
rules:
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- create
- apiGroups:
- "coordination.k8s.io"
resourceNames:
# TODO: Clean vpa-recommender up once vpa-recommender-lease is used everywhere. See https://github.com/kubernetes/autoscaler/issues/7461.
- vpa-recommender
- vpa-recommender-lease
resources:
- leases
verbs:
- get
- watch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: system:leader-locking-vpa-recommender
namespace: freeleaps-infra-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: system:leader-locking-vpa-recommender
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: freeleaps-infra-system
@ -1,834 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: https://github.com/kubernetes/kubernetes/pull/63797
controller-gen.kubebuilder.io/version: v0.16.5
name: verticalpodautoscalercheckpoints.autoscaling.k8s.io
spec:
group: autoscaling.k8s.io
names:
kind: VerticalPodAutoscalerCheckpoint
listKind: VerticalPodAutoscalerCheckpointList
plural: verticalpodautoscalercheckpoints
shortNames:
- vpacheckpoint
singular: verticalpodautoscalercheckpoint
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: |-
VerticalPodAutoscalerCheckpoint is the checkpoint of the internal state of VPA that
is used for recovery after recommender's restart.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: |-
Specification of the checkpoint.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
properties:
containerName:
description: Name of the checkpointed container.
type: string
vpaObjectName:
description: Name of the VPA object that stored VerticalPodAutoscalerCheckpoint
object.
type: string
type: object
status:
description: Data of the checkpoint.
properties:
cpuHistogram:
description: Checkpoint of histogram for consumption of CPU.
properties:
bucketWeights:
description: Map from bucket index to bucket weight.
type: object
x-kubernetes-preserve-unknown-fields: true
referenceTimestamp:
description: Reference timestamp for samples collected within
this histogram.
format: date-time
nullable: true
type: string
totalWeight:
description: Sum of samples to be used as denominator for weights
from BucketWeights.
type: number
type: object
firstSampleStart:
description: Timestamp of the first sample from the histograms.
format: date-time
nullable: true
type: string
lastSampleStart:
description: Timestamp of the last sample from the histograms.
format: date-time
nullable: true
type: string
lastUpdateTime:
description: The time when the status was last refreshed.
format: date-time
nullable: true
type: string
memoryHistogram:
description: Checkpoint of histogram for consumption of memory.
properties:
bucketWeights:
description: Map from bucket index to bucket weight.
type: object
x-kubernetes-preserve-unknown-fields: true
referenceTimestamp:
description: Reference timestamp for samples collected within
this histogram.
format: date-time
nullable: true
type: string
totalWeight:
description: Sum of samples to be used as denominator for weights
from BucketWeights.
type: number
type: object
totalSamplesCount:
description: Total number of samples in the histograms.
type: integer
version:
description: Version of the format of the stored data.
type: string
type: object
type: object
served: true
storage: true
- name: v1beta2
schema:
openAPIV3Schema:
description: |-
VerticalPodAutoscalerCheckpoint is the checkpoint of the internal state of VPA that
is used for recovery after recommender's restart.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: |-
Specification of the checkpoint.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
properties:
containerName:
description: Name of the checkpointed container.
type: string
vpaObjectName:
description: Name of the VPA object that stored VerticalPodAutoscalerCheckpoint
object.
type: string
type: object
status:
description: Data of the checkpoint.
properties:
cpuHistogram:
description: Checkpoint of histogram for consumption of CPU.
properties:
bucketWeights:
description: Map from bucket index to bucket weight.
type: object
x-kubernetes-preserve-unknown-fields: true
referenceTimestamp:
description: Reference timestamp for samples collected within
this histogram.
format: date-time
nullable: true
type: string
totalWeight:
description: Sum of samples to be used as denominator for weights
from BucketWeights.
type: number
type: object
firstSampleStart:
description: Timestamp of the first sample from the histograms.
format: date-time
nullable: true
type: string
lastSampleStart:
description: Timestamp of the last sample from the histograms.
format: date-time
nullable: true
type: string
lastUpdateTime:
description: The time when the status was last refreshed.
format: date-time
nullable: true
type: string
memoryHistogram:
description: Checkpoint of histogram for consumption of memory.
properties:
bucketWeights:
description: Map from bucket index to bucket weight.
type: object
x-kubernetes-preserve-unknown-fields: true
referenceTimestamp:
description: Reference timestamp for samples collected within
this histogram.
format: date-time
nullable: true
type: string
totalWeight:
description: Sum of samples to be used as denominator for weights
from BucketWeights.
type: number
type: object
totalSamplesCount:
description: Total number of samples in the histograms.
type: integer
version:
description: Version of the format of the stored data.
type: string
type: object
type: object
served: false
storage: false
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: https://github.com/kubernetes/kubernetes/pull/63797
controller-gen.kubebuilder.io/version: v0.16.5
name: verticalpodautoscalers.autoscaling.k8s.io
spec:
group: autoscaling.k8s.io
names:
kind: VerticalPodAutoscaler
listKind: VerticalPodAutoscalerList
plural: verticalpodautoscalers
shortNames:
- vpa
singular: verticalpodautoscaler
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.updatePolicy.updateMode
name: Mode
type: string
- jsonPath: .status.recommendation.containerRecommendations[0].target.cpu
name: CPU
type: string
- jsonPath: .status.recommendation.containerRecommendations[0].target.memory
name: Mem
type: string
- jsonPath: .status.conditions[?(@.type=='RecommendationProvided')].status
name: Provided
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
description: |-
VerticalPodAutoscaler is the configuration for a vertical pod
autoscaler, which automatically manages pod resources based on historical and
real time resource utilization.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: |-
Specification of the behavior of the autoscaler.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
properties:
recommenders:
description: |-
Recommender responsible for generating recommendation for this object.
List should be empty (then the default recommender will generate the
recommendation) or contain exactly one recommender.
items:
description: |-
VerticalPodAutoscalerRecommenderSelector points to a specific Vertical Pod Autoscaler recommender.
In the future it might pass parameters to the recommender.
properties:
name:
description: Name of the recommender responsible for generating
recommendation for this object.
type: string
required:
- name
type: object
type: array
resourcePolicy:
description: |-
Controls how the autoscaler computes recommended resources.
The resource policy may be used to set constraints on the recommendations
for individual containers.
If any individual containers need to be excluded from getting the VPA recommendations, then
it must be disabled explicitly by setting mode to "Off" under containerPolicies.
If not specified, the autoscaler computes recommended resources for all containers in the pod,
without additional constraints.
properties:
containerPolicies:
description: Per-container resource policies.
items:
description: |-
ContainerResourcePolicy controls how autoscaler computes the recommended
resources for a specific container.
properties:
containerName:
description: |-
Name of the container or DefaultContainerResourcePolicy, in which
case the policy is used by the containers that don't have their own
policy specified.
type: string
controlledResources:
description: |-
Specifies the type of recommendations that will be computed
(and possibly applied) by VPA.
If not specified, the default of [ResourceCPU, ResourceMemory] will be used.
items:
description: ResourceName is the name identifying various
resources in a ResourceList.
type: string
type: array
controlledValues:
description: |-
Specifies which resource values should be controlled.
The default is "RequestsAndLimits".
enum:
- RequestsAndLimits
- RequestsOnly
type: string
maxAllowed:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Specifies the maximum amount of resources that will be recommended
for the container. The default is no maximum.
type: object
minAllowed:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Specifies the minimal amount of resources that will be recommended
for the container. The default is no minimum.
type: object
mode:
description: Whether autoscaler is enabled for the container.
The default is "Auto".
enum:
- Auto
- "Off"
type: string
type: object
type: array
type: object
targetRef:
description: |-
TargetRef points to the controller managing the set of pods for the
autoscaler to control - e.g. Deployment, StatefulSet. VerticalPodAutoscaler
can be targeted at controller implementing scale subresource (the pod set is
retrieved from the controller's ScaleStatus) or some well known controllers
(e.g. for DaemonSet the pod set is read from the controller's spec).
If VerticalPodAutoscaler cannot use specified target it will report
ConfigUnsupported condition.
Note that VerticalPodAutoscaler does not require full implementation
of scale subresource - it will not use it to modify the replica count.
The only thing retrieved is a label selector matching pods grouped by
the target resource.
properties:
apiVersion:
description: apiVersion is the API version of the referent
type: string
kind:
description: 'kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
required:
- kind
- name
type: object
x-kubernetes-map-type: atomic
updatePolicy:
description: |-
Describes the rules on how changes are applied to the pods.
If not specified, all fields in the `PodUpdatePolicy` are set to their
default values.
properties:
evictionRequirements:
description: |-
EvictionRequirements is a list of EvictionRequirements that need to
evaluate to true in order for a Pod to be evicted. If more than one
EvictionRequirement is specified, all of them need to be fulfilled to allow eviction.
items:
description: |-
EvictionRequirement defines a single condition which needs to be true in
order to evict a Pod
properties:
changeRequirement:
description: EvictionChangeRequirement refers to the relationship
between the new target recommendation for a Pod and its
current requests, what kind of change is necessary for
the Pod to be evicted
enum:
- TargetHigherThanRequests
- TargetLowerThanRequests
type: string
resources:
description: |-
Resources is a list of one or more resources that the condition applies
to. If more than one resource is given, the EvictionRequirement is fulfilled
if at least one resource meets `changeRequirement`.
items:
description: ResourceName is the name identifying various
resources in a ResourceList.
type: string
type: array
required:
- changeRequirement
- resources
type: object
type: array
minReplicas:
description: |-
Minimal number of replicas which need to be alive for Updater to attempt
pod eviction (pending other checks like PDB). Only positive values are
allowed. Overrides global '--min-replicas' flag.
format: int32
type: integer
updateMode:
description: |-
Controls when autoscaler applies changes to the pod resources.
The default is 'Auto'.
enum:
- "Off"
- Initial
- Recreate
- Auto
type: string
type: object
required:
- targetRef
type: object
status:
description: Current information about the autoscaler.
properties:
conditions:
description: |-
Conditions is the set of conditions required for this autoscaler to scale its target,
and indicates whether or not those conditions are met.
items:
description: |-
VerticalPodAutoscalerCondition describes the state of
a VerticalPodAutoscaler at a certain point.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from
one status to another
format: date-time
type: string
message:
description: |-
message is a human-readable explanation containing details about
the transition
type: string
reason:
description: reason is the reason for the condition's last transition.
type: string
status:
description: status is the status of the condition (True, False,
Unknown)
type: string
type:
description: type describes the current condition
type: string
required:
- status
- type
type: object
type: array
recommendation:
description: |-
The most recently computed amount of resources recommended by the
autoscaler for the controlled pods.
properties:
containerRecommendations:
description: Resources recommended by the autoscaler for each
container.
items:
description: |-
RecommendedContainerResources is the recommendation of resources computed by
autoscaler for a specific container. Respects the container resource policy
if present in the spec. In particular the recommendation is not produced for
containers with `ContainerScalingMode` set to 'Off'.
properties:
containerName:
description: Name of the container.
type: string
lowerBound:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Minimum recommended amount of resources. Observes ContainerResourcePolicy.
This amount is not guaranteed to be sufficient for the application to operate in a stable way, however
running with less resources is likely to have significant impact on performance/availability.
type: object
target:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Recommended amount of resources. Observes ContainerResourcePolicy.
type: object
uncappedTarget:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
The most recent recommended resources target computed by the autoscaler
for the controlled pods, based only on actual resource usage, not taking
into account the ContainerResourcePolicy.
May differ from the Recommendation if the actual resource usage causes
the target to violate the ContainerResourcePolicy (lower than MinAllowed
or higher than MaxAllowed).
Used only as status indication, will not affect actual resource assignment.
type: object
upperBound:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Maximum recommended amount of resources. Observes ContainerResourcePolicy.
Any resources allocated beyond this value are likely wasted. This value may be larger than the maximum
amount the application is actually capable of consuming.
type: object
required:
- target
type: object
type: array
type: object
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
- deprecated: true
deprecationWarning: autoscaling.k8s.io/v1beta2 API is deprecated
name: v1beta2
schema:
openAPIV3Schema:
description: |-
VerticalPodAutoscaler is the configuration for a vertical pod
autoscaler, which automatically manages pod resources based on historical and
real time resource utilization.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: |-
Specification of the behavior of the autoscaler.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
properties:
resourcePolicy:
description: |-
Controls how the autoscaler computes recommended resources.
The resource policy may be used to set constraints on the recommendations
for individual containers. If not specified, the autoscaler computes recommended
resources for all containers in the pod, without additional constraints.
properties:
containerPolicies:
description: Per-container resource policies.
items:
description: |-
ContainerResourcePolicy controls how autoscaler computes the recommended
resources for a specific container.
properties:
containerName:
description: |-
Name of the container or DefaultContainerResourcePolicy, in which
case the policy is used by the containers that don't have their own
policy specified.
type: string
maxAllowed:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Specifies the maximum amount of resources that will be recommended
for the container. The default is no maximum.
type: object
minAllowed:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Specifies the minimal amount of resources that will be recommended
for the container. The default is no minimum.
type: object
mode:
description: Whether autoscaler is enabled for the container.
The default is "Auto".
enum:
- Auto
- "Off"
type: string
type: object
type: array
type: object
targetRef:
description: |-
TargetRef points to the controller managing the set of pods for the
autoscaler to control - e.g. Deployment, StatefulSet. VerticalPodAutoscaler
can be targeted at controller implementing scale subresource (the pod set is
retrieved from the controller's ScaleStatus) or some well known controllers
(e.g. for DaemonSet the pod set is read from the controller's spec).
If VerticalPodAutoscaler cannot use specified target it will report
ConfigUnsupported condition.
Note that VerticalPodAutoscaler does not require full implementation
of scale subresource - it will not use it to modify the replica count.
The only thing retrieved is a label selector matching pods grouped by
the target resource.
properties:
apiVersion:
description: apiVersion is the API version of the referent
type: string
kind:
description: 'kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
required:
- kind
- name
type: object
x-kubernetes-map-type: atomic
updatePolicy:
description: |-
Describes the rules on how changes are applied to the pods.
If not specified, all fields in the `PodUpdatePolicy` are set to their
default values.
properties:
updateMode:
description: |-
Controls when autoscaler applies changes to the pod resources.
The default is 'Auto'.
enum:
- "Off"
- Initial
- Recreate
- Auto
type: string
type: object
required:
- targetRef
type: object
status:
description: Current information about the autoscaler.
properties:
conditions:
description: |-
Conditions is the set of conditions required for this autoscaler to scale its target,
and indicates whether or not those conditions are met.
items:
description: |-
VerticalPodAutoscalerCondition describes the state of
a VerticalPodAutoscaler at a certain point.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from
one status to another
format: date-time
type: string
message:
description: |-
message is a human-readable explanation containing details about
the transition
type: string
reason:
description: reason is the reason for the condition's last transition.
type: string
status:
description: status is the status of the condition (True, False,
Unknown)
type: string
type:
description: type describes the current condition
type: string
required:
- status
- type
type: object
type: array
recommendation:
description: |-
The most recently computed amount of resources recommended by the
autoscaler for the controlled pods.
properties:
containerRecommendations:
description: Resources recommended by the autoscaler for each
container.
items:
description: |-
RecommendedContainerResources is the recommendation of resources computed by
autoscaler for a specific container. Respects the container resource policy
if present in the spec. In particular the recommendation is not produced for
containers with `ContainerScalingMode` set to 'Off'.
properties:
containerName:
description: Name of the container.
type: string
lowerBound:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Minimum recommended amount of resources. Observes ContainerResourcePolicy.
This amount is not guaranteed to be sufficient for the application to operate in a stable way, however
running with less resources is likely to have significant impact on performance/availability.
type: object
target:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Recommended amount of resources. Observes ContainerResourcePolicy.
type: object
uncappedTarget:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
The most recent recommended resources target computed by the autoscaler
for the controlled pods, based only on actual resource usage, not taking
into account the ContainerResourcePolicy.
May differ from the Recommendation if the actual resource usage causes
the target to violate the ContainerResourcePolicy (lower than MinAllowed
or higher than MaxAllowed).
Used only as status indication, will not affect actual resource assignment.
type: object
upperBound:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
Maximum recommended amount of resources. Observes ContainerResourcePolicy.
Any resources allocated beyond this value are likely wasted. This value may be larger than the maximum
amount the application is actually capable of consuming.
type: object
required:
- target
type: object
type: array
type: object
type: object
required:
- spec
type: object
served: false
storage: false
subresources:
status: {}
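For reference, a minimal manifest exercising the v1 schema above; the Deployment name and the resource bounds are illustrative placeholders, not values from this repo:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa          # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical workload
  updatePolicy:
    updateMode: "Auto"
    minReplicas: 2
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        minAllowed:
          cpu: 50m
          memory: 64Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
        controlledResources: ["cpu", "memory"]
```

Note that `targetRef` is the only required spec field; omitting `resourcePolicy` lets the recommender compute unconstrained recommendations for all containers.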


@ -1,22 +0,0 @@
#!/bin/bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Unregisters the admission controller webhook.
set -e
echo "Unregistering VPA admission controller webhook"
kubectl delete -n freeleaps-infra-system mutatingwebhookconfiguration.v1.admissionregistration.k8s.io vpa-webhook-config


@ -1,70 +0,0 @@
#!/bin/bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Generates a CA cert, a server key, and a server cert signed by the CA.
# reference:
# https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/admission/webhook/gencerts.sh
set -o errexit
set -o nounset
set -o pipefail
CN_BASE="vpa_webhook"
TMP_DIR="/tmp/vpa-certs"
echo "Generating certs for the VPA Admission Controller in ${TMP_DIR}."
mkdir -p ${TMP_DIR}
cat > ${TMP_DIR}/server.conf << EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = DNS:vpa-webhook.freeleaps-infra-system.svc
EOF
# Create a certificate authority
openssl genrsa -out ${TMP_DIR}/caKey.pem 2048
set +o errexit
openssl req -x509 -new -nodes -key ${TMP_DIR}/caKey.pem -days 100000 -out ${TMP_DIR}/caCert.pem -subj "/CN=${CN_BASE}_ca" -addext "subjectAltName = DNS:${CN_BASE}_ca"
if [[ $? -ne 0 ]]; then
echo "ERROR: Failed to create CA certificate for self-signing. If the error is \"unknown option -addext\", update your openssl version or deploy VPA from the vpa-release-0.8 branch."
exit 1
fi
set -o errexit
# Create a server certificate
openssl genrsa -out ${TMP_DIR}/serverKey.pem 2048
# Note the CN is the DNS name of the service of the webhook.
openssl req -new -key ${TMP_DIR}/serverKey.pem -out ${TMP_DIR}/server.csr -subj "/CN=vpa-webhook.freeleaps-infra-system.svc" -config ${TMP_DIR}/server.conf
openssl x509 -req -in ${TMP_DIR}/server.csr -CA ${TMP_DIR}/caCert.pem -CAkey ${TMP_DIR}/caKey.pem -CAcreateserial -out ${TMP_DIR}/serverCert.pem -days 100000 -extensions SAN -extensions v3_req -extfile ${TMP_DIR}/server.conf
echo "Uploading certs to the cluster."
kubectl create secret --namespace=freeleaps-infra-system generic vpa-tls-certs --from-file=${TMP_DIR}/caKey.pem --from-file=${TMP_DIR}/caCert.pem --from-file=${TMP_DIR}/serverKey.pem --from-file=${TMP_DIR}/serverCert.pem
if [ "${1:-unset}" = "e2e" ]; then
openssl genrsa -out ${TMP_DIR}/e2eKey.pem 2048
openssl req -new -key ${TMP_DIR}/e2eKey.pem -out ${TMP_DIR}/e2e.csr -subj "/CN=vpa-webhook.freeleaps-infra-system.svc" -config ${TMP_DIR}/server.conf
openssl x509 -req -in ${TMP_DIR}/e2e.csr -CA ${TMP_DIR}/caCert.pem -CAkey ${TMP_DIR}/caKey.pem -CAcreateserial -out ${TMP_DIR}/e2eCert.pem -days 100000 -extensions SAN -extensions v3_req -extfile ${TMP_DIR}/server.conf
echo "Uploading rotation e2e test certs to the cluster."
kubectl create secret --namespace=freeleaps-infra-system generic vpa-e2e-certs --from-file=${TMP_DIR}/e2eKey.pem --from-file=${TMP_DIR}/e2eCert.pem
fi
# Clean up after we're done.
echo "Deleting ${TMP_DIR}."
rm -rf ${TMP_DIR}
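As a quick offline sanity check of the same openssl flow, the snippet below builds a throwaway CA and a CA-signed server cert in a temp directory and verifies the chain; the CN and lifetimes here are illustrative, not the script's values:

```shell
#!/bin/bash
# Self-contained check: issue a CA and a CA-signed server cert, then verify the chain.
set -o errexit
TMP=$(mktemp -d)
# Throwaway CA
openssl genrsa -out "${TMP}/caKey.pem" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "${TMP}/caKey.pem" -days 1 \
  -out "${TMP}/caCert.pem" -subj "/CN=test_ca"
# Server key + CSR, signed by the CA (CN is a made-up service DNS name)
openssl genrsa -out "${TMP}/serverKey.pem" 2048 2>/dev/null
openssl req -new -key "${TMP}/serverKey.pem" -out "${TMP}/server.csr" \
  -subj "/CN=vpa-webhook.test.svc"
openssl x509 -req -in "${TMP}/server.csr" -CA "${TMP}/caCert.pem" \
  -CAkey "${TMP}/caKey.pem" -CAcreateserial -out "${TMP}/serverCert.pem" -days 1 2>/dev/null
# Verify the server cert against the CA; openssl prints "<path>: OK" on success
RESULT=$(openssl verify -CAfile "${TMP}/caCert.pem" "${TMP}/serverCert.pem")
echo "${RESULT}"
rm -rf "${TMP}"
```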


@ -1,52 +0,0 @@
#!/bin/bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/..
function print_help {
echo "ERROR! Usage: vpa-process-yaml.sh <YAML files>+"
echo "Outputs the content of the given YAML files, separated by YAML document"
echo "separators, substituting REGISTRY and TAG into pod image references"
}
if [ $# -eq 0 ]; then
print_help
exit 1
fi
DEFAULT_REGISTRY="registry.k8s.io/autoscaling"
DEFAULT_TAG="1.3.0"
REGISTRY_TO_APPLY=${REGISTRY-$DEFAULT_REGISTRY}
TAG_TO_APPLY=${TAG-$DEFAULT_TAG}
if [ "${REGISTRY_TO_APPLY}" != "${DEFAULT_REGISTRY}" ]; then
(>&2 echo "WARNING! Using image repository from REGISTRY env variable (${REGISTRY_TO_APPLY}) instead of ${DEFAULT_REGISTRY}.")
fi
if [ "${TAG_TO_APPLY}" != "${DEFAULT_TAG}" ]; then
(>&2 echo "WARNING! Using tag from TAG env variable (${TAG_TO_APPLY}) instead of the default (${DEFAULT_TAG}).")
fi
for i in $*; do
sed -e "s,${DEFAULT_REGISTRY}/\([a-z-]*\):.*,${REGISTRY_TO_APPLY}/\1:${TAG_TO_APPLY}," $i
echo ""
echo "---"
done
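The image-substitution `sed` above can be sanity-checked standalone; the registry and tag below are made up for the demonstration:

```shell
#!/bin/bash
# Demonstrate the sed rewrite on a single sample image line.
DEFAULT_REGISTRY="registry.k8s.io/autoscaling"
REGISTRY_TO_APPLY="my.registry/autoscaling"   # stand-in for a custom $REGISTRY
TAG_TO_APPLY="1.3.1"                          # stand-in for a custom $TAG
LINE="image: registry.k8s.io/autoscaling/vpa-recommender:1.3.0"
# \([a-z-]*\) captures the component name; everything after ':' is replaced by the tag
REWRITTEN=$(echo "${LINE}" | \
  sed -e "s,${DEFAULT_REGISTRY}/\([a-z-]*\):.*,${REGISTRY_TO_APPLY}/\1:${TAG_TO_APPLY},")
echo "${REWRITTEN}"   # prints: image: my.registry/autoscaling/vpa-recommender:1.3.1
```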


@ -1,24 +0,0 @@
#!/bin/bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Deletes the certificate secrets created by gencerts.sh.
set -e
echo "Deleting VPA Admission Controller certs."
kubectl delete secret --namespace=freeleaps-infra-system vpa-tls-certs
kubectl delete secret --namespace=freeleaps-infra-system --ignore-not-found=true vpa-e2e-certs


@ -1,51 +0,0 @@
#!/bin/bash
# Copyright 2018 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
VERSION="1.3.0"
SCRIPT_ROOT=$(dirname ${BASH_SOURCE})/${VERSION}
ACTION=$1
COMPONENTS="vpa-v1-crd-gen vpa-rbac updater-deployment recommender-deployment admission-controller-deployment"
function script_path {
if test -f "${SCRIPT_ROOT}/${1}.yaml"; then
echo "${SCRIPT_ROOT}/${1}.yaml"
else
echo "${1}.yaml not found in ${SCRIPT_ROOT}" >&2
fi
}
if [ $# -gt 1 ]; then
COMPONENTS="$2-deployment"
fi
for i in $COMPONENTS; do
if [ $i == admission-controller-deployment ] ; then
if [[ ${ACTION} == create || ${ACTION} == apply ]] ; then
# Allow gencerts to fail silently if certs already exist
(bash ${SCRIPT_ROOT}/../hack/gencerts.sh || true)
elif [ ${ACTION} == delete ] ; then
(bash ${SCRIPT_ROOT}/../hack/rmcerts.sh || true)
(bash ${SCRIPT_ROOT}/../hack/delete-webhook.sh || true)
fi
fi
${SCRIPT_ROOT}/../hack/process-yaml.sh $(script_path $i) | kubectl ${ACTION} -f - || true
done
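Note that `script_path` echoes its error string to stdout, so a missing manifest is passed to `kubectl` as if it were a path. A runnable sketch of the lookup itself, using a throwaway directory in place of the versioned `SCRIPT_ROOT` (file and component names here are illustrative):

```shell
#!/usr/bin/env bash
# Recreate the script_path lookup against a temp SCRIPT_ROOT.
SCRIPT_ROOT=$(mktemp -d)
touch "${SCRIPT_ROOT}/vpa-rbac.yaml"

script_path() {
  if test -f "${SCRIPT_ROOT}/${1}.yaml"; then
    echo "${SCRIPT_ROOT}/${1}.yaml"
  else
    echo "${1}.yaml not found in ${SCRIPT_ROOT}"
  fi
}

script_path vpa-rbac     # resolves to the real file
script_path recommender  # falls through to the not-found message
```

The trailing `|| true` on the `kubectl` pipeline is what keeps the loop alive when that not-found string reaches `kubectl` and the apply fails.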


@@ -2183,6 +2183,9 @@ kube-state-metrics:
        - action: keep
          regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
          sourceLabels: [__name__]
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1
      ## RelabelConfigs to apply to samples before scraping
      ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#relabelconfig
@@ -2194,6 +2197,9 @@ kube-state-metrics:
          targetLabel: node
          replacement: $1
          action: replace
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1
  selfMonitor:
    enabled: false
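The `labelmap` rule added in both hunks copies every discovered `__meta_kubernetes_pod_label_*` label onto the scraped series under a `pod_label_*` name; unmatched label names pass through untouched. Prometheus applies the regex internally, but the rename can be sketched with `sed` standing in for the relabeling engine:

```shell
#!/usr/bin/env bash
# Rename a discovered label the way the labelmap rule does; names
# that do not match the regex are left unchanged.
relabel() {
  echo "$1" | sed -E 's/^__meta_kubernetes_pod_label_(.+)$/pod_label_\1/'
}

relabel "__meta_kubernetes_pod_label_app"  # -> pod_label_app
relabel "job"                              # unmatched, unchanged
```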


@@ -0,0 +1,32 @@
{{- if .Values.authentication.vpa }}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-vpa
  namespace: {{ .Release.Namespace }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        {{- if .Values.authentication.vpa.minAllowed.enabled }}
        minAllowed:
          cpu: {{ .Values.authentication.vpa.minAllowed.cpu }}
          memory: {{ .Values.authentication.vpa.minAllowed.memory }}
        {{- end }}
        {{- if .Values.authentication.vpa.maxAllowed.enabled }}
        maxAllowed:
          cpu: {{ .Values.authentication.vpa.maxAllowed.cpu }}
          memory: {{ .Values.authentication.vpa.maxAllowed.memory }}
        {{- end }}
        {{- if .Values.authentication.vpa.controlledResources }}
        controlledResources:
          {{- range .Values.authentication.vpa.controlledResources }}
          - {{ . }}
          {{- end }}
        {{- end }}
{{- end }}
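With the values added later in this commit (`maxAllowed` enabled, `minAllowed` disabled, both resources controlled), the template renders to roughly the manifest below. The release name `authentication` and the namespace are assumptions for illustration, not taken from the repo:

```yaml
# Approximate render of the VPA template, assuming release name
# "authentication" and namespace "freeleaps-devops" (both illustrative).
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: authentication-vpa
  namespace: freeleaps-devops
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: authentication
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        maxAllowed:
          cpu: 100m
          memory: 256Mi
        controlledResources:
          - cpu
          - memory
```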


@@ -89,3 +89,15 @@ authentication:
  mongodbUri: mongodb+srv://jetli:8IHKx6dZK8BfugGp@freeleaps2.hanbj.mongodb.net/
  metricsEnabled: 'false'
  probesEnabled: 'true'
  vpa:
    minAllowed:
      enabled: false
      cpu: 100m
      memory: 64Mi
    maxAllowed:
      enabled: true
      cpu: 100m
      memory: 256Mi
    controlledResources:
      - cpu
      - memory


@@ -83,4 +83,16 @@ authentication:
  # METRICS_ENABLED
  metricsEnabled: "false"
  # PROBES_ENABLED
  probesEnabled: "false"
  probesEnabled: "false"
  vpa:
    minAllowed:
      enabled: false
      cpu: 100m
      memory: 64Mi
    maxAllowed:
      enabled: true
      cpu: 100m
      memory: 256Mi
    controlledResources:
      - cpu
      - memory