
CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

HAProxy Dynamic Load Balancer Management Suite - a system for runtime management of HAProxy without restarts. It provides an MCP interface for dynamic domain/server management with HTTP, HTTPS, and HTTP/3 (QUIC) support.

Key Feature: Zero-reload domain management using HAProxy map files and pool backends.

Architecture

┌──────────────────────────────────────────────────┐
│ HAProxy (host network)                           │
│ TCP 80/443 (HTTP/1.1, HTTP/2) + UDP 443 (HTTP/3) │
│ Runtime API: localhost:9999                      │
│ MCP Proxy: mcp.inouter.com → localhost:8000      │
└──────────────┬───────────────────────────────────┘
               │
         ┌─────▼─────┐
         │ MCP       │
         │ Server    │
         │ :8000     │
         └───────────┘

Domain Routing Flow

Request → HAProxy Frontend
    ↓
domains.map lookup (Host header)
    ↓
pool_N backend (N = 1-100)
    ↓
Server (from servers.json, auto-restored on startup)
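
The map lookup in the flow above could be expressed in haproxy.cfg roughly like this (a sketch; the real frontend isn't shown here, and the frontend and fallback backend names are assumptions):

```
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    # Route by Host header via the runtime-editable map; fall back if unmapped
    use_backend %[req.hdr(host),lower,map(/usr/local/etc/haproxy/domains.map,be_default)]
```

Because use_backend reads the map on every request, entries added via the Runtime API take effect immediately with no reload.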

Services

HAProxy (Podman Quadlet)

systemctl start haproxy      # Start
systemctl stop haproxy       # Stop
systemctl restart haproxy    # Restart
systemctl status haproxy     # Status
  • Quadlet config: /etc/containers/systemd/haproxy.container
  • Network mode: host (direct access to host services)

MCP Server (systemd)

systemctl start haproxy-mcp     # Start
systemctl stop haproxy-mcp      # Stop
systemctl restart haproxy-mcp   # Restart
journalctl -u haproxy-mcp -f    # Logs
  • Transport: streamable-http
  • Internal: http://localhost:8000/mcp
  • External: https://mcp.inouter.com/mcp (via HAProxy)
  • Auto-restore: Servers restored from servers.json on startup

Zero-Reload Domain Management

How It Works

  1. domains.map: Maps domain → pool backend (e.g., example.com pool_1)
  2. Pool backends: 100 pre-configured backends (pool_1 to pool_100)
  3. Runtime API: Add/remove map entries and configure servers without reload
  4. servers.json: Persistent server configuration, auto-restored on MCP startup
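
A pool backend can be pre-declared with disabled placeholder slots that the Runtime API later fills in. A minimal sketch (the real haproxy.cfg isn't shown; the balance setting and slot defaults are assumptions):

```
backend pool_1
    balance roundrobin
    # 10 placeholder slots (pool_1_1 .. pool_1_10), re-addressed at runtime via "set server"
    server-template pool_1_ 1-10 0.0.0.0:80 check disabled
```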

Adding a Domain

Default (HTTP/HTTPS/H3):

haproxy_add_domain("example.com", "10.0.0.1")
# Creates: pool_N_1 (80), pool_N_ssl_1 (443), pool_N_h3_1 (443)

Custom port (HTTP only):

haproxy_add_domain("api.example.com", "10.0.0.1", 8080)
# Creates: pool_N_1 (8080) only

This will:

  1. Find available pool (e.g., pool_5)
  2. Add to domains.map: example.com pool_5
  3. Update HAProxy runtime map: add map ... example.com pool_5
  4. Configure server(s) in pool via Runtime API
  5. Save to servers.json for persistence

No HAProxy reload required!

Port Logic

| Setting | Servers Created |
|---|---|
| Default (80/443) | HTTP + HTTPS + HTTP/3 (3 servers) |
| Custom port | HTTP only (1 server) |

Files

| File | Purpose |
|---|---|
| domains.map | Domain → Pool mapping |
| servers.json | Server IP/port persistence |
| haproxy.cfg | Pool backend definitions (static) |

SSL/TLS Certificates

Current Setup

| Item | Value |
|---|---|
| ACME Client | certbot + dns-cloudflare |
| Cert Directory | /opt/haproxy/certs/ (auto-loaded by HAProxy) |
| Deploy Hook | /etc/letsencrypt/renewal-hooks/deploy/haproxy-deploy.sh |

How It Works

  1. HAProxy binds with crt /etc/haproxy/certs/ (a directory, not a single file; this appears to be the in-container path for /opt/haproxy/certs/ on the host)
  2. All .pem files in directory are auto-loaded
  3. SNI selects correct certificate based on domain
  4. Certbot deploy hook auto-creates combined PEM on issue/renewal

Adding New Certificate

# Issue certificate (Let's Encrypt)
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d "newdomain.com" -d "*.newdomain.com"

Deploy hook automatically:

  1. Combines fullchain.pem + privkey.pem → /opt/haproxy/certs/newdomain.com.pem
  2. Restarts HAProxy
  3. No manual cfg modification needed!
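
The hook script itself isn't reproduced here; a minimal sketch of its combine step, assuming the standard certbot lineage layout (fullchain.pem + privkey.pem per lineage directory):

```python
from pathlib import Path

def combine_pem(lineage: Path, certs_dir: Path) -> Path:
    """Concatenate fullchain + private key into the single PEM HAProxy expects (sketch)."""
    out = certs_dir / f"{lineage.name}.pem"
    out.write_text(
        (lineage / "fullchain.pem").read_text()
        + (lineage / "privkey.pem").read_text()
    )
    return out
```

The real hook also restarts HAProxy afterwards so the new PEM is picked up.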

Google Trust Services (optional)

certbot certonly \
  --server https://dv.acme-v02.api.pki.goog/directory \
  --eab-kid "KEY_ID" \
  --eab-hmac-key "HMAC_KEY" \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d "domain.com" -d "*.domain.com"

Certificate Commands

# List certificates
certbot certificates

# Renew all (dry run)
certbot renew --dry-run

# Renew all (force)
certbot renew --force-renewal

# Check loaded certs in HAProxy
ls -la /opt/haproxy/certs/

Cloudflare Credentials

  • File: ~/.secrets/cloudflare.ini
  • Format: dns_cloudflare_api_token = TOKEN

HTTP/3 (QUIC)

Frontend supports all protocols:

TCP 443: HTTP/1.1, HTTP/2
UDP 443: HTTP/3 (QUIC)

Clients receive alt-svc: h3=":443"; ma=86400 header for HTTP/3 upgrade.
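
The dual binds and the advertisement header might be declared like this (a sketch; the bind options here are assumptions, not copied from the real config):

```
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1   # TCP: HTTP/1.1 + HTTP/2
    bind quic4@:443 ssl crt /etc/haproxy/certs/ alpn h3      # UDP: HTTP/3 (QUIC)
    http-response set-header alt-svc 'h3=":443"; ma=86400'   # advertise the upgrade
```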

MCP Security

HAProxy proxies MCP with Bearer Token authentication (configured in frontend ACL).

Access Rules

| Source | Token Required |
|---|---|
| Tailscale (100.64.0.0/10) | No |
| External (CF Worker, etc.) | Yes |

Token Location

  • File: /opt/haproxy/conf/mcp-token.env

Connection Examples

Tailscale (no auth):

claude mcp add --transport http haproxy http://100.108.39.107:8000/mcp

External (with auth):

curl -X POST https://mcp.inouter.com/mcp \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}},"id":0}'

MCP Tools (17 total)

Domain Management

| Tool | Description |
|---|---|
| haproxy_list_domains | List all domains with pool mappings and servers |
| haproxy_add_domain | Add domain to available pool (no reload) |
| haproxy_remove_domain | Remove domain from pool (no reload) |

Server Management

| Tool | Description |
|---|---|
| haproxy_list_servers | List servers for a domain |
| haproxy_add_server | Add server to slot (1-10), auto-saved |
| haproxy_remove_server | Remove server from slot, auto-saved |
| haproxy_set_server_state | Set state: ready/drain/maint |
| haproxy_set_server_weight | Set weight (0-256) |
| haproxy_get_server_health | Get UP/DOWN/MAINT status |

Monitoring

| Tool | Description |
|---|---|
| haproxy_stats | HAProxy statistics |
| haproxy_backends | List backends |
| haproxy_list_frontends | List frontends with status |
| haproxy_get_connections | Active connections per server |

Configuration

| Tool | Description |
|---|---|
| haproxy_reload | Validate and reload config |
| haproxy_check_config | Validate config syntax |
| haproxy_save_state | Save server state to disk (legacy) |
| haproxy_restore_state | Restore state from disk (legacy) |

Key Conventions

Pool-Based Routing

  • Domains map to pools: example.com → pool_N
  • 100 pools available: pool_1 to pool_100
  • Each pool has 10 server slots

Server Naming

Pool backends use this naming:

  • pool_N_1 to pool_N_10 - HTTP (always created)
  • pool_N_ssl_1 to pool_N_ssl_10 - HTTPS (default ports only)
  • pool_N_h3_1 to pool_N_h3_10 - HTTP/3 QUIC (default ports only)

Examples:

example.com (80/443 default):
  └─ pool_5_1, pool_5_ssl_1, pool_5_h3_1

api.example.com (custom port 8080):
  └─ pool_6_1 only

Persistence

  • domains.map: Survives HAProxy restart
  • servers.json: Auto-restored by MCP on startup
  • No manual save required - haproxy_add_server auto-saves
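
For illustration, servers.json could hold entries like the following (the actual schema isn't shown in this excerpt; this shape is an assumption):

```json
{
  "example.com": {
    "pool": "pool_5",
    "servers": [
      { "slot": 1, "ip": "10.0.0.1", "port": 80 }
    ]
  }
}
```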

HAProxy Runtime API

# Show domain mappings
echo "show map /usr/local/etc/haproxy/domains.map" | nc localhost 9999

# Add domain mapping (runtime only)
echo "add map /usr/local/etc/haproxy/domains.map example.com pool_5" | nc localhost 9999

# Show server state
echo "show servers state" | nc localhost 9999

# Show backends
echo "show backend" | nc localhost 9999

# Set server address
echo "set server pool_1/pool_1_1 addr 10.0.0.1 port 80" | nc localhost 9999

# Enable server
echo "set server pool_1/pool_1_1 state ready" | nc localhost 9999

Directory Structure

/opt/haproxy/
├── mcp/                 # MCP server (streamable-http)
│   └── server.py        # Main MCP server (~1100 lines, 17 tools)
├── conf/
│   ├── haproxy.cfg      # Main HAProxy config (100 pool backends)
│   ├── domains.map      # Domain → Pool mapping
│   ├── servers.json     # Server persistence (auto-managed)
│   └── mcp-token.env    # Bearer token for MCP auth
├── certs/               # SSL/TLS certificates (HAProxy PEM format)
├── data/                # Legacy state files
├── scripts/             # Utility scripts
└── run/                 # Runtime socket files

/etc/letsencrypt/
├── live/                # Certbot certificates
└── renewal-hooks/deploy/
    └── haproxy-deploy.sh  # Auto-combine PEM & restart HAProxy

/etc/containers/systemd/
└── haproxy.container    # Podman Quadlet config

Configuration Files

| File | Purpose |
|---|---|
| /opt/haproxy/conf/haproxy.cfg | HAProxy main config |
| /opt/haproxy/conf/domains.map | Domain routing map |
| /opt/haproxy/conf/servers.json | Server persistence |
| /opt/haproxy/certs/*.pem | SSL certificates (auto-loaded) |
| /etc/containers/systemd/haproxy.container | Podman Quadlet |
| /etc/systemd/system/haproxy-mcp.service | MCP service |
| /opt/haproxy/conf/mcp-token.env | MCP auth token |
| /etc/letsencrypt/renewal-hooks/deploy/haproxy-deploy.sh | Cert deploy hook |
| ~/.secrets/cloudflare.ini | Cloudflare API token |

Startup Sequence

1. systemd starts haproxy.service
   ↓
2. HAProxy loads haproxy.cfg (100 pool backends)
   ↓
3. HAProxy loads domains.map (domain → pool mapping)
   ↓
4. systemd starts haproxy-mcp.service
   ↓
5. MCP server reads servers.json
   ↓
6. MCP server restores servers via Runtime API
   ↓
7. Ready to serve traffic
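
Steps 5–6 can be sketched as: parse servers.json, then replay each persisted server as Runtime API commands (the JSON shape here is an assumption for illustration, as is the command format):

```python
import json

def restore_commands(servers_json_text: str) -> list[str]:
    """Turn persisted server state back into Runtime API commands on MCP startup (sketch)."""
    cmds: list[str] = []
    for domain, entry in json.loads(servers_json_text).items():
        pool = entry["pool"]
        for s in entry["servers"]:
            name = f"{pool}_{s['slot']}"
            cmds.append(f"set server {pool}/{name} addr {s['ip']} port {s['port']}")
            cmds.append(f"set server {pool}/{name} state ready")
    return cmds
```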