# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
HAProxy Dynamic Load Balancer Management Suite - a system for runtime management of HAProxy without restarts. Provides MCP interface for dynamic domain/server management. Frontend supports HTTP/HTTPS/HTTP3, backends use HTTP only (SSL termination at HAProxy).
**Key Feature:** Zero-reload domain management using HAProxy map files and pool backends.
## Architecture

```
┌──────────────────────────────────────────────────┐
│ HAProxy (host network)                           │
│ TCP 80/443 (HTTP/1.1, HTTP/2) + UDP 443 (HTTP/3) │
│ Runtime API: localhost:9999                      │
│ MCP Proxy: mcp.inouter.com → localhost:8000      │
└──────────────┬───────────────────────────────────┘
               │
         ┌─────▼─────┐
         │    MCP    │
         │  Server   │
         │   :8000   │
         └───────────┘
```
### Domain Routing Flow

```
Request → HAProxy Frontend
        ↓
domains.map lookup (Host header)
        ↓
pool_N backend (N = 1-100)
        ↓
Server (from servers.json, auto-restored on startup)
```
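The Host-header lookup above is the standard HAProxy map pattern. A minimal sketch of how it is typically wired in `haproxy.cfg` (exact frontend name, bind options, and default backend here are illustrative, not copied from this repo's config):

```
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    # Route by Host header via the runtime-editable map; fall back to pool_1
    use_backend %[req.hdr(host),lower,map(/usr/local/etc/haproxy/domains.map,pool_1)]
```

Because `use_backend` resolves through the map at request time, adding a map entry via the Runtime API changes routing immediately, with no reload.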
## Services

### HAProxy (Podman Quadlet)

```shell
systemctl start haproxy     # Start
systemctl stop haproxy      # Stop
systemctl restart haproxy   # Restart
systemctl status haproxy    # Status
```

- Quadlet config: `/etc/containers/systemd/haproxy.container`
- Network mode: `host` (direct access to host services)
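For reference, a Quadlet container unit of this general shape produces a host-networked HAProxy service (image tag and volume mounts below are illustrative assumptions, not copied from this host's `haproxy.container`):

```
[Container]
ContainerName=haproxy
Image=docker.io/library/haproxy:latest
Network=host
Volume=/opt/haproxy/conf:/usr/local/etc/haproxy
Volume=/opt/haproxy/certs:/etc/haproxy/certs

[Install]
WantedBy=multi-user.target
```

Quadlet generates a `haproxy.service` unit from this file, which is why plain `systemctl` commands manage the container.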
### MCP Server (systemd)

```shell
systemctl start haproxy-mcp     # Start
systemctl stop haproxy-mcp      # Stop
systemctl restart haproxy-mcp   # Restart
journalctl -u haproxy-mcp -f    # Logs
```

- Transport: `streamable-http`
- Internal: `http://localhost:8000/mcp`
- External: `https://mcp.inouter.com/mcp` (via HAProxy)
- Auto-restore: Servers restored from `servers.json` on startup
## Zero-Reload Domain Management

### How It Works

- **domains.map**: Maps domain → pool backend (e.g., `example.com pool_1`)
- **Pool backends**: 100 pre-configured backends (`pool_1` to `pool_100`)
- **Runtime API**: Add/remove map entries and configure servers without reload
- **servers.json**: Persistent server configuration, auto-restored on MCP startup
### Adding a Domain

```python
haproxy_add_domain("example.com", "10.0.0.1")
# Creates: pool_N_1 → 10.0.0.1:80

haproxy_add_domain("api.example.com", "10.0.0.1", 8080)
# Creates: pool_N_1 → 10.0.0.1:8080
```
This will:

- Find an available pool (e.g., `pool_5`)
- Add to `domains.map`: `example.com pool_5`
- Update the HAProxy runtime map: `add map ... example.com pool_5`
- Configure the server in the pool via the Runtime API (HTTP only)
- Save to `servers.json` for persistence
No HAProxy reload required!
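Under the hood, these steps are plain Runtime API commands over the TCP socket on port 9999. A minimal sketch in Python (the pool number, map path, and slot choice are illustrative; the real selection logic lives in `server.py`):

```python
import socket

# Map path as HAProxy sees it inside the container
MAP_PATH = "/usr/local/etc/haproxy/domains.map"

def domain_commands(domain: str, ip: str, port: int = 80, pool: str = "pool_5") -> list[str]:
    """Build the Runtime API commands that route a domain to its first pool slot."""
    slot = f"{pool}/{pool}_1"
    return [
        f"add map {MAP_PATH} {domain} {pool}",       # runtime map entry
        f"set server {slot} addr {ip} port {port}",  # point slot at origin (HTTP only)
        f"set server {slot} state ready",            # enable the slot
    ]

def send_runtime(cmd: str, host: str = "localhost", port: int = 9999) -> str:
    """Send one command to the HAProxy Runtime API and return the reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall((cmd + "\n").encode())
        s.shutdown(socket.SHUT_WR)
        return s.recv(65536).decode()

# Usage: for cmd in domain_commands("example.com", "10.0.0.1"): send_runtime(cmd)
```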
### Backend Configuration

- Backends always use HTTP (port 80 or custom)
- SSL/TLS termination happens at HAProxy frontend
- Each pool has 10 server slots (`pool_N_1` to `pool_N_10`)
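Pre-declared slots are what make runtime configuration possible: each pool ships with placeholder servers that start disabled and are filled in later via `set server`. A hedged sketch of what one pool backend plausibly looks like in `haproxy.cfg` (the real file may use different options or HAProxy's `server-template` directive instead):

```
backend pool_1
    balance roundrobin
    server pool_1_1 0.0.0.0:80 check disabled
    server pool_1_2 0.0.0.0:80 check disabled
    # ... slots 3-10 follow the same pattern, up to pool_1_10
```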
### Files

| File | Purpose |
|---|---|
| `domains.map` | Domain → Pool mapping |
| `servers.json` | Server IP/port persistence |
| `haproxy.cfg` | Pool backend definitions (static) |
## SSL/TLS Certificates

### Current Setup
| Item | Value |
|---|---|
| ACME Client | acme.sh (Google Trust Services CA) |
| Cert Directory | /opt/haproxy/certs/ (auto-loaded by HAProxy) |
| acme.sh Home | ~/.acme.sh/ |
### How It Works

- HAProxy binds with `crt /etc/haproxy/certs/` (directory, not file)
- All `.pem` files in the directory are auto-loaded
- SNI selects the correct certificate based on domain
- acme.sh deploy hook auto-creates the combined PEM on issue/renewal
### Adding New Certificate

```shell
# Issue certificate (Google Trust Services)
~/.acme.sh/acme.sh --issue \
  --dns dns_cf \
  -d "newdomain.com" -d "*.newdomain.com" \
  --reloadcmd "cat ~/.acme.sh/newdomain.com_ecc/fullchain.cer ~/.acme.sh/newdomain.com_ecc/newdomain.com.key > /opt/haproxy/certs/newdomain.com.pem && podman exec haproxy kill -USR2 1"
```
### Certificate Commands

```shell
# List certificates
~/.acme.sh/acme.sh --list

# Renew all
~/.acme.sh/acme.sh --cron

# Force renew specific domain
~/.acme.sh/acme.sh --renew -d "domain.com" --force

# Check loaded certs in HAProxy
ls -la /opt/haproxy/certs/
```
### Cloudflare Credentials

- File: `~/.secrets/cloudflare.ini`
- Format: `dns_cloudflare_api_token = TOKEN`
- Also export: `export CF_Token="your_token"`
## HTTP/3 (QUIC)

Frontend supports all protocols:

```
TCP 443: HTTP/1.1, HTTP/2
UDP 443: HTTP/3 (QUIC)
```

Clients receive an `alt-svc: h3=":443"; ma=86400` header for HTTP/3 upgrade.
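That advertisement is typically a single response header set on the TCP 443 frontend; a sketch (the frontend name here is illustrative):

```
frontend https_in
    http-response set-header alt-svc 'h3=":443"; ma=86400'
```

Clients that honor `alt-svc` retry subsequent requests over QUIC on UDP 443.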
## MCP Security

HAProxy proxies MCP with Bearer Token authentication (configured in frontend ACL).

### Access Rules
| Source | Token Required |
|---|---|
| Tailscale (100.64.0.0/10) | No |
| External (CF Worker, etc.) | Yes |
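These two rules are typically expressed as a pair of ACLs in `haproxy.cfg`. A sketch under stated assumptions (ACL and backend names are illustrative; the token is assumed to be loaded into the `MCP_TOKEN` environment variable from `mcp-token.env`):

```
acl mcp_host   hdr(host) -i mcp.inouter.com
acl tailscale  src 100.64.0.0/10
acl has_token  req.hdr(Authorization) -m str "Bearer ${MCP_TOKEN}"
http-request deny deny_status 401 if mcp_host !tailscale !has_token
use_backend mcp_backend if mcp_host
```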
### Token Location

- File: `/opt/haproxy/conf/mcp-token.env`
### Connection Examples

**Tailscale (no auth):**

```shell
claude mcp add --transport http haproxy http://100.108.39.107:8000/mcp
```

**External (with auth):**

```shell
curl -X POST https://mcp.inouter.com/mcp \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}},"id":0}'
```
## MCP Tools (17 total)

### Domain Management

| Tool | Description |
|---|---|
| `haproxy_list_domains` | List all domains with pool mappings and servers |
| `haproxy_add_domain` | Add domain to available pool (no reload) |
| `haproxy_remove_domain` | Remove domain from pool (no reload) |
### Server Management

| Tool | Description |
|---|---|
| `haproxy_list_servers` | List servers for a domain |
| `haproxy_add_server` | Add server to slot (1-10), auto-saved |
| `haproxy_remove_server` | Remove server from slot, auto-saved |
| `haproxy_set_server_state` | Set state: ready/drain/maint |
| `haproxy_set_server_weight` | Set weight (0-256) |
| `haproxy_get_server_health` | Get UP/DOWN/MAINT status |
### Monitoring

| Tool | Description |
|---|---|
| `haproxy_stats` | HAProxy statistics |
| `haproxy_backends` | List backends |
| `haproxy_list_frontends` | List frontends with status |
| `haproxy_get_connections` | Active connections per server |
### Configuration

| Tool | Description |
|---|---|
| `haproxy_reload` | Validate and reload config |
| `haproxy_check_config` | Validate config syntax |
| `haproxy_save_state` | Save server state to disk (legacy) |
| `haproxy_restore_state` | Restore state from disk (legacy) |
## Key Conventions

### Pool-Based Routing

- Domains map to pools: `example.com` → `pool_N`
- 100 pools available: `pool_1` to `pool_100`
- Each pool has 10 server slots
### Server Naming

Pool backends use simple naming: `pool_N_1` to `pool_N_10`.

Examples:

```
example.com → pool_5
  └─ pool_5_1 (slot 1)
  └─ pool_5_2 (slot 2)
  └─ ... up to pool_5_10

api.example.com → pool_6
  └─ pool_6_1 (slot 1)
```
### Persistence

- **domains.map**: Survives HAProxy restart
- **servers.json**: Auto-restored by MCP on startup
- No manual save required: `haproxy_add_server` auto-saves
## HAProxy Runtime API

```shell
# Show domain mappings
echo "show map /usr/local/etc/haproxy/domains.map" | nc localhost 9999

# Add domain mapping (runtime only)
echo "add map /usr/local/etc/haproxy/domains.map example.com pool_5" | nc localhost 9999

# Show server state
echo "show servers state" | nc localhost 9999

# Show backends
echo "show backend" | nc localhost 9999

# Set server address
echo "set server pool_1/pool_1_1 addr 10.0.0.1 port 80" | nc localhost 9999

# Enable server
echo "set server pool_1/pool_1_1 state ready" | nc localhost 9999
```
## Directory Structure

```
/opt/haproxy/
├── mcp/                 # MCP server (streamable-http)
│   └── server.py        # Main MCP server (~1100 lines, 17 tools)
├── conf/
│   ├── haproxy.cfg      # Main HAProxy config (100 pool backends)
│   ├── domains.map      # Domain → Pool mapping
│   ├── servers.json     # Server persistence (auto-managed)
│   └── mcp-token.env    # Bearer token for MCP auth
├── certs/               # SSL/TLS certificates (HAProxy PEM format)
├── data/                # Legacy state files
├── scripts/             # Utility scripts
└── run/                 # Runtime socket files

~/.acme.sh/
├── *_ecc/               # Certificate directories (one per domain)
└── acme.sh              # ACME client script

/etc/containers/systemd/
└── haproxy.container    # Podman Quadlet config
```
## Configuration Files

| File | Purpose |
|---|---|
| `/opt/haproxy/conf/haproxy.cfg` | HAProxy main config |
| `/opt/haproxy/conf/domains.map` | Domain routing map |
| `/opt/haproxy/conf/servers.json` | Server persistence |
| `/opt/haproxy/certs/*.pem` | SSL certificates (auto-loaded) |
| `/etc/containers/systemd/haproxy.container` | Podman Quadlet |
| `/etc/systemd/system/haproxy-mcp.service` | MCP service |
| `/opt/haproxy/conf/mcp-token.env` | MCP auth token |
| `~/.acme.sh/` | acme.sh certificates and config |
| `~/.secrets/cloudflare.ini` | Cloudflare API token |
## Startup Sequence

```
1. systemd starts haproxy.service
        ↓
2. HAProxy loads haproxy.cfg (100 pool backends)
        ↓
3. HAProxy loads domains.map (domain → pool mapping)
        ↓
4. systemd starts haproxy-mcp.service
        ↓
5. MCP server reads servers.json
        ↓
6. MCP server restores servers via Runtime API
        ↓
7. Ready to serve traffic
```
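Step 6 is mechanical: each persisted server entry becomes a pair of `set server` commands. A sketch, assuming a hypothetical `servers.json` shape (the real schema is defined by `server.py`):

```python
def restore_commands(state: dict) -> list[str]:
    """Translate persisted server entries into Runtime API commands.

    The `state` shape here is a hypothetical illustration:
    {domain: {"pool": str, "servers": [{"slot": int, "ip": str, "port": int}]}}
    """
    cmds = []
    for cfg in state.values():
        pool = cfg["pool"]
        for srv in cfg["servers"]:
            slot = f"{pool}/{pool}_{srv['slot']}"
            cmds.append(f"set server {slot} addr {srv['ip']} port {srv['port']}")
            cmds.append(f"set server {slot} state ready")
    return cmds

# Example: one domain with a single server in pool_5, slot 1
example = {"example.com": {"pool": "pool_5",
                           "servers": [{"slot": 1, "ip": "10.0.0.1", "port": 80}]}}
```

Because the commands go through the Runtime API, the restore is as reload-free as the original `haproxy_add_domain` call.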