
CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

HAProxy Dynamic Load Balancer Management Suite - a system for runtime management of HAProxy without restarts. It provides an MCP interface for dynamic domain/server management. The frontend supports HTTP/HTTPS/HTTP3; backends use HTTP only (SSL terminates at HAProxy).

Key Feature: Zero-reload domain management using HAProxy map files and pool backends.

Architecture

┌──────────────────────────────────────────────────┐
│ HAProxy (host network)                           │
│ TCP 80/443 (HTTP/1.1, HTTP/2) + UDP 443 (HTTP/3) │
│ Runtime API: localhost:9999                      │
│ MCP Proxy: mcp.inouter.com → localhost:8000      │
└──────────────┬───────────────────────────────────┘
               │
         ┌─────▼─────┐
         │ MCP       │
         │ Server    │
         │ :8000     │
         └───────────┘

Domain Routing Flow

Request → HAProxy Frontend
    ↓
domains.map lookup (Host header)
    ↓
pool_N backend (N = 1-100)
    ↓
Server (from servers.json, auto-restored on startup)

Services

HAProxy (Podman Quadlet)

systemctl start haproxy      # Start
systemctl stop haproxy       # Stop
systemctl restart haproxy    # Restart
systemctl status haproxy     # Status
  • Quadlet config: /etc/containers/systemd/haproxy.container
  • Network mode: host (direct access to host services)

MCP Server (systemd)

systemctl start haproxy-mcp     # Start
systemctl stop haproxy-mcp      # Stop
systemctl restart haproxy-mcp   # Restart
journalctl -u haproxy-mcp -f    # Logs
  • Transport: streamable-http
  • Internal: http://localhost:8000/mcp
  • External: https://mcp.inouter.com/mcp (via HAProxy)
  • Auto-restore: Servers restored from servers.json on startup

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| MCP_HOST | 0.0.0.0 | MCP server bind host |
| MCP_PORT | 8000 | MCP server port |
| HAPROXY_HOST | localhost | HAProxy Runtime API host |
| HAPROXY_PORT | 9999 | HAProxy Runtime API port |
| HAPROXY_STATE_FILE | /opt/haproxy/data/servers.state | State file path |
| HAPROXY_MAP_FILE | /opt/haproxy/conf/domains.map | Map file path |
| HAPROXY_SERVERS_FILE | /opt/haproxy/conf/servers.json | Servers config path |
| HAPROXY_POOL_COUNT | 100 | Number of pool backends |
| HAPROXY_MAX_SLOTS | 10 | Max servers per pool |
| HAPROXY_CONTAINER | haproxy | Podman container name |
| LOG_LEVEL | INFO | Logging level (DEBUG/INFO/WARNING/ERROR) |
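
A minimal sketch of how these variables might be read (names and defaults from the table above; the actual parsing in server.py may differ):

```python
import os

# Read configuration from the environment, falling back to the documented defaults.
CONFIG = {
    "MCP_HOST": os.environ.get("MCP_HOST", "0.0.0.0"),
    "MCP_PORT": int(os.environ.get("MCP_PORT", "8000")),
    "HAPROXY_HOST": os.environ.get("HAPROXY_HOST", "localhost"),
    "HAPROXY_PORT": int(os.environ.get("HAPROXY_PORT", "9999")),
    "HAPROXY_POOL_COUNT": int(os.environ.get("HAPROXY_POOL_COUNT", "100")),
    "HAPROXY_MAX_SLOTS": int(os.environ.get("HAPROXY_MAX_SLOTS", "10")),
    "LOG_LEVEL": os.environ.get("LOG_LEVEL", "INFO"),
}
```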

Zero-Reload Domain Management

How It Works

  1. domains.map: Maps domain → pool backend (e.g., example.com pool_1)
  2. Pool backends: 100 pre-configured backends (pool_1 to pool_100)
  3. Runtime API: Add/remove map entries and configure servers without reload
  4. servers.json: Persistent server configuration, auto-restored on MCP startup

Adding a Domain

haproxy_add_domain("example.com", "10.0.0.1")
# Creates: pool_N_1 → 10.0.0.1:80

haproxy_add_domain("api.example.com", "10.0.0.1", 8080)
# Creates: pool_N_1 → 10.0.0.1:8080

This will:

  1. Find available pool (e.g., pool_5)
  2. Add to domains.map: example.com pool_5
  3. Update HAProxy runtime map: add map ... example.com pool_5
  4. Configure server in pool via Runtime API (HTTP only)
  5. Save to servers.json for persistence

No HAProxy reload required!

Backend Configuration

  • Backends always use HTTP (port 80 or custom)
  • SSL/TLS termination happens at HAProxy frontend
  • Each pool has 10 server slots (pool_N_1 to pool_N_10)
  • IPv6 supported: Both IPv4 and IPv6 addresses accepted

Bulk Server Operations

# Add multiple servers at once
haproxy_add_servers("api.example.com", '[
  {"slot": 1, "ip": "10.0.0.1", "http_port": 80},
  {"slot": 2, "ip": "10.0.0.2", "http_port": 80},
  {"slot": 3, "ip": "2001:db8::1", "http_port": 80}
]')
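
Before applying a bulk update like the one above, the entries can be validated client-side (a stdlib-only sketch; slot range and field names follow this document's conventions):

```python
import ipaddress
import json

def validate_servers(payload: str, max_slots: int = 10) -> list[dict]:
    """Parse and validate a JSON array of {slot, ip, http_port} entries.

    Accepts both IPv4 and IPv6 addresses, as the backends do.
    """
    servers = json.loads(payload)
    for s in servers:
        if not 1 <= s["slot"] <= max_slots:
            raise ValueError(f"slot {s['slot']} outside 1-{max_slots}")
        ipaddress.ip_address(s["ip"])  # raises ValueError on a bad address
        if not 1 <= s.get("http_port", 80) <= 65535:
            raise ValueError(f"bad port {s['http_port']}")
    return servers
```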

Files

| File | Purpose |
|------|---------|
| domains.map | Domain → Pool mapping |
| servers.json | Server IP/port persistence |
| haproxy.cfg | Pool backend definitions (static) |

SSL/TLS Certificates

Current Setup

| Item | Value |
|------|-------|
| ACME Client | acme.sh (Google Trust Services CA) |
| Cert Directory | /opt/haproxy/certs/ (auto-loaded by HAProxy) |
| acme.sh Home | ~/.acme.sh/ |

How It Works

  1. HAProxy binds with crt /etc/haproxy/certs/ (directory, not file)
  2. All .pem files in directory are auto-loaded
  3. SNI selects correct certificate based on domain
  4. acme.sh deploy hook auto-creates combined PEM on issue/renewal

Adding New Certificate

# Issue certificate (Google Trust Services)
~/.acme.sh/acme.sh --issue \
  --dns dns_cf \
  -d "newdomain.com" -d "*.newdomain.com" \
  --reloadcmd "cat ~/.acme.sh/newdomain.com_ecc/fullchain.cer ~/.acme.sh/newdomain.com_ecc/newdomain.com.key > /opt/haproxy/certs/newdomain.com.pem && podman exec haproxy kill -USR2 1"

Certificate Commands

# List certificates
~/.acme.sh/acme.sh --list

# Renew all
~/.acme.sh/acme.sh --cron

# Force renew specific domain
~/.acme.sh/acme.sh --renew -d "domain.com" --force

# Check loaded certs in HAProxy
ls -la /opt/haproxy/certs/

Cloudflare Credentials

  • File: ~/.secrets/cloudflare.ini
  • Format: dns_cloudflare_api_token = TOKEN
  • Also export: export CF_Token="your_token"

HTTP/3 (QUIC)

Frontend supports all protocols:

TCP 443: HTTP/1.1, HTTP/2
UDP 443: HTTP/3 (QUIC)

Clients receive alt-svc: h3=":443"; ma=86400 header for HTTP/3 upgrade.
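
A frontend along these lines would produce that behavior (illustrative haproxy.cfg fragment, not the repository's actual config; the cert path follows this document):

```
frontend https-in
    # TCP 443: HTTP/1.1 and HTTP/2 over TLS
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    # UDP 443: HTTP/3 over QUIC
    bind quic4@:443 ssl crt /etc/haproxy/certs/ alpn h3
    # Advertise HTTP/3 to clients
    http-after-response set-header alt-svc 'h3=":443"; ma=86400'
```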

MCP Security

HAProxy proxies MCP with Bearer Token authentication (configured in frontend ACL).

Access Rules

| Source | Token Required |
|--------|----------------|
| Tailscale (100.64.0.0/10) | No |
| External (CF Worker, etc.) | Yes |

Token Location

  • File: /opt/haproxy/conf/mcp-token.env

Connection Examples

Tailscale (no auth):

claude mcp add --transport http haproxy http://100.108.39.107:8000/mcp

External (with auth):

curl -X POST https://mcp.inouter.com/mcp \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}},"id":0}'

Health Check

System Health (haproxy_health)

Returns overall system status:

{
  "status": "healthy",
  "components": {
    "mcp": {"status": "ok"},
    "haproxy": {"status": "ok", "version": "3.3.2", "uptime_sec": 3600},
    "config_files": {"status": "ok", "files": {"map_file": "ok", "servers_file": "ok"}},
    "container": {"status": "ok", "state": "running"}
  }
}

Domain Health (haproxy_domain_health)

Returns backend server status for a specific domain:

{
  "domain": "api.example.com",
  "backend": "pool_3",
  "status": "healthy",
  "servers": [
    {"name": "pool_3_1", "addr": "10.0.0.1:80", "status": "UP", "check_status": "L4OK"}
  ],
  "healthy_count": 1,
  "total_count": 1
}

Status values: healthy (all UP), degraded (partial UP), down (all DOWN), no_servers
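
The status field can be derived from the two counts (a sketch mirroring the documented values):

```python
def domain_status(healthy: int, total: int) -> str:
    """Map server counts to the documented domain health values."""
    if total == 0:
        return "no_servers"
    if healthy == total:
        return "healthy"
    if healthy > 0:
        return "degraded"
    return "down"
```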

MCP Tools (21 total)

Domain Management

| Tool | Description |
|------|-------------|
| haproxy_list_domains | List domains (use include_wildcards=true for wildcards) |
| haproxy_add_domain | Add domain to available pool (no reload), supports IPv6 |
| haproxy_remove_domain | Remove domain from pool (no reload) |

Server Management

| Tool | Description |
|------|-------------|
| haproxy_list_servers | List servers for a domain |
| haproxy_add_server | Add server to slot (1-10), auto-saved |
| haproxy_add_servers | Bulk add servers (JSON array), auto-saved |
| haproxy_remove_server | Remove server from slot, auto-saved |
| haproxy_set_server_state | Set state: ready/drain/maint |
| haproxy_set_server_weight | Set weight (0-256) |
| haproxy_set_domain_state | Set all servers of domain to ready/drain/maint |

Health Check

| Tool | Description |
|------|-------------|
| haproxy_health | System health (MCP, HAProxy, config files) |
| haproxy_domain_health | Domain-specific health with server status |
| haproxy_get_server_health | Get UP/DOWN/MAINT status for all servers |

Monitoring

| Tool | Description |
|------|-------------|
| haproxy_stats | HAProxy statistics |
| haproxy_backends | List backends |
| haproxy_list_frontends | List frontends with status |
| haproxy_get_connections | Active connections per server |

Configuration

| Tool | Description |
|------|-------------|
| haproxy_reload | Validate, reload config, auto-restore servers |
| haproxy_check_config | Validate config syntax |
| haproxy_save_state | Save server state to disk (legacy) |
| haproxy_restore_state | Restore state from disk (legacy) |

Key Conventions

Pool-Based Routing

  • Domains map to pools: example.com → pool_N
  • 100 pools available: pool_1 to pool_100
  • Each pool has 10 server slots

Server Naming

Pool backends use simple naming: pool_N_1 to pool_N_10

Examples:

example.com → pool_5
  └─ pool_5_1 (slot 1)
  └─ pool_5_2 (slot 2)
  └─ ... up to pool_5_10

api.example.com → pool_6
  └─ pool_6_1 (slot 1)

Persistence

  • domains.map: Survives HAProxy restart
  • servers.json: Auto-restored by MCP on startup
  • No manual save required - haproxy_add_server auto-saves
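
The auto-restore step amounts to reading servers.json and replaying Runtime API commands (the JSON layout below is hypothetical; the real servers.json schema may differ):

```python
import json

def restore_commands(servers_json: str) -> list[str]:
    """Generate 'set server' Runtime API commands from persisted state.

    Assumes a hypothetical layout: {"pool_5": {"1": {"ip": "10.0.0.1", "port": 80}}}.
    """
    commands = []
    for pool, slots in json.loads(servers_json).items():
        for slot, srv in slots.items():
            name = f"{pool}/{pool}_{slot}"
            commands.append(f"set server {name} addr {srv['ip']} port {srv['port']}")
            commands.append(f"set server {name} state ready")
    return commands
```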

HAProxy Runtime API

# Show domain mappings
echo "show map /usr/local/etc/haproxy/domains.map" | nc localhost 9999

# Add domain mapping (runtime only)
echo "add map /usr/local/etc/haproxy/domains.map example.com pool_5" | nc localhost 9999

# Show server state
echo "show servers state" | nc localhost 9999

# Show backends
echo "show backend" | nc localhost 9999

# Set server address
echo "set server pool_1/pool_1_1 addr 10.0.0.1 port 80" | nc localhost 9999

# Enable server
echo "set server pool_1/pool_1_1 state ready" | nc localhost 9999
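
The same commands can be sent from Python with a plain TCP socket (a minimal sketch, equivalent to piping through nc):

```python
import socket

def runtime_api(command: str, host: str = "localhost", port: int = 9999) -> str:
    """Send one command to the HAProxy Runtime API and return the response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        sock.shutdown(socket.SHUT_WR)  # signal end of request
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode()
```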

Directory Structure

/opt/haproxy/
├── mcp/                 # MCP server (streamable-http)
│   └── server.py        # Main MCP server (~1600 lines, 21 tools)
├── conf/
│   ├── haproxy.cfg      # Main HAProxy config (100 pool backends)
│   ├── domains.map      # Domain → Pool mapping
│   ├── servers.json     # Server persistence (auto-managed)
│   └── mcp-token.env    # Bearer token for MCP auth
├── certs/               # SSL/TLS certificates (HAProxy PEM format)
├── data/                # Legacy state files
├── scripts/             # Utility scripts
└── run/                 # Runtime socket files

~/.acme.sh/
├── *_ecc/               # Certificate directories (one per domain)
└── acme.sh              # ACME client script

/etc/containers/systemd/
└── haproxy.container    # Podman Quadlet config

Configuration Files

| File | Purpose |
|------|---------|
| /opt/haproxy/conf/haproxy.cfg | HAProxy main config |
| /opt/haproxy/conf/domains.map | Domain routing map |
| /opt/haproxy/conf/servers.json | Server persistence |
| /opt/haproxy/certs/*.pem | SSL certificates (auto-loaded) |
| /etc/containers/systemd/haproxy.container | Podman Quadlet |
| /etc/systemd/system/haproxy-mcp.service | MCP service |
| /opt/haproxy/conf/mcp-token.env | MCP auth token |
| ~/.acme.sh/ | acme.sh certificates and config |
| ~/.secrets/cloudflare.ini | Cloudflare API token |

Startup Sequence

1. systemd starts haproxy.service
   ↓
2. HAProxy loads haproxy.cfg (100 pool backends)
   ↓
3. HAProxy loads domains.map (domain → pool mapping)
   ↓
4. systemd starts haproxy-mcp.service
   ↓
5. MCP server reads servers.json
   ↓
6. MCP server restores servers via Runtime API
   ↓
7. Ready to serve traffic