Docker Compose

NimbleBrain ships two container images: platform (agent engine + API on port 27247, internal) and web (React UI served by Caddy, exposed on port 27246). Docker Compose runs both on a single machine with a shared internal network.

  • Docker Engine 24+ and Docker Compose v2
  • An Anthropic API key
  • At least 2 GB of available memory
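
You can confirm the Docker prerequisites before starting. The exact version strings will differ on your machine; what matters is Engine 24+ and Compose v2:

```shell
docker --version
docker compose version
```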

Create a project directory and add these two files:

nimblebrain.json
{
  "$schema": "https://schemas.nimblebrain.ai/nimblebrain.json",
  "bundles": [],
  "skillDirs": ["/app/skills"]
}
docker-compose.yml
services:
  platform:
    image: nimblebrain/platform:latest
    restart: unless-stopped
    volumes:
      - workspace:/data
      - ./nimblebrain.json:/app/nimblebrain.json:ro
      - ./skills:/app/skills:ro
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY:-}
      - NB_API_KEY=${NB_API_KEY}
    networks:
      - internal

  web:
    image: nimblebrain/web:latest
    restart: unless-stopped
    ports:
      - "27246:8080"
    environment:
      - PLATFORM_URL=http://platform:27247
    depends_on:
      platform:
        condition: service_healthy
    networks:
      - internal

volumes:
  workspace:

networks:
  internal:
| Service | Image | Port | Purpose |
| --- | --- | --- | --- |
| platform | nimblebrain/platform | 27247 (internal) | Agent engine, HTTP API, MCP bundle manager |
| web | nimblebrain/web | 27246 (host) → 8080 (container) | React SPA served by Caddy, proxies /v1/* to platform |

The web container runs Caddy, which serves the static UI files and reverse-proxies all /v1/* API requests to the platform container. The platform container is not exposed to the host — all traffic flows through the web container.

| Volume/Mount | Container path | Purpose |
| --- | --- | --- |
| workspace (named volume) | /data | Workspace data: conversations, logs, installed bundles, configs |
| ./nimblebrain.json (bind mount, read-only) | /app/nimblebrain.json | Platform configuration |
| ./skills (bind mount, read-only) | /app/skills | Custom skill files |
| Variable | Required | Description |
| --- | --- | --- |
| ANTHROPIC_API_KEY | Yes | Your Anthropic API key for Claude |
| OPENAI_API_KEY | No | OpenAI API key (for the OpenAI model provider) |
| GOOGLE_GENERATIVE_AI_API_KEY | No | Google Gemini API key (for the Google model provider) |
| NB_API_KEY | Recommended | API authentication key. Minimum 8 characters. Protects all endpoints except /v1/health |
| ALLOWED_ORIGINS | Production | Comma-separated list of allowed CORS origins (e.g., https://nb.example.com). Required for cross-origin cookie auth |
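
For a production deployment behind your own domain, the optional variables from the table can go in the same .env file as the required ones. The domain below is a placeholder:

```
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
NB_API_KEY=change-me-to-a-strong-secret-at-least-32-chars
ALLOWED_ORIGINS=https://nb.example.com
```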

The platform image has a built-in health check:

HEALTHCHECK --interval=30s --timeout=5s \
  CMD curl -f http://localhost:27247/v1/health || exit 1

The web service uses depends_on with condition: service_healthy, so it only starts after the platform passes its first health check.
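
You can run the same probe yourself to see what the health check sees. Since curl ships in the platform image and the port is internal, run it inside the container:

```shell
docker compose exec platform curl -sf http://localhost:27247/v1/health
```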

  1. Set environment variables

    Create a .env file in your project directory:

    .env
    ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
    NB_API_KEY=change-me-to-a-strong-secret-at-least-32-chars

    Or export them directly:

    Terminal window
    export ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
    export NB_API_KEY=change-me-to-a-strong-secret-at-least-32-chars
  2. Start the services

    Terminal window
    docker compose up -d
    ✔ Network project_internal Created
    ✔ Volume "project_workspace" Created
    ✔ Container project-platform-1 Healthy
    ✔ Container project-web-1 Started
  3. Verify

    Terminal window
    docker compose ps
    NAME STATUS PORTS
    project-platform-1 running (healthy) 27247/tcp
    project-web-1 running 0.0.0.0:27246->8080/tcp

    Check the health endpoint directly:

    Terminal window
    curl http://localhost:27246/v1/health
    {"status":"ok","uptime":42}
  4. Open the UI

    Go to http://localhost:27246. Enter your NB_API_KEY value to log in.

Terminal window
# Stop (preserves volumes)
docker compose down
# Stop and remove volumes (deletes all workspace data)
docker compose down -v
# Restart
docker compose restart
  1. Pull the latest images

    Terminal window
    docker compose pull
  2. Recreate the containers

    Terminal window
    docker compose up -d

    Compose replaces only the containers whose images changed. The workspace volume persists across updates.
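
If you want reproducible updates rather than whatever :latest currently points to, pin both images to a specific tag before pulling. The tag below is a placeholder; use a tag that actually exists in the registry:

```yaml
services:
  platform:
    image: nimblebrain/platform:1.4.2
  web:
    image: nimblebrain/web:1.4.2
```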

Workspace data lives in the workspace named Docker volume. The default mount point is /data inside the platform container.

Terminal window
docker run --rm \
  -v project_workspace:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/nimblebrain-backup.tar.gz -C /data .
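
Before relying on a backup, it is worth listing the archive to confirm it actually captured the workspace. The paths you see depend on what your instance has written to /data:

```shell
docker run --rm \
  -v "$(pwd)":/backup \
  alpine tar tzf /backup/nimblebrain-backup.tar.gz | head
```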
Terminal window
docker compose down
docker run --rm \
-v project_workspace:/data \
-v "$(pwd)":/backup \
alpine sh -c "rm -rf /data/* && tar xzf /backup/nimblebrain-backup.tar.gz -C /data"
docker compose up -d

The nimblebrain/platform image is built from oven/bun:1.3-alpine and includes:

| Component | Version | Purpose |
| --- | --- | --- |
| Bun | 1.3 | JavaScript/TypeScript runtime |
| Python | 3.13 | Runs Python-based MCP bundles |
| Node.js + npm | System | Runs Node-based MCP bundles |
| mpak CLI | Latest | Downloads and manages bundles from the mpak registry |
| curl | System | Used by the built-in health check |

The entrypoint is bun run src/cli/index.ts serve, which starts the HTTP API server on port 27247 with NB_WORK_DIR=/data and NB_HOST=0.0.0.0.
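
You can verify those settings on a running stack; this simply prints the NB_-prefixed environment inside the platform container:

```shell
docker compose exec platform env | grep '^NB_'
```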

The web container’s Caddy server already handles reverse proxying for the default setup. If you want to place your own reverse proxy in front of the stack (for TLS termination, custom domain, etc.), point it at port 27246:

nginx.conf
server {
    listen 443 ssl;
    server_name nb.example.com;

    ssl_certificate     /etc/ssl/certs/nb.example.com.pem;
    ssl_certificate_key /etc/ssl/private/nb.example.com-key.pem;

    location / {
        proxy_pass http://127.0.0.1:27246;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket / streaming support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
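
If you run Caddy on the host instead of nginx, a roughly equivalent sketch looks like this. Caddy provisions TLS automatically and proxies WebSockets by default; the domain is a placeholder:

```
nb.example.com {
    reverse_proxy 127.0.0.1:27246
}
```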

Check the logs:

Terminal window
docker compose logs platform

Common causes:

  • ANTHROPIC_API_KEY is missing or invalid
  • NB_API_KEY is set but fewer than 8 characters — the server exits with NB_API_KEY is too short (minimum 8 characters)

The web container waits for the platform health check to pass. If the platform is unhealthy, the web container stays in a “waiting” state. Fix the platform first.
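
To confirm that the platform's health check is what is blocking the web container, read Docker's view of it directly. The container name below assumes Compose's default <project>-platform-1 naming:

```shell
docker inspect --format '{{.State.Health.Status}}' project-platform-1
docker compose logs platform --tail 20
```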

Change the host port mapping in docker-compose.yml:

ports:
  - "9090:8080" # Access the UI on port 9090 instead