Hanzo Skills Reference

Hanzo Stack - Full Integrated Development Environment

Hanzo Stack provides the complete integrated development environment for running all Hanzo services locally.

Overview

One command starts everything: LLM Gateway, Chat, Console, Cloud, IAM, KMS, PostgreSQL, Redis, MongoDB, MinIO.

NOTE: The stack uses three compose files (production, development, core) and git submodules for all services, allowing each service to track its own repo while being orchestrated together.

Why Hanzo Stack?

  • One-command setup: make dev starts everything
  • Three compose profiles: production, development (hot-reload), core (minimal)
  • Git submodules: Each service tracks its own repo
  • Full ecosystem: All Hanzo services running locally
  • Hot-reload: Development mode with file watching
  • Consistent: Same compose config for all developers
  • Offline-capable: Local models via LLM Gateway + Ollama

When to use

  • Local development across multiple Hanzo services
  • Testing integrations between Hanzo components
  • Full-stack AI application development
  • Onboarding new team members
  • Reproducing production issues locally

Hard requirements

  1. Docker with compose v2
  2. 16GB+ RAM (recommended for full stack)
  3. make (standard on macOS/Linux)
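A hedged preflight sketch to confirm these requirements before running make setup; the `need` helper is illustrative and not part of the repo:

```shell
# Illustrative preflight check; `need` is a hypothetical helper,
# not a real stack command.
need() { command -v "$1" >/dev/null 2>&1 || { echo "missing: $1"; return 1; }; }

# Docker itself, the compose v2 plugin, and make:
need docker && docker compose version >/dev/null 2>&1 || echo "install Docker with the compose v2 plugin"
need make || echo "install make (Xcode CLT on macOS, build-essential on Debian/Ubuntu)"
```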

Quick reference

| Item | Value |
| --- | --- |
| Repo | github.com/hanzoai (experiments/stack directory) |
| Setup | `make setup` |
| Start prod | `make up` |
| Start dev | `make dev` |
| Start core | `make core` |
| Status | `make status` |
| Stop | `make down` |
| Logs | `make logs` |

Three Compose Files

| File | Purpose | Services |
| --- | --- | --- |
| compose.yml | Production: optimized, no hot-reload | All services, release images |
| compose.dev.yml | Development: hot-reload, source mounts | All services, volume mounts |
| compose.core.yml | Core: minimal set for quick start | LLM Gateway, Chat, PostgreSQL, Redis |

```shell
# Production mode
make up            # Uses compose.yml

# Development mode (hot-reload)
make dev           # Uses compose.yml + compose.dev.yml override

# Core only (minimal, fast startup)
make core          # Uses compose.core.yml
```
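Under the hood these targets presumably pass the files to docker compose with repeated -f flags, where later files override matching keys in earlier ones. A hypothetical Makefile fragment, not taken from the repo (recipe lines must be tab-indented):

```makefile
# Illustrative only; check the real Makefile in experiments/stack.
up:
	docker compose -f compose.yml up -d

dev:	# later -f files override matching keys in earlier ones
	docker compose -f compose.yml -f compose.dev.yml up

core:
	docker compose -f compose.core.yml up -d
```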

Git Submodules

Each service is a git submodule tracking its own repo:

```text
experiments/stack/
├── services/
│   ├── llm/          → github.com/hanzoai/llm (LLM Gateway)
│   ├── chat/         → github.com/hanzoai/chat (Chat UI)
│   ├── console/      → github.com/hanzoai/console (Console)
│   ├── cloud/        → github.com/hanzoai/cloud (Cloud Dashboard)
│   ├── search/       → github.com/hanzoai/search (Search)
│   ├── commerce/     → github.com/hanzoai/commerce (Commerce API)
│   └── ...
├── compose.yml           # Production compose
├── compose.dev.yml       # Development overrides
├── compose.core.yml      # Minimal core services
├── .env.example          # Environment template
├── Makefile              # All commands
└── README.md
```
```shell
# Initialize submodules
git submodule update --init --recursive

# Update all submodules to latest
git submodule update --remote

# Update specific service
cd services/chat && git pull origin main
```
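One gotcha this workflow implies: after a submodule moves, the superproject's recorded commit pointer must itself be committed, or the repo stays dirty. A throwaway local demo (paths are illustrative, not the real stack layout):

```shell
# Build a disposable superproject + submodule pair to show the pointer
# commit; the user.name/email flags just keep the demo self-contained.
set -e
work=$(mktemp -d) && cd "$work"
git init -q sub
git -C sub -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"
git init -q super && cd super
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"
git -c protocol.file.allow=always submodule --quiet add "$work/sub" services/demo
git -c user.email=dev@example.com -c user.name=dev commit -qm "add submodule"
# upstream moves; then record the new pointer in the superproject
git -C services/demo -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "upstream change"
git add services/demo
git -c user.email=dev@example.com -c user.name=dev commit -qm "bump services/demo"
git status --short    # empty output: the bump is fully recorded
```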

Service Ports

| Service | Port | URL | Description |
| --- | --- | --- | --- |
| Search UI | 3000 | http://localhost:3000 | AI-powered search |
| Chat UI | 3081 | http://localhost:3081 | Hanzo Chat |
| LLM Gateway | 4000 | http://localhost:4000 | Unified LLM proxy |
| Payment API | 4242 | http://localhost:4242 | Commerce API |
| Admin UI | 5173 | http://localhost:5173 | Admin dashboard |
| Core API | 8000 | http://localhost:8000 | Main API server |
| PostgreSQL | 5432 | postgresql://localhost:5432 | Primary database |
| Redis | 6379 | redis://localhost:6379 | Cache/queues |
| MongoDB | 27017 | mongodb://localhost:27017 | Document storage |
| MinIO | 9000 | http://localhost:9000 | S3-compatible storage |
| Prometheus | 9090 | http://localhost:9090 | Metrics collection |

One-file quickstart

```shell
# Clone and setup
git clone --recurse-submodules https://github.com/hanzoai/experiments.git
cd experiments/stack

# Configure environment
make setup    # Creates .env from .env.example, prompts for API keys

# Start all services (pick one)
make core     # Minimal: LLM + Chat + Postgres + Redis
make dev      # Full stack with hot-reload
make up       # Full stack, production mode

# Verify
make status   # Check all services are healthy
curl http://localhost:4000/v1/models  # List available models
curl http://localhost:3081            # Open Chat UI
```

Makefile Commands

```shell
make setup        # Initial setup (env, pull images, init submodules)
make up           # Start all services (production mode)
make dev          # Start with hot-reload (development)
make core         # Start minimal core services only
make down         # Stop all services
make restart      # Restart all services
make status       # Show service status and health
make logs         # Stream all logs
make logs-llm     # Stream LLM Gateway logs only
make logs-chat    # Stream Chat logs only
make clean        # Remove volumes and data
make pull         # Pull latest images
make build        # Build local images
make test         # Run integration tests
make reset        # Full reset (clean + setup)
make update       # Update all git submodules
```

Environment Configuration

```shell
# .env (created by make setup)

# Required: At least one LLM provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
HANZO_API_KEY=hanzo-...

# Optional: Local models
OLLAMA_URL=http://host.docker.internal:11434

# Database
DATABASE_URL=postgresql://hanzo:hanzo@postgres:5432/hanzo
REDIS_URL=redis://redis:6379

# IAM
HANZO_IAM_URL=https://hanzo.id
HANZO_IAM_CLIENT_ID=app-hanzo
HANZO_IAM_CLIENT_SECRET=secret

# KMS
KMS_ENDPOINT=https://kms.hanzo.ai
KMS_CLIENT_ID=...
KMS_CLIENT_SECRET=...
```
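The "at least one LLM provider" requirement can be checked mechanically; a sketch using a hypothetical `check_env` helper (not part of make setup):

```shell
# check_env is illustrative: succeed only if at least one provider key
# has a non-empty value in the given env file.
check_env() {
  grep -Eq '^(OPENAI_API_KEY|ANTHROPIC_API_KEY|HANZO_API_KEY)=.+' "$1"
}

demo=$(mktemp)
printf 'OPENAI_API_KEY=sk-test\nREDIS_URL=redis://redis:6379\n' > "$demo"
check_env "$demo" && echo "LLM provider configured"
```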

Compose Architecture

```text
┌─────────────────────────────────────────┐
│      compose.yml + compose.dev.yml      │
├─────────────────────────────────────────┤
│                                         │
│  ┌──────────┐  ┌──────────┐  ┌───────┐  │
│  │ Chat UI  │  │ Search   │  │ Admin │  │
│  │  :3081   │  │  :3000   │  │ :5173 │  │
│  └────┬─────┘  └────┬─────┘  └───┬───┘  │
│       │             │            │      │
│  ┌────┴─────────────┴────────────┴──┐   │
│  │        LLM Gateway :4000         │   │
│  └────────────────┬─────────────────┘   │
│                   │                     │
│  ┌────────────────┴─────────────────┐   │
│  │          Core API :8000          │   │
│  └───┬─────────┬─────────┬────────┬─┘   │
│      │         │         │        │     │
│  ┌───┴────┐ ┌──┴───┐ ┌───┴────┐ ┌─┴───┐ │
│  │Postgres│ │Redis │ │MongoDB │ │MinIO│ │
│  │ :5432  │ │:6379 │ │ :27017 │ │:9000│ │
│  └────────┘ └──────┘ └────────┘ └─────┘ │
└─────────────────────────────────────────┘
```
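Dependency layering like this is typically enforced with health-gated depends_on in compose. A hypothetical fragment; the service names and images are assumptions, not taken from the repo's compose.yml:

```yaml
# Illustrative compose fragment; real service names/images may differ.
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hanzo"]
      interval: 5s
      retries: 10
  core-api:
    depends_on:
      postgres:
        condition: service_healthy
```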

Troubleshooting

| Issue | Cause | Solution |
| --- | --- | --- |
| Port conflict | Service already running | `lsof -i :PORT && kill PID` |
| OOM | Too many services | Use `make core` for minimal stack |
| Slow start | Image pull | `make pull` beforehand |
| DB connection fail | Postgres not ready | Wait or `make restart` |
| Chat not loading | Missing OPENAI_API_KEY | Add at least one LLM key to .env |
| Submodule empty | Not initialized | `git submodule update --init --recursive` |
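For the "Postgres not ready" case, a generic retry loop can bridge the gap; `wait_for` is a hypothetical sketch, and the real probe would be something like `docker compose exec postgres pg_isready`:

```shell
# wait_for <tries> <cmd...>: retry a command once per second until it
# succeeds or the attempt budget runs out. Illustrative helper only.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i+1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

wait_for 5 true && echo "ready"
```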
```shell
# Debug specific service
docker compose logs -f llm-gateway
docker compose exec postgres psql -U hanzo

# Reset everything
make clean && make setup && make dev

# Port conflicts
lsof -i :4000   # Find what's using the port
```

Related skills

  • hanzo/hanzo-llm-gateway.md - LLM proxy (port 4000)
  • hanzo/hanzo-chat.md - Chat UI (port 3081)
  • hanzo/hanzo-database.md - PostgreSQL/Redis setup
  • hanzo/hanzo-o11y.md - Monitoring (Prometheus port 9090)
  • hanzo/hanzo-universe.md - Production K8s (vs local stack)
