enzu includes a FastAPI server for running tasks over HTTP.
Install:

```sh
uv pip install "enzu[server]"
# or: pip install "enzu[server]"
```

Configure the environment:

```sh
export OPENAI_API_KEY=sk-...

# Optional auth (requires X-API-Key on requests):
# export ENZU_API_KEY=your-secret

# Optional defaults:
# export ENZU_DEFAULT_MODEL=gpt-4o
# export ENZU_DEFAULT_PROVIDER=openai
```

Start the server:

```sh
uvicorn enzu.server:app --host 0.0.0.0 --port 8000
```

Run a task:

```sh
curl http://localhost:8000/v1/run \
  -H "Content-Type: application/json" \
  -d '{"task":"Say hello","model":"gpt-4o","provider":"openai"}'
```

Response shape:
```json
{
  "answer": "...",
  "request_id": "...",
  "model": "...",
  "usage": {
    "total_tokens": 123,
    "prompt_tokens": 45,
    "completion_tokens": 78,
    "cost_usd": 0.0123
  }
}
```

Create a session:
```sh
curl http://localhost:8000/v1/sessions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","provider":"openai","max_cost_usd":1.0}'
```

Run inside the session:
```sh
curl http://localhost:8000/v1/sessions/<session_id>/run \
  -H "Content-Type: application/json" \
  -d '{"task":"What did I just say?"}'
```

Fetch session state:
```sh
curl http://localhost:8000/v1/sessions/<session_id>
```

If `ENZU_API_KEY` is set, include `X-API-Key` on every request.
- Sessions are in-memory and scoped to a single server process.
- Use `ENZU_DEFAULT_MODEL` and `ENZU_DEFAULT_PROVIDER` to avoid sending model/provider on each request.
- `GET /health` is available for load balancers.
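Since every run response reports `usage.cost_usd`, a client can keep its own tally of spend against the `max_cost_usd` it passed when creating a session. A minimal sketch, assuming the response shape shown above (whether the server enforces the cap itself is not shown here, so this client-side tally is purely illustrative):

```python
def remaining_budget(max_cost_usd, responses):
    """Client-side tally: subtract each response's usage.cost_usd from the cap.

    Assumes the /v1/run response shape shown above; a response with no
    usage block counts as zero spend.
    """
    spent = sum((r.get("usage") or {}).get("cost_usd", 0.0) for r in responses)
    return max_cost_usd - spent


history = [
    {"usage": {"cost_usd": 0.0123}},
    {"usage": {"cost_usd": 0.0045}},
    {},  # a response with no usage reported
]
print(round(remaining_budget(1.0, history), 4))  # 0.9832
```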