# Observal
Discover, share, and monitor AI coding agents with full observability built in.
If you find Observal useful, please consider giving it a star. It helps others discover the project and keeps development going.
Observal is a self-hosted AI agent registry with built-in observability. Think Docker Hub, but for AI coding agents.
Browse agents created by others, publish your own, and pull complete agent configurations — all defined in a portable YAML format that templates out to Claude Code, Kiro CLI, Cursor, Gemini CLI, and more. Every agent bundles its MCP servers, skills, hooks, prompts, and sandboxes into a single installable package. One command to install, zero manual config.
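As an illustration only, a packaged agent definition might look something like the sketch below. Every field name here is a guess, not Observal's actual schema — see the docs for the real manifest format:

```yaml
# Hypothetical agent manifest — field names are illustrative, not the real schema.
name: code-reviewer
description: Reviews pull requests for style and correctness
targets: [claude-code, kiro-cli, cursor, gemini-cli]
mcp_servers:
  - name: github
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
skills:
  - review-checklist
hooks:
  pre_commit: ./hooks/lint.sh
prompts:
  system: prompts/reviewer.md
```

The point is the shape, not the keys: one file declares everything the agent needs, and the installer templates it out per IDE.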
Every interaction generates traces, spans, and sessions that flow into a telemetry pipeline. The built-in eval engine scores agent sessions so you can measure performance and make your agents better over time.
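Because the pipeline speaks standard OTLP via the OpenTelemetry Collector, any OTLP-capable client can point at it using the standard OpenTelemetry environment variables. The endpoint below is an assumption for a default local deployment (4318 is the conventional OTLP/HTTP port); check the docs for the actual collector address:

```shell
# Standard OpenTelemetry environment variables (defined by the OTel spec).
# The endpoint is an assumption for a local default deployment.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_SERVICE_NAME="my-agent-session"
```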
| Screen | What it shows |
|---|---|
| Agent Registry | Browse, search, and install published agents |
| Dashboard | Agent scores, recent sessions, top downloads |
| Trace Detail | Every tool call: models, token counts, 16 turns |
| Insight Report | AI-generated analysis of agent usage patterns |
| Error Log | Classified errors with drill-through to sessions |
| Review Queue | Admin approve/reject workflow for submissions |
Full docs live at observal.gitbook.io (sourced from /docs in this repo).
| Start here | Go to |
|---|---|
| 5-minute install and first trace | Quickstart |
| Understand the data model | Core Concepts |
| Instrument your existing MCP servers | Observe MCP traffic |
| Run Observal on your infrastructure | Self-Hosting |
| Look up a CLI command | CLI Reference |
See CHANGELOG.md for recent updates.
See SETUP.md for the full setup guide.
```shell
git clone https://github.com/BlazeUp-AI/Observal.git && cd Observal
cp .env.example .env
make up
uv tool install --editable .
observal auth login
```

| IDE | Support |
|---|---|
| Claude Code | Full — skills, hooks, MCP, rules, OTLP telemetry |
| Kiro CLI | Full — superpowers, hooks, MCP, steering files, OTLP telemetry |
| Gemini CLI | Tested — hooks, MCP, rules, OTLP telemetry |
| Cursor | Tested — MCP + shim telemetry, rules |
| VS Code | Limited — MCP + shim telemetry, rules |
| Copilot CLI | Limited — hooks, MCP + shim telemetry, rules |
| Codex CLI | Limited — rules |
| OpenCode | Limited — JS plugin hooks, MCP + shim telemetry, rules |
Compatibility matrix and per-IDE setup: Integrations.
| Component | Technology |
|---|---|
| Frontend | Next.js 16, React 19, Tailwind CSS 4, shadcn/ui, Recharts |
| Backend | Python 3.11+, FastAPI, Strawberry GraphQL, Uvicorn |
| Databases | PostgreSQL 16 (registry), ClickHouse (telemetry) |
| Queue | Redis + arq |
| CLI | Python, Typer, Rich |
| Eval engine | AWS Bedrock / OpenAI-compatible LLMs |
| Telemetry | OpenTelemetry Collector |
| Deployment | Docker Compose (10 services) |
See CONTRIBUTING.md. The short version:
- Fork and clone
- `make hooks` to install pre-commit hooks
- Create a feature branch
- Run `make lint` and `make test`
- Open a PR
See AGENTS.md for internal codebase context.
```shell
make test    # quick
make test-v  # verbose
```

All tests mock external services. No Docker needed.
Have a question, idea, or want to share what you've built? Head to GitHub Discussions. Please use Discussions for questions; open Issues for confirmed bugs and concrete feature requests.
Join the Observal Discord to chat directly with the maintainers and other community members.
To report a vulnerability, please use GitHub Private Vulnerability Reporting or email contact@blazeup.app. Do not open a public issue. See SECURITY.md.
Apache License 2.0. See LICENSE.