Minion is the agent's brain. It is designed to execute any type of query, offering a variety of features that demonstrate its flexibility and intelligence.
```bash
pip install minionx

# With optional dependencies
pip install minionx[litellm]    # LiteLLM support (100+ LLM providers)
pip install minionx[anthropic]  # Anthropic Claude
pip install minionx[bedrock]    # AWS Bedrock
pip install minionx[gradio]     # Gradio web UI
pip install minionx[all]        # All optional dependencies
```

To install from source:

```bash
git clone https://github.com/femto/minion.git && cd minion
pip install -e .
cp config/config.yaml.example config/config.yaml
cp config/.env.example config/.env
```

To run with Docker:

```bash
git clone https://github.com/femto/minion.git && cd minion
cp config/config.yaml.example config/config.yaml

# Set your API key
export OPENAI_API_KEY=your-api-key

# Build and run (basic install)
docker-compose build
docker-compose run --rm minion

# Build with optional dependencies
docker-compose build --build-arg EXTRAS="gradio,web,anthropic"

# Or install all extras
docker-compose build --build-arg EXTRAS="all"

# Run a specific example
docker-compose run --rm minion python examples/mcp/mcp_agent_example.py
```

Edit config/config.yaml:
```yaml
models:
  "default":
    api_type: "openai"
    base_url: "${DEFAULT_BASE_URL}"
    api_key: "${DEFAULT_API_KEY}"
    model: "gpt-4.1"
    temperature: 0
```

See Configuration for more details on configuration options.
```python
from minion.agents.code_agent import CodeAgent

# Create agent
agent = await CodeAgent.create(
    name="Minion Code Assistant",
    llm="your-model",
    tools=all_tools,  # optional
)

# Run task
async for event in await agent.run_async("your task here"):
    print(event)
```

See examples/mcp/mcp_agent_example.py for a complete example with MCP tools.
```python
from minion.main.brain import Brain

brain = Brain()
obs, score, *_ = await brain.step(query="what's the solution 234*568")
print(obs)
```

See the Brain Usage Guide for more examples.
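The snippet above uses top-level await, which only works inside an existing event loop (e.g., a notebook). In a plain script, wrap the call with asyncio; a minimal sketch assuming the same Brain API shown above:

```python
import asyncio

from minion.main.brain import Brain

async def main():
    brain = Brain()
    # Same call as above; brain.step is a coroutine and must be awaited
    obs, score, *_ = await brain.step(query="what's the solution 234*568")
    print(obs)

asyncio.run(main())
```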
Click to watch the demo video on YouTube.
The flowchart demonstrates the complete process from query to final result (sketched in code after this list):
- The system receives the user query (Query)
- It generates a solution (Solution)
- The solution is verified (Check)
- If unsatisfactory, the system makes improvements (Improve) and returns to generate a new solution
- If satisfactory, it outputs the final result (Final Result)
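A hedged Python sketch of this generate-check-improve loop; `generate`, `check`, and `improve` are hypothetical placeholders standing in for the LLM-backed steps, not minion's actual functions:

```python
# Hypothetical placeholders for the LLM-backed steps
def generate(query: str) -> str:
    return f"draft answer for {query!r}"

def check(query: str, solution: str) -> bool:
    return "refined" in solution  # toy acceptance test

def improve(query: str, solution: str) -> str:
    return solution + " (refined)"

def solve(query: str, max_rounds: int = 3) -> str:
    """Illustrative generate-check-improve loop from the flowchart."""
    solution = generate(query)               # Solution
    for _ in range(max_rounds):
        if check(query, solution):           # Check
            break
        solution = improve(query, solution)  # Improve, then re-check
    return solution                          # Final Result

print(solve("what's 234*568?"))
```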
- CodeAgent Documentation - Powerful Python code execution agent
- Brain Usage Guide - Using brain.step() for various tasks
- Skills Guide - Extend agent capabilities with modular skills
- Benchmarks - Performance results on GSM8K, Game of 24, AIME, Humaneval
- Route Parameter Guide - Route options for different reasoning strategies
- Auto-decay Guide - Automatic context management for large tool responses (Experimental)
- Project Config: `MINION_ROOT/config/config.yaml` - Default project configuration
- User Config: `~/.minion/config.yaml` - User-specific overrides
When both configuration files exist:
- Project Config takes precedence over User Config (see the sketch after this list)
This allows you to:
- Keep sensitive data (API keys) in your user config
- Share project defaults through the project config
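A minimal illustration of the precedence rule, assuming both files parse to flat dicts (not minion's actual loader):

```python
# Hypothetical flat dicts standing in for the two parsed YAML files
user_cfg = {"api_key": "sk-from-user-config", "temperature": 0.7}
project_cfg = {"temperature": 0}  # project value wins on conflict

# Later keys override earlier ones, so project config takes precedence
merged = {**user_cfg, **project_cfg}
print(merged)  # {'api_key': 'sk-from-user-config', 'temperature': 0}
```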
Variable Substitution: Use ${VAR_NAME} syntax to reference environment variables directly in config values:
```yaml
models:
  "default":
    api_key: "${OPENAI_API_KEY}"
    base_url: "${OPENAI_BASE_URL}"
    api_type: "openai"
    model: "gpt-4.1"
    temperature: 0.3
  "azure-gpt-4o":
    api_type: "azure"
    api_key: "${AZURE_OPENAI_API_KEY}"
    base_url: "${AZURE_OPENAI_ENDPOINT}"  # e.g., https://your-resource.openai.azure.com/
    api_version: "2024-06-01"
    model: "gpt-4o"  # deployment name
    temperature: 0
```

Loading .env Files: Use env_file to load environment variables from .env files (follows the Docker .env file format):
```yaml
env_file:
  - .env        # loaded first
  - .env.local  # loaded second, can override values from .env
```

Inline Environment Variables: Define environment variables directly in the config:
```yaml
environment:
  MY_VAR: "value"
  ANOTHER_VAR: "another_value"
```

Variables from all sources (system environment, .env files, inline environment) are available for ${VAR_NAME} substitution throughout the configuration.
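For intuition, ${VAR_NAME} substitution can be approximated with a small regex pass over string values; a rough sketch, not minion's actual implementation:

```python
import os
import re

_VAR = re.compile(r"\$\{(\w+)\}")

def substitute(value: str, env: dict[str, str] | None = None) -> str:
    """Replace ${VAR_NAME} with its value; leave unknown names as-is."""
    env = env if env is not None else dict(os.environ)
    return _VAR.sub(lambda m: env.get(m.group(1), m.group(0)), value)

print(substitute("Bearer ${OPENAI_API_KEY}", {"OPENAI_API_KEY": "sk-test"}))
# Bearer sk-test
```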
| api_type | Description | Required Fields |
|---|---|---|
| `openai` | OpenAI API or compatible (Ollama, vLLM, LocalAI) | api_key, base_url, model |
| `azure` | Azure OpenAI Service | api_key, base_url, api_version, model |
| `azure_inference` | Azure AI Model Inference (DeepSeek, Phi) | api_key, base_url, model |
| `azure_anthropic` | Azure-hosted Anthropic models | api_key, base_url, model |
| `bedrock` | AWS Bedrock (sync) | access_key_id, secret_access_key, region, model |
| `bedrock_async` | AWS Bedrock (async, better performance) | access_key_id, secret_access_key, region, model |
| `litellm` | Unified interface for 100+ providers | api_key, model (with provider prefix) |
LiteLLM Model Prefixes: Use anthropic/claude-3-5-sonnet, bedrock/anthropic.claude-3, gemini/gemini-1.5-pro, ollama/llama3.2, etc. See LiteLLM docs for all supported providers.
See config/config.yaml.example for complete examples of all supported providers.
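To illustrate the Required Fields column, a hedged validation sketch; the field sets mirror the table above, but the function itself is hypothetical and not part of minion's API:

```python
# Required config keys per api_type, taken from the table above
REQUIRED_FIELDS = {
    "openai": {"api_key", "base_url", "model"},
    "azure": {"api_key", "base_url", "api_version", "model"},
    "azure_inference": {"api_key", "base_url", "model"},
    "azure_anthropic": {"api_key", "base_url", "model"},
    "bedrock": {"access_key_id", "secret_access_key", "region", "model"},
    "bedrock_async": {"access_key_id", "secret_access_key", "region", "model"},
    "litellm": {"api_key", "model"},
}

def validate_model_config(name: str, cfg: dict) -> None:
    """Raise if a model entry is missing fields its api_type requires."""
    missing = REQUIRED_FIELDS[cfg["api_type"]] - cfg.keys()
    if missing:
        raise ValueError(f"model {name!r} is missing: {sorted(missing)}")

validate_model_config("default", {
    "api_type": "openai",
    "api_key": "sk-test",
    "base_url": "https://api.openai.com/v1",
    "model": "gpt-4.1",
})
```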
MINION_ROOT is determined automatically; a sketch of the same lookup order follows the list:
- Checks the `MINION_ROOT` environment variable (if set)
- Auto-detects by finding `.git`, `.project_root`, or `.gitignore` in parent directories
- Falls back to the current working directory
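A rough Python equivalent of this lookup order (the real logic lives in minion.const:get_minion_root, per the startup log below; this sketch only illustrates the behavior):

```python
import os
from pathlib import Path

MARKERS = (".git", ".project_root", ".gitignore")

def get_minion_root() -> Path:
    """Illustrative re-implementation of the lookup order above."""
    # 1. Explicit override via the MINION_ROOT environment variable
    if root := os.environ.get("MINION_ROOT"):
        return Path(root)
    # 2. Walk up from cwd looking for a project marker
    cwd = Path.cwd()
    for parent in (cwd, *cwd.parents):
        if any((parent / marker).exists() for marker in MARKERS):
            return parent
    # 3. Fall back to the current working directory
    return cwd

print(get_minion_root())
```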
Check the startup log:
```
INFO | minion.const:get_minion_root:44 - MINION_ROOT set to: <some_path>
```
Warning: Be cautious - the LLM can generate potentially harmful code.
- minion-agent - Production agent system with multi-agent coordination, browser automation, and research capabilities
- minion-code - Minion's implementation of Claude Code
WeChat Group (minion-agent discussion):
The project uses optional dependency groups to avoid installing unnecessary packages. Install only what you need:
```bash
# Development tools (pytest, black, ruff)
pip install -e ".[dev]"

# LiteLLM - unified interface for 100+ LLM providers
pip install -e ".[litellm]"

# Google ADK and LiteLLM support
pip install -e ".[google]"

# Browser automation (browser-use)
pip install -e ".[browser]"

# Gradio web UI
pip install -e ".[gradio]"

# UTCP support
pip install -e ".[utcp]"

# AWS Bedrock support
pip install -e ".[bedrock]"

# Anthropic Claude support
pip install -e ".[anthropic]"

# Web tools (httpx, beautifulsoup4, etc.)
pip install -e ".[web]"

# Install ALL optional dependencies
pip install -e ".[all]"

# You can also combine multiple groups:
pip install -e ".[dev,gradio,anthropic,litellm]"
```


