Minion


Minion is the agent's brain: a high-performance agent framework designed to execute any type of query, offering a variety of features that demonstrate its flexibility and intelligence.


Installation

Install from PyPI

pip install minionx

# With optional dependencies (quoted so shells like zsh don't expand the brackets)
pip install "minionx[litellm]"    # LiteLLM support (100+ LLM providers)
pip install "minionx[anthropic]"  # Anthropic Claude
pip install "minionx[bedrock]"    # AWS Bedrock
pip install "minionx[gradio]"     # Gradio web UI
pip install "minionx[all]"        # All optional dependencies

Install from Source

git clone https://github.com/femto/minion.git && cd minion
pip install -e .
cp config/config.yaml.example config/config.yaml
cp config/.env.example config/.env

Docker Installation

git clone https://github.com/femto/minion.git && cd minion
cp config/config.yaml.example config/config.yaml

# Set your API key
export OPENAI_API_KEY=your-api-key

# Build and run (basic install)
docker-compose build
docker-compose run --rm minion

# Build with optional dependencies
docker-compose build --build-arg EXTRAS="gradio,web,anthropic"
# Or install all extras
docker-compose build --build-arg EXTRAS="all"

# Run a specific example
docker-compose run --rm minion python examples/mcp/mcp_agent_example.py

Edit config/config.yaml:

models:
  "default":
    api_type: "openai"
    base_url: "${DEFAULT_BASE_URL}"
    api_key: "${DEFAULT_API_KEY}"
    model: "gpt-4.1"
    temperature: 0

See Configuration for more details on configuration options.

Quick Start

Using CodeAgent (Recommended)

import asyncio

from minion.agents.code_agent import CodeAgent

async def main():
    # Create agent
    agent = await CodeAgent.create(
        name="Minion Code Assistant",
        llm="your-model",
        tools=all_tools,  # optional; omit if you have no tools
    )

    # Run task and stream events
    async for event in await agent.run_async("your task here"):
        print(event)

asyncio.run(main())

See examples/mcp/mcp_agent_example.py for a complete example with MCP tools.

Using Brain

import asyncio

from minion.main.brain import Brain

async def main():
    brain = Brain()
    obs, score, *_ = await brain.step(query="what's the solution to 234*568?")
    print(obs)

asyncio.run(main())

See Brain Usage Guide for more examples.

Quick Demo

Click to watch the demo video on YouTube.

Working Principle


The flowchart shows the complete process from query to final result:

  1. The system receives the user query (Query).
  2. It generates a solution (Solution).
  3. It verifies the solution (Check).
  4. If the solution is unsatisfactory, it makes improvements (Improve) and returns to generate a new solution.
  5. If the solution is satisfactory, it outputs the final result (Final Result).
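The loop above can be sketched in a few lines of Python. All names here (solve, generate, check, improve) are illustrative, not Minion's actual API:

```python
# Sketch of the Query -> Solution -> Check -> Improve loop described above.
def solve(query, generate, check, improve, max_rounds=3):
    """Generate a solution, verify it, and refine until it passes."""
    solution = generate(query)
    for _ in range(max_rounds):
        ok, feedback = check(query, solution)
        if ok:
            return solution          # satisfactory -> final result
        solution = improve(query, solution, feedback)
    return solution                  # best effort after max_rounds

# Toy example: "solve" an arithmetic query, with a trivial checker.
result = solve(
    "234*568",
    generate=lambda q: eval(q),
    check=lambda q, s: (s == 234 * 568, "recompute"),
    improve=lambda q, s, f: eval(q),
)
print(result)
```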

Documentation

Configuration

Configuration File Locations

  1. Project Config: MINION_ROOT/config/config.yaml - Default project configuration
  2. User Config: ~/.minion/config.yaml - User-specific overrides

Configuration Priority

When both configuration files exist:

  • Project Config takes precedence over User Config

This allows you to:

  • Keep sensitive data (API keys) in your user config
  • Share project defaults through the project config
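The precedence rule can be illustrated with a recursive merge where project values win. This is a hypothetical sketch, not Minion's actual config loader:

```python
# Illustrative merge of user and project configs, with project values
# taking precedence as described above (assumption: nested dicts merge key-wise).
def merge_configs(user_cfg, project_cfg):
    merged = dict(user_cfg)
    for key, value in project_cfg.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value      # project config wins
    return merged

user = {"models": {"default": {"api_key": "sk-user", "temperature": 0.3}}}
project = {"models": {"default": {"model": "gpt-4.1", "temperature": 0}}}
print(merge_configs(user, project))
```

Note that keys present only in the user config (like the API key above) survive the merge, which is what makes the "secrets in user config, defaults in project config" split work.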

Environment Variables

Variable Substitution: Use ${VAR_NAME} syntax to reference environment variables directly in config values:

models:
  "default":
    api_key: "${OPENAI_API_KEY}"
    base_url: "${OPENAI_BASE_URL}"
    api_type: "openai"
    model: "gpt-4.1"
    temperature: 0.3
  "azure-gpt-4o":
    api_type: "azure"
    api_key: "${AZURE_OPENAI_API_KEY}"
    base_url: "${AZURE_OPENAI_ENDPOINT}"  # e.g., https://your-resource.openai.azure.com/
    api_version: "2024-06-01"
    model: "gpt-4o"  # deployment name
    temperature: 0

Loading .env Files: Use env_file to load environment variables from .env files (follows Docker .env file format):

env_file:
  - .env        # loaded first
  - .env.local  # loaded second, can override values from .env

Inline Environment Variables: Define environment variables directly in config:

environment:
  MY_VAR: "value"
  ANOTHER_VAR: "another_value"

Variables from all sources (system environment, .env files, inline environment) will be available for ${VAR_NAME} substitution throughout the configuration.
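Substitution behaves roughly like the sketch below; this is an assumption about the mechanism, and Minion's real implementation may differ (for example, in how missing variables are handled):

```python
import os
import re

# ${VAR_NAME} pattern: a letter or underscore followed by word characters.
_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def substitute(value, env=None):
    """Replace ${VAR_NAME} with its value; leave the token as-is if unset."""
    env = env if env is not None else os.environ
    return _VAR.sub(lambda m: env.get(m.group(1), m.group(0)), value)

print(substitute("${DEFAULT_BASE_URL}", {"DEFAULT_BASE_URL": "https://api.openai.com/v1"}))
```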

Supported API Types

| api_type | Description | Required Fields |
|---|---|---|
| openai | OpenAI API or compatible (Ollama, vLLM, LocalAI) | api_key, base_url, model |
| azure | Azure OpenAI Service | api_key, base_url, api_version, model |
| azure_inference | Azure AI Model Inference (DeepSeek, Phi) | api_key, base_url, model |
| azure_anthropic | Azure-hosted Anthropic models | api_key, base_url, model |
| bedrock | AWS Bedrock (sync) | access_key_id, secret_access_key, region, model |
| bedrock_async | AWS Bedrock (async, better performance) | access_key_id, secret_access_key, region, model |
| litellm | Unified interface for 100+ providers | api_key, model (with provider prefix) |

LiteLLM Model Prefixes: Use anthropic/claude-3-5-sonnet, bedrock/anthropic.claude-3, gemini/gemini-1.5-pro, ollama/llama3.2, etc. See LiteLLM docs for all supported providers.
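Following the table above, a litellm entry might look like the sketch below (the model name and key variable are illustrative; config/config.yaml.example is the authoritative reference):

```yaml
models:
  "claude":
    api_type: "litellm"
    api_key: "${ANTHROPIC_API_KEY}"
    model: "anthropic/claude-3-5-sonnet"  # provider prefix required
```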

See config/config.yaml.example for complete examples of all supported providers.

MINION_ROOT Detection

MINION_ROOT is determined automatically:

  1. Checks MINION_ROOT environment variable (if set)
  2. Auto-detects by finding .git, .project_root, or .gitignore in parent directories
  3. Falls back to current working directory
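The detection order can be sketched as follows; the real logic lives in minion.const.get_minion_root and may differ in detail:

```python
import os
from pathlib import Path

# Marker files/directories that identify a project root (from the list above).
MARKERS = (".git", ".project_root", ".gitignore")

def find_minion_root(start=None):
    # 1. Explicit environment variable wins.
    env_root = os.environ.get("MINION_ROOT")
    if env_root:
        return Path(env_root)
    # 2. Walk upward looking for a marker in each parent directory.
    current = Path(start or Path.cwd()).resolve()
    for directory in (current, *current.parents):
        if any((directory / marker).exists() for marker in MARKERS):
            return directory
    # 3. Fall back to the current working directory.
    return current
```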

Check the startup log:

INFO | minion.const:get_minion_root:44 - MINION_ROOT set to: <some_path>

Warning: be cautious; the LLM can generate potentially harmful code, so review generated code before running it.

Related Projects

  • minion-agent Production agent system with multi-agent coordination, browser automation, and research capabilities
  • minion-code Minion's implementation of Claude Code

Community and Support

Discord

Twitter Follow

WeChat Group (minion-agent discussion):


Optional Dependencies

The project uses optional dependency groups to avoid installing unnecessary packages. Install only what you need:

# Development tools (pytest, black, ruff)
pip install -e ".[dev]"

# LiteLLM - unified interface for 100+ LLM providers
pip install -e ".[litellm]"

# Google ADK and LiteLLM support
pip install -e ".[google]"

# Browser automation (browser-use)
pip install -e ".[browser]"

# Gradio web UI
pip install -e ".[gradio]"

# UTCP support
pip install -e ".[utcp]"

# AWS Bedrock support
pip install -e ".[bedrock]"

# Anthropic Claude support
pip install -e ".[anthropic]"

# Web tools (httpx, beautifulsoup4, etc.)
pip install -e ".[web]"

# Install ALL optional dependencies
pip install -e ".[all]"

# You can also combine multiple groups:
pip install -e ".[dev,gradio,anthropic,litellm]"
