This guide provides detailed instructions for setting up the development environment and contributing to the MultiMind SDK project.
- Python 3.8 or higher
- Git
- Virtual environment tool (venv, conda, etc.)
- CUDA toolkit (for GPU support)
- API keys for supported models (OpenAI, Anthropic, Mistral)
1. **Clone the Repository**

   ```bash
   git clone https://github.com/multimind-dev/multimind-sdk.git
   cd multimind-sdk
   ```
2. **Create Virtual Environment**

   ```bash
   # Using venv
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate

   # Or using conda
   conda create -n multimind python=3.8
   conda activate multimind
   ```
3. **Install Development Dependencies**

   ```bash
   # Install in editable mode with development dependencies
   pip install -e ".[dev]"

   # Install pre-commit hooks
   pre-commit install
   ```
4. **Configure Environment Variables**

   Create a `.env` file in the project root (tools such as `python-dotenv` can load this file automatically):

   ```bash
   OPENAI_API_KEY=your_openai_api_key
   ANTHROPIC_API_KEY=your_anthropic_api_key
   MISTRAL_API_KEY=your_mistral_api_key
   HUGGINGFACE_API_KEY=your_huggingface_api_key
   ```
5. **Verify Installation**

   ```bash
   # Run tests
   pytest

   # Run examples
   python examples/basic_agent.py
   python examples/prompt_chain.py
   python examples/task_runner.py
   python examples/mcp_workflow.py
   python examples/usage_tracking.py
   ```
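A quick way to confirm the editable install took effect is to check that the package resolves in the active environment; a minimal sketch (the package names passed in are just examples):

```python
import importlib.util
from typing import List

def missing_packages(names: List[str]) -> List[str]:
    """Return the subset of `names` that cannot be found in the current environment."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# After `pip install -e ".[dev]"`, this should print an empty list:
# print(missing_packages(["multimind", "pytest"]))
```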
```
multimind-sdk/
├── multimind/            # Main package
│   ├── models/           # Model wrappers
│   ├── router/           # Model routing
│   ├── rag/              # RAG support
│   ├── fine_tuning/      # Training logic
│   ├── agents/           # Agent system
│   ├── orchestration/    # Workflow management
│   ├── mcp/              # Model Composition Protocol
│   ├── integrations/     # Framework integrations
│   ├── logging/          # Monitoring and logging
│   └── cli/              # Command-line interface
├── examples/             # Example scripts
├── tests/                # Test suite
├── docs/                 # Documentation
└── configs/              # Configuration templates
```
We use several tools to maintain code quality:
```bash
# Format code
black .
isort .

# Check code style
flake8
mypy .

# Run pre-commit hooks
pre-commit run --all-files
```
1. **Unit Tests** (`tests/unit/`)
   - Model wrappers
   - Agent system
   - Tools
   - Memory management
   - Configuration

2. **Integration Tests** (`tests/integration/`)
   - Agent workflows
   - MCP execution
   - Prompt chains
   - Task runner
   - Usage tracking

3. **Example Tests** (`tests/examples/`)
   - Basic agent usage
   - Prompt chaining
   - Task running
   - MCP workflows
   - Usage tracking
```bash
# Run all tests
pytest

# Run specific test categories
pytest tests/unit/         # Unit tests
pytest tests/integration/  # Integration tests
pytest tests/examples/     # Example tests

# Run with coverage
pytest --cov=multimind

# Run specific test file
pytest tests/unit/test_agent.py

# Run specific test
pytest tests/unit/test_agent.py::test_agent_creation
```

Example test structure:
```python
import pytest

from multimind.agents import Agent, AgentMemory
from multimind.models import OpenAIModel


def test_agent_creation():
    # Arrange
    model = OpenAIModel(model="gpt-3.5-turbo")
    memory = AgentMemory(max_history=50)

    # Act
    agent = Agent(
        model=model,
        memory=memory,
        system_prompt="You are a helpful assistant."
    )

    # Assert
    assert agent.model == model
    assert agent.memory == memory
    assert agent.system_prompt == "You are a helpful assistant."


@pytest.mark.asyncio
async def test_agent_run():
    # Arrange
    agent = create_test_agent()  # Helper fixture

    # Act
    response = await agent.run("What is 2+2?")

    # Assert
    assert response is not None
    assert "4" in response.lower()
```
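Async tests like the one above hit a live model unless the agent is stubbed out; one generic way to keep them offline is `unittest.mock.AsyncMock` (a sketch, not SDK-specific code):

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for an agent whose run() would normally call a remote model
agent = AsyncMock()
agent.run.return_value = "The answer is 4."

response = asyncio.run(agent.run("What is 2+2?"))
assert "4" in response.lower()
```

Because `AsyncMock` records awaits, you can also assert that `agent.run` was awaited exactly once with the expected prompt.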
The `examples/` directory contains working examples of all major features:
1. **Basic Agent** (`basic_agent.py`)
   - Agent creation
   - Model usage
   - Tool integration
   - Memory management

2. **Prompt Chain** (`prompt_chain.py`)
   - Multi-step reasoning
   - Variable substitution
   - Code review workflow

3. **Task Runner** (`task_runner.py`)
   - Task dependencies
   - Research workflow
   - Context management

4. **MCP Workflow** (`mcp_workflow.py`)
   - Workflow definition
   - Model composition
   - Step execution

5. **Usage Tracking** (`usage_tracking.py`)
   - Usage monitoring
   - Cost tracking
   - Export/reporting
To run examples:
```bash
# Run all examples
python examples/basic_agent.py
python examples/prompt_chain.py
python examples/task_runner.py
python examples/mcp_workflow.py
python examples/usage_tracking.py

# Run with a specific model
OPENAI_API_KEY=your_key python examples/basic_agent.py
```

Documentation guidelines:

- Use Google-style docstrings
- Include type hints
- Document all public APIs
- Add examples in docstrings
Example:
```python
from typing import List, Optional

from multimind.agents import Agent, AgentMemory
from multimind.agents.tools.base import BaseTool
from multimind.models.base import BaseModel


def create_agent(
    model: BaseModel,
    memory: Optional[AgentMemory] = None,
    tools: Optional[List[BaseTool]] = None,
    system_prompt: str = "You are a helpful assistant."
) -> Agent:
    """Create a new agent with the specified configuration.

    Args:
        model: The language model to use.
        memory: Optional memory for conversation history.
        tools: Optional list of tools for the agent.
        system_prompt: The system prompt to use.

    Returns:
        An initialized Agent instance.

    Example:
        >>> model = OpenAIModel(model="gpt-3.5-turbo")
        >>> agent = create_agent(model, system_prompt="You are a math tutor.")
        >>> response = await agent.run("What is 2+2?")
        >>> print(response)
        "The answer is 4."
    """
    pass
```

- Update README.md for significant changes
- Add/update docstrings
- Include usage examples
- Document configuration options
- `main`: Production-ready code
- `develop`: Development branch
- `feature/*`: New features
- `fix/*`: Bug fixes
- `docs/*`: Documentation updates
Follow conventional commits:
```
<type>(<scope>): <description>

[optional body]

[optional footer]
```
Types:
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation
- `style`: Formatting
- `refactor`: Code restructuring
- `test`: Testing
- `chore`: Maintenance
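A subject line in this format can be checked mechanically, for example in a pre-commit hook; a minimal sketch (the regex covers only the types listed above, and `is_conventional` is an illustrative helper, not a project tool):

```python
import re

# Matches "<type>(<scope>): <description>"; the scope part is optional
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9_-]+\))?: .+"
)

def is_conventional(subject: str) -> bool:
    """Return True if the commit subject line follows the convention."""
    return bool(COMMIT_RE.match(subject))
```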
- Create feature/fix branch
- Make changes
- Run tests and checks:
  ```bash
  # Run all checks
  pytest
  black . --check
  isort . --check
  flake8
  mypy .
  ```
- Update documentation
- Create PR with template
- Request review
1. **New Model Wrapper**

   ```python
   from typing import Any, Dict

   from multimind.models.base import BaseModel


   class NewModelWrapper(BaseModel):
       def __init__(self, config: Dict[str, Any]):
           super().__init__(config)

       async def generate(self, prompt: str) -> str:
           # Implementation
           pass
   ```
2. **New Agent Tool**

   ```python
   from typing import Any, Dict

   from multimind.agents.tools.base import BaseTool


   class NewTool(BaseTool):
       def __init__(self, config: Dict[str, Any]):
           super().__init__(config)

       async def execute(self, input_data: Any) -> Any:
           # Implementation
           pass
   ```
3. **New MCP Step Type**

   ```python
   from typing import Any, Dict

   from multimind.mcp.base import BaseStep


   class NewStepType(BaseStep):
       def __init__(self, config: Dict[str, Any]):
           super().__init__(config)

       async def execute(self, context: Dict[str, Any]) -> Dict[str, Any]:
           # Implementation
           pass
   ```
1. **Profiling**

   ```bash
   # Using cProfile
   python -m cProfile -o output.prof script.py

   # Using line_profiler
   kernprof -l script.py
   ```
2. **Memory Profiling**

   ```bash
   # Using memory_profiler
   mprof run script.py
   mprof plot
   ```
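`cProfile` can also be driven from inside a script, which is handy when you only want to profile one function rather than a whole run; a minimal sketch (`slow` is a placeholder workload):

```python
import cProfile
import io
import pstats

def slow(n: int) -> int:
    """Placeholder workload to profile."""
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
total = slow(100_000)
profiler.disable()

# Print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```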
1. **Using pdb**

   ```python
   import pdb; pdb.set_trace()
   ```
2. **Using logging**

   ```python
   import logging

   logging.basicConfig(level=logging.DEBUG)
   logger = logging.getLogger(__name__)

   logger.debug("Debug message")
   logger.info("Info message")
   logger.warning("Warning message")
   logger.error("Error message")
   ```
Workflow files in `.github/workflows/`:

- `test.yml`: Run tests
- `lint.yml`: Check code style
- `docs.yml`: Build documentation
- `release.yml`: Create releases
```bash
# Run all checks
./scripts/check.sh

# Run specific checks
./scripts/check.sh test
./scripts/check.sh lint
./scripts/check.sh docs
```

- Update version in `setup.py`
- Update changelog
- Create release branch
- Run full test suite
- Build documentation
- Create GitHub release
- Publish to PyPI
1. **API Key Issues**
   - Check environment variables
   - Verify API key permissions
   - Check rate limits

2. **Model Loading Issues**
   - Check model availability
   - Verify model configuration
   - Check GPU memory (if using)

3. **Memory Issues**
   - Reduce batch size
   - Enable gradient checkpointing
   - Use mixed precision training
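The three memory mitigations above usually surface as training-configuration knobs that get toggled together; a hedged sketch of that pattern (the `TrainingConfig` field names are illustrative, not the SDK's actual fine-tuning schema):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TrainingConfig:
    batch_size: int = 8
    gradient_checkpointing: bool = False
    mixed_precision: bool = False

def reduce_memory(cfg: TrainingConfig) -> TrainingConfig:
    """Apply the mitigations above: halve the batch size and enable both features."""
    return replace(
        cfg,
        batch_size=max(1, cfg.batch_size // 2),
        gradient_checkpointing=True,
        mixed_precision=True,
    )

cfg = reduce_memory(TrainingConfig())
```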
If you still need help:

- Search existing issues
- Ask in GitHub Discussions
Before contributing to the MultiMind SDK, you must sign our Contributor License Agreement. This agreement ensures that the project has the necessary rights to use, modify, and distribute your contributions.
The CLA serves several important purposes:
- Establishes clear terms for contributions
- Protects both contributors and the project
- Ensures the project can be used under its chosen license
- Provides a record of contributor consent
1. **Review the Agreement**
   - Read the CLA document
   - Understand your rights and obligations
   - Review the license terms

2. **Sign the Agreement**
   - Individual contributors: sign through CLA Assistant
   - Corporate contributors: contact the project maintainers for a corporate CLA

3. **Link Your GitHub Account**
   - Connect your GitHub account to CLA Assistant
   - This allows automatic verification of your contributions

4. **Verify Status**
   - Check your CLA status in pull requests
   - CLA Assistant will comment on PRs with status
   - Ensure your email matches in your Git config and CLA
1. **For Individual Contributors**
   - Must be 18 years or older
   - Must have authority to grant the rights
   - Must provide valid contact information
   - Must use a consistent identity across contributions

2. **For Corporate Contributors**
   - Must have authority to bind the organization
   - Must provide company details
   - Must designate authorized contributors
   - Must maintain current contact information

3. **For All Contributors**
   - Must agree to the terms of the MIT License
   - Must warrant contributions are original work
   - Must not include third-party code without permission
   - Must disclose any relevant patents or IP
- All pull requests require CLA verification
- CLA Assistant automatically checks status
- Maintainers will not merge PRs without CLA
- Existing contributors must sign for new contributions
If you need to update your CLA information:
- Contact the project maintainers
- Provide updated information
- Sign a new agreement if necessary
- Update your Git configuration
1. **Do I need to sign for each contribution?**
   - No, one signature covers all contributions
   - Keep your CLA information current

2. **What if I change employers?**
   - Update your CLA if contributing on behalf of a company
   - Personal contributions remain covered

3. **Can I revoke my CLA?**
   - No, but you can stop contributing
   - Existing contributions remain licensed

4. **What about small changes?**
   - All contributions require a CLA
   - No exceptions for size or type
For questions about the CLA process, contact the project maintainers at support@multimind.dev.
For more details, see the Architecture Overview and API Reference.