A full-stack AI chat application with a FastAPI backend and TypeScript/React CLI frontend. Forge provides an interactive terminal-based interface for chatting with AI models powered by Ollama, with support for tools.
- Terminal-based Chat Interface: Beautiful CLI interface built with React and Ink
- Ollama Integration: Seamless integration with Ollama for local AI model inference
- Streaming Responses: Real-time streaming of AI responses for better UX
- Session Management: Persistent chat history with session support
- Command System: Built-in commands for model management and navigation
1. Add file reading/writing tools.
2. Add tool confirmation capabilities.
3. Add a model-download utility to the frontend.
4. Add command-running tools for the models.
5. Add MCP server support.
6. Fix re-rendering bugs in the frontend.
- FastAPI: Modern Python web framework for building APIs
- SQLAlchemy: ORM for database management
- Ollama: Local LLM inference engine
- Agno: Agent framework for AI interactions
- Tavily: Internet search API integration
- SQLite: Lightweight database for chat history
- TypeScript: Type-safe JavaScript
- React: UI library
- Ink: React renderer for CLI applications
- Pastel: CLI framework for building terminal apps
Before installing Forge, ensure you have the following installed:
- Python 3.8+: Required for the backend
- Node.js 16+: Required for the frontend
- Ollama: Must be installed and running locally
  - Download from ollama.ai
  - At least one model must be downloaded (e.g., `qwen2.5:14b`)
- Tavily API Key: Required for internet search functionality
  - Sign up at tavily.com to get your API key
- Clone the repository:

  ```bash
  git clone https://www.github.com/loeclos/forge.git
  cd forge
  ```

- Install Backend Dependencies:

  With conda:

  ```bash
  conda env create -f environment.yml
  ```

  With pip:

  ```bash
  cd backend
  pip install -r requirements.txt
  ```

- Install Frontend Dependencies:

  ```bash
  cd ../frontend
  npm install
  # or (recommended)
  pnpm install
  ```

- Configure Environment Variables:

  Create a `.env.local` file in the `backend` directory:

  ```env
  TAVILY_API_KEY=your_tavily_api_key_here
  ```

  Create a `.env.local` file in the `frontend` directory:

  ```env
  MAIN_ENDPOINT=http://127.0.0.1:8000
  ```

- Build Frontend:

  ```bash
  cd frontend
  npm run build
  # or
  pnpm build
  ```
For detailed installation instructions, see docs/INSTALLATION.md.
- Start the Backend Server:

  ```bash
  cd backend
  uvicorn app.main:app --reload --host 127.0.0.1 --port 8000
  ```

- Start the Frontend CLI (in a new terminal):

  ```bash
  cd frontend
  npm run build
  node dist/cli.js
  # or
  pnpm build && node dist/cli.js
  ```
Alternatively, to run without auto-reload, with the backend listening on all interfaces:

- Start Backend:

  ```bash
  cd backend
  uvicorn app.main:app --host 0.0.0.0 --port 8000
  ```

- Run Frontend:

  ```bash
  cd frontend
  npm run build
  node dist/cli.js
  ```
The backend configuration is managed in backend/app/core/config.py. Key settings (a rough sketch of the settings module follows this list):

- `APP_NAME`: Application name (default: "Sagaforge")
- `MODEL`: Default Ollama model to use (default: "qwen2.5:14b")
- `TAVILY_API_KEY`: API key for Tavily search (from environment variable)
- `DATABASE_URL`: SQLite database path
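The exact contents of config.py aren't reproduced here; the snippet below is only a rough sketch of such a settings module, assuming pydantic-settings and a placeholder database path:

```python
# Hypothetical sketch of backend/app/core/config.py -- the real module may
# differ. Assumes pydantic-settings; the DATABASE_URL default is a placeholder.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env.local", extra="ignore")

    APP_NAME: str = "Sagaforge"
    MODEL: str = "qwen2.5:14b"
    TAVILY_API_KEY: str = ""                          # read from the environment
    DATABASE_URL: str = "sqlite:///./app/db/chat.db"  # placeholder path


settings = Settings()
```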
The frontend configuration is managed via environment variables in .env.local:
- `MAIN_ENDPOINT`: Backend API endpoint (default: `http://127.0.0.1:8000`)
forge/
├── backend/ # FastAPI backend application
│ ├── app/
│ │ ├── api/ # API route handlers
│ │ │ └── v1/ # API version 1 routes
│ │ ├── config/ # Configuration modules
│ │ ├── core/ # Core application settings
│ │ ├── db/ # Database files
│ │ ├── logs/ # Application logs
│ │ ├── services/ # Business logic services
│ │ ├── tools/ # AI agent tools
│ │ ├── database.py # Database configuration
│ │ └── main.py # FastAPI application entry point
│ └── requirements.txt # Python dependencies
│
├── frontend/ # TypeScript/React CLI frontend
│ ├── source/ # Source code
│ │ ├── commands/ # CLI commands
│ │ ├── components/ # React components
│ │ ├── hooks/ # React hooks
│ │ ├── services/ # Service layer
│ │ ├── types/ # TypeScript type definitions
│ │ ├── utils/ # Utility functions
│ │ └── cli.tsx # CLI entry point
│ ├── dist/ # Compiled JavaScript output
│ └── package.json # Node.js dependencies
│
└── docs/ # Documentation
├── INSTALLATION.md # Detailed installation guide
├── API.md # API reference
├── backend/ # Backend documentation
└── frontend/ # Frontend documentation
- Launch the frontend CLI application
- You'll be prompted with a security question about trusting files in the current directory
- Select "Yes, proceed" to continue
- Start typing your message and press Enter
- The AI will respond in real-time with streaming output
Type `/` followed by a command name to access built-in commands:

- `/models` - List all available Ollama models
- `/model` - Show the currently active model
- `/change` - Change the active model
- `/exit` - Exit the application
Example session:

```
Welcome to Forge!
Forge can write, test and debug code right from your terminal.
Describe a task to get started or enter ? for help.

❯ What is the capital of France?
The capital of France is Paris.
```
For complete API documentation, see docs/API.md.
The backend provides the following main endpoints (example calls are sketched after this list):

- `GET /api/models/all` - Get all available models
- `GET /api/models/current` - Get the current model
- `POST /api/models/change` - Change the current model
- `POST /api/models/download/{model_name}` - Download a model
- `GET /api/models/alive` - Check if Ollama is running
- `POST /api/chat` - Send a chat message
- `GET /api/utils/getcwd` - Get the current working directory
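The snippet below is only an illustration of calling these endpoints from Python with the requests library; the JSON field name used for `/api/chat` is an assumption, so check docs/API.md for the real request schema:

```python
# Sketch of calling the backend with the requests library. The "message"
# field name for /api/chat is an assumption; see docs/API.md for the
# actual request schema.
import requests

BASE = "http://127.0.0.1:8000"

# List all available Ollama models and the currently selected one.
print(requests.get(f"{BASE}/api/models/all").json())
print(requests.get(f"{BASE}/api/models/current").json())

# Send a chat message and print the streamed response as it arrives.
with requests.post(f"{BASE}/api/chat",
                   json={"message": "What is the capital of France?"},
                   stream=True) as resp:
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```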
If you only see a blank message box for a while, there are two likely reasons:

- The model you chose is too large and runs slowly on your machine.
- The model you chose doesn't support tool calling.

For the first, try pulling a smaller model from the same family (e.g., qwen3:0.6b instead of qwen3:7b), or simply wait; the response should show up eventually.

For the second, make sure the model you pulled supports tools. All models with tool support are listed at https://ollama.com/search?c=tools.
If you see "Ollama either not installed or not running":

- Ensure Ollama is installed and running
- Check that at least one model is downloaded: `ollama list`
- Verify Ollama is accessible: `ollama ps`
If the frontend can't connect to the backend (a quick connectivity check is sketched below):

- Verify the backend is running on the correct port
- Check that `MAIN_ENDPOINT` in `.env.local` matches the backend URL
- Ensure no firewall is blocking the connection
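As a quick connectivity check, you can call the `/api/models/alive` route directly (a minimal sketch, assuming the default endpoint):

```python
# Minimal connectivity check: confirms the backend answers on the endpoint
# configured in frontend/.env.local (default shown here).
import requests

try:
    resp = requests.get("http://127.0.0.1:8000/api/models/alive", timeout=5)
    print("Backend reachable:", resp.status_code, resp.json())
except requests.exceptions.RequestException as exc:
    print("Backend not reachable - is uvicorn running on port 8000?", exc)
```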
If you get "Model not found" errors:

- List available models with the `/models` command
- Download the model with the Ollama CLI: `ollama pull <model_name>`
For more troubleshooting tips, see docs/INSTALLATION.md.
- Installation Guide (docs/INSTALLATION.md) - Detailed setup instructions
- API Reference (docs/API.md) - Complete API documentation
Contributions are welcome! Please feel free to submit a Pull Request.
For issues, questions, or contributions, please open an issue on the repository.