Multi-agent infrastructure to run a startup from idea to scale. Voice-first AI operating system.
Idea → Voice Dump → Idea Evaluator → [Ops Agent, Product Agent, Sales & Marketing Agent]
3 agents, 41 skills covering every function of a startup:
**Ops Agent**: Strategy, business analysis, and operational planning.
- Executive Summary (problem breakdown, why-now, market trends, risks, exit strategy)
- Competitive Analysis & Differentiation
- Business Model & Monetization
- ICP / Personas / Audience Behavior
- Positioning
- PRD (Product Requirements Document)
- Solution & GTM (Go-to-Market)
**Product Agent**: Takes a PRD and turns it into a working product.
- Engineering (architecture, stack selection, API design)
- Backend Code
- Frontend Code
- Prompt Engineering
- QA (test plans, automated tests)
- Design (design system, UX principles)
- Wireframes
- Asset Manager
- Website Architecture
- MVP Scoping
**Sales & Marketing Agent**: All go-to-market execution.
- Content Engine (video, written, image)
- Lead Generation & Enrichment
- Outreach (paid ads, email campaigns, Product Hunt)
- Community (events, education, partnerships)
- Customer Success (signup flow, landing pages, launch posts)
- Analysis (A/B testing, feedback analysis)
- Content Scheduling
- Documentation
- Voice Dump — Speak or type your idea
- Idea Evaluator — AI evaluates feasibility, novelty, and market fit
- Delegation — Tasks are routed to the right agent and skill
- Execution — Each skill produces a specific deliverable
- Memory Bus — All context is shared across agents for coherence
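The delegation, execution, and memory-bus steps above can be sketched as a small orchestration loop. All names here (`Skill`, `MemoryBus`, `routeTask`, `execute`) are illustrative assumptions, not the project's actual API:

```typescript
// Illustrative sketch of the delegate -> execute -> share pipeline.

type Deliverable = { skillId: string; output: string };

// Memory Bus: shared context visible to every agent.
class MemoryBus {
  private entries: Deliverable[] = [];
  write(d: Deliverable): void { this.entries.push(d); }
  read(): Deliverable[] { return [...this.entries]; }
}

type Skill = {
  id: string; // e.g. "ops:prd"
  run: (idea: string, context: Deliverable[]) => string;
};

// Delegation: route a task to the matching skill.
function routeTask(skills: Skill[], skillId: string): Skill {
  const skill = skills.find((s) => s.id === skillId);
  if (!skill) throw new Error(`Unknown skill: ${skillId}`);
  return skill;
}

// Execution: a skill produces a deliverable, then shares it on the bus
// so later skills see it as context.
function execute(bus: MemoryBus, skill: Skill, idea: string): Deliverable {
  const output = skill.run(idea, bus.read());
  const deliverable: Deliverable = { skillId: skill.id, output };
  bus.write(deliverable);
  return deliverable;
}
```

In this sketch each `run` call receives everything written to the bus so far, which is what keeps deliverables across agents coherent.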
```sh
# Install
npm install

# Set your API key
export ANTHROPIC_API_KEY=sk-ant-...

# Run the voice terminal (text mode)
npm start

# Or run a specific skill directly
npx tsx src/index.ts
```

- `/idea <text>` — Evaluate a new startup idea
- `/run <skill-id>` — Run a specific skill (e.g., `/run ops:prd`)
- `/agents` — List all agents and skills
- `/memory` — Show shared context
- `/quit` — Exit
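The `<skill-id>` argument to `/run` follows an `agent:skill` format (e.g. `ops:prd`). A minimal sketch of how such an id could be parsed; the function name and error handling are assumptions for illustration, not the repo's actual code:

```typescript
// Parse a skill id of the form "agent:skill", e.g. "ops:prd".
// Hypothetical helper; the real CLI may parse ids differently.
function parseSkillId(id: string): { agent: string; skill: string } {
  const [agent, skill] = id.split(":");
  if (!agent || !skill) {
    throw new Error(`Invalid skill id "${id}": expected "<agent>:<skill>"`);
  }
  return { agent, skill };
}
```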
```sh
# Type check
npm run lint

# Run tests
npm test

# Watch mode
npm run dev
```

- TypeScript — Strict mode, no `any`
- Claude API (Anthropic SDK) — LLM backbone
- Pino — Structured logging
- Zod — Schema validation
- Vitest — Testing
The Voice Terminal supports pluggable transcription (Whisper API, Deepgram) and synthesis (OpenAI TTS, ElevenLabs). It runs in text mode by default; voice mode integrates with clui-cc (the Voice Terminal Electron app).
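Pluggable providers like these are commonly modeled as interfaces with swappable implementations. A hedged sketch under that assumption; the interface and class names are invented here, not the repo's API:

```typescript
// Hypothetical provider interfaces for pluggable speech I/O.
interface Transcriber {
  transcribe(audio: Buffer): Promise<string>;
}
interface Synthesizer {
  synthesize(text: string): Promise<Buffer>;
}

// Text mode as a trivial provider: treat the "audio" bytes as UTF-8 text.
class TextModeTranscriber implements Transcriber {
  async transcribe(audio: Buffer): Promise<string> {
    return audio.toString("utf8");
  }
}
```

A Whisper- or Deepgram-backed implementation would satisfy the same interface, so the terminal loop never needs to know which provider is active.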
MIT