MACA is an EXPERIMENTAL coding agent that implements the Agent Client Protocol (ACP), allowing any ACP-compatible IDE to integrate it.
- Works hard to reduce the number of LLM turns needed to complete a task by proactively providing as context:
- A project map (based on language server symbols and diagnostic messages).
- Relevant code snippets for the prompt (via vector search over code embeddings).
- Always adds up-to-date context (like file contents) for each LLM invocation. History rewrites happen only every several turns, so that LLM context caching can still be used effectively.
- Allows the LLM to request specific function/class/method/variable definitions instead of dumping large files (see the sketch after this list).
- Allows the LLM to drop pieces of context it no longer needs.
- Automatically manages git branches, worktrees, and squash+rebase merges into main once you're happy, so you can fearlessly parallelize things.
- Uses only git for session management; a session is a branch, and each prompt and each LLM turn is a commit. Tool calls and results are stored in the commit message.
- Forces the LLM to use a plan, test, implement, refactor process.
- Performs safe project-wide symbol renames using the language server.
- Runs shell commands in devcontainers to isolate them from your host system while providing all the necessary tools.
- Automatically downloads and runs relevant language servers in Podman containers.
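The in-prompt command set is internal; purely as a sketch, an exchange for the two context-management features above might look like this (GET_DEFINITION and DROP_CONTEXT are hypothetical names, suggested only by the GET_SYMBOL_TYPE entry in the TODO list below, and the paths are made up):

```
GET_DEFINITION src/session.ts Session.commitTurn   # ask for one symbol's definition instead of the whole file
DROP_CONTEXT src/embeddings.ts                     # drop a piece of context that is no longer needed
```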
Requirements:
- Node.js
- Podman (for LSP servers and shell command isolation)
- An OpenRouter API key (for LLM calls and code embeddings)
Start your ACP client (IDE, IDE plugin, or something like Toad) and point it at the maca.ts executable as your agent. If your client does not support authMethods, set OPENROUTER_API_KEY in your environment.
The agent communicates over JSON-RPC on stdin/stdout per the ACP spec.
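For illustration, the first few client-to-agent frames might look like this (abridged; the message shapes follow the ACP spec, which is authoritative, while the sessionId and prompt text here are made up):

```json
{"jsonrpc": "2.0", "id": 0, "method": "initialize", "params": {"protocolVersion": 1}}
{"jsonrpc": "2.0", "id": 1, "method": "session/new", "params": {"cwd": "/path/to/project", "mcpServers": []}}
{"jsonrpc": "2.0", "id": 2, "method": "session/prompt", "params": {"sessionId": "sess-1", "prompt": [{"type": "text", "text": "Rename foo to bar"}]}}
```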
toad acp /path/to/maca.ts

Integration tests use a test runner that spawns MACA as a subprocess with MACA_TEST_MODE=1, which disables real LLM and embedding calls.
npm test # run all
npm test tests/basic.test.hjson # run a specific test file
npm test -- --update-prompts # rewrite test files to include observed LLM prompts (first argument to fakeLLM)
Tests are hjson files containing an array of steps. Each step is [command, ...args]:
* ["setupFile", path, content] — create a file in baseRoot (before session)
* ["writeFile", path, content] — write a file in baseRoot (after session started, committed)
* ["fakeLLM", expectedPrompt, cannedResponse] — verifies an expected LLM prompt and returns a canned response
* ["sendPrompt", text] — send a prompt
* ["expectUpdate", {field: "prefix{..}"}] — wait for a matching session/update
* ["awaitDone"] — wait for the last prompt to complete
* ["checkFile", path, content] — assert file contents in workRoot
expectUpdate and expectedPrompt do partial matching: objects may omit fields, and within strings {..} can be used as a wildcard. expectedPrompt can also be a shorter array that covers only the tail of the full message list — useful for asserting only the new messages added in a later turn.
The fakeLLM command queues prompt/response pairs, so they must be provided before the agent needs them: put each fakeLLM step before the sendPrompt that triggers it. See tests/basic.test.hjson for a simple example.
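A minimal test file might look like this (illustrative only: the file paths, prompt text, and canned-response placeholder are made up, and a real canned response would have to use the agent's internal edit format):

```hjson
[
  ["setupFile", "src/greet.ts", "export const greeting = 'hi'\n"]
  // expectedPrompt left at null while developing; fill it in with --update-prompts
  ["fakeLLM", null, "...canned LLM response that edits src/greet.ts..."]
  ["sendPrompt", "Change the greeting to 'hello'"]
  ["awaitDone"]
  ["checkFile", "src/greet.ts", "export const greeting = 'hello'\n"]
]
```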
When writing new tests, it is easiest to leave the fakeLLM expectedPrompt at null initially, then use --update-prompts to fill it in, and then verify that the observed prompts look correct.
After making changes to the way the agent composes prompts, make sure all tests are committed, run --update-prompts, and verify the diff to ensure the new prompts look correct.
--update-prompts automatically trims redundant prefix messages from consecutive fakeLLM entries: after the first turn, any messages that are identical to the previous turn's prompt are removed, leaving only the last repeated message as an anchor point. This keeps test files compact.
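For example, if turn 1's prompt was [system, user1] and turn 2's full prompt is [system, user1, assistant1, user2], the trimmed expectedPrompt stored for turn 2 keeps only user1 as the anchor (the {role, content} message shape below is an illustrative assumption, not the actual format):

```hjson
[
  "fakeLLM"
  [
    {role: "user", content: "user1"}           # anchor: the last message repeated from turn 1's prompt
    {role: "assistant", content: "assistant1"} # new in turn 2: the previous LLM reply
    {role: "user", content: "user2"}           # new in turn 2: fresh context + next instruction
  ]
  "canned response for turn 2"
]
```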
TODO:
- A class 'definition' should not include its method bodies
- More tests
- LLM streaming
- Add a GET_SYMBOL_TYPE command