The write-enabled, AI-augmented MCP server for Metabase — create dashboards, ask questions in plain English, and get automated insights through Claude, on any Metabase version.
Metabase shipped an official MCP server in v0.60 focused on read and search. This server complements it with write operations, AI-generated insights, production security controls, and support for Metabase versions older than v0.60.
| Capability | @ai-1luvc0d3/metabase-mcp | Metabase Official (v0.60+) | Other community servers |
|---|---|---|---|
| Read dashboards / cards / databases | ✅ | ✅ | ✅ |
| Write ops (create/update/delete cards, dashboards, collections) | ✅ | ❌ | partial |
| Batch execution (parallel multi-op in one call) | ✅ | ❌ | ❌ |
| Workflow pipelines (chained steps with output references) | ✅ | ❌ | ❌ |
| Natural language → SQL (+ explain / optimize / validate) | ✅ | partial | ❌ |
| Automated insights & trend analysis | ✅ | ❌ | ❌ |
| SQL injection guardrails | ✅ | n/a | ❌ |
| Tiered rate limiting (read / write / LLM) | ✅ | n/a | ❌ |
| Audit logging with risk levels | ✅ | n/a | ❌ |
| Token-optimized compact responses (default) | ✅ | ❌ | partial |
| Server modes (read / write / full) | ✅ | ❌ | ❌ |
| Works on Metabase < v0.60 (no upgrade required) | ✅ | ❌ | varies |
| OAuth per-user permission scoping | ❌ (API key) | ✅ | varies |
Use this if: you want Claude to create content in Metabase, you want AI-generated insights on query results, or you're on a Metabase version older than v0.60.
Use Metabase's official MCP if: you're on v0.60+, only need read/search, and want per-user permission scoping via OAuth.
- 30 tools across read, batch, workflow, write, NLQ, and insight categories
- Batch execution -- run up to 20 read operations in parallel in a single call
- Workflow pipelines -- chain tools sequentially with `$stepName.path` output references between steps
- Compact responses by default -- all tools return compact JSON (~50% token reduction); opt into pretty-printing with `format: "default"`
- Natural language to SQL -- ask questions, get SQL + results (powered by Claude)
- SQL guardrails -- injection detection, DDL/DML blocking, dangerous pattern enforcement
- Tiered rate limiting -- configurable per-minute limits for read, write, and LLM operations
- Audit logging -- every operation logged with risk assessment
- Three server modes -- `read` (safe default), `write`, or `full` (with AI insights)
- Schema caching -- fast NLQ context for large databases
- Download the latest `metabase-mcp-*.mcpb` from GitHub Releases
- Double-click to install in Claude Desktop
- Enter your Metabase URL and API key when prompted — stored securely in the OS keychain
Run directly with npx:

```bash
npx @ai-1luvc0d3/metabase-mcp
```

Or install globally:

```bash
npm install -g @ai-1luvc0d3/metabase-mcp
metabase-mcp
```

Or build from source:

```bash
git clone https://github.com/1luvc0d3/metabase-mcp.git
cd metabase-mcp
npm install
npm run build
npm start
```

Set environment variables or create a `.env` file (see `.env.example`):
| Variable | Required | Default | Description |
|---|---|---|---|
| `METABASE_URL` | Yes | - | Your Metabase instance URL |
| `METABASE_API_KEY` | Yes | - | Metabase API key |
| `MCP_MODE` | No | `read` | Server mode: `read`, `write`, or `full` |
| `ANTHROPIC_API_KEY` | No | - | Enables NLQ and insight tools |
| `METABASE_TIMEOUT` | No | `30000` | Request timeout (ms) |
| `METABASE_MAX_ROWS` | No | `10000` | Max rows returned per query |
| `LOG_LEVEL` | No | `info` | Logging: `debug`, `info`, `warn`, `error` |
| `RATE_LIMIT_REQUESTS_PER_MINUTE` | No | `60` | Rate limit threshold |
- Go to your Metabase instance
- Navigate to Admin > Settings > API Keys
- Click Create API Key
- Copy the key and set it as `METABASE_API_KEY`
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
```json
{
  "mcpServers": {
    "metabase": {
      "command": "npx",
      "args": ["@ai-1luvc0d3/metabase-mcp"],
      "env": {
        "METABASE_URL": "https://your-metabase.example.com",
        "METABASE_API_KEY": "mb_your_api_key_here",
        "MCP_MODE": "read"
      }
    }
  }
}
```

| Mode | Tools | Description |
|---|---|---|
| `read` | 12 + NLQ | Read-only access, batch execution, and workflow pipelines |
| `write` | 22 + NLQ | Adds create/update/delete for cards, dashboards, collections |
| `full` | 30 | All tools including automated insights and trend analysis |
Read (always available)
list_dashboards, get_dashboard, list_cards, get_card, execute_card, list_databases, get_database_schema, execute_query, search_content, get_collections
Batch & Workflow (always available)
batch_execute, run_workflow
Write (write/full modes)
create_card, update_card, delete_card, create_dashboard, update_dashboard, delete_dashboard, add_card_to_dashboard, remove_card_from_dashboard, create_collection, move_to_collection
NLQ (requires ANTHROPIC_API_KEY)
nlq_to_sql, explain_sql, optimize_sql, validate_sql
Insights (full mode + ANTHROPIC_API_KEY)
ask_data, generate_insights, compare_metrics, trend_analysis
You: What dashboards do we have related to customer retention?
Claude uses search_content to find retention-related dashboards, then get_dashboard to summarize the key metrics. You see a ranked list with the most relevant results.
You: Run the "Monthly Active Users" card for the last 90 days
Claude calls list_cards to locate the card, then execute_card with the appropriate time filter. Results come back as a table you can ask follow-up questions about ("what was the biggest dip and when?").
You: Show me the top 10 products by revenue last quarter from the sales database
Claude calls list_databases to find the sales database, get_database_schema to inspect the relevant tables, then generates and runs a SELECT query via execute_query. The query is validated against the SQL guardrails (no DROP/DELETE/UNION, single statement only) before execution. Audit log entry is written with the query and row count.
You: DROP TABLE users
Request is blocked. Claude surfaces: "Blocked SQL pattern detected: DROP — this operation is not allowed." The block is logged as a high-risk audit event.
You: Which support agents closed the most tickets this week, and how does that compare to last week?
Claude uses nlq_to_sql with the database schema as context to generate a comparative SQL query. You can ask it to explain_sql in plain English before running, or optimize_sql to suggest performance improvements — all before hitting your database.
You: Save the MAU trend query we just ran as a card called "MAU — Last 90 Days" in the Growth collection
Claude calls get_collections to find "Growth", then create_card with your validated SQL. The card now lives in your Metabase library and can be re-executed by name in future conversations via execute_card — no LLM tokens spent on re-generating the query.
You: Get me the details for dashboards 1, 3, and 7, plus the schema for the sales database
Claude uses batch_execute to run all four operations in parallel in a single call:
```json
{
  "operations": [
    { "tool": "get_dashboard", "args": { "dashboard_id": 1 } },
    { "tool": "get_dashboard", "args": { "dashboard_id": 3 } },
    { "tool": "get_dashboard", "args": { "dashboard_id": 7 } },
    { "tool": "get_database_schema", "args": { "database_id": 2 } }
  ]
}
```

One tool call instead of four. Results come back with per-operation success/failure, so partial failures don't block the rest.
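The per-operation success/failure behavior can be sketched as follows. This is illustrative only — `batchExecute`, `Operation`, and `OpResult` are hypothetical names, not the server's internals:

```typescript
// Sketch: run operations concurrently; report each outcome independently
// so one failing operation never blocks the rest of the batch.
type Operation = { tool: string; args: Record<string, unknown> };
type OpResult =
  | { tool: string; ok: true; data: unknown }
  | { tool: string; ok: false; error: string };

async function batchExecute(
  ops: Operation[],
  run: (op: Operation) => Promise<unknown>,
): Promise<OpResult[]> {
  return Promise.all(
    ops.map(async (op) => {
      try {
        return { tool: op.tool, ok: true as const, data: await run(op) };
      } catch (err) {
        return { tool: op.tool, ok: false as const, error: String(err) };
      }
    }),
  );
}
```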
You: Find dashboards about revenue, get the first one's cards, and run the top card
Claude uses run_workflow to chain the steps with output references:
```json
{
  "steps": [
    { "name": "find", "tool": "search_content", "args": { "query": "revenue", "type": "dashboard" } },
    { "name": "dash", "tool": "get_dashboard", "args": { "dashboard_id": "$find.results[0].id" } },
    { "name": "data", "tool": "execute_card", "args": { "card_id": "$dash.dashcards[0].card_id" } }
  ]
}
```

Each step can reference results from previous steps using `$stepName.path[index].field` syntax. One round trip instead of three back-and-forth exchanges.
You: Run last quarter's revenue query and tell me what's interesting
Claude uses execute_query to run the query, then generate_insights which asks the Claude API to identify trends, outliers, and recommendations. You get a structured summary: headline number, 3-5 bullet points, and suggested follow-up questions.
Note on data privacy:
`generate_insights`, `ask_data`, `compare_metrics`, and `trend_analysis` send query result rows to the Anthropic API for analysis. See Data Privacy Note for details.
This server is designed for production use with multiple layers of protection:
- SQL Guardrails: Only `SELECT` and `WITH` queries are allowed by default. DDL/DML statements (`DROP`, `DELETE`, `INSERT`, etc.) are blocked. Injection patterns (UNION, comments, multi-statement, file ops, time-based attacks) are detected and rejected.
- Tiered Rate Limiting: Separate limits for read (120/min), write (30/min), and LLM (20/min) operations.
- Audit Logging: Every operation is logged with risk assessment (low/medium/high). Sensitive fields are automatically redacted. Log files are created with secure permissions (owner-only read/write).
- Secret Isolation: API keys are never exposed to tool handlers. Error responses from Metabase are sanitized to prevent credential leakage.
- Redirect Protection: API key headers are never forwarded on HTTP redirects.
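The guardrail idea — an allowlist of statement prefixes plus a denylist of dangerous patterns — can be sketched like this. This is a simplified illustration, not the server's actual implementation, and the pattern list here is deliberately incomplete:

```typescript
// Sketch: allow only SELECT/WITH, then reject known-dangerous patterns.
const BLOCKED_PATTERNS: RegExp[] = [
  /\b(drop|delete|insert|update|alter|truncate|grant)\b/i, // DDL/DML keywords
  /\bunion\b\s+\bselect\b/i,                                // UNION-based injection
  /--|\/\*/,                                                // SQL comments
  /;\s*\S/,                                                 // multi-statement
  /\b(xp_cmdshell)\b|\binto\s+outfile\b/i,                  // OS/file operations
];

function checkQuery(sql: string): { allowed: boolean; reason?: string } {
  const trimmed = sql.trim();
  if (!/^(select|with)\b/i.test(trimmed)) {
    return { allowed: false, reason: "Only SELECT/WITH statements are allowed" };
  }
  for (const pattern of BLOCKED_PATTERNS) {
    if (pattern.test(trimmed)) {
      return { allowed: false, reason: `Blocked SQL pattern detected: ${pattern}` };
    }
  }
  return { allowed: true };
}
```

Note that the prefix check alone is not enough — a query can start with `SELECT` and still smuggle in a `UNION SELECT` or a second statement after a semicolon, which is why the denylist runs even on allowed prefixes.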
When using NLQ or insight tools (ask_data, generate_insights, etc.), query result data is sent to the Anthropic API for analysis. If your queries return sensitive data (PII, financial records, etc.), that data will be processed by Claude. Consider this when enabling NLQ features on databases containing sensitive information.
What this extension collects:
- Your Metabase API key and URL (stored locally in the OS keychain — never transmitted to us)
- Your Anthropic API key, if provided (stored locally in the OS keychain — never transmitted to us)
- No telemetry, analytics, or usage data is collected by this extension
What this extension transmits:
- All Metabase API calls (queries, dashboards, cards) go directly from your machine to your own Metabase instance
- NLQ/insight tool usage sends your natural-language question, database schema context, and query result samples to the Anthropic API for processing (governed by Anthropic's privacy policy)
- If you don't provide an Anthropic API key, no data is sent to Anthropic — NLQ and insight tools are simply disabled
Data retention:
- This extension does not retain any data. Audit logs (if enabled via `AUDIT_LOG_FILE`) are written to your local filesystem only, with owner-only permissions (0600)
Third-party privacy policies:
Reporting security issues: See SECURITY.md for responsible disclosure.
- Verify `METABASE_URL` is correct and reachable (test: `curl $METABASE_URL/api/health`)
- Verify `METABASE_API_KEY` is valid (regenerate in Metabase Admin > Settings > API Keys if needed)
- The API key must have permissions for the databases you want to query
- Only `SELECT` and `WITH` queries are allowed by default
- Even inside a `SELECT`, patterns like `UNION SELECT`, SQL comments (`--`, `/* */`), `xp_cmdshell`, `INTO OUTFILE`, etc. are blocked
- To execute DML (`INSERT`, `UPDATE`, `DELETE`), you must run in `write` or `full` mode AND the SQL must still pass guardrails (it won't — by design)
- Default limits: 120 reads/min, 30 writes/min, 20 LLM calls/min
- Adjust with the `RATE_LIMIT_REQUESTS_PER_MINUTE` env var
- Wait for the retry-after period shown in the error
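The tiered limit behavior can be pictured as one fixed-window counter per tier (read/write/LLM). A minimal sketch — `TierLimiter` is a hypothetical name for illustration, and the server's real limiter may use a different algorithm:

```typescript
// Sketch: fixed 60-second window, independent counter per tier.
class TierLimiter {
  private counts = new Map<string, { windowStart: number; used: number }>();

  constructor(private limits: Record<string, number>) {}

  // Returns true if the call is within the tier's per-minute budget.
  tryAcquire(tier: string, now: number = Date.now()): boolean {
    const limit = this.limits[tier];
    if (limit === undefined) return false; // unknown tier: deny
    const state = this.counts.get(tier);
    if (!state || now - state.windowStart >= 60_000) {
      this.counts.set(tier, { windowStart: now, used: 1 }); // new window
      return true;
    }
    if (state.used >= limit) return false; // budget exhausted this window
    state.used += 1;
    return true;
  }
}
```

Keeping the tiers independent is the point: a burst of cheap reads cannot starve the (much scarcer) write or LLM budget.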
- Requires `ANTHROPIC_API_KEY` — verify it's set
- Check that it starts with `sk-` and has remaining credits
- Insight tools additionally require `MCP_MODE=full`
- Fully quit and restart Claude Desktop
- Check logs: `~/Library/Logs/Claude/mcp*.log` on macOS
- Verify `node --version` is >= 18
This project is young and your input shapes where it goes next — especially now that Metabase has shipped its own official MCP. A minute of your time helps a lot:
- Is this useful for your workflow? Start a GitHub Discussion or star the repo — tells me where to invest.
- Which tools do you actually use? Let me know in Discussions — helps prioritize what stays, what grows.
- Hit a bug? File an issue with your Metabase version, `MCP_MODE`, and reproduction steps.
- Missing a feature? Request it — especially something the official Metabase MCP doesn't cover.
- Running in production? I'd genuinely love to hear about it — open a Discussion or drop a note on the repo.
- Bug reports / feature requests: GitHub Issues
- Questions / general feedback: GitHub Discussions
- Security vulnerabilities: Private disclosure — see SECURITY.md
- Response time: typically within 5 business days
```bash
npm install        # Install dependencies
npm run build      # Compile TypeScript
npm run dev        # Watch mode
npm test           # Run all tests
npm run type-check # Type checking
npm run lint       # Linting
```

See CONTRIBUTING.md for more details.