This guide will walk you through setting up prom-analytics-proxy and configuring your query clients to use it.
- A running Prometheus, Thanos, or Cortex instance
- Access to configure your query clients (Grafana, Perses, etc.)
- Go 1.21+ (if building from source)
```shell
# Clone the repository
git clone https://github.com/nicolastakashi/prom-analytics-proxy.git
cd prom-analytics-proxy

# Build and run
make build
./prom-analytics-proxy -upstream http://your-prometheus-server:9090
```

Or run the published container image:

```shell
docker run -p 9091:9091 \
  ghcr.io/nicolastakashi/prom-analytics-proxy:latest \
  -upstream http://your-prometheus-server:9090
```

The easiest way to try out the proxy is using the provided Docker Compose setup:

```shell
cd examples
docker compose up -d
```

This will start:
- Prometheus (port 9090) - with sample rules and configuration
- PostgreSQL (port 5432) - database backend for the proxy
- prom-analytics-proxy (port 9091) - the proxy itself
- Perses (port 8080) - dashboard tool with pre-configured dashboards
- Metrics Usage (port 8081) - metrics usage tracking UI
See the examples/docker-compose.yaml and examples/config/ for the full configuration.
The proxy listens on `:9091` by default.
This is the critical step! You must update your query clients to send queries to the proxy instead of directly to Prometheus.
- Go to Configuration → Data Sources
- Edit your Prometheus data source
- Change the URL from `http://prometheus:9090` to `http://prom-analytics-proxy:9091`
- Click Save & Test
Update your datasource configuration to point to the proxy:
```yaml
datasources:
  - name: PrometheusDemo
    default: true
    plugin:
      kind: PrometheusDatasource
      spec:
        proxy:
          kind: HTTPProxy
          spec:
            url: http://prom-analytics-proxy:9091 # Changed from prometheus:9090
```

Update your Prometheus client configuration to use the proxy URL:
```go
// Before
prometheusURL := "http://prometheus:9090"

// After
prometheusURL := "http://prom-analytics-proxy:9091"
```

If your applications use a Kubernetes Service to reach Prometheus, you can:
Option 1: Update the Service selector to point to the proxy instead of Prometheus
Option 2: Deploy the proxy as a sidecar alongside your application
Option 3: Create a new Service for the proxy and update your applications to use it
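For Option 3, a minimal Service sketch might look like the following. The Service name, namespace, and label selector are assumptions; match them to your actual proxy Deployment:

```yaml
# Hypothetical Service exposing the proxy on its default port 9091.
apiVersion: v1
kind: Service
metadata:
  name: prom-analytics-proxy
  namespace: monitoring        # assumed namespace
spec:
  selector:
    app: prom-analytics-proxy  # assumed label on the proxy pods
  ports:
    - name: http
      port: 9091
      targetPort: 9091
```

Applications would then query `http://prom-analytics-proxy.monitoring.svc:9091` instead of the Prometheus Service.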
- Open the web UI at `http://localhost:9091` (or your proxy address)
- Execute some queries from your clients (open Grafana dashboards, etc.)
- Refresh the UI - you should see captured query analytics appear
- Proxy is running and accessible
- Query clients are configured to use the proxy URL (`:9091`)
- Test queries work through the proxy
- Analytics data appears in the web UI
- Proxy logs show query traffic
If you don't see any data, verify that your clients are actually sending queries to the proxy (check the logs). See the Troubleshooting Guide for common issues.
- Configure the database backend (PostgreSQL or SQLite)
- Tune performance settings for your workload
- Configure inventory sync for metrics discovery
- Set up the OTLP ingester for write-path filtering and live catalog population
- Set up tracing (optional)
- Explore the API Reference for programmatic access
The examples directory contains complete working configurations.
The config directory contains example configurations (config.yaml).
The demo directory contains two working demos.
docker-compose.yaml - Complete stack with:
- Prometheus with alerting rules
- NGINX configured as Prometheus proxy
- PostgreSQL
- prom-analytics-proxy configured to use PostgreSQL
- Perses with sample dashboards
- Perses Metrics Usage integration
docker-compose.yaml - Complete stack with:
- Prometheus with alerting rules
- PostgreSQL
- Redis
- Node Exporter
- OpenTelemetry Collector
- prom-analytics-proxy API configured to use PostgreSQL
- prom-analytics-proxy ingester configured to use PostgreSQL, Redis, and OpenTelemetry
- Perses with sample dashboards
- Perses Metrics Usage integration
The kube directory contains an opinionated set of Kubernetes YAML manifests.
```shell
./prom-analytics-proxy \
  -upstream http://prometheus:9090 \
  -database-provider sqlite \
  -sqlite-database-path ./data/analytics.db
```

```shell
./prom-analytics-proxy \
  -upstream http://prometheus:9090 \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -postgresql-port 5432 \
  -postgresql-database prom_analytics \
  -postgresql-user analytics \
  -insert-batch-size 50 \
  -insert-buffer-size 500
```

Sensitive data such as passwords can be supplied via environment variables instead of command-line flags. The following variables are supported:
- POSTGRESQL_USER
- POSTGRESQL_PASSWORD
- POSTGRESQL_DATABASE
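For example, the credentials can be exported before starting the proxy (the values here are placeholders; substitute your real ones):

```shell
# Hypothetical credentials; substitute your real values.
export POSTGRESQL_USER=analytics
export POSTGRESQL_DATABASE=prom_analytics
export POSTGRESQL_PASSWORD='your-password'

# The corresponding -postgresql-user/-postgresql-password/-postgresql-database
# flags can then be dropped from the command line, e.g.:
# ./prom-analytics-proxy -upstream http://prometheus:9090 \
#   -database-provider postgresql -postgresql-addr postgres.example.com
```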
Create a config.yaml file:
```yaml
upstream: "http://prometheus:9090"
insecure-listen-address: ":9091"
database-provider: "postgresql"
postgresql-addr: "postgres.example.com"
postgresql-port: 5432
postgresql-database: "prom_analytics"
postgresql-user: "analytics"
postgresql-password: "your-password"
insert-batch-size: 50
insert-buffer-size: 500
insert-flush-interval: "10s"
inventory:
  enabled: true
  sync_interval: 15m
  time_window: 720h
```

Then run:

```shell
./prom-analytics-proxy -config-file config.yaml
```

For a quick local test:

```shell
# Run Prometheus locally
docker run -p 9090:9090 prom/prometheus

# Run the proxy pointing to it
./prom-analytics-proxy -upstream http://localhost:9090

# Configure Grafana to use http://localhost:9091
```

Deploy the proxy as a sidecar container alongside your application, so all Prometheus queries from that app go through the proxy automatically.
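The sidecar pattern can be sketched as a second container in the application's Pod. The Deployment name, labels, application image, and upstream address below are all illustrative assumptions:

```yaml
# Hypothetical Deployment with the proxy as a sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # assumed application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest          # app queries http://localhost:9091
        - name: prom-analytics-proxy    # the sidecar
          image: ghcr.io/nicolastakashi/prom-analytics-proxy:latest
          args:
            - "-upstream"
            - "http://prometheus.monitoring.svc:9090"  # assumed upstream
          ports:
            - containerPort: 9091
```

The application then points its Prometheus client at `http://localhost:9091`, and every query is captured without any cluster-wide changes.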
Deploy a single proxy instance that all your query clients (Grafana, Perses, custom apps) connect to. This centralizes analytics collection.
Deploy separate proxy instances for different teams, each with their own database. This provides team-specific analytics and isolation.
Run the ingester as a standalone process that receives OTLP metrics, filters unused ones, and forwards the rest. The API server handles analytics and the catalog UI. Both share the same database.
```shell
# API server — usage aggregation only, catalog populated by the ingester
./prom-analytics-proxy api \
  -upstream http://prometheus:9090 \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -inventory-metadata-sync-enabled=false
```

```shell
# Ingester — receives OTLP, filters unused metrics, populates catalog
./prom-analytics-proxy ingester \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -otlp-downstream-address collector:4317 \
  -ingester-catalog-sync-enabled \
  -ingester-cache-enabled \
  -ingester-cache-addr redis:6379
```

Or with a config file:

```yaml
# ingester-config.yaml
ingester:
  catalog_sync:
    enabled: true
    flush_interval: 30s
    seen_ttl: 1h
  redis:
    enabled: true
    addr: redis:6379
```

- Check the Troubleshooting Guide
- Review the Configuration Reference
- Open an issue on GitHub