Quick Start Guide

This guide will walk you through setting up prom-analytics-proxy and configuring your query clients to use it.

Prerequisites

  • A running Prometheus, Thanos, or Cortex instance
  • Access to configure your query clients (Grafana, Perses, etc.)
  • Go 1.21+ (if building from source)

Step 1: Start the Proxy

Option A: Build from Source

# Clone the repository
git clone https://github.com/nicolastakashi/prom-analytics-proxy.git
cd prom-analytics-proxy

# Build and run
make build
./prom-analytics-proxy -upstream http://your-prometheus-server:9090

Option B: Using Docker

docker run -p 9091:9091 \
  ghcr.io/nicolastakashi/prom-analytics-proxy:latest \
  -upstream http://your-prometheus-server:9090

Option C: Using Docker Compose (Recommended for Testing)

The easiest way to try out the proxy is using the provided Docker Compose setup:

cd examples
docker compose up -d

This will start:

  • Prometheus (port 9090) - with sample rules and configuration
  • PostgreSQL (port 5432) - database backend for the proxy
  • prom-analytics-proxy (port 9091) - the proxy itself
  • Perses (port 8080) - dashboard tool with pre-configured dashboards
  • Metrics Usage (port 8081) - metrics usage tracking UI

See the examples/docker-compose.yaml and examples/config/ for the full configuration.

The proxy will start on port :9091 by default.

Step 2: Reconfigure Your Query Clients

This is the critical step! You must update your query clients to send queries to the proxy instead of directly to Prometheus.

For Grafana

  1. Go to Configuration → Data Sources
  2. Edit your Prometheus data source
  3. Change the URL from http://prometheus:9090 to http://prom-analytics-proxy:9091
  4. Click Save & Test
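If you manage Grafana data sources with file-based provisioning instead of the UI, the same change can be expressed in a provisioning file. A sketch using Grafana's standard datasource provisioning format (the file path and data source name are illustrative):

```yaml
# provisioning/datasources/prometheus.yaml (illustrative path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prom-analytics-proxy:9091  # was http://prometheus:9090
    isDefault: true
```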

For Perses

Update your datasource configuration to point to the proxy:

datasources:
  - name: PrometheusDemo
    default: true
    plugin:
      kind: PrometheusDatasource
      spec:
        proxy:
          kind: HTTPProxy
          spec:
            url: http://prom-analytics-proxy:9091  # Changed from prometheus:9090

For Custom Applications

Update your Prometheus client configuration to use the proxy URL:

// Before
prometheusURL := "http://prometheus:9090"

// After
prometheusURL := "http://prom-analytics-proxy:9091"
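Because the proxy forwards the standard Prometheus HTTP API, only the base URL changes; the query paths and parameters stay the same. A minimal sketch (hostnames are illustrative, and `queryURL` is a hypothetical helper, not part of the proxy):

```go
package main

import (
	"fmt"
	"net/url"
)

// queryURL builds a Prometheus instant-query URL against the given base.
// It works identically whether base points at Prometheus or at the proxy.
func queryURL(base, promql string) string {
	params := url.Values{}
	params.Set("query", promql)
	return fmt.Sprintf("%s/api/v1/query?%s", base, params.Encode())
}

func main() {
	// Point at the proxy instead of Prometheus; the API path is unchanged.
	fmt.Println(queryURL("http://prom-analytics-proxy:9091", "up"))
}
```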

For Kubernetes Deployments

If your applications use a Kubernetes Service to reach Prometheus, you can:

Option 1: Update the Service selector to point to the proxy instead of Prometheus

Option 2: Deploy the proxy as a sidecar alongside your application

Option 3: Create a new Service for the proxy and update your applications to use it
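Option 3 can be sketched as a standard Kubernetes Service whose selector targets the proxy pods (the namespace and labels below are illustrative assumptions; match them to your own deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prom-analytics-proxy
  namespace: monitoring        # illustrative namespace
spec:
  selector:
    app: prom-analytics-proxy  # must match your proxy pod labels
  ports:
    - name: http
      port: 9091
      targetPort: 9091
```

Applications then switch their Prometheus URL to the new Service, e.g. http://prom-analytics-proxy.monitoring.svc:9091.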

Step 3: Verify Data Collection

  1. Open the web UI at http://localhost:9091 (or your proxy address)
  2. Execute some queries from your clients (open Grafana dashboards, etc.)
  3. Refresh the UI - you should see captured query analytics appear

Verification Checklist

  • Proxy is running and accessible
  • Query clients are configured to use the proxy URL (:9091)
  • Test queries work through the proxy
  • Analytics data appears in the web UI
  • Proxy logs show query traffic

If you don't see any data, verify that your clients are actually sending queries to the proxy (check the logs). See the Troubleshooting Guide for common issues.

Next Steps

Examples

The examples directory contains complete working configurations.

Configuration

The config directory contains example configuration files (config.yaml).

Demo

The demo directory contains two working demos.

Proxy only

docker-compose.yaml - Complete stack with:

  • Prometheus with alerting rules
  • NGINX configured as a Prometheus proxy
  • PostgreSQL
  • prom-analytics-proxy configured to use PostgreSQL
  • Perses with sample dashboards
  • Perses Metrics Usage integration

Full

docker-compose.yaml - Complete stack with:

  • Prometheus with alerting rules
  • PostgreSQL
  • Redis
  • Node Exporter
  • OpenTelemetry Collector
  • prom-analytics-proxy API configured to use PostgreSQL
  • prom-analytics-proxy ingester configured to use PostgreSQL, Redis, and OpenTelemetry
  • Perses with sample dashboards
  • Perses Metrics Usage integration

Kubernetes

The kube directory contains an opinionated set of Kubernetes YAML manifests.

Command Line

Minimal Configuration (SQLite)

./prom-analytics-proxy \
  -upstream http://prometheus:9090 \
  -database-provider sqlite \
  -sqlite-database-path ./data/analytics.db

Production Configuration (PostgreSQL)

./prom-analytics-proxy \
  -upstream http://prometheus:9090 \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -postgresql-port 5432 \
  -postgresql-database prom_analytics \
  -postgresql-user analytics \
  -insert-batch-size 50 \
  -insert-buffer-size 500

You can use environment variables for sensitive data such as passwords. The following variables can be set in the environment instead of being passed on the command line:

  • POSTGRESQL_USER
  • POSTGRESQL_PASSWORD
  • POSTGRESQL_DATABASE

Using a Configuration File

Create a config.yaml file:

upstream: "http://prometheus:9090"
insecure-listen-address: ":9091"
database-provider: "postgresql"

postgresql-addr: "postgres.example.com"
postgresql-port: 5432
postgresql-database: "prom_analytics"
postgresql-user: "analytics"
postgresql-password: "your-password"

insert-batch-size: 50
insert-buffer-size: 500
insert-flush-interval: "10s"

inventory:
  enabled: true
  sync_interval: 15m
  time_window: 720h

Then run:

./prom-analytics-proxy -config-file config.yaml

Common Setup Patterns

Pattern 1: Local Development

# Run Prometheus locally
docker run -p 9090:9090 prom/prometheus

# Run the proxy pointing to it
./prom-analytics-proxy -upstream http://localhost:9090

# Configure Grafana to use http://localhost:9091

Pattern 2: Kubernetes Sidecar

Deploy the proxy as a sidecar container alongside your application, so all Prometheus queries from that app go through the proxy automatically.
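A sketch of what the pod template might look like (the image tag, upstream address, environment variable name, and application details are all illustrative assumptions):

```yaml
# Pod template fragment with the proxy as a sidecar container.
spec:
  containers:
    - name: my-app
      image: my-app:latest
      env:
        - name: PROMETHEUS_URL
          value: http://localhost:9091   # queries go through the sidecar
    - name: prom-analytics-proxy
      image: ghcr.io/nicolastakashi/prom-analytics-proxy:latest
      args:
        - -upstream
        - http://prometheus.monitoring.svc:9090
      ports:
        - containerPort: 9091
```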

Pattern 3: Central Proxy

Deploy a single proxy instance that all your query clients (Grafana, Perses, custom apps) connect to. This centralizes analytics collection.

Pattern 4: Per-Team Proxies

Deploy separate proxy instances for different teams, each with their own database. This provides team-specific analytics and isolation.

Pattern 5: Ingester Mode (OTLP write-path filtering)

Run the ingester as a standalone process that receives OTLP metrics, filters unused ones, and forwards the rest. The API server handles analytics and the catalog UI. Both share the same database.

# API server — usage aggregation only, catalog populated by the ingester
./prom-analytics-proxy api \
  -upstream http://prometheus:9090 \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -inventory-metadata-sync-enabled=false

# Ingester — receives OTLP, filters unused metrics, populates catalog
./prom-analytics-proxy ingester \
  -database-provider postgresql \
  -postgresql-addr postgres.example.com \
  -otlp-downstream-address collector:4317 \
  -ingester-catalog-sync-enabled \
  -ingester-cache-enabled \
  -ingester-cache-addr redis:6379

Or with a config file:

# ingester-config.yaml
ingester:
  catalog_sync:
    enabled: true
    flush_interval: 30s
    seen_ttl: 1h
  redis:
    enabled: true
    addr: redis:6379

Getting Help