This project demonstrates two methods to collect telemetry from PostgreSQL using OpenTelemetry, visualized through Jaeger (for traces) and Prometheus (for metrics). It is built to run fully inside GitHub Codespaces using Docker Compose — no local setup required.
In particular, it shows:

- How to auto-instrument PostgreSQL queries in a Python app (client-side tracing)
- How to collect PostgreSQL metrics via `postgres_exporter` (server-side metrics)
- How to send data to the OpenTelemetry Collector and visualize it with Jaeger & Prometheus
```
otel-postgres-demo/
│
├── app/
│   └── main.py                  # Flask app with PostgreSQL query (Method 1)
│
├── .devcontainer/
│   └── devcontainer.json        # GitHub Codespaces setup
│
├── otel-collector-config.yml    # Config for the OpenTelemetry Collector
├── docker-compose.yml           # Spins up all required services
├── requirements.txt             # Python dependencies
└── README.md                    # You’re here
```
| Service | Role |
|---|---|
| `flask-app` | Python app with an OTel-instrumented DB client (`psycopg2`) |
| `postgres` | PostgreSQL database |
| `postgres-exporter` | Exposes DB metrics to Prometheus |
| `otel-collector` | Central OTel telemetry collector (traces + metrics) |
| `jaeger` | Observability backend for viewing traces |
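For orientation, the sketch below shows one way these services might be wired together in `docker-compose.yml`. Image tags, credentials, and port mappings are illustrative assumptions; the actual file in this repo is the source of truth.

```yaml
# Illustrative sketch only; see docker-compose.yml in this repo for the real definitions.
services:
  postgres:
    image: postgres:15                     # assumed tag
    environment:
      POSTGRES_USER: demo                  # assumed credentials
      POSTGRES_PASSWORD: demo
      POSTGRES_DB: demo

  postgres-exporter:
    image: prometheuscommunity/postgres-exporter
    environment:
      # Connection string the exporter uses to read pg_stat_* views
      DATA_SOURCE_NAME: "postgresql://demo:demo@postgres:5432/demo?sslmode=disable"
    ports:
      - "9187:9187"                        # metrics endpoint referenced later in this README

  otel-collector:
    image: otel/opentelemetry-collector-contrib
    command: ["--config=/etc/otel-collector-config.yml"]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    ports:
      - "4317:4317"                        # OTLP gRPC from the Flask app

  jaeger:
    image: jaegertracing/all-in-one        # OTLP ingest assumed enabled
    ports:
      - "16686:16686"                      # Jaeger UI

  flask-app:
    build: ./app
    ports:
      - "5000:5000"
    depends_on: [postgres, otel-collector]
```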
Using GitHub Codespaces, you can run this entire setup without installing anything locally.
- Fork or clone this repo.
- Open it in GitHub Codespaces.
- Run the following command to spin up all services:

  ```bash
  docker-compose up --build
  ```
You should see logs like:
```text
flask-app       | * Running on http://0.0.0.0:5000
otel-collector  | Everything is ready. Begin collecting!
```
The app connects to PostgreSQL using an OTel-instrumented client (`psycopg2`). It auto-generates spans that trace query duration and attaches them to the overall trace context.

- Correlates database query latency with user-facing endpoints
- Helps root-cause slowdowns (e.g., `/checkout` is slow → a 2s JOIN query)
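As a rough illustration of what this client-side setup involves, here is a minimal sketch in the spirit of `app/main.py`. It is not the actual file: the collector endpoint (`otel-collector:4317`), the connection parameters, and the query are assumptions based on the Compose setup described above, and it also assumes the OTLP exporter package (`opentelemetry-exporter-otlp`) is installed alongside the packages listed in the install step below.

```python
# Minimal sketch of Method 1 instrumentation; the real app/main.py may differ.
import psycopg2
from flask import Flask, jsonify

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor

# Export spans to the OpenTelemetry Collector over OTLP/gRPC (endpoint is an assumption).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

# After this call, every psycopg2 query automatically becomes a span.
# (If Flask itself is also instrumented, the DB span nests under the HTTP request span.)
Psycopg2Instrumentor().instrument()

app = Flask(__name__)

@app.route("/")
def index():
    # Connection parameters are placeholders; match them to docker-compose.yml.
    conn = psycopg2.connect(host="postgres", dbname="demo", user="demo", password="demo")
    with conn.cursor() as cur:
        cur.execute("SELECT 1;")  # the traced query
        cur.fetchone()
    conn.close()
    return jsonify(message="Randoli loves observability!")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```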
- Install Dependencies (handled by the Dockerfile):

  ```bash
  pip install opentelemetry-sdk opentelemetry-instrumentation-psycopg2
  ```
- Run the App

  Open a terminal in Codespaces and trigger a request:

  ```bash
  curl http://localhost:5000/
  ```

  Expected response:

  ```json
  { "message": "Randoli loves observability!" }
  ```
- View Traces

  - Open the Jaeger UI: http://localhost:16686
  - Look under the `flask-app` service to inspect traces from the PostgreSQL query
This approach collects PostgreSQL internal metrics such as query counts, slow queries, and cache hits via `postgres_exporter`.
- Gives macro-level visibility (e.g., cache hit ratio, active connections)
- Complements trace-level info with system-wide DB metrics
- Run `postgres_exporter`

  It exposes metrics at http://localhost:9187/metrics. Sample output:

  ```text
  pg_stat_database_xact_commit{datname="demo"} 104
  pg_stat_activity_count{state="active"} 3
  ```
- Configure the OpenTelemetry Collector

  The `otel-collector-config.yml` is already set up to scrape the exporter and send metrics to Prometheus (or other OTLP-compatible backends); a sketch of such a config appears after these steps.
- Verify Metrics

  - View raw metrics at http://localhost:9187/metrics
  - Hook into Grafana or any Prometheus frontend for dashboards
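For reference, a collector configuration along the lines described in the configuration step above might look like the sketch below. The scrape target, ports, exporter names, and pipeline wiring are assumptions; `otel-collector-config.yml` in this repo is the authoritative version.

```yaml
# Illustrative sketch; see otel-collector-config.yml for the actual configuration.
receivers:
  otlp:                          # traces sent by the Flask app
    protocols:
      grpc:
  prometheus:                    # scrape postgres_exporter for DB metrics
    config:
      scrape_configs:
        - job_name: postgres-exporter
          scrape_interval: 15s
          static_configs:
            - targets: ["postgres-exporter:9187"]

exporters:
  otlp/jaeger:                   # traces -> Jaeger (OTLP ingest assumed enabled)
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:                    # metrics re-exposed for Prometheus to scrape
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [prometheus]
      exporters: [prometheus]
```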
You can view:
- Trace visualizations in Jaeger (http://localhost:16686)
- Metrics exposed by the PostgreSQL exporter (http://localhost:9187/metrics)
Screenshots are included in the `demo/` folder:

- OpenTelemetry traces from PostgreSQL queries, captured via the Flask client instrumentation
- PostgreSQL metrics collected via the OpenTelemetry Collector and Prometheus
- Captured individual PostgreSQL queries as spans
- Linked DB performance to HTTP endpoints
- Monitored PostgreSQL health at the system level
- Gained insight into active queries, buffer usage, cache hits
- This project is built to run inside Codespaces, so your local machine stays clean.
- If you're running locally instead, ensure you have Docker and Python 3.8+ installed.
- If the Codespace is deleted, your data/config is lost unless you commit and push your changes first.