Getting Started Guide
This guide helps you set up and run the Observability Benchmarking environment on your local machine.
It’s designed for reproducible, like-for-like performance testing under a consistent observability pipeline (OpenTelemetry + Grafana LGTM + profiling).
Prerequisites
Required Software
- Docker Desktop (or Docker Engine + Docker Compose)
- Version: 24.0.0 or higher
- Docker Compose v2 (included in Docker Desktop)
- Download: https://www.docker.com/products/docker-desktop
System Requirements
- CPU: Minimum 8 cores (16 recommended for full stack)
- RAM: Minimum 16 GB (32 GB recommended)
- Storage: 20 GB free space
- OS: Windows 10/11 (WSL2), macOS, or Linux
Mandatory local path setting (HOST_REPO)
⚠️ This repository is orchestrated via the compose/ project directory.
In compose/.env, you must set HOST_REPO to the absolute path of the repository root on your machine.
If HOST_REPO is incorrect, bind mounts used by the dashboard/orchestrator and benchmark tooling won’t resolve and the environment will start in a broken state.
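For example, a HOST_REPO entry in compose/.env looks like this (the paths below are placeholders; use the absolute path of your own clone):
# compose/.env
# Linux / macOS / WSL2 example
HOST_REPO=/home/youruser/Observability-Benchmarking
# Windows (Docker Desktop) example
# HOST_REPO=C:\Users\youruser\Observability-Benchmarking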
Native-image build time & stability notes
Native-image builds are CPU intensive and can take up to ~10 minutes per service. A first-time build of the full set of services can take 30+ minutes.
On Windows + WSL2 / Docker Desktop, building native images in parallel can exhaust system resources and has been observed to crash Docker Engine.
For stability, the repository defaults to serial image builds:
COMPOSE_PARALLEL_LIMIT=1
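If you want to run the image builds explicitly before starting anything, the same limit can be applied to a one-off build (a sketch; COMPOSE_PARALLEL_LIMIT is a standard Docker Compose setting):
# Build the service images one at a time, without starting containers
COMPOSE_PARALLEL_LIMIT=1 docker compose --project-directory compose --profile=SERVICES build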
Verify Installation
# Check Docker version
docker --version
# Expected: Docker version 24.0.0 or higher
# Check Docker Compose version
docker compose version
# Expected: Docker Compose version v2.x.x
# Verify Docker is running
docker ps
# Should list containers (or show empty list if none running)
Getting started options
There are three supported ways to run the stack. All of them ultimately use Docker Compose in compose/.
- IntelliJ IDEA Run/Debug workflow (recommended for development)
- Uses pre-configured run configurations under .run/ to build and orchestrate repeatable tasks.
- Docker Compose from your terminal
- Best if you want to be explicit and scriptable.
- Dashboard/orchestrator-driven workflow
- Start the stack once, then use the Next.js dashboard (port 3001) as a control plane to run scripts and adjust runtime configuration (a quick reachability check is shown below the list).
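To confirm the dashboard is answering on port 3001 before relying on it as a control plane (a sketch; it assumes the stack, including the dashboard, is already running):
# Confirm the dashboard/orchestrator answers on port 3001
curl -sSf -o /dev/null http://localhost:3001 && echo "dashboard reachable"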
Quick Start (5 Minutes)
Step 1: Clone the Repository
git clone https://github.com/George-C-Odes/Observability-Benchmarking.git
cd Observability-Benchmarking
Step 2: Start the Observability Stack
This launches Grafana, Loki, Tempo, Mimir, Pyroscope, and Alloy:
docker compose --project-directory compose --profile=OBS up --no-recreate --build -d
Expected output:
[+] Running 6/6
✔ Container grafana Started
✔ Container loki Started
✔ Container tempo Started
✔ Container mimir Started
✔ Container pyroscope Started
✔ Container alloy Started
Wait 60 seconds for all services to initialize before proceeding.
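Rather than a fixed wait, you can poll Grafana's health endpoint until it responds (a minimal sketch; /api/health is Grafana's built-in health endpoint, and the 60-second cap is arbitrary):
# Poll Grafana until healthy, giving up after ~60 seconds
for i in $(seq 1 60); do
  curl -sSf http://localhost:3000/api/health > /dev/null && echo "Grafana is up" && break
  sleep 1
done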
Step 3: Access Grafana
- Open browser: http://localhost:3000
- Default credentials: Username a, Password a
- Explore the pre-configured dashboards and data sources
Step 4: Start a Service
Launch a Spring Boot service with virtual threads:
docker compose --project-directory compose --profile=SERVICES up --no-recreate --build -d spring-jvm-virtual
Wait 30 seconds for service startup and JVM initialization.
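If you prefer an explicit readiness check, poll the service's health endpoint instead of sleeping (a sketch; the port and Actuator path match the Service Endpoints section further down):
# Wait until the Spring Boot service reports healthy
until curl -sSf http://localhost:8081/actuator/health > /dev/null; do
  sleep 2
done
echo "spring-jvm-virtual is ready"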
Step 5: Run a Benchmark
docker compose --project-directory compose --profile=RAIN_FIRE up --force-recreate -d
This starts wrk2 load generators targeting all running services.
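Optionally, follow the load-generator output while the run is in progress (assumes the RAIN_FIRE containers are up):
# Stream load-generator logs; Ctrl+C stops following, the containers keep running
docker compose --project-directory compose --profile=RAIN_FIRE logs -f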
Step 6: View Results
- Grafana: http://localhost:3000 - Dashboards, metrics, logs, traces
- Service Metrics: Navigate to Explore → Prometheus
- Logs: Explore → Loki
- Traces: Explore → Tempo
- Profiles: Explore → Pyroscope
Detailed Setup
Configuration with .env File
Create a .env file in the project root for custom configuration:
Windows (PowerShell)
# Copy example configuration
Copy-Item .env.example .env
# Edit with your preferred editor
notepad .env
macOS / Linux
# Copy example configuration
cp .env.example .env
# Edit with your preferred editor
nano .env
Example .env settings:
# Load Generator Configuration
WRK_THREADS=8
WRK_CONNECTIONS=200
WRK_DURATION=180s
WRK_RATE=80000
# Service Configuration
SPRING_HEAP_SIZE=1024m
QUARKUS_HEAP_SIZE=512m
# Resource Limits
SERVICE_CPU_LIMIT=4.0
SERVICE_MEMORY_LIMIT=2g
# Observability Configuration
GRAFANA_PORT=3000
LOKI_PORT=3100
TEMPO_PORT=3200
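To confirm which values Docker Compose will actually use after editing .env, render the resolved configuration (this only prints the merged config; nothing is started):
# Print the fully resolved Compose configuration with variables substituted
docker compose --project-directory compose config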
Profile-Based Deployment
The project uses Docker Compose profiles for modular deployment:
Profile: OBS (Observability Stack)
docker compose --project-directory compose --profile=OBS up -d
Includes:
- Grafana (visualization)
- Loki (logs)
- Tempo (traces)
- Mimir (metrics)
- Pyroscope (profiles)
- Alloy (collector)
Profile: SERVICES (REST Services)
docker compose --project-directory compose --profile=SERVICES up -d
Includes all service implementations:
- Spring Boot (JVM: platform, virtual, reactive)
- Quarkus (JVM: platform, virtual, reactive)
- Quarkus (Native: platform, virtual, reactive)
Profile: RAIN_FIRE (Load Generators)
docker compose --project-directory compose --profile=RAIN_FIRE up -d
Includes wrk2 load generators for each service.
Combined Profiles
# Run everything
docker compose --project-directory compose \
--profile=OBS --profile=SERVICES --profile=RAIN_FIRE \
up --no-recreate --build -d
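When you are finished, pass the same profiles to down to stop and remove everything (append -v only if you also want to delete any data volumes):
# Stop and remove all containers started by the profiles above
docker compose --project-directory compose \
  --profile=OBS --profile=SERVICES --profile=RAIN_FIRE \
  down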
Starting Individual Services
# Spring Boot with virtual threads
docker compose --project-directory compose up -d spring-jvm-virtual
# Quarkus reactive (JVM)
docker compose --project-directory compose up -d quarkus-jvm-reactive
# Quarkus native with platform threads
docker compose --project-directory compose up -d quarkus-native-platform
Service Endpoints
Once services are running:
- Spring JVM (Virtual): http://localhost:8081/api/cache/key1
- Spring JVM (Platform): http://localhost:8082/api/cache/key1
- Spring JVM (Reactive): http://localhost:8083/api/cache/key1
- Quarkus JVM (All modes): http://localhost:8090/api/cache/key1
- Quarkus Native (All modes): http://localhost:8091/api/cache/key1
Health checks available at /q/health (Quarkus) or /actuator/health (Spring).
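For example, using the ports listed above, a quick probe of each framework's health endpoint looks like this:
# Spring Boot (Actuator)
curl -s http://localhost:8081/actuator/health
# Quarkus (SmallRye Health)
curl -s http://localhost:8090/q/health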
Running Your First Benchmark
1. Prepare the Environment
# Ensure observability stack is running
docker compose --project-directory compose --profile=OBS ps
# Start the service you want to benchmark
docker compose --project-directory compose up -d spring-jvm-virtual
Wait / warm up (cross-platform)
- Windows PowerShell:
Start-Sleep -Seconds 30
- macOS/Linux:
sleep 30
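Beyond a plain sleep, sending a short burst of warm-up traffic helps JIT-compiled services reach a steady state before you measure (a sketch; the endpoint and request count are illustrative):
# Fire 1000 warm-up requests at the target endpoint before benchmarking
for i in $(seq 1 1000); do
  curl -s -o /dev/null http://localhost:8081/api/cache/key1
done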
2. Manual Benchmark with wrk2
# Install wrk2 (if not using Docker)
# On Ubuntu/Debian:
sudo apt-get install build-essential libssl-dev git
git clone https://github.com/giltene/wrk2.git
cd wrk2
make
# The wrk2 build produces a binary named "wrk"; install it as "wrk2" so the command below works
sudo cp wrk /usr/local/bin/wrk2
# Run benchmark
wrk2 -t 8 -c 200 -d 180s -R 80000 --latency \
http://localhost:8081/api/cache/key1
3. Automated Benchmark (Docker)
# Use the pre-configured load generator
docker compose --project-directory compose --profile=RAIN_FIRE up --force-recreate -d
# Monitor in Grafana
# Results saved to ./results/
4. Analyze Results
Windows (PowerShell)
# View wrk2 output (adjust file name/path to your run)
Get-Content results\latest-benchmark.txt
# Check Docker stats
docker stats --no-stream
macOS / Linux
# View wrk2 output
cat results/latest-benchmark.txt
# Check Docker stats
docker stats --no-stream
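To pull the headline numbers out of a wrk2 report without reading the whole file, a simple grep over the percentile and throughput lines works (assumes the file name used above and a run started with --latency):
# Show latency percentiles and summary throughput from the wrk2 output
grep -E '%|Requests/sec' results/latest-benchmark.txt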
Screenshots and reporting
Screenshots (for Grafana dashboards, traces, logs, and flame graphs) are kept under docs/images/screenshots/.
See docs/images/README.md for naming and inclusion guidance.
Next steps
- Read Benchmarking Methodology: benchmarking.html
- Review Tools & Technologies: tools-technologies.html
- If you’re adding a new benchmark target: adding-a-service.html
- When you publish results, store raw outputs and summaries under results/ (see results/README.md).