Getting Started Guide
This guide helps you set up and run the Observability Benchmarking environment on your local machine.
It’s designed for reproducible, like-for-like performance testing under a consistent observability pipeline (OpenTelemetry + Grafana LGTM + profiling).
Prerequisites
Required Software
- Docker Desktop (or Docker Engine + Docker Compose)
- Version: 24.0.0 or higher
- Docker Compose v2 (included in Docker Desktop)
- Download: https://www.docker.com/products/docker-desktop
System Requirements
- CPU: Minimum 8 cores (16 recommended for full stack)
- RAM: Minimum 16 GB (32 GB recommended)
- Storage: 20 GB free space
- OS: Windows 10/11 (WSL2), macOS, or Linux
Mandatory local path setting (HOST_REPO)
⚠️ This repository is orchestrated via the compose/ project directory.
In compose/.env, you must set HOST_REPO to the absolute path of the repository root on your machine.
If HOST_REPO is incorrect, bind mounts used by the dashboard/orchestrator and benchmark tooling won’t resolve and the environment will start in a broken state.
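You can sanity-check the setting from a shell before starting the stack. This is a minimal sketch: `check_host_repo` is a hypothetical helper (not part of the repository) that assumes the standard unquoted `KEY=value` .env format.

```shell
# Hypothetical helper: verify that HOST_REPO in a .env file points at a real
# directory. Assumes plain KEY=value lines (no quoting, no export keyword).
check_host_repo() {
  env_file="$1"
  repo=$(grep -E '^HOST_REPO=' "$env_file" | head -n1 | cut -d= -f2-)
  [ -n "$repo" ] && [ -d "$repo" ]
}

# Usage: check_host_repo compose/.env || echo "Fix HOST_REPO before 'docker compose up'"
```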
Native-image build time & stability notes
Native-image builds are CPU intensive and can take up to ~10 minutes per service. A first-time build of the full set of services can take 30+ minutes.
On Windows + WSL2 / Docker Desktop, building native images in parallel can exhaust system resources and has been observed to crash Docker Engine.
For stability, the repository defaults to serial image builds:
COMPOSE_PARALLEL_LIMIT=1
Verify Installation
# Check Docker version
docker --version
# Expected: Docker version 24.0.0 or higher
# Check Docker Compose version
docker compose version
# Expected: Docker Compose version v2.x.x
# Verify Docker is running
docker ps
# Should list containers (or show empty list if none running)
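If you script your setup, the version check can be automated too. A sketch, assuming the usual `docker --version` output format; `docker_major` is a hypothetical helper:

```shell
# Hypothetical helper: extract the major version number from `docker --version`
# output, e.g. "Docker version 24.0.7, build afdd53b" -> 24.
docker_major() {
  printf '%s\n' "$1" | sed -E 's/^Docker version ([0-9]+)\..*$/\1/'
}

# Usage (with Docker installed):
#   [ "$(docker_major "$(docker --version)")" -ge 24 ] && echo "Docker is new enough"
```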
Getting started options
There are three supported ways to run the stack. All of them ultimately use Docker Compose in compose/.
- IntelliJ IDEA Run/Debug workflow (recommended for development)
- Uses pre-configured run configurations under .run/ to build and orchestrate repeatable tasks.
- Docker Compose from your terminal
- Best if you want to be explicit and scriptable.
- Dashboard/orchestrator-driven workflow
- Start the stack once, then use the Next.js dashboard (port 3001) as a control plane to run scripts and adjust runtime configuration.
Quick Start (5 Minutes)
Step 1: Clone the Repository
git clone https://github.com/George-C-Odes/Observability-Benchmarking.git
cd Observability-Benchmarking
Step 2: Start the Observability Stack
This launches Grafana, Loki, Tempo, Mimir, Pyroscope, and Alloy:
docker compose --project-directory compose --profile=OBS up --no-recreate --build -d
Expected output:
[+] Running 6/6
✔ Container grafana Started
✔ Container loki Started
✔ Container tempo Started
✔ Container mimir Started
✔ Container pyroscope Started
✔ Container alloy Started
Wait 60 seconds for all services to initialize before proceeding.
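Instead of a fixed sleep, you can poll until Grafana answers. A sketch: `wait_for_url` is a hypothetical helper, and Grafana exposes a health endpoint at `/api/health`.

```shell
# Hypothetical helper: poll a URL until it responds (up to `tries` attempts,
# 2 seconds apart). Returns non-zero if the endpoint never came up.
wait_for_url() {
  url="$1"; tries="${2:-30}"; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage: wait_for_url http://localhost:3000/api/health 60
```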
Step 3: Access Grafana
- Open browser: http://localhost:3000
- Default credentials: Username a, Password a
- Explore the pre-configured dashboards and data sources
Step 4: Start a Service
Launch the Quarkus JVM service:
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d quarkus-jvm
Wait 30 seconds for service startup and JVM initialization.
Step 5: Run a Benchmark
docker compose --project-directory compose --profile=RAIN_FIRE up --force-recreate -d
This starts wrk2 load generators targeting all running services.
Step 6: View Results
- Grafana: http://localhost:3000 - Dashboards, metrics, logs, traces
- Service Metrics: Navigate to Explore → Prometheus
- Logs: Explore → Loki
- Traces: Explore → Tempo
- Profiles: Explore → Pyroscope
Detailed Setup
Configuration with .env File
Create a .env file in the project root for custom configuration:
Windows (PowerShell)
# Copy example configuration
Copy-Item .env.example .env
# Edit with your preferred editor
notepad .env
macOS / Linux
# Copy example configuration
cp .env.example .env
# Edit with your preferred editor
nano .env
Example .env settings:
# Benchmark target URLs file — managed via the Dashboard 'Benchmark Targets' tab or by editing config/benchmark-targets.txt directly.
# Contains one full URL per line (e.g. http://quarkus-jvm:8080/hello/platform). Comments and blank lines are ignored.
WRK_TARGETS_FILE=/workspace/config/benchmark-targets.txt
# If false, the wrk2 container still boots and remains available for on-demand docker exec benchmarks
WRK_AUTORUN=true
# If true, the wrk2 container stops after autorun completes
WRK_EXIT_AFTER_AUTORUN=false
# Number of autorun iterations; 0 means run indefinitely
WRK_ITERATIONS=1
# Sleep in seconds between endpoints/iterations
WRK_SLEEP_BETWEEN=10
# Sleep in seconds on boot
WRK_SLEEP_INIT=20
# wrk2 client: number of concurrent threads
WRK_THREADS=5
# wrk2 client: number of concurrent open connections
WRK_CONNECTIONS=200
# wrk2 client: test duration (per endpoint)
WRK_DURATION=3m
# wrk2 client: target request rate per second, total across all threads
WRK_RATE=120000
# If true, export the wrk2 benchmark log to a file
WRK_SAVE_LOGS=true
# Maximum CPU cores and total memory allocated to each benchmarked service container
CORES_LIMIT=2
MEM_LIMIT=832M
HEAP_MIN=64M
HEAP_MAX=704M
OFF_HEAP_MAX=32M
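The heap settings must fit inside MEM_LIMIT, which also has to cover JVM metaspace, thread stacks, and code cache. A quick sanity-check sketch; `to_mb` and `fits_in_limit` are hypothetical helpers that only handle the `M` suffix used above:

```shell
# Hypothetical helpers: strip the M suffix and check that
# HEAP_MAX + OFF_HEAP_MAX stays within MEM_LIMIT. Keep the sum well below
# the limit to leave headroom for metaspace and thread stacks.
to_mb() { printf '%s\n' "$1" | sed 's/[Mm]$//'; }
fits_in_limit() {
  limit=$(to_mb "$1"); heap=$(to_mb "$2"); off=$(to_mb "$3")
  [ $((heap + off)) -le "$limit" ]
}

# Usage: fits_in_limit 832M 704M 32M && echo "JVM budget fits"
```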
Profile-Based Deployment
The project uses Docker Compose profiles for modular deployment:
Profile: OBS (Observability Stack)
docker compose --project-directory compose --profile=OBS up -d
Includes:
- Grafana (visualization)
- Loki (logs)
- Tempo (traces)
- Mimir (metrics)
- Pyroscope (profiles)
- Alloy (collector)
Profile: CONTROL (Dashboard + Orchestrator)
docker compose --project-directory compose --profile=OBS --profile=CONTROL up --no-recreate --build -d
Includes the control-plane services:
- Next.js Dashboard (port 3001) — environment editor, benchmark targets selector, script runner
- Quarkus Orchestrator (port 3002) — Docker control plane with SSE streaming
See control-plane.html for details.
Profile: SERVICES (REST Services)
docker compose --project-directory compose --profile=SERVICES up -d
Includes all service implementations:
- Spring Boot (JVM: platform, virtual, reactive)
- Spring Boot (Native: platform, virtual, reactive)
- Quarkus (JVM: platform, virtual, reactive)
- Quarkus (Native: platform, virtual, reactive)
- Micronaut (JVM: platform, virtual, reactive)
- Micronaut (Native: platform, virtual, reactive)
- Helidon SE (JVM & Native: virtual)
- Helidon MP (JVM & Native: virtual)
- Spark (JVM: platform, virtual)
- Javalin (JVM: platform, virtual)
- Dropwizard (JVM: platform, virtual)
- Vert.x (JVM: reactive)
- Pekko (JVM: reactive)
- Go
- Django (CPython: platform, reactive)
Profile: RAIN_FIRE (Load Generators)
docker compose --project-directory compose --profile=RAIN_FIRE up -d
Includes wrk2 load generators for each service.
Combined Profiles
# Run everything
docker compose --project-directory compose \
--profile=OBS --profile=SERVICES --profile=RAIN_FIRE \
up --no-recreate --build -d
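If the long invocations above get repetitive, a tiny wrapper can assemble them. `compose_cmd` is a hypothetical helper that just prints the command for the profiles you pass, so you can inspect it before running:

```shell
# Hypothetical helper: build the docker compose invocation for a set of
# profiles. Prints the command instead of running it.
compose_cmd() {
  printf 'docker compose --project-directory compose'
  for p in "$@"; do
    printf ' --profile=%s' "$p"
  done
  printf ' up -d\n'
}

# Usage: compose_cmd OBS SERVICES RAIN_FIRE
```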
Starting Individual Services
# Spring Boot JVM with platform threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-jvm-tomcat-platform
# Spring Boot JVM with virtual threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-jvm-tomcat-virtual
# Spring Boot JVM with reactive
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-jvm-netty
# Spring Boot Native with platform threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-native-tomcat-platform
# Spring Boot Native with virtual threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-native-tomcat-virtual
# Spring Boot Native with reactive
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spring-native-netty
# Quarkus JVM
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d quarkus-jvm
# Quarkus Native
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d quarkus-native
# Micronaut JVM
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d micronaut-jvm
# Micronaut Native
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d micronaut-native
# Helidon SE JVM (only virtual threads applicable)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d helidon-se-jvm
# Helidon SE Native (only virtual threads applicable)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d helidon-se-native
# Helidon MP JVM (only virtual threads applicable)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d helidon-mp-jvm
# Helidon MP Native (only virtual threads applicable)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d helidon-mp-native
# Spark JVM with platform threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spark-jvm-platform
# Spark JVM with virtual threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d spark-jvm-virtual
# Javalin JVM with platform threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d javalin-jvm-platform
# Javalin JVM with virtual threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d javalin-jvm-virtual
# Dropwizard JVM with platform threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d dropwizard-jvm-platform
# Dropwizard JVM with virtual threads
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d dropwizard-jvm-virtual
# Vert.x JVM (reactive)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d vertx-jvm
# Pekko JVM (reactive)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d pekko-jvm
# Go
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d go
# Django Platform (Python WSGI)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d django-platform
# Django Reactive (Python ASGI)
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d django-reactive
Service Endpoints
Once services are running:
- Spring JVM (Platform): http://localhost:8080/hello/platform
- Spring JVM (Virtual): http://localhost:8081/hello/virtual
- Spring JVM (Reactive): http://localhost:8082/hello/reactive
- Spring Native (Platform): http://localhost:8083/hello/platform
- Spring Native (Virtual): http://localhost:8084/hello/virtual
- Spring Native (Reactive): http://localhost:8085/hello/reactive
- Quarkus JVM (Platform): http://localhost:8086/hello/platform
- Quarkus JVM (Virtual): http://localhost:8086/hello/virtual
- Quarkus JVM (Reactive): http://localhost:8086/hello/reactive
- Quarkus Native (Platform): http://localhost:8087/hello/platform
- Quarkus Native (Virtual): http://localhost:8087/hello/virtual
- Quarkus Native (Reactive): http://localhost:8087/hello/reactive
- Micronaut JVM (Platform): http://localhost:8088/hello/platform
- Micronaut JVM (Virtual): http://localhost:8088/hello/virtual
- Micronaut JVM (Reactive): http://localhost:8088/hello/reactive
- Micronaut Native (Platform): http://localhost:8089/hello/platform
- Micronaut Native (Virtual): http://localhost:8089/hello/virtual
- Micronaut Native (Reactive): http://localhost:8089/hello/reactive
- Helidon SE JVM (Virtual): http://localhost:8090/hello/virtual
- Helidon SE Native (Virtual): http://localhost:8091/hello/virtual
- Helidon MP JVM (Virtual): http://localhost:8092/hello/virtual
- Helidon MP Native (Virtual): http://localhost:8093/hello/virtual
- Spark JVM (Platform): http://localhost:8094/hello/platform
- Spark JVM (Virtual): http://localhost:8095/hello/virtual
- Javalin JVM (Platform): http://localhost:8096/hello/platform
- Javalin JVM (Virtual): http://localhost:8097/hello/virtual
- Dropwizard JVM (Platform): http://localhost:8098/hello/platform
- Dropwizard JVM (Virtual): http://localhost:8099/hello/virtual
- Vert.x JVM (Reactive): http://localhost:8100/hello/reactive
- Pekko JVM (Reactive): http://localhost:8101/hello/reactive
- Go: http://localhost:9080/hello/virtual
- Django WSGI (Platform): http://localhost:9090/hello/platform
- Django ASGI (Reactive): http://localhost:9091/hello/reactive
Health checks available at /q/health (Quarkus), /actuator/health (Spring), /ready (Spark, Javalin, Dropwizard, Vert.x, Pekko), or /hello/healthz (Django).
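The per-framework health paths above can be captured in a small lookup for scripted checks. `health_path` is a hypothetical helper:

```shell
# Hypothetical helper: map a framework family to its health-check path,
# following the list above.
health_path() {
  case "$1" in
    quarkus) echo /q/health ;;
    spring)  echo /actuator/health ;;
    django)  echo /hello/healthz ;;
    *)       echo /ready ;;  # Spark, Javalin, Dropwizard, Vert.x, Pekko
  esac
}

# Usage: curl -fsS "http://localhost:8086$(health_path quarkus)"
```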
Running Your First Benchmark
1. Prepare the Environment
# Ensure observability stack is running
docker compose --project-directory compose --profile=OBS up --no-recreate -d
# Start the service you want to benchmark
docker compose --project-directory compose --profile=OBS --profile=SERVICES up -d quarkus-jvm
Wait / warm up (cross-platform)
- Windows PowerShell:
Start-Sleep -Seconds 30
- macOS/Linux:
sleep 30
2. Manual Benchmark with wrk2
# Install wrk2 (if not using Docker)
# On Ubuntu/Debian:
sudo apt-get install build-essential libssl-dev git
git clone https://github.com/giltene/wrk2.git
cd wrk2
make
sudo cp wrk /usr/local/bin/
# Run benchmark
wrk -t 8 -c 200 -d 180s -R 80000 --latency \
http://localhost:8080/hello/platform
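Once a run finishes, you can pull percentiles out of a saved report. A sketch assuming wrk2's detailed latency-distribution format (lines like `99.000%    4.12ms`); `p99_latency` is a hypothetical helper:

```shell
# Hypothetical helper: print the p99 value from a saved wrk2 --latency report.
# Assumes the detailed percentile table, where the first field is "99.000%".
p99_latency() {
  awk '$1 == "99.000%" { print $2; exit }' "$1"
}

# Usage: p99_latency results/latest-benchmark.txt
```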
3. Automated Benchmark (Docker)
# Use the pre-configured load generator
docker compose --project-directory compose --profile=RAIN_FIRE up --force-recreate -d
# Monitor in Grafana
# Results saved to ./results/
4. Analyze Results
Windows (PowerShell)
# View wrk2 output (adjust file name/path to your run)
Get-Content results\latest-benchmark.txt
# Check Docker stats
docker stats --no-stream
macOS / Linux
# View wrk2 output
cat results/latest-benchmark.txt
# Check Docker stats
docker stats --no-stream
Screenshots and reporting
Screenshots (for Grafana dashboards, traces, logs, and flame graphs) are kept under docs/images/screenshots/.
See docs/images/README.md for naming and inclusion guidance.
Next steps
- Read Benchmarking Methodology: benchmarking.html
- Review Tools & Technologies: tools-technologies.html
- If you’re adding a new benchmark target: adding-a-service.html
- When you publish results, store raw outputs and summaries under results/ (see results/README.md).