Tools & Technologies

A comprehensive overview of the tools, frameworks, and technologies used in this project.

Terminology note: concurrency describes how a runtime handles many in-flight tasks (virtual threads, reactive event loops, goroutines); parallelism describes work executing simultaneously across multiple CPU cores.

Application Frameworks

Spring Boot 4.0.5

Official Site: https://spring.io/projects/spring-boot

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads (Traditional)
    • Servlet container thread pool
    • Tomcat connector
    • Blocking I/O model
  2. Virtual Threads (Project Loom)
    • Lightweight threads from Java 21+
    • Simplified async programming
    • Netty connector with virtual thread support
  3. Reactive (WebFlux)
    • Non-blocking I/O
    • Reactor framework
    • Event-loop architecture
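The practical difference between the first two models can be seen with plain JDK APIs, no framework required. A minimal sketch (Java 21+; class and method names are illustrative, not from this project's code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadModelsDemo {
    // Run `tasks` blocking jobs on the given executor and return elapsed ms.
    static long runSleepers(ExecutorService pool, int tasks) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(50); // simulated blocking I/O
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // Platform threads: 2,000 blocking tasks share a 200-thread OS pool,
        // so total time is roughly tasks / poolSize * sleepMillis (~500 ms here).
        try (ExecutorService platform = Executors.newFixedThreadPool(200)) {
            System.out.println("platform pool:   " + runSleepers(platform, 2_000) + " ms");
        }
        // Virtual threads: one cheap thread per task, so all sleeps overlap
        // and total time stays close to a single 50 ms sleep.
        try (ExecutorService virtual = Executors.newVirtualThreadPerTaskExecutor()) {
            System.out.println("virtual threads: " + runSleepers(virtual, 2_000) + " ms");
        }
    }
}
```

This is the whole pitch of virtual threads for the blocking model: the code stays thread-per-request, but throughput is no longer capped by the OS-thread pool size.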

Configuration:

server:
  port: 8080
  
spring:
  application:
    name: spring-benchmark
    
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
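On the servlet stack, flipping between the platform and virtual thread models is a single property in recent Spring Boot versions. A hedged sketch of the addition (the `spring.threads.virtual.enabled` flag exists since Spring Boot 3.2; whether this project toggles it this way is an assumption):

```yaml
spring:
  threads:
    virtual:
      enabled: true   # run servlet request handling on virtual threads
```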

Pros:

Cons:

Quarkus 3.34.1

Official Site: https://quarkus.io/

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads
    • Traditional blocking I/O
    • Worker thread pool
  2. Virtual Threads
    • Java 21+ virtual threads
    • Seamless async operations
  3. Reactive (Mutiny)
    • Non-blocking reactive streams
    • SmallRye Mutiny API
    • Event-loop based

Configuration:

quarkus.application.name=quarkus-benchmark
quarkus.http.port=8080
quarkus.log.level=INFO

Native Compilation:

# Build native image
mvn package -Pnative

# Size comparison
# JVM JAR: ~200MB
# Native binary: ~50MB

Pros:

Cons:

Micronaut 4.10.18

Official Site: https://micronaut.io/

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads
    • Traditional Netty worker pool with blocking dispatch
    • Standard thread-per-request model
  2. Virtual Threads
    • Java 21+ virtual threads with Netty carrier threads
    • Blocking-style code runs on virtual threads while Netty keeps its event loop
  3. Reactive
    • Netty event-loop architecture
    • Reactor / RxJava integration
    • Non-blocking I/O with backpressure

Native Compilation:

# Build native image
./mvnw package -Dpackaging=native-image

Pros:

Cons:

Helidon 4.3.4

Official Site: https://helidon.io/

Why We Use It:

Implementation Details:

Thread Model:

Helidon SE vs Helidon MP:

| Aspect | Helidon SE | Helidon MP |
|---|---|---|
| Routing | Programmatic / functional | CDI + JAX-RS annotations |
| Overhead | Minimal | MicroProfile CDI container overhead |
| Best for | Maximum throughput, tiny footprint | Standards compliance, enterprise APIs |
| Typical RPS | ~65k (JVM), ~31k (native) | ~15k (JVM), ~10k (native) |

Build Highlights:

Pros:

Cons:

SparkJava 3.0.4 (Zoomba fork)

Official Site: https://sparkjava.com/ (this project uses the Zoomba fork)

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads
    • Traditional Jetty thread pool
    • Blocking I/O model
  2. Virtual Threads (via Zoomba fork)
    • Lightweight threads from Java 21+
    • Drop-in replacement for the platform thread executor

Pros:

Cons:

Javalin 7.1.0

Official Site: https://javalin.io/

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads
    • Standard Jetty thread pool
    • Blocking I/O model
  2. Virtual Threads
    • Java 21+ virtual threads via Jetty’s virtual thread executor
    • Blocking code on virtual threads

Pros:

Cons:

Dropwizard 5.0.1

Official Site: https://www.dropwizard.io/

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Platform Threads
    • Jetty QueuedThreadPool
    • Blocking I/O model
  2. Virtual Threads
    • Jetty 12 VirtualThreadPool (Project Loom)
    • Blocking code on virtual threads

Pros:

Cons:

Vert.x 5.0.8

Official Site: https://vertx.io/

Why We Use It:

Implementation Details:

Thread Models Implemented:

  1. Reactive (Event Loop)
    • All request handling runs on the Vert.x event loop
    • Non-blocking sleep via vertx.setTimer (never blocks the event loop)
    • Multiple HTTP server instances (one per event-loop thread) for optimal throughput
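The "never block the event loop" rule can be illustrated without Vert.x itself. This JDK-only sketch mimics what `vertx.setTimer` does: the delay is registered with a scheduler and the calling thread returns immediately (class and method names here are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NonBlockingDelay {
    // Shared timer thread, analogous to the Vert.x internal timer wheel.
    static final ScheduledExecutorService TIMER =
            Executors.newSingleThreadScheduledExecutor();

    // Returns immediately; the future completes after delayMillis
    // without ever parking the caller's thread.
    static CompletableFuture<String> delayedHello(long delayMillis) {
        CompletableFuture<String> result = new CompletableFuture<>();
        TIMER.schedule(() -> result.complete("hello"), delayMillis, TimeUnit.MILLISECONDS);
        return result;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        CompletableFuture<String> f = delayedHello(100);
        long registrationMs = (System.nanoTime() - start) / 1_000_000;
        // Registration is near-instant: the caller (the "event loop") was never blocked.
        System.out.println("registration took ~" + registrationMs + " ms");
        System.out.println(f.join());
    }
}
```

A `Thread.sleep` in a Vert.x handler would stall every request sharing that event-loop thread; this timer-callback pattern is what keeps the loop free.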

Pros:

Cons:

Pekko 1.3.0

Official Site: https://pekko.apache.org/

Why We Use It:

Background — Play → Pekko pivot:

Implementation Details:

Thread Models Implemented:

  1. Reactive (Pekko Dispatcher)
    • All request handling runs on the Pekko default ForkJoin dispatcher
    • Non-blocking sleep via Pekko scheduler (never blocks the dispatcher)
    • HTTP/1.1 pipelining-limit=32 for keep-alive benchmarks

Architecture:

Pros:

Cons:

Go with Fiber

Official Site: https://gofiber.io/

Why We’re Adding It:

Headline benchmark (2026-02-17): ~24,000 RPS (observability-aligned implementation)

Fairness note: An additional go-simple variant can reach ~60,000 RPS, but it is excluded from headline comparisons because it does not use an equivalent observability setup to the Java services.

Django 6.0.3

Official Site: https://www.djangoproject.com/

Why We Use It:

Implementation Details:

Thread/Concurrency Models Implemented:

  1. Platform Threads (WSGI — gthread)
    • Gunicorn pre-fork multi-process model with threaded workers
    • Each worker process runs multiple OS threads
    • GIL limits true CPU parallelism within a single worker
  2. Reactive / Async (ASGI — UvicornWorker)
    • Gunicorn pre-fork with Uvicorn ASGI workers
    • asyncio event loop per worker
    • Non-blocking I/O for async views
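The two models above map to two Gunicorn configurations. A hedged sketch of a `gunicorn.conf.py` for each (worker and thread counts are illustrative, and `uvicorn.workers.UvicornWorker` assumes the classic uvicorn worker class, not necessarily this project's settings):

```python
# gunicorn.conf.py — WSGI / gthread variant
workers = 4              # pre-forked processes: true parallelism across cores
threads = 8              # OS threads per process: concurrency under the GIL
worker_class = "gthread"

# ASGI variant: swap the worker class for an asyncio event loop per process
# worker_class = "uvicorn.workers.UvicornWorker"
```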

Architecture:

Pros:

Cons:


Observability Stack

Grafana

Official Site: https://grafana.com/

Purpose: Unified visualization and observability platform

Features Used:

Data Sources Configured:

Dashboards:

Access: http://localhost:3000

Loki

Official Site: https://grafana.com/oss/loki/

Purpose: Log aggregation system

Why Loki:

Configuration Highlights:

ingester:
  chunk_idle_period: 5m
  max_chunk_age: 1h
  
limits_config:
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20

Query Examples:

# All logs from service
{service_name="quarkus-jvm"}

# Error logs only
{service_name="quarkus-jvm"} |= "ERROR"

# Request duration parsing
{service_name="quarkus-jvm"} | json | duration > 100ms

Tempo

Official Site: https://grafana.com/oss/tempo/

Purpose: Distributed tracing backend

Why Tempo:

Features:

Query Examples:

# Find slow requests
{ duration > 100ms }

# Specific service spans
{ resource.service.name = "quarkus-jvm" }

# Error traces
{ status = error }

Mimir

Official Site: https://grafana.com/oss/mimir/

Purpose: Long-term metrics storage (Prometheus-compatible)

Why Mimir:

Metrics Collected:

Query Examples:

# Request rate
rate(http_server_requests_seconds_count[5m])

# P99 latency (quantile over the rate of the bucket counters)
histogram_quantile(0.99, sum by (le) (rate(http_server_requests_seconds_bucket[5m])))

# Heap usage
jvm_memory_used_bytes{area="heap"}

Pyroscope

Official Site: https://grafana.com/oss/pyroscope/

Purpose: Continuous profiling

Why Pyroscope:

Profiling Methods:

  1. Java Agent (JVM only)
    -javaagent:/opt/pyroscope-agent.jar
    
    • CPU profiling
    • Allocation profiling
    • Lock contention
  2. eBPF (All services)
    • System-level profiling
    • No instrumentation required
    • Minimal overhead
  3. HTTP Scrape (Pull model)
    • Endpoint-based collection
    • Flexible integration

Profile Types:

Grafana Alloy

Official Site: https://grafana.com/oss/alloy/

Purpose: OpenTelemetry collector and distributor

Why Alloy:

Components Used:

Data Flow:

Services → Alloy → {Loki, Tempo, Mimir, Pyroscope}

Testing & Benchmarking

wrk2

Official Site: https://github.com/giltene/wrk2

Purpose: HTTP benchmarking with constant throughput

Why wrk2 (vs. wrk, ab, etc.):

Key Features:

Typical Usage:

wrk2 -t 8 -c 200 -d 180s -R 80000 --latency http://service:8080/hello/platform

Output Metrics:

Comparison to Alternatives:

| Tool | Load model | CO-corrected | Scripting |
|---|---|---|---|
| wrk2 | ✅ Fixed rate (open loop) | ✅ Yes | ✅ Lua |
| wrk | ❌ Closed loop | ❌ No | ✅ Lua |
| ab | ❌ Closed loop | ❌ No | ❌ No |
| Gatling | ✅ Fixed rate | ✅ Yes | ✅ Scala |
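Coordinated omission is easy to demonstrate with a toy simulation: requests are meant to be issued every 10 ms, one request stalls, and a closed-loop client silently delays every subsequent send — so naive per-request timing hides the queueing delay a real user would see. A minimal sketch (all numbers illustrative):

```java
public class CoordinatedOmission {
    // 100 requests intended at a fixed 10 ms cadence; request #10 takes
    // 500 ms of service time, everything else takes 1 ms. Returns
    // {naiveCountOver100ms, correctedCountOver100ms}.
    static long[] simulate() {
        long interval = 10, now = 0;
        long naiveOver100 = 0, correctedOver100 = 0;
        for (int i = 0; i < 100; i++) {
            long intendedStart = i * interval;
            // Closed loop: the client can't send while the previous call is in flight.
            long actualStart = Math.max(now, intendedStart);
            long service = (i == 10) ? 500 : 1;
            long finish = actualStart + service;
            if (finish - actualStart > 100) naiveOver100++;      // what wrk/ab report
            if (finish - intendedStart > 100) correctedOver100++; // what wrk2 reports
            now = finish;
        }
        return new long[] { naiveOver100, correctedOver100 };
    }

    public static void main(String[] args) {
        long[] r = simulate();
        System.out.println("requests > 100 ms, naive view:     " + r[0]);
        System.out.println("requests > 100 ms, corrected view: " + r[1]);
    }
}
```

In the naive view only the stalled request itself exceeds 100 ms; in the corrected view dozens do, because every request queued behind the stall started late relative to its intended schedule. That gap is exactly what wrk2's fixed-rate model is designed to expose.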

OpenTelemetry

Official Site: https://opentelemetry.io/

Purpose: Standardized telemetry instrumentation

Why OpenTelemetry:

Components:

  1. SDK: Embedded in services
  2. API: Instrumentation interface
  3. Collector: Data pipeline (Alloy)
  4. Protocol: OTLP over gRPC

Instrumentation Methods:

Java Agent (JVM):

-javaagent:/opt/opentelemetry-javaagent.jar

Native Dependency (GraalVM):

<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
</dependency>

Signals Collected:


Infrastructure & Deployment

Docker

Official Site: https://www.docker.com/

Purpose: Containerization platform

Why Docker:

Images Used:

Multi-stage Builds:

# Stage 1: Build
FROM maven:3.9.14-eclipse-temurin-25-noble AS builder
COPY . .
RUN mvn clean package

# Stage 2: Runtime
FROM gcr.io/distroless/java25-debian13:nonroot
COPY --from=builder /target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
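Under container memory limits like the ones used here, it is common to size the heap from the cgroup limit rather than hard-coding `-Xmx`. A hedged variant of the runtime stage (the 75% figure is an illustrative choice, not necessarily this project's setting):

```dockerfile
# Stage 2: Runtime — derive the heap size from the container memory limit
FROM gcr.io/distroless/java25-debian13:nonroot
COPY --from=builder /target/app.jar /app.jar
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app.jar"]
```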

Docker Compose

Official Site: https://docs.docker.com/compose/

Purpose: Multi-container orchestration

Why Docker Compose:

Compose Features Used:

Example:

services:
  quarkus-jvm:
    build:
      context: ../services/java
      dockerfile: quarkus/jvm/Dockerfile
    container_name: quarkus-jvm
    environment:
      - JAVA_OPTS=-Xmx1g
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 768M

Development Tools

Quality guards (linting / static analysis)

This repository treats code quality tooling as a first-class part of “production readiness”:

Maven

Official Site: https://maven.apache.org/

Purpose: Build automation and dependency management

Why Maven:

Key Plugins:

GraalVM

Official Site: https://www.graalvm.org/

Purpose: High-performance JDK and native compilation

Why GraalVM:

Editions:

Native Image:

native-image \
  -H:+StaticExecutableWithDynamicLibC \
  -H:+ReportExceptionStackTraces \
  -O3 \
  -jar application.jar

Git

Official Site: https://git-scm.com/

Purpose: Version control

Workflow:


Libraries & Dependencies

Caffeine Cache

Official Site: https://github.com/ben-manes/caffeine

Purpose: High-performance Java caching library

Why Caffeine:

Configuration:

Cache<String, String> cache = Caffeine.newBuilder()
   .maximumSize(50000)
   .expireAfterWrite(Duration.ofDays(1))
   .build();

cachetools (Python)

Official Site: https://github.com/tkem/cachetools

Purpose: In-memory caching library for Python

Why cachetools:

Configuration:

from cachetools import TTLCache

cache = TTLCache(maxsize=50_000, ttl=86_400)

Reactor (Spring)

Official Site: https://projectreactor.io/

Purpose: Reactive programming library

Features:

Mutiny (Quarkus)

Official Site: https://smallrye.io/smallrye-mutiny/

Purpose: Reactive programming for Quarkus

Features:


Technology Stack Summary

| Layer | Category | Technology | Version | Purpose / Role |
|---|---|---|---|---|
| Execution | Runtime | Java (Eclipse Temurin) | 25.0.2 | Primary JVM runtime for backend services under benchmark |
| Execution | Runtime | GraalVM | 25.0.2 | Native image compilation for startup and memory footprint benchmarks |
| Execution | Runtime | Go | 1.26.1 | High-performance baseline services for comparison |
| Execution | Runtime | Python (CPython) | 3.13.12 | Interpreted runtime for Django benchmark services |
| Execution | Server | Gunicorn | 25.3.0 | Production WSGI/ASGI process manager for Django benchmark services |
| Execution | Runtime | Node.js | 25.8.2 | Frontend tooling and SSR runtime |
| Backend | Framework | Spring Boot | 4.0.5 | Enterprise Java baseline framework |
| Backend | Framework | Quarkus | 3.34.1 | Cloud-native Java framework (JVM + native image focus) |
| Backend | Framework | Micronaut | 4.10.18 | Compile-time optimized JVM microservices framework |
| Backend | Framework | Helidon SE | 4.3.4 | Lightweight Java microservices (programmatic routing) |
| Backend | Framework | Helidon MP | 4.3.4 | MicroProfile-compliant Java microservices (CDI + JAX-RS) |
| Backend | Framework | SparkJava (Zoomba fork) | 3.0.4 | Minimal HTTP server (virtual-thread friendly) |
| Backend | Framework | Javalin | 7.1.0 | Lightweight REST server |
| Backend | Framework | Dropwizard | 5.0.1 | Production-ready RESTful web services (Jetty + Jersey + Jackson) |
| Backend | Framework | Vert.x | 5.0.8 | Reactive, event-driven applications on the JVM (Netty) |
| Backend | Framework | Pekko | 1.3.0 | Reactive HTTP toolkit on the Pekko actor system (Apache) |
| Backend | Framework | Django | 6.0.3 | Python web framework (WSGI platform + ASGI reactive) |
| Frontend | Framework | Next.js | 16.2.2 | SSR frontend and control dashboard |
| Frontend | Library | React | 19.2.4 | UI rendering layer |
| Frontend | Language | TypeScript | 6.0.2 | Type-safe frontend development |
| Frontend | UI Library | Material UI (MUI) | 7.3.9 | Component library and theming |
| Observability | Visualization | Grafana | 12.4.2 | Metrics, logs, traces dashboards |
| Observability | Logs | Loki | 3.7.1 | Log aggregation |
| Observability | Tracing | Tempo | 2.10.3 | Distributed tracing backend |
| Observability | Metrics | Mimir | 3.0.5 | Long-term metrics storage |
| Observability | Profiling | Pyroscope | 1.19.1 | Continuous CPU and memory profiling |
| Observability | Collection | Grafana Alloy | 1.10.2 | Unified telemetry collection pipelines |
| Telemetry | Instrumentation | OpenTelemetry SDK | 1.60.1 | Manual metrics, logs, and traces instrumentation |
| Telemetry | Instrumentation | OpenTelemetry Distribution | 2.26.1 | Auto-instrumentation and exporters |
| Performance | Cache | Caffeine | 3.2.3 | High-performance in-memory caching (Java) |
| Performance | Cache | cachetools | 7.0.5 | In-memory caching (Python) |
| Platform | Container Runtime | Docker Engine | 24+ | Container runtime for reproducible benchmarks |
| Platform | Orchestration | Docker Compose | v2 | Local multi-service orchestration |
| Platform | Tooling | Docker CLI | 29.3.1 | Image build and lifecycle management |
| Build | Build Tool | Maven | 3.9.14 | Java build and dependency management |
| Build | Build Tool | pip-compile | Latest | Python dependency pinning and resolution |
| Build | Package Manager | npm | 11.12.1 | Frontend dependency management |
| Quality | Linter / Formatter | Ruff | 0.15.8 | Python linting and code formatting |
| Testing | Load Testing | wrk2 | Latest | Deterministic HTTP benchmarking |
| Testing | Unit / Integration | JUnit | 5 / 6 | JVM unit and integration testing |
| Testing | Frontend Testing | Vitest | 4.1.2 | Frontend unit testing |

Further Reading

Official Documentation

Community Resources

Learning Paths


Next: Getting Started · Architecture · Benchmarking · Adding a Service