Tools & Technologies
A comprehensive overview of the tools, frameworks, and technologies used in this project.
Terminology note: concurrency describes how a runtime handles many in-flight tasks (virtual threads, reactive event loops, goroutines); parallelism describes work happening simultaneously across CPU cores.
Table of Contents
- Application Frameworks
- Observability Stack
- Testing & Benchmarking
- Infrastructure & Deployment
- Development Tools
- Libraries & Dependencies
Application Frameworks
Spring Boot 4.0.5
Official Site: https://spring.io/projects/spring-boot
Why We Use It:
- Industry standard for enterprise Java applications
- Extensive ecosystem and community support
- Multiple deployment modes (embedded Tomcat/Netty, standalone)
- Excellent integration with observability tools
Implementation Details:
- Spring Boot 4.0.5 (latest major release)
- Spring WebFlux for reactive implementation
- Spring MVC for traditional servlet-based implementations
- Actuator for health checks and metrics
Thread Models Implemented:
- Platform Threads (Traditional)
- Servlet container thread pool
- Tomcat connector
- Blocking I/O model
- Virtual Threads (Project Loom)
- Lightweight threads from Java 21+
- Simplified async programming
- Netty connector with virtual thread support
- Reactive (WebFlux)
- Non-blocking I/O
- Reactor framework
- Event-loop architecture
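In Spring Boot 3.2+ the MVC stack can be switched onto virtual threads with the `spring.threads.virtual.enabled=true` property; the underlying mechanics of the virtual-thread model listed above can be sketched with plain JDK 21+ APIs, independent of any framework (class and method names here are illustrative):

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Runs n tasks on virtual threads; each sleeps 10 ms as a stand-in for
    // blocking I/O. Returns how many tasks completed.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        // One virtual thread per task; the JDK multiplexes them onto a small
        // pool of carrier (platform) threads, so 10k tasks stay cheap.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

The same blocking-style code on platform threads would need a 10,000-thread pool; on virtual threads the scheduler parks each task during `sleep` and reuses the carrier thread.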
Configuration:
```yaml
server:
  port: 8080

spring:
  application:
    name: spring-benchmark

management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
```
Pros:
- Mature, stable, well-documented
- Familiar to most Java developers
- Rich feature set
- Strong enterprise adoption
Cons:
- Higher memory footprint
- Slower startup compared to Quarkus
- More complex configuration for optimal performance
Quarkus 3.34.1
Official Site: https://quarkus.io/
Why We Use It:
- Kubernetes-native architecture
- Optimized for cloud deployment
- Fast startup and low memory usage
- Native compilation support
Implementation Details:
- Quarkus 3.34.1 (latest stable)
- RESTEasy Reactive for REST endpoints
- SmallRye for reactive programming
- GraalVM for native compilation
Thread Models Implemented:
- Platform Threads
- Traditional blocking I/O
- Worker thread pool
- Virtual Threads
- Java 21+ virtual threads
- Seamless async operations
- Reactive (Mutiny)
- Non-blocking reactive streams
- SmallRye Mutiny API
- Event-loop based
Configuration:
```properties
quarkus.application.name=quarkus-benchmark
quarkus.http.port=8080
quarkus.log.level=INFO
```
Native Compilation:
```shell
# Build native image
mvn package -Pnative

# Size comparison
# JVM JAR:       ~200 MB
# Native binary:  ~50 MB
```
Pros:
- Ultra-fast startup (milliseconds)
- Low memory footprint
- Excellent throughput
- Native image support
Cons:
- Smaller ecosystem than Spring
- Native build complexity
- Reflection limitations in native mode
Micronaut 4.10.18
Official Site: https://micronaut.io/
Why We Use It:
- Compile-time dependency injection and AOP — avoids reflection-heavy runtime costs
- Fast startup and low memory footprint, rivaling Quarkus
- First-class GraalVM native image support
- Provides all three concurrency modes for a well-rounded comparison
Implementation Details:
- Micronaut 4.10.18 (latest stable)
- Micronaut HTTP Server (Netty-based)
- GraalVM for native compilation
- Experimental `micronaut.server.netty.worker.threads` carrier-thread property for Loom integration
Thread Models Implemented:
- Platform Threads
- Traditional Netty worker pool with blocking dispatch
- Standard thread-per-request model
- Virtual Threads
- Java 21+ virtual threads with Netty carrier threads
- Combines Netty’s event loop with virtual-thread blocking
- Reactive
- Netty event-loop architecture
- Reactor / RxJava integration
- Non-blocking I/O with backpressure
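As a rough illustration, the choice between event-loop and blocking dispatch in Micronaut is configuration-driven; one relevant knob is the `micronaut.server.thread-selection` property (values come from Micronaut's `ThreadSelection` enum — treat the exact keys as an assumption to verify against this project's `application.yml`):

```yaml
micronaut:
  server:
    # AUTO, IO or MANUAL: controls where @Controller methods are dispatched
    # (event loop for non-blocking return types, worker pool for blocking ones)
    thread-selection: AUTO
```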
Native Compilation:
```shell
# Build native image
./mvnw package -Dpackaging=native-image
```
Pros:
- Compile-time DI eliminates reflection overhead
- Fast startup (sub-second JVM, near-instant native)
- Excellent GraalVM support
- Rich feature set (HTTP client, service discovery, config management)
Cons:
- Smaller community than Spring
- Compile-time DI can be harder to debug
- Some libraries require Micronaut-specific adapters
Helidon 4.3.4
Official Site: https://helidon.io/
Why We Use It:
- Oracle’s open-source microservices framework, purpose-built for Java 21+ virtual threads
- Two distinct flavours (SE and MP) let us benchmark minimal vs full-stack overhead
- Excellent native image support via GraalVM
- jlink-optimised JVM builds produce notably small Docker images
Implementation Details:
- Helidon 4.3.4 (latest stable, virtual-thread–first architecture)
- Helidon SE: Programmatic, functional-style routing with minimal overhead (Níma)
- Helidon MP: MicroProfile-compliant layer on top of SE (CDI + JAX-RS)
- Both flavours support JVM and GraalVM native builds
Thread Model:
- Virtual Threads only — Helidon 4 removed the legacy reactive Netty-based HTTP server; every request is dispatched on a virtual thread by default. Platform-thread and reactive modes are N/A by design.
Helidon SE vs Helidon MP:
| Aspect | Helidon SE | Helidon MP |
|---|---|---|
| Routing | Programmatic / functional | CDI + JAX-RS annotations |
| Overhead | Minimal | MicroProfile CDI container overhead |
| Best for | Maximum throughput, tiny footprint | Standards compliance, enterprise APIs |
| Typical RPS | ~65k (JVM), ~31k (native) | ~15k (JVM), ~10k (native) |
Build Highlights:
- jlink-optimised JVM images: Custom JRE with unused JDK modules stripped, yielding Docker images as small as ~169 MB (SE) / ~189 MB (MP)
- Shared native sources: Native modules reuse JVM sources via `build-helper-maven-plugin`; only the build toolchain differs
Pros:
- Best-in-class virtual thread performance (SE variant)
- Very small Docker images thanks to jlink
- Clean separation between SE (minimal) and MP (full-stack)
- Strong GraalVM native support
Cons:
- Virtual-thread–only model limits concurrency model comparisons
- Smaller community and ecosystem than Spring or Quarkus
- Helidon MP’s CDI overhead is significant compared to SE
SparkJava 3.0.4 (Zoomba fork)
Official Site: https://sparkjava.com/ (benchmarked via the Zoomba fork)
Why We Use It:
- Extremely minimal HTTP micro-framework — ideal as a lightweight baseline
- Simple, expressive API for defining routes
- The Zoomba fork adds virtual thread support to the original Spark codebase
- Useful for isolating framework overhead in benchmarks
Implementation Details:
- SparkJava 3.0.4 (Zoomba fork with virtual thread support)
- Embedded Jetty server
- JVM builds only (no native image support)
Thread Models Implemented:
- Platform Threads
- Traditional Jetty thread pool
- Blocking I/O model
- Virtual Threads (via Zoomba fork)
- Lightweight threads from Java 21+
- Drop-in replacement for the platform thread executor
Pros:
- Near-zero learning curve
- Very small dependency footprint
- Fast startup
- Great for micro-benchmarks and prototyping
Cons:
- No reactive/non-blocking mode
- Limited ecosystem (no built-in DI, validation, etc.)
- No native image support
- Official project is largely unmaintained; the Zoomba fork keeps it viable
Javalin 7.1.0
Official Site: https://javalin.io/
Why We Use It:
- Lightweight yet feature-rich REST framework built on top of Jetty
- First-class Kotlin support (useful for future polyglot benchmarks)
- Simple, declarative API similar to Express.js / Koa
- Good middle ground between Spark’s minimalism and Spring’s richness
Implementation Details:
- Javalin 7.1.0 (latest major release)
- Embedded Jetty server
- JVM builds only (no native image support)
Thread Models Implemented:
- Platform Threads
- Standard Jetty thread pool
- Blocking I/O model
- Virtual Threads
- Java 21+ virtual threads via Jetty’s virtual thread executor
- Blocking code on virtual threads
Pros:
- Concise, readable API
- Lightweight with fast startup
- Active community and regular releases
- Built-in OpenAPI / Swagger support
Cons:
- No reactive/non-blocking HTTP model
- No native image support out of the box
- Smaller ecosystem than Spring or Micronaut
Dropwizard 5.0.1
Official Site: https://www.dropwizard.io/
Why We Use It:
- Battle-tested, production-ready Java framework that bundles Jetty, Jersey, Jackson, and Metrics into a single cohesive package
- Opinionated “fat JAR” approach — simple deployment model with minimal ceremony
- Useful for benchmarking a mature, ops-focused framework against newer alternatives
- Jetty 12 in Dropwizard 5.x enables direct virtual-thread support via `VirtualThreadPool`
Implementation Details:
- Dropwizard 5.0.1 (latest major release — Jetty 12 + Jersey 3 + Jackson 2)
- Embedded Jetty server with configurable thread pool
- JVM builds only (no native image support)
- jlink-optimised runtime image with distroless base
Thread Models Implemented:
- Platform Threads
  - Jetty `QueuedThreadPool`
  - Blocking I/O model
- Virtual Threads (Project Loom)
  - Jetty 12 `VirtualThreadPool`
  - Blocking code on virtual threads
Pros:
- Batteries-included: HTTP, JSON, metrics, health checks, logging out of the box
- Mature ecosystem with extensive production track record
- Simple deployment (single fat JAR + YAML config)
- Built-in Dropwizard Metrics support; Micrometer integration provided via this project’s dependencies/agent
Cons:
- No reactive/non-blocking HTTP model
- No native image support
- Heavier baseline than Spark or Javalin due to bundled subsystems
- Smaller community momentum compared to Spring or Quarkus
Vert.x 5.0.8
Official Site: https://vertx.io/
Why We Use It:
- Industry-standard reactive toolkit for building high-performance, non-blocking applications on the JVM
- Event-loop–based architecture (Netty under the hood) designed for maximum throughput per CPU core
- Polyglot runtime supporting Java, Kotlin, Groovy, JavaScript, and more
- Lightweight — no CDI, no annotation scanning, no classpath magic; explicit wiring
Implementation Details:
- Vert.x 5.0.8 (latest stable release)
- Fully reactive HTTP server on Netty event loops
- JVM build only (no native image support at this time)
- jlink-optimised runtime image with distroless base
Thread Models Implemented:
- Reactive (Event Loop)
  - All request handling runs on the Vert.x event loop
  - Non-blocking sleep via `vertx.setTimer` (never blocks the event loop)
  - Multiple HTTP server instances (one per event-loop thread) for optimal throughput
Pros:
- Extremely high throughput with minimal resource consumption
- True non-blocking from the ground up — no thread-per-request overhead
- Mature project with strong Eclipse Foundation governance
- Micrometer metrics bridge (`vertx-micrometer-metrics`) for native Vert.x metric collection
- Excellent documentation and well-defined API contracts
Cons:
- Requires reactive programming discipline (no blocking calls on the event loop)
- Steeper learning curve for developers accustomed to thread-per-request models
- No native image support in the community edition
- Smaller enterprise adoption compared to Spring or Quarkus
Pekko 1.3.0
Official Site: https://pekko.apache.org/
Why We Use It:
- Apache Pekko HTTP is the community-driven successor to Akka HTTP, providing a fully reactive, non-blocking HTTP toolkit built on the Pekko actor system
- Event-driven architecture with high throughput per CPU core — no thread-per-request overhead
- Lightweight and composable — no DI container, no annotation scanning, no classpath magic; explicit manual wiring
- Useful for benchmarking a reactive Scala/Java toolkit against other non-blocking alternatives (Vert.x, WebFlux)
Background — Play → Pekko pivot:
- We originally planned to benchmark the Play Framework, which itself is built on Pekko HTTP under the hood.
- Initial implementation used Play’s `RoutingDsl` with `routingAsync`, but throughput plateaued at ~1.8k RPS on 2 vCPUs — roughly 15× below comparable reactive frameworks.
- Investigation showed that Play’s layered abstractions introduced significant overhead:
  - Typesafe Config loading (`application.conf`)
  - Play application lifecycle bootstrap
  - DI wiring
  - `RoutingDsl` dispatch pipeline
- Stripping away the Play layer and driving Pekko HTTP directly lifted throughput to ~30k RPS — in line with other reactive implementations.
- We therefore pivoted the module from “Play” to “Pekko HTTP” to keep the benchmark fair and representative of what the underlying engine can actually deliver.
Implementation Details:
- Pekko HTTP 1.3.0 (latest stable)
- Pekko Core 1.4.0 (actor system + stream engine)
- Direct Pekko HTTP server
- JVM build only (no native image support at this time)
- jlink-optimised runtime image with distroless base
Thread Models Implemented:
- Reactive (Pekko Dispatcher)
- All request handling runs on the Pekko default ForkJoin dispatcher
- Non-blocking sleep via Pekko scheduler (never blocks the dispatcher)
- HTTP/1.1 pipelining-limit=32 for keep-alive benchmarks
Architecture:
- Clean architecture / hexagonal: domain layer is framework-agnostic, infrastructure adapters injected via constructor, web layer is a thin routing adapter
- Caffeine cache retrieval on every request in the hot path (consistent with other modules)
Pros:
- High throughput with minimal resource consumption (30k RPS on 2 vCPUs)
- True non-blocking from the ground up — event-driven dispatcher model
- Apache 2.0 licensed community project with active development
- Micrometer metrics for native metric collection
- No DI container overhead — all wiring is manual for minimal startup and runtime cost
Cons:
- Requires reactive programming discipline (no blocking calls on the dispatcher)
- Scala dependency footprint (Scala 3 stdlib bundled in the uber JAR)
- Smaller Java community compared to Vert.x or Spring WebFlux
- No native image support
Go with Fiber
Official Site: https://gofiber.io/
Why We Use It:
- Excellent performance characteristics
- Built-in concurrency (goroutines)
- Fast HTTP routing
- Cross-language comparison
- Ultralightweight
Headline benchmark (17/02/2026): ~24,000 RPS (observability-aligned implementation)
Fairness note: An additional go-simple variant can reach ~60,000 RPS, but it is excluded from headline comparisons because it does not use an equivalent observability setup to the Java services.
Django 6.0.3
Official Site: https://www.djangoproject.com/
Why We Use It:
- The most popular Python web framework, with a mature ecosystem and massive community
- Provides a strong cross-language comparison point (interpreted CPython vs compiled JVM/Go)
- Supports both WSGI (synchronous/threaded) and ASGI (async) deployment models
- Demonstrates the impact of Python’s GIL on throughput under high concurrency
Implementation Details:
- Django 6.0.3 (latest major release)
- Gunicorn 25.3.0 as the production WSGI/ASGI server
- Python 3.13.12 (CPython)
- Two modules sharing a common application package (`gunicorn/common`):
  - WSGI module (`django-platform`): Gunicorn `gthread` workers (threaded platform model)
  - ASGI module (`django-reactive`): Gunicorn with `UvicornWorker` (async event loop)
Thread/Concurrency Models Implemented:
- Platform Threads (WSGI — `gthread`)
  - Gunicorn pre-fork multi-process model with threaded workers
  - Each worker process runs multiple OS threads
  - GIL limits true CPU parallelism within a single worker
- Reactive / Async (ASGI — `UvicornWorker`)
  - Gunicorn pre-fork with Uvicorn ASGI workers
  - `asyncio` event loop per worker
  - Non-blocking I/O for async views
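The two deployment models map onto two Gunicorn invocations along these lines (module paths and worker counts are illustrative, not this project's exact commands):

```
# WSGI / platform: pre-fork processes, each with a pool of OS threads
gunicorn config.wsgi:application --workers 2 --threads 8 --worker-class gthread

# ASGI / reactive: pre-fork processes, each running an asyncio event loop
gunicorn config.asgi:application --workers 2 --worker-class uvicorn.workers.UvicornWorker
```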
Architecture:
- Clean architecture / hexagonal: `api/` → `application/` → `infrastructure/` layering
- Ports-and-adapters cache abstraction (`CachePort` ABC → `CachetoolsAdapter` / `DictCacheAdapter`)
- OpenTelemetry SDK instrumentation via `opentelemetry-instrumentation-django`
- Pyroscope continuous profiling via the `pyroscope-io` Python SDK
- `cachetools.TTLCache` with an optional plain `dict` adapter to eliminate the per-request `time.monotonic()` syscall
Pros:
- Enormous ecosystem, extensive documentation, massive community
- Both sync and async deployment models from the same framework
- Useful cross-language baseline for understanding runtime overhead
- Clean architecture demonstrates Python best practices (SOLID, ports-and-adapters)
Cons:
- CPython GIL severely limits per-process throughput (~1k RPS platform, ~0.5k reactive on 2 vCPUs)
- Orders of magnitude slower than JVM and Go implementations for this workload
- Multi-process model consumes more memory than JVM thread pools
- No native image / AOT compilation equivalent
Observability Stack
Grafana
Official Site: https://grafana.com/
Purpose: Unified visualization and observability platform
Features Used:
- Dashboard provisioning
- Multiple data sources
- Explore interface
- Alerting (planned)
Data Sources Configured:
- Prometheus/Mimir (metrics)
- Loki (logs)
- Tempo (traces)
- Pyroscope (profiles)
Dashboards:
- Service overview
- JVM metrics
- HTTP metrics
- Custom queries
Access: http://localhost:3000
Loki
Official Site: https://grafana.com/oss/loki/
Purpose: Log aggregation system
Why Loki:
- Label-based indexing (like Prometheus)
- Cost-effective storage
- Native Grafana integration
- LogQL query language
Configuration Highlights:
```yaml
ingester:
  chunk_idle_period: 5m
  max_chunk_age: 1h

limits_config:
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
```
Query Examples:
```logql
# All logs from a service
{service_name="quarkus-jvm"}

# Error logs only
{service_name="quarkus-jvm"} |= "ERROR"

# Parse JSON and filter on request duration
{service_name="quarkus-jvm"} | json | duration > 100ms
```
Tempo
Official Site: https://grafana.com/oss/tempo/
Purpose: Distributed tracing backend
Why Tempo:
- Trace ID-based storage (no indexing required)
- Cost-effective
- TraceQL query language
- Exemplar support
Features:
- Trace ingestion via OTLP
- Tag-based search
- Service graph generation
- Metrics generation from traces
Query Examples:
```traceql
# Find slow requests
{ duration > 100ms }

# Spans from a specific service (resource-scoped attribute)
{ resource.service.name = "quarkus-jvm" }

# Error traces
{ status = error }
```
Mimir
Official Site: https://grafana.com/oss/mimir/
Purpose: Long-term metrics storage (Prometheus-compatible)
Why Mimir:
- Horizontally scalable
- High availability
- Long-term storage
- PromQL compatible
Metrics Collected:
- HTTP request rate
- Request duration (histogram)
- JVM memory usage
- GC statistics
- Thread counts
- CPU usage
Query Examples:
```promql
# Request rate
rate(http_server_requests_seconds_count[5m])

# P99 latency (quantile over the per-bucket rate, aggregated by le)
histogram_quantile(0.99, sum by (le) (rate(http_server_requests_seconds_bucket[5m])))

# Heap usage
jvm_memory_used_bytes{area="heap"}
```
Pyroscope
Official Site: https://grafana.com/oss/pyroscope/
Purpose: Continuous profiling
Why Pyroscope:
- Low overhead profiling
- Flame graph visualization
- Time-based analysis
- Tag-based filtering
Profiling Methods:
- Java Agent (JVM only)
  - `-javaagent:/opt/pyroscope-agent.jar`
  - CPU profiling
  - Allocation profiling
  - Lock contention
- eBPF (All services)
- System-level profiling
- No instrumentation required
- Minimal overhead
- HTTP Scrape (Pull model)
- Endpoint-based collection
- Flexible integration
Profile Types:
- CPU: Where time is spent
- Allocations: Memory allocation patterns
- Locks: Contention analysis
Grafana Alloy
Official Site: https://grafana.com/oss/alloy/
Purpose: OpenTelemetry collector and distributor
Why Alloy:
- Unified telemetry collection
- Service discovery
- eBPF profiling support
- Efficient batching
Components Used:
- OTLP receiver (gRPC + HTTP)
- Batch processor
- OTLP exporters (to Loki, Tempo, Mimir)
- Pyroscope exporter
Data Flow:
Services → Alloy → {Loki, Tempo, Mimir, Pyroscope}
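That pipeline corresponds to a small chain of Alloy components. A minimal sketch for the traces leg (component labels and the Tempo endpoint are illustrative, not this project's actual Alloy configuration):

```
// Receive OTLP over gRPC and HTTP from the benchmarked services
otelcol.receiver.otlp "default" {
  grpc {}
  http {}
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batch before export to reduce request overhead
otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

// Forward batched traces to Tempo
otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo:4317"
  }
}
```

The logs and metrics legs follow the same receiver → batch → exporter shape, targeting Loki and Mimir respectively.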
Testing & Benchmarking
wrk2
Official Site: https://github.com/giltene/wrk2
Purpose: HTTP benchmarking with constant throughput
Why wrk2 (vs. wrk, ab, etc.):
- Constant request rate rather than closed-loop, as-fast-as-possible load
- Coordinated omission correction
- Accurate latency measurements
- Lua scripting support
Key Features:
- Multi-threaded load generation
- Connection pooling
- Latency distribution (HDR histogram)
- Custom request scripting
Typical Usage:
```shell
# 8 threads, 200 connections, 3-minute run, constant 80k req/s, full latency histogram
wrk2 -t 8 -c 200 -d 180s -R 80000 --latency http://service:8080/hello/platform
```
Output Metrics:
- Requests per second (actual)
- Latency distribution (p50, p90, p99, p99.9, p99.99)
- Transfer rate
- Error rate
Comparison to Alternatives:
| Tool | Type | Coordinated Omission | Scripting |
|---|---|---|---|
| wrk2 | ✅ Fixed rate | ✅ Yes | ✅ Lua |
| wrk | ❌ Closed loop | ❌ No | ✅ Lua |
| ab | ❌ Closed loop | ❌ No | ❌ No |
| Gatling | ✅ Fixed rate | ✅ Yes | ✅ Scala |
OpenTelemetry
Official Site: https://opentelemetry.io/
Purpose: Standardized telemetry instrumentation
Why OpenTelemetry:
- Vendor-neutral standard
- Auto-instrumentation
- Language-agnostic
- Future-proof
Components:
- SDK: Embedded in services
- API: Instrumentation interface
- Collector: Data pipeline (Alloy)
- Protocol: OTLP over gRPC
Instrumentation Methods:
Java Agent (JVM):
```
-javaagent:/opt/opentelemetry-javaagent.jar
```
Native Dependency (GraalVM):
```xml
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-api</artifactId>
</dependency>
```
Signals Collected:
- Traces: Request spans with context
- Metrics: Counters, gauges, histograms
- Logs: Structured logging with trace context
Infrastructure & Deployment
Docker
Official Site: https://www.docker.com/
Purpose: Containerization platform
Why Docker:
- Consistent environments
- Resource isolation
- Easy deployment
- Reproducible builds
Images Used:
- gcr.io/distroless/base-debian13:nonroot: Runtime base for JVM builds
- container-registry.oracle.com/graalvm/native-image:25: Native builds
- grafana/grafana: Visualization
- grafana/loki: Logs
- grafana/tempo: Traces
- grafana/mimir: Metrics
- grafana/pyroscope: Profiles
- grafana/alloy: Collector
Multi-stage Builds:
```dockerfile
# Stage 1: Build
FROM maven:3.9.14-eclipse-temurin-25-noble AS builder
COPY . .
RUN mvn clean package

# Stage 2: Runtime
FROM gcr.io/distroless/java25-debian13:nonroot
COPY --from=builder /target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```
Docker Compose
Official Site: https://docs.docker.com/compose/
Purpose: Multi-container orchestration
Why Docker Compose:
- Declarative configuration
- Easy local development
- Service dependencies
- Network management
Compose Features Used:
- Profiles (OBS, SERVICES, CONTROL, RAIN_FIRE)
- Environment variables
- Volume management
- Network isolation
- Resource limits
Example:
```yaml
services:
  quarkus-jvm:
    build:
      context: ../services/java
      dockerfile: quarkus/jvm/Dockerfile
    container_name: quarkus-jvm
    environment:
      - JAVA_OPTS=-Xmx1g
    deploy:
      resources:
        limits:
          cpus: 2
          memory: 768M
```
Development Tools
Quality guards (linting / static analysis)
This repository treats code quality tooling as a first-class part of “production readiness”:
- ESLint: Used in the Next.js dashboard (`utils/nextjs-dash`).
- Checkstyle: Enforces consistent style across Java services.
- Ruff: Fast Python linter and formatter enforcing PEP 8 across Django services.
- Qodana: Automated static analysis via GitHub Actions (see `qodana.yaml`).
Maven
Official Site: https://maven.apache.org/
Purpose: Build automation and dependency management
Why Maven:
- Standard Java build tool
- Dependency resolution
- Plugin ecosystem
- Multi-module support
Key Plugins:
- `spring-boot-maven-plugin`: Executable JAR
- `quarkus-maven-plugin`: Native builds
- `native-maven-plugin`: GraalVM compilation
GraalVM
Official Site: https://www.graalvm.org/
Purpose: High-performance JDK and native compilation
Why GraalVM:
- Native image compilation
- Faster startup
- Lower memory footprint
- Ahead-of-time compilation
Editions:
- Community: Free, basic features
- Enterprise: G1 GC, better performance
Native Image:
```shell
native-image \
  -H:+StaticExecutableWithDynamicLibC \
  -H:+ReportExceptionStackTraces \
  -O3 \
  -jar application.jar
```
Git
Official Site: https://git-scm.com/
Purpose: Version control
Workflow:
- Branching strategy
- Commit conventions
- Pull request process
Libraries & Dependencies
Caffeine Cache
Official Site: https://github.com/ben-manes/caffeine
Purpose: High-performance Java caching library
Why Caffeine:
- Non-blocking operations
- High throughput
- Low latency
- Window TinyLFU eviction
Configuration:
```java
Cache<String, String> cache = Caffeine.newBuilder()
    .maximumSize(50_000)
    .expireAfterWrite(Duration.ofDays(1))
    .build();

// Look up, computing the value via a (hypothetical) loader on a miss
String value = cache.get("key", k -> loadValue(k));
```
cachetools (Python)
Official Site: https://github.com/tkem/cachetools
Purpose: In-memory caching library for Python
Why cachetools:
- Drop-in replacement for `functools.lru_cache` with TTL and size-based eviction
- Used in Django benchmark services as the default cache backend
- Pluggable via the `CachePort` abstraction (can be swapped for a plain `dict` to eliminate per-request `time.monotonic()` syscall overhead)
Configuration:
```python
from cachetools import TTLCache

# 50k entries, 24-hour TTL
cache = TTLCache(maxsize=50_000, ttl=86_400)
```
Reactor (Spring)
Official Site: https://projectreactor.io/
Purpose: Reactive programming library
Features:
- Non-blocking streams
- Backpressure support
- Scheduler abstraction
Mutiny (Quarkus)
Official Site: https://smallrye.io/smallrye-mutiny/
Purpose: Reactive programming for Quarkus
Features:
- Simple API
- Uni and Multi types
- Excellent Quarkus integration
Technology Stack Summary
| Layer | Category | Technology | Version | Purpose / Role |
|---|---|---|---|---|
| Execution | Runtime | Java (Eclipse Temurin) | 25.0.2 | Primary JVM runtime for backend services under benchmark |
| Execution | Runtime | GraalVM | 25.0.2 | Native image compilation for startup and memory footprint benchmarks |
| Execution | Runtime | Go | 1.26.1 | High-performance baseline services for comparison |
| Execution | Runtime | Python (CPython) | 3.13.12 | Interpreted runtime for Django benchmark services |
| Execution | Server | Gunicorn | 25.3.0 | Production WSGI/ASGI process manager for Django benchmark services |
| Execution | Runtime | Node.js | 25.8.2 | Frontend tooling and SSR runtime |
| Backend | Framework | Spring Boot | 4.0.5 | Enterprise Java baseline framework |
| Backend | Framework | Quarkus | 3.34.1 | Cloud-native Java framework (JVM + native image focus) |
| Backend | Framework | Micronaut | 4.10.18 | Compile-time optimized JVM microservices framework |
| Backend | Framework | Helidon SE | 4.3.4 | Lightweight Java microservices (programmatic routing) |
| Backend | Framework | Helidon MP | 4.3.4 | MicroProfile-compliant Java microservices (CDI + JAX-RS) |
| Backend | Framework | SparkJava (Zoomba fork) | 3.0.4 | Minimal HTTP server (virtual-thread friendly) |
| Backend | Framework | Javalin | 7.1.0 | Lightweight REST server |
| Backend | Framework | Dropwizard | 5.0.1 | Production-ready RESTful web services (Jetty + Jersey + Jackson) |
| Backend | Framework | Vert.x | 5.0.8 | Reactive, event-driven applications on the JVM (Netty) |
| Backend | Framework | Pekko | 1.3.0 | Reactive HTTP toolkit on the Pekko actor system (Apache) |
| Backend | Framework | Django | 6.0.3 | Python web framework (WSGI platform + ASGI reactive) |
| Frontend | Framework | Next.js | 16.2.2 | SSR frontend and control dashboard |
| Frontend | Library | React | 19.2.4 | UI rendering layer |
| Frontend | Language | TypeScript | 6.0.2 | Type-safe frontend development |
| Frontend | UI Library | Material UI (MUI) | 7.3.9 | Component library and theming |
| Observability | Visualization | Grafana | 12.4.2 | Metrics, logs, traces dashboards |
| Observability | Logs | Loki | 3.7.1 | Log aggregation |
| Observability | Tracing | Tempo | 2.10.3 | Distributed tracing backend |
| Observability | Metrics | Mimir | 3.0.5 | Long-term metrics storage |
| Observability | Profiling | Pyroscope | 1.19.1 | Continuous CPU and memory profiling |
| Observability | Collection | Grafana Alloy | 1.10.2 | Unified telemetry collection pipelines |
| Telemetry | Instrumentation | OpenTelemetry SDK | 1.60.1 | Manual metrics, logs, and traces instrumentation |
| Telemetry | Instrumentation | OpenTelemetry Distribution | 2.26.1 | Auto-instrumentation and exporters |
| Performance | Cache | Caffeine | 3.2.3 | High-performance in-memory caching (Java) |
| Performance | Cache | cachetools | 7.0.5 | In-memory caching (Python) |
| Platform | Container Runtime | Docker Engine | 24+ | Container runtime for reproducible benchmarks |
| Platform | Orchestration | Docker Compose | v2 | Local multi-service orchestration |
| Platform | Tooling | Docker CLI | 29.3.1 | Image build and lifecycle management |
| Build | Build Tool | Maven | 3.9.14 | Java build and dependency management |
| Build | Build Tool | pip-compile | Latest | Python dependency pinning and resolution |
| Build | Package Manager | npm | 11.12.1 | Frontend dependency management |
| Quality | Linter / Formatter | Ruff | 0.15.8 | Python linting and code formatting |
| Testing | Load Testing | wrk2 | Latest | Deterministic HTTP benchmarking |
| Testing | Unit / Integration | JUnit | 5 / 6 | JVM unit and integration testing |
| Testing | Frontend Testing | Vitest | 4.1.2 | Frontend unit testing |
Further Reading
Official Documentation
- Spring Boot Reference
- Quarkus Guides
- Micronaut Documentation
- Helidon Documentation
- SparkJava Documentation
- Javalin Documentation
- Dropwizard Documentation
- Vert.x Documentation
- Pekko Documentation
- Django Documentation
- Grafana Documentation
- OpenTelemetry Docs
- Docker Documentation
Community Resources
Learning Paths
| Next: Getting Started | Architecture | Benchmarking | Adding a Service |