
Platform Threads vs Virtual Threads

Java 21 Project Loom · I/O-bound workload simulation. Drag the slider to increase load and watch where each model breaks.
[Interactive demo. Controls: Concurrent Requests (50), I/O Duration per request (200 ms), Thread Pool Size (200). Two side-by-side panels report live Req/s, Avg Latency, and Rejected/s: "Platform Threads (Tomcat default)" with a request queue, and "Virtual Threads (Java 21 Loom)", whose queue stays empty with all requests accepted. A comparison table and a chart plot throughput vs concurrent requests for both models across all load levels.]
Platform Threads — Spring Boot default:
server.tomcat.threads.max=200
server.tomcat.threads.min-spare=10
server.tomcat.accept-count=100

Virtual Threads — one property:
spring.threads.virtual.enabled=true
# Switches Tomcat, @Async, and @Scheduled to virtual executors
When virtual threads help: I/O-bound workloads — DB queries, HTTP calls, stream reads. Each blocking call parks, not blocks.

When they don't help: CPU-bound work — image processing, cryptography, computation. The limit is cores, not threads.

Simulated I/O-bound workload. Real throughput depends on database connection pool size, downstream service limits, and GC pressure.

Modernising a Spring Boot service to Java 21? Get in touch.

Why virtual threads change the calculus

The bottleneck
A Tomcat thread pool of 200 can handle 200 concurrent blocking requests. When all 200 threads are waiting on I/O (a DB query, an HTTP call), the 201st request queues. Beyond the accept-count queue, it gets rejected with HTTP 503. Thread count is the ceiling — not CPU, not I/O capacity.
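This ceiling is easy to reproduce with a plain `ThreadPoolExecutor` — a minimal sketch using a fixed pool of 4 and a queue of 2 (illustrative numbers, not Tomcat's defaults) in place of `threads.max` and `accept-count`:

```java
import java.util.concurrent.*;

public class PoolSaturation {
    static int[] run() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0, TimeUnit.SECONDS,             // fixed pool of 4, like threads.max=4
                new ArrayBlockingQueue<>(2),           // bounded queue, like accept-count=2
                new ThreadPoolExecutor.AbortPolicy()); // reject when saturated
        CountDownLatch release = new CountDownLatch(1);
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.execute(() -> {
                    // simulate a request blocked on I/O until released
                    try { release.await(); } catch (InterruptedException ignored) {}
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++; // the demo's "Rejected/s" — an HTTP 503 in Tomcat terms
            }
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[]{accepted, rejected};
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = run();
        System.out.println("accepted=" + r[0] + " rejected=" + r[1]); // accepted=6 rejected=4
    }
}
```

Four requests run, two queue, and the remaining four are rejected immediately — thread count, not work, sets the limit.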
Platform thread cost
Each OS thread requires 512KB–1MB of stack memory and a kernel-level context switch. Creating thousands is expensive. The pool of 200 is a pragmatic cap: beyond that, scheduling overhead eats into throughput.
How virtual threads park
A virtual thread is a lightweight JVM construct (~few hundred bytes). When it calls a blocking I/O operation (JDBC query, InputStream.read()), it parks — unmounts from its carrier OS thread. The carrier is immediately free to run another virtual thread. No kernel context switch, no stack allocation held idle.
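Parking is observable from wall-clock time alone. A sketch (requires Java 21+): 10,000 virtual threads each "block" for 100 ms, yet the batch finishes in well under a second, because each sleep parks the virtual thread and frees its carrier:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParkingDemo {
    static long run() {
        long start = System.nanoTime();
        // One virtual thread per task — no pool sizing decision to make
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    // Stand-in for a blocking call (DB query, HTTP request):
                    // the virtual thread parks, the carrier runs other tasks
                    try { Thread.sleep(Duration.ofMillis(100)); }
                    catch (InterruptedException ignored) {}
                });
            }
        } // close() waits for all submitted tasks to finish
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("10,000 blocking tasks finished in ~" + run() + " ms");
    }
}
```

Run serially, the same work would take 1,000 seconds; on a 200-thread platform pool, about 5 seconds. With virtual threads the carriers are never idle while tasks wait.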
The saturation point
Drag the slider past the platform thread pool size and watch latency spike. The throughput line on the chart goes flat (the pool ceiling) then dips as timeout overhead grows. The virtual thread line stays linear until you hit a real constraint: database connection pool, downstream rate limits, or CPU saturation.
Enabling in Spring Boot
spring.threads.virtual.enabled=true in application.properties — that's the entire migration for most services. Spring Boot 3.2+ wires Tomcat, @Async, @Scheduled, and Spring Security's SecurityContextHolder automatically. One property, production-safe from day one.
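On Spring Boot versions before 3.2 (or where explicit control is preferred), the commonly documented alternative is to customize Tomcat's protocol handler yourself. A configuration sketch — class and bean names here are illustrative:

```java
import java.util.concurrent.Executors;

import org.apache.coyote.ProtocolHandler;
import org.springframework.boot.web.embedded.tomcat.TomcatProtocolHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class VirtualThreadConfig {
    // Route Tomcat request handling onto a virtual-thread-per-task executor
    @Bean
    TomcatProtocolHandlerCustomizer<ProtocolHandler> tomcatVirtualThreads() {
        return handler -> handler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
    }
}
```

This covers Tomcat only; @Async and @Scheduled executors would need the same treatment separately, which is exactly what the 3.2+ property does for you.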
What still needs care
synchronized blocks pin the carrier thread — replace with ReentrantLock for hot paths. Connection pools (HikariCP) still cap DB connections — virtual threads don't create more connections, they just wait more efficiently. CPU-bound tasks gain nothing — use ForkJoinPool for those.
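The synchronized-to-ReentrantLock swap looks like this — a minimal sketch where `blockingCall()` is a hypothetical stand-in for a JDBC query or HTTP call:

```java
import java.util.concurrent.locks.ReentrantLock;

public class PinningFix {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();

    // Before: on Java 21, blocking while holding a monitor pins the
    // virtual thread to its carrier — the carrier cannot run anything else
    void pinned() throws InterruptedException {
        synchronized (monitor) {
            blockingCall();
        }
    }

    // After: ReentrantLock lets the virtual thread unmount while blocked,
    // freeing the carrier for other virtual threads
    void unpinned() throws InterruptedException {
        lock.lock();
        try {
            blockingCall();
        } finally {
            lock.unlock();
        }
    }

    private void blockingCall() throws InterruptedException {
        Thread.sleep(10); // stand-in for blocking I/O
    }
}
```

Both methods preserve mutual exclusion; only the second keeps the carrier thread productive while the virtual thread waits.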