Java Virtual Threads Complete Guide (Project Loom)
Introduction
Java 21, released on September 19, 2023, marks a watershed moment in Java history with the introduction of Virtual Threads as a production-ready feature. After years of development under Project Loom, virtual threads finally deliver on the promise of making high-throughput concurrent applications simple to write, debug, and maintain.
Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. They allow developers to write blocking code that scales like reactive code, without sacrificing the familiar thread-per-request programming model that has served Java developers for decades.
In this comprehensive guide, we'll explore what virtual threads are, how they work internally, when to use them, and best practices for getting the most out of this revolutionary feature.
Note
Virtual threads were originally called "fibers" during early Project Loom development. The team renamed them because "fibers" was already used for similar-yet-different constructs in other contexts, causing confusion.
Brian Goetz suggested "virtual threads" to evoke the analogy with virtual memory: just as virtual memory provides the illusion of more physical memory than actually exists, virtual threads provide the illusion of more OS threads than actually exist.
Understanding the Problem: Why Virtual Threads?
Before exploring virtual threads, it's essential to understand the problem they solve. Traditional Java concurrency is built on platform threads, which are thin wrappers around operating system (OS) threads.
The platform thread bottleneck
Platform threads have several limitations:
- Resource intensive: Each thread consumes about 1MB of stack memory and requires OS kernel resources.
- Context switching overhead: The OS scheduler manages threads, and context switches are expensive (typically 1-10 microseconds).
- Limited scalability: Most systems struggle beyond 10,000-20,000 concurrent threads.
- Thread pool sizing: Developers must carefully tune thread pools, balancing throughput against resource consumption.
Consider a typical web server handling 10,000 concurrent requests. With the thread-per-request model, you need 10,000 platform threads, consuming ~10GB of memory just for thread stacks. This doesn't scale to modern cloud applications that might need to handle hundreds of thousands of concurrent connections.
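To feel this limit directly, you can try to start a large number of platform threads and watch where the JVM gives up; the exact point depends on your OS limits and stack size settings. The class below is a throwaway sketch (the thread count and sleep duration are arbitrary illustration values):
public class PlatformThreadLimit {
    public static void main(String[] args) {
        int started = 0;
        try {
            for (int i = 0; i < 100_000; i++) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(60_000); // keep the thread alive so its stack stays reserved
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                t.setDaemon(true); // let the JVM exit once main returns
                t.start();
                started++;
            }
            System.out.println("Started " + started + " platform threads");
        } catch (OutOfMemoryError e) {
            // Typically "unable to create native thread" once OS or memory limits are hit
            System.out.println("Gave up after " + started + " platform threads: " + e.getMessage());
        }
    }
}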
The reactive alternative and its costs
Reactive frameworks like Spring WebFlux, RxJava, and Vert.x emerged to address these limitations. They use non-blocking I/O with a small number of threads, achieving massive scalability.
A real-world example of this trade-off is the Fabric8 Kubernetes Client. When building Java-based Kubernetes operators and controllers, each watcher or SharedInformer traditionally required its own thread. In large clusters with hundreds of Custom Resource Definitions (CRDs) and thousands of watched resources, the thread overhead became a significant bottleneck. This limitation was one of the main reasons we added reactive HTTP client implementations to Fabric8, including Vert.x, which uses non-blocking I/O to handle massive numbers of concurrent watches without the thread overhead.
However, reactive programming comes with significant costs:
- Steep learning curve: Developers must think in terms of streams, publishers, and subscribers.
- Debugging nightmare: Stack traces become nearly useless as execution hops between callbacks.
- Viral adoption: Once you go reactive, everything must be reactive; synchronous libraries can't be used without blocking the event loop.
- Code complexity: Simple logic becomes convoluted with operators like flatMap, switchIfEmpty, and onErrorResume.
Virtual threads promise the best of both worlds: the simplicity of blocking code with the scalability of reactive systems.
What Are Virtual Threads?
Virtual threads are lightweight threads managed by the JVM rather than the operating system.
They are instances of java.lang.Thread that run on top of platform threads (called "carrier threads") but don't monopolize them during blocking operations.
Key characteristics
- Cheap to create: Virtual threads consume only a few hundred bytes initially, growing as needed.
- Cheap to block: When a virtual thread blocks, it releases its carrier thread for other work.
- Familiar API: They are instances of java.lang.Thread, so existing code works with minimal changes.
- Debuggable: Full stack traces, standard debugger support, and JFR (Java Flight Recorder) integration.
Project Loom Timeline and History
Project Loom was launched in 2017, and it took roughly six years of development to bring virtual threads from concept to production-ready feature:
- 2017: Project Loom kicks off, exploring continuations and lightweight threads in the JVM.
- 2022: JEP 425 ships virtual threads as a preview feature in JDK 19.
- 2023: JEP 436 delivers a second preview in JDK 20; JEP 444 finalizes virtual threads in JDK 21.
- 2025: JEP 491 (JDK 24) lets virtual threads unmount while holding monitor locks, removing the synchronized pinning limitation.
The extended development period allowed the team to refine the API, optimize performance, and ensure backward compatibility with existing Java code.
Creating Virtual Threads
There are four main ways to create virtual threads in Java 21. Let's explore each approach with practical examples.
Method 1: Thread.startVirtualThread()
The following code snippet shows the simplest way to start a virtual thread:
public class StartVirtualThread {
public static void main(String[] args) throws InterruptedException {
Thread vt = Thread.startVirtualThread(() -> {
System.out.println("Running in: " + Thread.currentThread());
System.out.println("Is virtual: " + Thread.currentThread().isVirtual());
});
vt.join();
}
}
This method immediately starts the virtual thread and returns a Thread object.
Method 2: Thread.ofVirtual().start()
The following code snippet shows how to use the builder pattern for more control over thread configuration:
public class OfVirtualStart {
public static void main(String[] args) throws InterruptedException {
Thread vt = Thread.ofVirtual()
.name("my-virtual-thread")
.start(() -> {
System.out.println("Thread name: " + Thread.currentThread().getName());
simulateWork();
});
vt.join();
}
private static void simulateWork() {
try {
Thread.sleep(100);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
The builder pattern allows you to set the thread name, uncaught exception handler, and other properties.
Method 3: Executors.newVirtualThreadPerTaskExecutor()
The following code snippet shows how to use the executor service for production applications:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;
public class VirtualThreadExecutor {
public static void main(String[] args) {
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
IntStream.range(0, 10_000).forEach(i -> {
executor.submit(() -> {
Thread.sleep(1000);
return i;
});
});
} // executor.close() is called implicitly, waits for tasks to complete
System.out.println("All tasks completed");
}
}
Note
The newVirtualThreadPerTaskExecutor() creates a new virtual thread for each submitted task.
Unlike traditional thread pools, there's no need to configure pool sizes; virtual threads are cheap enough to create on demand.
Method 4: ThreadFactory for virtual threads
The following code snippet shows how to create threads with consistent configuration using a ThreadFactory:
import java.util.concurrent.ThreadFactory;
public class VirtualThreadFactory {
public static void main(String[] args) throws InterruptedException {
ThreadFactory factory = Thread.ofVirtual()
.name("worker-", 0) // the builder numbers threads worker-0, worker-1, ... automatically
.factory();
Thread t1 = factory.newThread(() -> System.out.println(Thread.currentThread().getName()));
Thread t2 = factory.newThread(() -> System.out.println(Thread.currentThread().getName()));
t1.start();
t2.start();
t1.join();
t2.join();
}
}
The thread factory is useful when integrating with libraries that accept a ThreadFactory parameter.
How Virtual Threads Work Internally
Understanding the internals helps you write better code and debug issues. Virtual threads use a technique called "continuation" to pause and resume execution.
The mounting and unmounting process
- Mounting: When a virtual thread is ready to run, the scheduler mounts it onto an available carrier thread.
- Execution: The virtual thread executes on the carrier thread just like a normal thread.
- Blocking: When the virtual thread encounters a blocking operation, it saves its state (continuation) and unmounts from the carrier.
- Parking: The virtual thread enters a parked state, consuming minimal resources.
- Resumption: When the blocking operation completes, the virtual thread is scheduled to mount again (possibly on a different carrier).
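You can watch this happen indirectly: while a virtual thread is mounted, its toString() includes the current carrier thread, so printing it before and after a blocking call often shows a different ForkJoinPool worker. A minimal sketch (output varies per run, and the thread may well land back on the same carrier):
public class CarrierHopping {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.startVirtualThread(() -> {
            // A mounted virtual thread's toString() ends with its carrier, e.g. "...@ForkJoinPool-1-worker-1"
            System.out.println("Before blocking: " + Thread.currentThread());
            try {
                Thread.sleep(100); // the virtual thread unmounts here, freeing the carrier
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("After blocking:  " + Thread.currentThread());
        });
        vt.join();
    }
}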
Carrier thread pool
By default, the JVM uses a work-stealing ForkJoinPool as the carrier thread pool.
However, this is a specialized internal pool, not the common ForkJoinPool.commonPool() used by parallel streams.
You cannot tune it using the same system properties as the common pool; use the virtual thread-specific properties shown below instead.
The number of carrier threads defaults to the number of available processors but can be configured:
# Set carrier thread count
java -Djdk.virtualThreadScheduler.parallelism=4 MyApp
# Set maximum pool size (for unparking)
java -Djdk.virtualThreadScheduler.maxPoolSize=256 MyApp
Performance Benchmarks
Let's examine how virtual threads compare to platform threads in real-world scenarios. You can run the VirtualThreadsPerformance.java benchmark yourself to see the difference.
Throughput comparison
According to JEP 444, when running concurrent tasks that sleep for one second (simulating blocking I/O):
| Metric | Platform Threads (200 pooled) | Virtual Threads | Improvement |
|---|---|---|---|
| Tasks per second | ~200 | ~10,000 | 50x faster |
| Concurrent capacity | Limited by pool size | Millions | Unbounded |
The throughput improvement comes from virtual threads' ability to efficiently handle blocking operations without consuming OS threads.
Memory footprint and creation time
A key advantage of virtual threads is that they do not reserve a large contiguous stack like platform threads do.
According to Oracle Java Magazine, a platform thread on typical Linux x64 systems reserves approximately 1 MB of virtual memory for its stack by default (-Xss).
This reservation happens even if the thread never uses the entire stack.
Virtual threads behave very differently. Their stack frames are heap-allocated and grow on demand, starting from a very small initial footprint. This allows the JVM to host millions of virtual threads in the same memory where platform threads would be limited to only thousands.
Because virtual thread stack usage depends on actual call depth and local variables, numbers vary between workloads. The table below illustrates the typical order of magnitude difference:
| Concurrent Tasks | Platform Threads Memory (approx. 1 MB each) | Virtual Threads Memory (varies with usage) |
|---|---|---|
| 1,000 | ~1 GB | A few MB |
| 10,000 | ~10 GB | Tens of MB |
| 100,000 | Not practical | Hundreds of MB |
| 1,000,000 | Not practical | Possible on modern servers |
Even though exact virtual thread memory usage depends on your code’s stack depth, the trend is clear: virtual threads enable massive concurrency with dramatically lower memory requirements.
Creation time is also significantly faster. Since virtual threads do not require OS-level allocation, millions of them can be created quickly, while platform thread creation is comparatively expensive due to kernel interaction.
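A simple way to see this order-of-magnitude difference on your own machine is to run a very large number of blocking tasks on virtual threads, something a fixed pool of platform threads could not sustain. The sketch below is illustrative only; one million tasks and a one-second sleep are arbitrary values, so scale them to your hardware:
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class MillionVirtualThreads {
    public static void main(String[] args) {
        Instant start = Instant.now();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 1_000_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofSeconds(1)); // each task blocks for a second
                return i;
            }));
        } // close() waits for all submitted tasks to finish
        System.out.println("1,000,000 blocking tasks finished in " + Duration.between(start, Instant.now()));
    }
}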
Note
Virtual threads aren't lightweight because they do less work. They're lightweight because the JVM performs the expensive parts lazily, only when needed.
CPU-bound workloads warning
Virtual threads are designed to handle massive numbers of I/O-bound tasks efficiently. They provide little benefit when the work is dominated by pure computation, because CPU-bound workloads are limited by core availability rather than thread count.
Warning
For CPU-heavy tasks, virtual threads may achieve only 50–55% of the throughput of platform threads due to additional scheduling overhead. In these cases, prefer platform-thread-based executors such as ForkJoinPool or Executors.newFixedThreadPool().
If your application spends most of its time waiting on external systems (databases, HTTP calls, file I/O), virtual threads can dramatically improve scalability and simplify your code. But when parallel computation is the bottleneck, stick to traditional pools.
Thread Pinning: The Critical Gotcha
Thread pinning is the most important concept to understand when working with virtual threads. A virtual thread becomes "pinned" to its carrier thread when it cannot unmount during a blocking operation.
What causes pinning?
- synchronized blocks/methods (JDK 21-23): The JVM cannot unmount a virtual thread while it holds a monitor lock. Update: JEP 491 in JDK 24 fixes this limitation, allowing virtual threads to unmount even when holding monitor locks.
- Native code execution: JNI calls prevent unmounting.
- Foreign function calls: Panama Foreign Function & Memory (FFM) API calls can cause pinning.
Detecting pinning
Enable pinning detection with JVM flags:
# Log pinning events
java -Djdk.tracePinnedThreads=full MyApp
# Or for shorter output
java -Djdk.tracePinnedThreads=short MyApp
Before: Code that pins (JDK 21-23)
public class PinnedThread {
private final Object lock = new Object();
public void problematicMethod() {
synchronized (lock) { // ⚠️ Pinning starts here
performBlockingIO(); // Virtual thread cannot unmount!
} // Pinning ends
}
private void performBlockingIO() {
try {
Thread.sleep(1000); // Would normally unmount, but can't
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
After: Code that doesn't pin (JDK 21-23 workaround)
import java.util.concurrent.locks.ReentrantLock;
public class UnpinnedThread {
private final ReentrantLock lock = new ReentrantLock();
public void improvedMethod() {
lock.lock(); // ✅ ReentrantLock allows unmounting
try {
performBlockingIO(); // Virtual thread CAN unmount
} finally {
lock.unlock();
}
}
private void performBlockingIO() {
try {
Thread.sleep(1000); // Thread unmounts during sleep
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Caution
Thread pinning can severely degrade performance in JDK 21-23. If many virtual threads are pinned simultaneously, you'll effectively exhaust the carrier thread pool, negating the benefits of virtual threads.
For JDK 21-23, prefer ReentrantLock over synchronized when blocking operations are involved.
JDK 24+: With JEP 491, synchronized no longer causes pinning, so you can use it freely with virtual threads.
When NOT to Use Virtual Threads
Virtual threads are not a silver bullet. Here are scenarios where platform threads remain the better choice:
CPU-bound workloads
Virtual threads provide no benefit for CPU-intensive tasks:
// ❌ Don't use virtual threads for this
public long computePrimes(int limit) {
return LongStream.range(2, limit)
.filter(this::isPrime)
.count();
}
For CPU-bound work, you're limited by the number of physical cores, not threads.
Use the standard ForkJoinPool or Executors.newFixedThreadPool() instead.
When you need thread-local caching
Thread-local variables in virtual threads can cause memory issues:
// ⚠️ Problematic with virtual threads
private static final ThreadLocal<ExpensiveCache> CACHE =
ThreadLocal.withInitial(ExpensiveCache::new);
public void processRequest() {
// Each of 1 million virtual threads gets its own cache!
ExpensiveCache cache = CACHE.get();
// ...
}
With millions of virtual threads, thread-local storage can consume enormous amounts of memory.
Consider using ScopedValue (preview feature) instead.
When libraries use synchronized extensively
Some older libraries use synchronized pervasively:
- Legacy JDBC drivers
- Older HTTP clients
- Some logging frameworks
Check your dependencies for pinning behavior before migrating.
Best Practices
Follow these guidelines to get the most from virtual threads:
1. Don't pool virtual threads
Unlike platform threads, virtual threads are cheap to create. Pooling them is unnecessary and can limit scalability:
// ❌ Don't do this
ExecutorService pool = Executors.newFixedThreadPool(100,
Thread.ofVirtual().factory());
// ✅ Do this instead
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
2. Use try-with-resources for executors
Always close executor services properly:
// ✅ Proper resource management
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
executor.submit(() -> processRequest());
} // Automatically waits for tasks and shuts down
3. Prefer ReentrantLock over synchronized
When blocking I/O is involved, use java.util.concurrent locks:
// ✅ Virtual thread friendly
private final ReentrantLock lock = new ReentrantLock();
private final Condition condition = lock.newCondition();
private boolean ready; // guarded by lock
public void waitForCondition() throws InterruptedException {
lock.lock();
try {
while (!ready) {
condition.await(); // Can unmount
}
} finally {
lock.unlock();
}
}
4. Keep blocking operations short
Virtual threads excel at many short blocking operations, not few long ones:
// ✅ Many short operations - ideal for virtual threads
for (String url : urls) {
executor.submit(() -> fetchUrl(url)); // Each fetch is short
}
// ⚠️ Fewer long operations - less benefit
executor.submit(() -> processLargeFile()); // Minutes of work
5. Use structured concurrency
When available, prefer structured concurrency for cleaner code and better error handling (see Structured Concurrency section).
Common Pitfalls and Anti-patterns
Pitfall 1: Thread.yield() abuse
Don't use Thread.yield() thinking it will help scheduling:
// ❌ Don't do this
while (processing) {
doWork();
Thread.yield(); // Unnecessary with virtual threads
}
Virtual threads unmount automatically during blocking operations. Manual yielding adds overhead without benefit.
Pitfall 2: Ignoring InterruptedException
Always handle interruption properly:
// ❌ Wrong
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// Ignoring - bad practice!
}
// ✅ Correct
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException("Operation cancelled", e);
}
Pitfall 3: Assuming virtual threads are always faster
Virtual threads help with blocking operations, not computation:
// Virtual threads won't help here
IntStream.range(0, 1000)
.parallel() // Uses ForkJoinPool - good for CPU work
.map(this::heavyComputation)
.sum();
Virtual Threads with Spring Boot
Spring Boot 3.2+ provides native support for virtual threads. Enable them with a single property:
spring:
  threads:
    virtual:
      enabled: true
Or in application.properties:
spring.threads.virtual.enabled=true
What this enables
- Tomcat/Jetty/Undertow use virtual threads for request handling
- @Async methods run on virtual threads
- Spring WebFlux continues using reactive patterns (no change)
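To verify which kind of thread your handlers actually run on, a plain blocking controller is enough. The controller and endpoint below are hypothetical, but with spring.threads.virtual.enabled=true on Spring Boot 3.2+ the response should report a virtual thread:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ThreadInfoController {

    // Hypothetical endpoint: blocking code, yet each request runs on its own virtual thread
    @GetMapping("/thread-info")
    public String threadInfo() throws InterruptedException {
        Thread.sleep(100); // simulated blocking I/O; the carrier thread is released meanwhile
        return "Handled by: " + Thread.currentThread() +
               " (virtual: " + Thread.currentThread().isVirtual() + ")";
    }
}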
Performance results with Spring Boot
Early adopters report significant improvements when migrating to virtual threads:
| Metric | Improvement |
|---|---|
| Memory usage | 43% reduction |
| Tail latency (p99) | 4x improvement |
| CPU utilization | 20-40% lower under same load |
| Throughput | 2x improvement for I/O-bound workloads |
Custom executor configuration
For fine-grained control:
import java.util.concurrent.Executors;
import org.springframework.boot.web.embedded.tomcat.TomcatProtocolHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.AsyncTaskExecutor;
import org.springframework.core.task.support.TaskExecutorAdapter;
@Configuration
public class VirtualThreadConfig {
@Bean
public AsyncTaskExecutor applicationTaskExecutor() {
return new TaskExecutorAdapter(
Executors.newVirtualThreadPerTaskExecutor()
);
}
@Bean
public TomcatProtocolHandlerCustomizer<?> protocolHandlerVirtualThreadCustomizer() {
return protocolHandler -> {
protocolHandler.setExecutor(
Executors.newVirtualThreadPerTaskExecutor()
);
};
}
}
Note
With Spring Boot 3.2+ and spring.threads.virtual.enabled=true, your existing blocking code automatically benefits from virtual threads without any code changes.
This is the easiest migration path for most applications.
Virtual Threads vs Reactive
Both approaches solve the scalability problem, but they differ significantly. Reactive frameworks like Vert.x, Mutiny (used by Quarkus), and Spring WebFlux all share similar characteristics:
| Aspect | Virtual Threads | Reactive (Vert.x/Mutiny) |
|---|---|---|
| Programming model | Imperative, blocking | Declarative, non-blocking |
| Learning curve | Low (familiar APIs) | High (new paradigm) |
| Debugging | Standard tools work | Complex, fragmented traces |
| Existing code | Works with minimal changes | Requires rewrite |
| CPU efficiency | Good | Excellent |
| Memory under load | Good | Better |
| Error handling | try/catch | Operators (onFailure, recover) |
| Testing | Simple unit tests | Requires reactive testing |
When to choose reactive
- Streaming data (SSE, WebSocket heavy use)
- Backpressure requirements
- Already invested in reactive ecosystem (Vert.x, Mutiny, RxJava)
- Need maximum efficiency at extreme scale
When to choose virtual threads
- Traditional request/response applications
- Team familiar with blocking code
- Need to integrate with legacy libraries
- Debugging and maintainability are priorities
Virtual Threads vs Go Goroutines vs Kotlin Coroutines
How do Java virtual threads compare to similar features in other languages?
| Feature | Java Virtual Threads | Go Goroutines | Kotlin Coroutines |
|---|---|---|---|
| Release | Java 21 (2023) | Go 1.0 (2012) | Kotlin 1.3 (2018) |
| Runtime | JVM | Go runtime | JVM/Native/JS |
| Default stack | 512 bytes + grows | 2 KB + grows | ~dozen objects |
| Max concurrent | Millions | Millions | Millions |
| Scheduling | Work-stealing | M:N scheduler | Dispatchers |
| Blocking | Automatic unmount | Automatic | suspend functions |
| Native integration | JNI pins | CGO pins | Platform-specific |
| Structured concurrency | Preview | Explicit (WaitGroup) | Native |
Go's advantage
Go was designed from scratch with goroutines. The entire standard library is non-blocking, so there's no "pinning" equivalent.
Java's advantage
Virtual threads work with existing Java code. You don't need to rewrite libraries or learn a new paradigm. The vast Java ecosystem becomes automatically more scalable.
Kotlin's approach
Kotlin coroutines require explicit suspend functions, making it clear what can pause.
This is more explicit but requires learning new patterns.
Structured Concurrency and Scoped Values
Java 21 introduces structured concurrency (preview) alongside virtual threads. These features work together to simplify concurrent programming.
Note
Structured concurrency and scoped values are still in preview as of Java 24. They are expected to reach stable status in Java 25 LTS or shortly after, so the API may still change slightly.
StructuredTaskScope
Structured concurrency treats groups of concurrent tasks as a single unit:
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.StructuredTaskScope.Subtask;
public class StructuredConcurrency {
record User(String name) {}
record Order(String id) {}
record Response(User user, Order order) {}
public Response fetchUserAndOrder(String userId, String orderId)
throws InterruptedException, ExecutionException { // throwIfFailed() reports subtask failures as ExecutionException
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
Subtask<User> userTask = scope.fork(() -> fetchUser(userId));
Subtask<Order> orderTask = scope.fork(() -> fetchOrder(orderId));
scope.join() // Wait for both tasks
.throwIfFailed(); // Propagate exceptions
return new Response(userTask.get(), orderTask.get());
}
}
private User fetchUser(String id) { /* ... */ return new User("Marc"); }
private Order fetchOrder(String id) { /* ... */ return new Order("ORD-123"); }
}
Tip
Structured concurrency provides three key guarantees:
- Tasks don't outlive their scope
- Cancellation is automatic when the scope fails
- Error handling is centralized
Scoped Values
ScopedValue (preview) is the modern replacement for ThreadLocal:
// In JDK 21+, ScopedValue is java.lang.ScopedValue (preview); no import is needed, but compile and run with --enable-preview
public class ScopedValuesExample {
private static final ScopedValue<String> USER_ID = ScopedValue.newInstance();
public void handleRequest(String userId) {
ScopedValue.runWhere(USER_ID, userId, () -> {
processRequest();
});
}
private void processRequest() {
String userId = USER_ID.get(); // Available in child virtual threads too
System.out.println("Processing for user: " + userId);
}
}
Scoped values are immutable and automatically inherited by child virtual threads, making them ideal for request context propagation.
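A minimal sketch of that inheritance, assuming the same JDK 21 preview APIs (run with --enable-preview): a value bound in the enclosing scope is visible to subtasks forked inside it.
import java.util.concurrent.StructuredTaskScope;

public class ScopedValueInheritance {
    // ScopedValue is java.lang.ScopedValue in JDK 21+ (preview)
    private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        ScopedValue.where(REQUEST_ID, "req-42").run(() -> {
            try (var scope = new StructuredTaskScope<String>()) {
                // Each forked subtask runs on its own virtual thread and sees the bound value
                var first = scope.fork(() -> "first subtask sees " + REQUEST_ID.get());
                var second = scope.fork(() -> "second subtask sees " + REQUEST_ID.get());
                scope.join();
                System.out.println(first.get());
                System.out.println(second.get());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}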
Observability and Debugging
Virtual threads integrate with existing Java observability tools.
Thread dumps
Use jcmd to get thread dumps including virtual threads:
jcmd <pid> Thread.dump_to_file -format=json threads.json
The JSON format includes virtual thread details:
{
"tid": "123456",
"name": "virtual-thread-1",
"virtual": true,
"state": "WAITING",
"stack": [...]
}Java Flight Recorder (JFR)
JFR provides virtual thread events:
java -XX:StartFlightRecording=filename=recording.jfr MyApp
Events include:
- jdk.VirtualThreadStart
- jdk.VirtualThreadEnd
- jdk.VirtualThreadPinned
Debugging in IDEs
IntelliJ IDEA and Eclipse support debugging virtual threads:
- Breakpoints work normally
- Step through virtual thread code
- View virtual thread stack traces
- Conditional breakpoints on virtual threads
JDK 24+ Observability Enhancements
JDK 24 introduces additional observability features for virtual threads. If you're on JDK 21, 22, or 23, you can still use the thread dumps and JFR events described above; the features below are enhancements available only in JDK 24+.
# View virtual thread scheduler statistics
jcmd <pid> Thread.vthread_scheduler
# Enhanced thread dump with virtual thread details
jcmd <pid> Thread.dump_to_file -format=json threads.json
The VirtualThreadSchedulerMXBean provides programmatic access to scheduler metrics:
import java.lang.management.ManagementFactory;
import jdk.management.VirtualThreadSchedulerMXBean;
// Get the virtual thread scheduler MXBean (JDK 24+)
VirtualThreadSchedulerMXBean mxBean = ManagementFactory.getPlatformMXBean(
VirtualThreadSchedulerMXBean.class
);
// Monitor scheduler metrics
System.out.println("Parallelism: " + mxBean.getParallelism());
System.out.println("Pool size: " + mxBean.getPoolSize());
System.out.println("Mounted count: " + mxBean.getMountedVirtualThreadCount());
System.out.println("Queued count: " + mxBean.getQueuedVirtualThreadCount());Real-World Example: HTTP Client
The following code snippet demonstrates a practical example of fetching data from multiple APIs concurrently:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class ConcurrentHttpClient {
private static final HttpClient client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(10))
.build();
public static void main(String[] args) throws Exception {
List<String> urls = List.of(
"https://api.github.com/users/octocat",
"https://api.github.com/repos/openjdk/jdk",
"https://api.github.com/orgs/spring-projects",
"https://api.github.com/users/marcnuri-demo"
);
long start = System.currentTimeMillis();
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
List<Future<String>> futures = urls.stream()
.map(url -> executor.submit(() -> fetchUrl(url)))
.toList();
for (Future<String> future : futures) {
String response = future.get();
System.out.println("Received " + response.length() + " bytes");
}
}
long elapsed = System.currentTimeMillis() - start;
System.out.printf("Completed %d requests in %d ms%n", urls.size(), elapsed);
}
private static String fetchUrl(String url) throws Exception {
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(url))
.header("User-Agent", "Java Virtual Threads Demo")
.GET()
.build();
HttpResponse<String> response = client.send(request,
HttpResponse.BodyHandlers.ofString());
return response.body();
}
}
This example demonstrates:
- Creating virtual threads via executor
- Parallel HTTP requests
- Proper resource management with try-with-resources
- Performance timing
Running this code, all four requests execute concurrently, completing in roughly the time of the slowest request rather than the sum of all request times.
Migration Guide
Ready to migrate your application to virtual threads? Follow this step-by-step guide.
Step 1: Update to Java 21+
Ensure your project uses Java 21 or later:
<properties>
<java.version>21</java.version>
</properties>
Step 2: Identify blocking code
Look for code that blocks:
- Database calls (JDBC, JPA)
- HTTP client calls
- File I/O
- Thread.sleep()
- Lock contention
These are prime candidates for virtual thread benefits.
Step 3: Check for pinning
Audit your code for synchronized blocks containing blocking operations:
# Search for synchronized with blocking inside
grep -r "synchronized" --include="*.java" .Replace synchronized with ReentrantLock where blocking occurs.
Step 4: Enable virtual threads
For Spring Boot applications:
spring.threads.virtual.enabled=true
For custom applications, replace thread pools:
// Before
ExecutorService executor = Executors.newFixedThreadPool(200);
// After
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
Step 5: Test under load
Run load tests to verify:
- Throughput improvements
- No pinning warnings (check logs)
- Memory usage stays reasonable
- No degradation in CPU-bound operations
Step 6: Monitor in production
Enable JFR events and monitor:
- Virtual thread count
- Pinning events
- Carrier thread utilization
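For the pinning events specifically, one option is to stream them in-process with JFR's RecordingStream. The sketch below is a starting point rather than production code; the 20 ms threshold is an arbitrary illustration value:
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public class PinnedEventMonitor {
    public static void main(String[] args) {
        try (RecordingStream rs = new RecordingStream()) {
            // Only record pinning events that last longer than the threshold
            rs.enable("jdk.VirtualThreadPinned").withThreshold(Duration.ofMillis(20)).withStackTrace();
            rs.onEvent("jdk.VirtualThreadPinned", event ->
                    System.out.println("Pinned for " + event.getDuration() +
                            " on " + event.getThread().getJavaName()));
            rs.start(); // blocks; use startAsync() inside a real application
        }
    }
}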
Conclusion
Virtual threads represent the most significant change to Java concurrency since the introduction of java.util.concurrent in Java 5.
They deliver on Project Loom's promise: the simplicity of blocking code with the scalability of non-blocking systems.
Key takeaways:
- Virtual threads are cheap: Create millions without concern for memory or startup time.
- Blocking is now acceptable: Virtual threads make blocking I/O efficient again.
- Familiar APIs: Use the standard Thread and ExecutorService APIs you already know.
- Watch for pinning: Replace synchronized with ReentrantLock when blocking operations are involved (JDK 21-23).
- Not for CPU-bound work: Virtual threads help with I/O-bound, not CPU-bound workloads.
- Easy migration: Spring Boot 3.2+ makes adoption trivial with a single configuration property.
The future of Java concurrency is here, and it's more accessible than ever. Whether you're building microservices, data processing pipelines, or web applications, virtual threads can help you scale to meet demand while keeping your code simple and maintainable.
Source code
You can find the source code for this article on GitHub.
