Understanding libuv and JS Asynchronous I/O

Master how libuv powers asynchronous I/O in Node.js. Covers the libuv event loop architecture, thread pool management, file system operations, network I/O with epoll/kqueue/IOCP, DNS resolution, signal handling, child processes, and performance tuning strategies.


libuv is the cross-platform asynchronous I/O library that powers Node.js. It provides the event loop, thread pool, file system operations, networking, DNS resolution, and child process management that make non-blocking JavaScript possible on the server.

For how the JavaScript event loop sits on top of libuv, see JavaScript Event Loop Internals Full Guide.

libuv Architecture Overview

libuv handles async operations in two fundamentally different ways. Network I/O goes through kernel-level polling (epoll on Linux, kqueue on macOS, IOCP on Windows) with no threads needed. File system operations, DNS lookups, and some crypto work go through a thread pool because most operating systems do not offer a reliable async API for those. The model below shows how the two paths work side by side.

javascript
// libuv sits between Node.js JavaScript code and the operating system
//
// ┌──────────────────────────────────────────────────┐
// │                 JavaScript Code                  │
// │         (your app, npm packages, etc.)           │
// ├──────────────────────────────────────────────────┤
// │                Node.js C++ Bindings              │
// │       (V8 integration, Buffer, Stream, etc.)     │
// ├──────────────────────────────────────────────────┤
// │                     libuv                        │
// │  ┌───────────┐  ┌────────────┐  ┌────────────┐   │
// │  │ Event     │  │ Thread     │  │ OS Async   │   │
// │  │ Loop      │  │ Pool       │  │ I/O (epoll │   │
// │  │           │  │ (4 threads │  │ /kqueue/   │   │
// │  │           │  │  default)  │  │ IOCP)      │   │
// │  └───────────┘  └────────────┘  └────────────┘   │
// ├──────────────────────────────────────────────────┤
// │              Operating System Kernel             │
// │    (Linux/macOS/Windows/FreeBSD)                 │
// └──────────────────────────────────────────────────┘
 
// TWO TYPES OF ASYNC OPERATIONS IN libuv:
//
// 1. KERNEL-ASYNC (no thread pool needed):
//    - TCP/UDP sockets (epoll/kqueue/IOCP)
//    - Pipes (named/unnamed)
//    - TTY
//    - Signals
//    - Child process events
//    These use OS-level async mechanisms directly
//
// 2. THREAD-POOL-ASYNC (delegated to worker threads):
//    - File system operations (fs.*)
//    - DNS lookups (dns.lookup)
//    - Certain crypto operations
//    - zlib compression
//    These cannot be done async at the OS level on all platforms
 
// Simulating libuv's dual approach
class LibuvModel {
  #pendingIO = new Map();
  #threadPool;
  #eventLoop;
 
  constructor(poolSize = 4) {
    this.#threadPool = new ThreadPoolModel(poolSize);
    this.#eventLoop = new EventLoopModel();
  }
 
  // Network I/O: kernel-async (no thread pool)
  async tcpConnect(host, port) {
    const handle = this.#eventLoop.registerPollHandle(host, port);
    // OS notifies via epoll/kqueue when connection is ready
    return new Promise((resolve) => {
      handle.onReadable = () => resolve(handle);
    });
  }
 
  // File I/O: thread-pool-async
  async readFile(path) {
    return new Promise((resolve, reject) => {
      this.#threadPool.submit(() => {
        // Runs on thread pool thread (blocking I/O)
        const data = blockingRead(path);
        return data;
      }, (err, result) => {
        // Callback runs on main thread
        if (err) reject(err);
        else resolve(result);
      });
    });
  }
 
  // DNS: thread-pool-async (dns.lookup)
  async dnsLookup(hostname) {
    return new Promise((resolve, reject) => {
      this.#threadPool.submit(() => {
        return blockingDNSResolve(hostname);
      }, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
  }
}
 
function blockingRead(path) { /* OS-level synchronous read */ }
function blockingDNSResolve(hostname) { /* OS getaddrinfo call */ }
 
class ThreadPoolModel {
  constructor(size) { this.size = size; }
  submit(work, callback) { /* Queue work to thread pool */ }
}
 
class EventLoopModel {
  registerPollHandle(host, port) { /* Register with epoll/kqueue */ }
}

Thread Pool Deep Dive

The thread pool defaults to 4 threads, and that number matters more than most people realize. File reads, DNS lookups via dns.lookup(), crypto hashing, and zlib compression all compete for the same 4 threads. If you fire off 8 file reads and a DNS lookup at the same time, the DNS lookup might wait behind file I/O. The fix is either increasing UV_THREADPOOL_SIZE or switching to dns.resolve(), which bypasses the thread pool entirely.

javascript
// libuv's thread pool handles operations that cannot be done
// asynchronously at the OS level
 
// DEFAULT: 4 threads
// CONFIGURABLE: UV_THREADPOOL_SIZE environment variable (max 1024)
 
// WHY THREAD POOL SIZE MATTERS:
// If all 4 threads are busy with file I/O, DNS lookups queue up
// This can cause unexpected latency
 
const fs = require("node:fs");
const dns = require("node:dns");
const { performance: perf } = require("node:perf_hooks");
 
// DEMONSTRATION: Thread pool contention
async function threadPoolContention() {
  const start = perf.now();
 
  // Launch 8 file reads simultaneously (only 4 threads available)
  const reads = Array.from({ length: 8 }, (_, i) =>
    fs.promises.readFile(`/tmp/file-${i}.txt`)
  );
 
  // Also launch a DNS lookup (uses same thread pool!)
  const dnsPromise = new Promise((resolve, reject) => {
    dns.lookup("example.com", (err, address) => {
      if (err) reject(err);
      else {
        console.log(`DNS resolved in ${(perf.now() - start).toFixed(1)}ms`);
        resolve(address);
      }
    });
  });
 
  await Promise.all([...reads, dnsPromise]);
  console.log(`Total time: ${(perf.now() - start).toFixed(1)}ms`);
}
 
// With UV_THREADPOOL_SIZE=4:
//   First 4 file reads start immediately
//   Remaining 4 file reads + DNS lookup wait in queue
//   DNS may be delayed by file I/O
//
// Fix: UV_THREADPOOL_SIZE=16 or use dns.resolve() (kernel-async via c-ares)
 
// THREAD POOL OPERATIONS BY MODULE:
//
// | Module     | Operation           | Uses Thread Pool? |
// |------------|---------------------|-------------------|
// | fs         | All operations      | Yes               |
// | dns        | dns.lookup()        | Yes               |
// | dns        | dns.resolve*()      | No (c-ares)       |
// | crypto     | pbkdf2, scrypt      | Yes               |
// | crypto     | randomBytes         | Yes               |
// | zlib       | deflate, inflate    | Yes               |
// | net/http   | connect, read/write | No (kernel async) |
// | child_proc | spawn, exec         | No (kernel async) |
 
// TUNING THE THREAD POOL
// UV_THREADPOOL_SIZE is read when the pool first spins up, so it must be
// set before the first thread pool operation (fs, dns.lookup, pbkdf2, ...).
// Setting it in-process works on POSIX; on Windows it may not take effect,
// so prefer setting it in the shell environment before launching Node.
process.env.UV_THREADPOOL_SIZE = "16";
 
// Guidelines:
// - CPU cores * 1.5 is a reasonable starting point
// - If DNS-heavy: increase pool or use dns.resolve() instead of dns.lookup()
// - If crypto-heavy: increase pool (pbkdf2/scrypt block a thread for 100ms+)
// - Never set above 1024 (libuv hard limit)
// - Each thread uses ~1MB stack memory

Network I/O and OS Polling

Network I/O is where Node.js really shines for concurrency. Instead of dedicating a thread per connection, libuv registers sockets with the OS polling mechanism and just waits. When data arrives or a connection is ready, the OS wakes up the event loop, and the callback runs on the main thread. This is how a single Node.js process can handle tens of thousands of concurrent connections without breaking a sweat.

javascript
// Network I/O bypasses the thread pool entirely
// libuv uses OS-specific polling mechanisms:
//
// Linux:   epoll    (O(1) for socket readiness, scales to 100K+ connections)
// macOS:   kqueue   (similar to epoll, also handles file events)
// Windows: IOCP     (completion-based, not readiness-based)
// Others:  select/poll (fallback, O(n) per poll)
 
// HOW EPOLL WORKS (Linux):
// 1. Create epoll instance: epoll_create()
// 2. Register interest: epoll_ctl(ADD, fd, events)
// 3. Wait for events: epoll_wait(timeout)
// 4. Process ready file descriptors
// 5. Repeat from step 3
 
// Node.js TCP server flow:
// const server = net.createServer((socket) => {
//   socket.on('data', (chunk) => { /* handle data */ });
// });
// server.listen(3000);
//
// Under the hood:
// 1. libuv calls socket(), bind(), listen() (synchronous)
// 2. Registers listening socket with epoll via epoll_ctl(ADD)
// 3. Event loop blocks at epoll_wait() in the poll phase
// 4. When a client connects, epoll_wait() returns
// 5. libuv calls accept() and creates new socket handle
// 6. New socket is registered with epoll
// 7. 'connection' callback fires on main thread
// 8. When data arrives, epoll_wait() returns again
// 9. libuv calls read() and fires 'data' callback
 
// Simulating epoll-style network I/O
class NetworkIOModel {
  #watchers = new Map();
  #nextId = 1;
 
  watch(fd, events, callback) {
    const id = this.#nextId++;
    this.#watchers.set(id, { fd, events, callback });
    return id;
  }
 
  unwatch(id) {
    this.#watchers.delete(id);
  }
 
  // Called by event loop during poll phase
  poll(timeoutMs) {
    // In real libuv, this calls epoll_wait/kqueue/IOCP
    // Returns list of ready file descriptors
    const readyFDs = this.#simulateOSPoll(timeoutMs);
 
    for (const { fd, events } of readyFDs) {
      for (const [id, watcher] of this.#watchers) {
        if (watcher.fd === fd && (watcher.events & events)) {
          watcher.callback(events);
        }
      }
    }
 
    return readyFDs.length;
  }
 
  #simulateOSPoll(timeoutMs) {
    // OS would block here until events arrive or timeout
    return [];
  }
}
 
// IOCP DIFFERENCE (Windows):
// - epoll/kqueue: "tell me when ready, I'll do the I/O"
// - IOCP: "here's a buffer, do the I/O, tell me when done"
//
// libuv abstracts this difference:
// On Linux: libuv does read() after epoll says "readable"
// On Windows: libuv submits read to IOCP, gets completed result

File System Operations

Every fs call goes through the thread pool, even simple ones like stat. A worker thread picks up the request, performs the blocking system call, and posts the result back to the event loop. For large files, streaming is better than readFile because it uses backpressure to avoid queuing too many read operations at once. File watching is the one exception: fs.watch() hooks into kernel events (inotify, FSEvents) and skips the thread pool entirely.

javascript
// ALL fs operations go through the thread pool
// This is because most OSes don't support truly async file I/O
 
const fs = require("node:fs");
const path = require("node:path");
 
// OPERATION LIFECYCLE:
// 1. JavaScript calls fs.readFile(path, callback)
// 2. Node.js creates a uv_fs_t request struct
// 3. Request is queued to the thread pool
// 4. A worker thread picks up the request
// 5. Worker thread calls blocking open() + read() + close()
// 6. Worker thread posts result back to event loop
// 7. Event loop fires callback on main thread
 
// fs.promises vs callbacks: both go through the same thread pool
// mechanism, so the choice is about ergonomics, not I/O performance
async function readOptimized(filePath) {
  // Uses uv_fs_open, uv_fs_read, uv_fs_close under the hood
  const handle = await fs.promises.open(filePath, "r");
  try {
    const stat = await handle.stat();
    const buffer = Buffer.allocUnsafe(stat.size);
    await handle.read(buffer, 0, stat.size, 0);
    return buffer;
  } finally {
    await handle.close();
  }
}
 
// FILE WATCHING
// libuv uses OS-specific file watching:
// Linux: inotify (kernel events for file changes)
// macOS: FSEvents (framework-level, efficient)
// Windows: ReadDirectoryChangesW (native API)
 
// Fallback: fs.watchFile() uses stat polling (thread pool, slow)
// Preferred: fs.watch() uses kernel events (no thread pool)
 
// fs.watch() libuv flow:
// 1. Calls inotify_init() + inotify_add_watch() (Linux)
// 2. inotify fd is registered with epoll
// 3. When file changes, epoll_wait() returns
// 4. libuv reads inotify events and fires callback
// No thread pool involvement at all
 
// BATCH FILE OPERATIONS
// Maximize thread pool utilization by batching
async function batchRead(filePaths) {
  // All reads are submitted to thread pool simultaneously
  // Up to UV_THREADPOOL_SIZE run in parallel
  const results = await Promise.all(
    filePaths.map((p) => fs.promises.readFile(p, "utf8"))
  );
  return results;
}
 
// STREAMING vs BUFFERED
// For large files, streaming avoids loading everything into memory
async function streamProcess(inputPath, outputPath) {
  const input = fs.createReadStream(inputPath, { highWaterMark: 64 * 1024 });
  const output = fs.createWriteStream(outputPath);
 
  // Each chunk read is a thread pool operation
  // But backpressure prevents queuing too many reads
  for await (const chunk of input) {
    const processed = processChunk(chunk);
    if (!output.write(processed)) {
      // Backpressure: wait for drain before reading more
      await new Promise((resolve) => output.once("drain", resolve));
    }
  }
  output.end();
}
 
function processChunk(chunk) {
  return chunk; // Transform as needed
}

Signal and Child Process Handling

Signals and child process events are kernel-async, meaning they do not use the thread pool. libuv uses a self-pipe trick for signals: when a signal arrives, the signal handler writes a byte to a pipe that is registered with epoll. The event loop picks that up in the poll phase and fires your callback on the main thread, where it is safe to do real work. Child process stdout/stderr works the same way through pipes.

javascript
// SIGNAL HANDLING
// libuv registers signal handlers that integrate with the event loop
// Signals are delivered asynchronously via a self-pipe trick
 
// Self-pipe trick:
// 1. libuv creates a pipe (two file descriptors)
// 2. Read end is registered with epoll
// 3. Signal handler (runs in signal context) writes to write end
// 4. epoll_wait() wakes up, libuv reads from pipe
// 5. Signal callback fires on main thread (safe context)
 
process.on("SIGTERM", () => {
  console.log("SIGTERM received, shutting down gracefully");
  // Clean up resources
  server.close(() => {
    process.exit(0);
  });
});
 
process.on("SIGUSR2", () => {
  console.log("SIGUSR2: reloading configuration");
  reloadConfig();
});
 
function reloadConfig() { /* reload logic */ }
const server = { close(cb) { cb(); } };
 
// CHILD PROCESS I/O
// Child process I/O uses kernel-async (pipes registered with epoll)
const { spawn } = require("node:child_process");
 
function runCommand(cmd, args) {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    const chunks = [];
 
    // stdout pipe is registered with epoll
    // Data arrives via poll phase, not thread pool
    child.stdout.on("data", (chunk) => chunks.push(chunk));
    child.stderr.on("data", (chunk) => {
      console.error("stderr:", chunk.toString());
    });
 
    // Process exit is delivered via SIGCHLD signal
    child.on("close", (code) => {
      if (code === 0) resolve(Buffer.concat(chunks).toString());
      else reject(new Error(`Process exited with code ${code}`));
    });
 
    child.on("error", reject);
  });
}
 
// HANDLE TYPES IN libuv:
//
// | Handle Type | Description                | Polling Mechanism  |
// |-------------|----------------------------|--------------------|
// | uv_tcp_t    | TCP socket                 | epoll/kqueue/IOCP  |
// | uv_udp_t    | UDP socket                 | epoll/kqueue/IOCP  |
// | uv_pipe_t   | Named/unnamed pipe         | epoll/kqueue/IOCP  |
// | uv_tty_t    | Terminal                   | epoll/kqueue/IOCP  |
// | uv_timer_t  | Timer                      | Timer heap         |
// | uv_signal_t | Signal                     | Self-pipe + epoll  |
// | uv_process_t| Child process              | SIGCHLD + epoll    |
// | uv_fs_event_t| File change watcher       | inotify/FSEvents   |
// | uv_idle_t   | Idle callback              | Every loop tick    |
// | uv_check_t  | Post-poll callback         | After poll phase   |
// | uv_prepare_t| Pre-poll callback          | Before poll phase  |

Performance Tuning

Diagnosing libuv bottlenecks comes down to two things: is the thread pool saturated, and is the event loop lagging? You can measure thread pool saturation by timing simple fs.stat calls. If a stat that should take under a millisecond is taking 50ms, your threads are backed up. Event loop lag shows up when setTimeout callbacks fire later than expected. These monitoring patterns will help you catch problems before your users do.

javascript
// DIAGNOSING LIBUV BOTTLENECKS
const fs = require("node:fs");
 
// 1. Thread pool saturation
// Symptom: fs operations or DNS lookups become slow
// Diagnosis: measure time between request and callback
 
async function measureFsLatency() {
  const start = process.hrtime.bigint();
  await fs.promises.stat("/tmp");
  const elapsed = Number(process.hrtime.bigint() - start) / 1e6;
 
  if (elapsed > 10) {
    console.warn(`fs.stat took ${elapsed.toFixed(1)}ms (thread pool may be saturated)`);
  }
  return elapsed;
}
 
// 2. Event loop lag
// Symptom: setTimeout callbacks fire late
function monitorEventLoopLag(intervalMs = 1000) {
  let lastCheck = process.hrtime.bigint();
 
  setInterval(() => {
    const now = process.hrtime.bigint();
    const expected = BigInt(intervalMs) * 1_000_000n;
    const actual = now - lastCheck;
    const lagMs = Number(actual - expected) / 1e6;
 
    if (lagMs > 50) {
      console.warn(`Event loop lag: ${lagMs.toFixed(1)}ms`);
    }
 
    lastCheck = now;
  }, intervalMs);
}
 
// 3. Active handles and requests
// Check what's keeping the event loop alive
// (underscore-prefixed internals; newer Node also exposes
// process.getActiveResourcesInfo() as a supported alternative)
function inspectEventLoop() {
  const handles = process._getActiveHandles();
  const requests = process._getActiveRequests();
 
  console.log("Active handles:", handles.length);
  handles.forEach((h) => {
    console.log(` - ${h.constructor.name}: ${h.address?.() || ""}`);
  });
 
  console.log("Active requests:", requests.length);
  requests.forEach((r) => {
    console.log(` - ${r.constructor.name}`);
  });
}
 
// BEST PRACTICES:
//
// 1. Use dns.resolve() instead of dns.lookup() for high-throughput
//    dns.resolve() uses c-ares (kernel-async, no thread pool)
//    dns.lookup() uses getaddrinfo (thread pool, blocks a thread)
//
// 2. Set UV_THREADPOOL_SIZE based on workload:
//    - I/O heavy: 2x CPU cores
//    - crypto heavy: 4x CPU cores
//    - Mixed: monitor and adjust
//
// 3. Use streams instead of readFile for large files
//    Avoids blocking a thread pool thread for long reads
//
// 4. Avoid sync fs methods in servers
//    fs.readFileSync blocks the ENTIRE event loop
//    All I/O stops until it completes
//
// 5. Use worker_threads for CPU-intensive work
//    Thread pool is for I/O, not computation
//    CPU work in the thread pool blocks I/O operations
| Operation Type     | Mechanism                  | Thread Pool?    | Scalability          |
|--------------------|----------------------------|-----------------|----------------------|
| TCP/UDP networking | epoll/kqueue/IOCP          | No              | 100K+ connections    |
| File system ops    | Thread pool + blocking I/O | Yes (default 4) | Limited by pool size |
| DNS (dns.lookup)   | Thread pool + getaddrinfo  | Yes             | Limited by pool size |
| DNS (dns.resolve)  | c-ares library             | No              | High                 |
| Crypto (pbkdf2)    | Thread pool + OpenSSL      | Yes             | Limited by pool size |
| Child processes    | Pipe + SIGCHLD             | No              | OS process limits    |
| File watching      | inotify/FSEvents/RDCW      | No              | OS watch limits      |

Key Insights

  • libuv provides two async mechanisms: kernel-async (epoll/kqueue/IOCP) for networking and thread-pool-async for file I/O and DNS. Knowing which operations use which mechanism is critical for diagnosing performance issues.
  • The default thread pool size of 4 can become a bottleneck when file I/O, DNS lookups, and crypto operations compete for threads. Increase UV_THREADPOOL_SIZE or use kernel-async alternatives like dns.resolve().
  • Network I/O scales to hundreds of thousands of connections because epoll/kqueue provide O(1) readiness notification; no thread pool threads are consumed for socket operations.
  • File system operations always use the thread pool because most operating systems lack reliable async file I/O APIs. Streaming large files reduces the time each thread is occupied.
  • Monitoring event loop lag and fs operation latency reveals thread pool saturation before it impacts users. Use process.hrtime and periodic checks to detect contention early.

Frequently Asked Questions

Why does Node.js use a thread pool for file I/O instead of kernel async?

Most operating systems do not provide truly asynchronous file I/O for regular files. Linux has io_uring (relatively new) and AIO (limited), but they have restrictions and compatibility issues. POSIX does not define async file operations. libuv chose the thread pool approach because it works consistently across all platforms. Each thread performs a blocking read/write, but from the main thread's perspective the operation is non-blocking. Some newer versions of libuv are experimenting with io_uring support on Linux for improved file I/O performance.

How many threads should I set for UV_THREADPOOL_SIZE?

The default of 4 works well for many applications. If your app is file I/O heavy or uses crypto operations like pbkdf2/scrypt, increase it to match your CPU core count or 2x cores. For DNS-heavy apps, prefer switching to dns.resolve() (which uses c-ares and bypasses the thread pool) rather than increasing pool size. Never set it above 128 in practice because each thread consumes ~1MB of stack memory. Monitor thread pool utilization by measuring fs operation latency under load.

What is the difference between epoll, kqueue, and IOCP?

epoll (Linux) and kqueue (macOS/BSD) are readiness-based: they tell you when a file descriptor is ready for reading or writing, then you perform the I/O. IOCP (Windows) is completion-based: you submit an I/O operation and get notified when it finishes, with the data already in your buffer. libuv abstracts these differences so Node.js code works identically on all platforms. Under the hood, libuv adapts its internal model for each backend.

Can libuv handle millions of concurrent connections?

Yes, in theory. epoll and kqueue scale to hundreds of thousands of file descriptors with O(1) readiness notification. The practical limits are memory (each connection needs a socket buffer), file descriptor limits (ulimit -n), and application-level processing time. Node.js servers have demonstrated handling 1M+ concurrent WebSocket connections with appropriate tuning (increased file descriptor limits, memory allocation, and UV_THREADPOOL_SIZE for any file I/O involved).

Conclusion

libuv is the foundation that makes Node.js non-blocking. It provides kernel-async networking through epoll/kqueue/IOCP and thread-pool-async file I/O. Understanding which operations use the thread pool and which use kernel async helps you diagnose bottlenecks and tune performance. For how the JavaScript event loop works on top of libuv, see JavaScript Event Loop Internals Full Guide. For the call stack mechanics during execution, explore Call Stack vs Task Queue vs Microtask Queue in JS.