Advanced Web Workers for High Performance JS

Master Web Workers for truly parallel JavaScript execution. Covers dedicated and shared workers, structured cloning, transferable objects, SharedArrayBuffer with Atomics, worker pools, task scheduling, Comlink RPC patterns, module workers, and performance profiling strategies.

JavaScript · Advanced
19 min read

Web Workers run JavaScript on separate OS threads, enabling true parallelism. Workers do not share memory by default: data passed through postMessage is copied via structured cloning. For maximum throughput, SharedArrayBuffer and Atomics enable low-level shared-memory patterns.

For how the event loop drives the main thread, see JavaScript Event Loop Internals Full Guide.

Dedicated Worker Fundamentals

A dedicated worker lives in a separate file and communicates with the main thread through postMessage. You send it data, it crunches the numbers off the main thread, and sends results back. The second half of this block also shows how to create inline workers from blob URLs when you don't want a separate file.

javascript
// === main.js ===
// Create a worker from a separate file
const worker = new Worker("worker.js");
 
// Send messages to the worker
worker.postMessage({ type: "compute", data: [1, 2, 3, 4, 5] });
 
// Receive messages from the worker
worker.onmessage = (event) => {
  console.log("Result from worker:", event.data);
};
 
// Handle errors
worker.onerror = (event) => {
  console.error("Worker error:", event.message);
};
 
// Terminate the worker
// worker.terminate();
 
// === worker.js ===
// self refers to the worker global scope (DedicatedWorkerGlobalScope)
self.onmessage = (event) => {
  const { type, data } = event.data;
 
  switch (type) {
    case "compute": {
      // Heavy computation on worker thread
      const result = data.reduce((sum, n) => sum + n * n, 0);
      self.postMessage({ type: "result", value: result });
      break;
    }
 
    case "fibonacci": {
      const fib = computeFibonacci(data.n);
      self.postMessage({ type: "fibonacci", value: fib });
      break;
    }
  }
};
 
function computeFibonacci(n) {
  if (n <= 1) return n;
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) {
    [a, b] = [b, a + b];
  }
  return b;
}
 
// INLINE WORKER (no separate file needed)
function createInlineWorker(fn) {
  const blob = new Blob(
    [`self.onmessage = ${fn.toString()}`],
    { type: "application/javascript" }
  );
  const url = URL.createObjectURL(blob);
  const worker = new Worker(url);
 
  // Clean up blob URL after worker starts
  worker.addEventListener("message", () => URL.revokeObjectURL(url), { once: true });
 
  return worker;
}
 
const inlineWorker = createInlineWorker((event) => {
  const { numbers } = event.data;
  const sorted = numbers.slice().sort((a, b) => a - b);
  self.postMessage({ sorted });
});
 
inlineWorker.postMessage({ numbers: [5, 3, 8, 1, 9] });
inlineWorker.onmessage = (e) => console.log(e.data.sorted); // [1, 3, 5, 8, 9]
 
// MODULE WORKER (ES modules support)
// const moduleWorker = new Worker("worker.js", { type: "module" });
// Inside worker.js, you can use import/export:
// import { heavyFunction } from "./utils.js";

Transferable Objects

When you pass an ArrayBuffer to a worker with postMessage, the browser copies it by default. For large buffers, that copy is expensive. Transferable objects fix this by moving ownership of the buffer to the worker in constant time. The trade-off is that the sender can no longer access the buffer after the transfer.

javascript
// Transferable objects move ownership from one context to another
// Zero-copy: no serialization overhead, but sender loses access
 
// === main.js ===
function sendLargeData() {
  // Create a large ArrayBuffer (100MB)
  const buffer = new ArrayBuffer(100 * 1024 * 1024);
  const view = new Float64Array(buffer);
 
  // Fill with data
  for (let i = 0; i < view.length; i++) {
    view[i] = Math.random();
  }
 
  console.log("Before transfer: byteLength =", buffer.byteLength); // 104857600
 
  // Transfer ownership to worker (zero-copy)
  worker.postMessage({ buffer }, [buffer]);
 
  console.log("After transfer: byteLength =", buffer.byteLength); // 0 (detached!)
}
 
// TRANSFERABLE TYPES:
// - ArrayBuffer
// - MessagePort
// - ReadableStream
// - WritableStream
// - TransformStream
// - ImageBitmap
// - OffscreenCanvas
 
// TRANSFER VS CLONE PERFORMANCE COMPARISON
function benchmarkTransfer(sizeInMB) {
  const buffer = new ArrayBuffer(sizeInMB * 1024 * 1024);
 
  // Clone (structured cloning): O(n) time and memory
  const cloneStart = performance.now();
  worker.postMessage({ buffer }); // Cloned
  const cloneTime = performance.now() - cloneStart;
 
  // Transfer: O(1) time
  const buffer2 = new ArrayBuffer(sizeInMB * 1024 * 1024);
  const transferStart = performance.now();
  worker.postMessage({ buffer: buffer2 }, [buffer2]); // Transferred
  const transferTime = performance.now() - transferStart;
 
  console.log(`${sizeInMB}MB - Clone: ${cloneTime.toFixed(2)}ms, Transfer: ${transferTime.toFixed(2)}ms`);
}
 
// ROUND-TRIP PATTERN: send buffer to worker, get it back
function processWithWorker(data) {
  return new Promise((resolve) => {
    const buffer = new Float64Array(data).buffer;
 
    const handler = (event) => {
      worker.removeEventListener("message", handler);
      resolve(new Float64Array(event.data.buffer));
    };
 
    worker.addEventListener("message", handler);
    worker.postMessage({ buffer }, [buffer]); // Transfer to worker
  });
}
 
// === worker.js ===
// self.onmessage = (event) => {
//   const buffer = event.data.buffer;
//   const view = new Float64Array(buffer);
//
//   // Process data
//   for (let i = 0; i < view.length; i++) {
//     view[i] = view[i] * 2; // Double all values
//   }
//
//   // Transfer back to main thread
//   self.postMessage({ buffer }, [buffer]);
// };
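You can watch detachment happen without spinning up a worker: structuredClone accepts the same transfer list as postMessage (a quick sketch; works in modern browsers and Node 17+):

```javascript
// structuredClone with { transfer } uses the same transfer semantics as
// postMessage: the source buffer is detached, the clone owns the memory
const original = new ArrayBuffer(16);
const moved = structuredClone(original, { transfer: [original] });

console.log(original.byteLength); // 0 (detached)
console.log(moved.byteLength);    // 16
```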

SharedArrayBuffer and Atomics

SharedArrayBuffer gives multiple threads access to the same block of memory, and Atomics provides the load/store/wait/notify primitives needed to coordinate access safely. Note the Cross-Origin Isolation headers required before the constructor is even available.

javascript
// SharedArrayBuffer: true shared memory between threads
// Requires Cross-Origin-Isolation headers:
// Cross-Origin-Opener-Policy: same-origin
// Cross-Origin-Embedder-Policy: require-corp
 
// === main.js ===
function sharedMemoryExample() {
  // Create shared buffer visible to all threads
  const sharedBuffer = new SharedArrayBuffer(1024);
  const sharedArray = new Int32Array(sharedBuffer);
 
  // Share with worker (no transfer needed, both have access)
  worker.postMessage({ sharedBuffer });
 
  // Main thread can read/write the same memory
  Atomics.store(sharedArray, 0, 42);
  console.log("Main wrote:", Atomics.load(sharedArray, 0)); // 42
}
 
// MUTEX WITH ATOMICS
class AtomicMutex {
  #lockArray;
 
  constructor(sharedBuffer, offset = 0) {
    this.#lockArray = new Int32Array(sharedBuffer, offset, 1);
  }
 
  lock() {
    // Spin until we acquire the lock. Note: Atomics.wait blocks the calling
    // thread and throws on the browser main thread, so call lock() only from
    // workers (use tryLock() on the main thread).
    while (Atomics.compareExchange(this.#lockArray, 0, 0, 1) !== 0) {
      // Sleep until unlock() calls Atomics.notify
      Atomics.wait(this.#lockArray, 0, 1);
    }
  }
 
  unlock() {
    Atomics.store(this.#lockArray, 0, 0);
    // Wake up one waiting thread
    Atomics.notify(this.#lockArray, 0, 1);
  }
 
  tryLock() {
    return Atomics.compareExchange(this.#lockArray, 0, 0, 1) === 0;
  }
}
 
// ATOMIC COUNTER
class AtomicCounter {
  #array;
 
  constructor(sharedBuffer, offset = 0) {
    this.#array = new Int32Array(sharedBuffer, offset, 1);
  }
 
  increment() {
    // Atomics.add/sub return the value *before* the operation
    return Atomics.add(this.#array, 0, 1) + 1;
  }

  decrement() {
    return Atomics.sub(this.#array, 0, 1) - 1;
  }
 
  get value() {
    return Atomics.load(this.#array, 0);
  }
}
 
// PRODUCER-CONSUMER WITH SHARED MEMORY
class SharedRingBuffer {
  // Layout: [writePos, readPos, size, ...data]
  #meta;
  #data;
  #capacity;
 
  constructor(sharedBuffer, capacity) {
    this.#capacity = capacity;
    this.#meta = new Int32Array(sharedBuffer, 0, 3); // writePos, readPos, size
    // Float64Array requires a byteOffset that is a multiple of 8, so pad the
    // 12-byte metadata header out to 16 bytes
    this.#data = new Float64Array(sharedBuffer, 16, capacity);
  }
 
  write(value) {
    const size = Atomics.load(this.#meta, 2);
 
    if (size >= this.#capacity) {
      return false; // Buffer full
    }
 
    const writePos = Atomics.load(this.#meta, 0);
    this.#data[writePos] = value;
 
    Atomics.store(this.#meta, 0, (writePos + 1) % this.#capacity);
    Atomics.add(this.#meta, 2, 1);
    Atomics.notify(this.#meta, 2, 1); // Wake consumer
 
    return true;
  }
 
  read() {
    let size = Atomics.load(this.#meta, 2);
 
    if (size === 0) {
      // Wait for data
      Atomics.wait(this.#meta, 2, 0);
      size = Atomics.load(this.#meta, 2);
    }
 
    const readPos = Atomics.load(this.#meta, 1);
    const value = this.#data[readPos];
 
    Atomics.store(this.#meta, 1, (readPos + 1) % this.#capacity);
    Atomics.sub(this.#meta, 2, 1);
 
    return value;
  }
}
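The counter and mutex above lean on return-value conventions that are easy to misremember: Atomics.add returns the value before the addition, and Atomics.compareExchange returns the value it found (the swap succeeded only if that equals the expected value). A quick single-threaded check, since Atomics works fine without a second thread:

```javascript
const sab = new SharedArrayBuffer(8);
const slots = new Int32Array(sab);

const before = Atomics.add(slots, 0, 5);  // 0: the value before the add
const after = Atomics.load(slots, 0);     // 5

// compareExchange(arr, i, expected, replacement) returns the value it found
const acquired = Atomics.compareExchange(slots, 1, 0, 1); // 0: lock acquired
const blocked = Atomics.compareExchange(slots, 1, 0, 1);  // 1: already held

console.log(before, after, acquired, blocked); // 0 5 0 1
```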

Worker Pool

Spawning a fresh worker per task wastes time and memory. A pool keeps a fixed set of workers alive, queues incoming tasks, and dispatches each task to the next idle worker.

javascript
// Manage a pool of workers for parallel task execution
 
class WorkerPool {
  #workers = [];
  #taskQueue = [];
  #workerStatus = []; // "idle" | "busy"
  #resolvers = new Map();
  #taskId = 0;
 
  constructor(workerScript, poolSize = navigator.hardwareConcurrency || 4) {
    for (let i = 0; i < poolSize; i++) {
      const worker = new Worker(workerScript);
 
      worker.onmessage = (event) => {
        const { taskId, result, error } = event.data;
 
        // Resolve the promise for this task
        const resolver = this.#resolvers.get(taskId);
        if (resolver) {
          if (error) resolver.reject(new Error(error));
          else resolver.resolve(result);
          this.#resolvers.delete(taskId);
        }
 
        // Mark worker as idle and process next task
        this.#workerStatus[i] = "idle";
        this.#processQueue();
      };
 
      this.#workers.push(worker);
      this.#workerStatus.push("idle");
    }
  }
 
  execute(task) {
    return new Promise((resolve, reject) => {
      const taskId = this.#taskId++;
      this.#resolvers.set(taskId, { resolve, reject });
      this.#taskQueue.push({ taskId, task });
      this.#processQueue();
    });
  }
 
  #processQueue() {
    while (this.#taskQueue.length > 0) {
      const idleIndex = this.#workerStatus.indexOf("idle");
      if (idleIndex === -1) break; // No idle workers
 
      const { taskId, task } = this.#taskQueue.shift();
      this.#workerStatus[idleIndex] = "busy";
      this.#workers[idleIndex].postMessage({ taskId, ...task });
    }
  }
 
  // Execute multiple tasks in parallel, gather all results
  async map(tasks) {
    return Promise.all(tasks.map(task => this.execute(task)));
  }
 
  // Get pool status
  get stats() {
    return {
      total: this.#workers.length,
      idle: this.#workerStatus.filter(s => s === "idle").length,
      busy: this.#workerStatus.filter(s => s === "busy").length,
      queued: this.#taskQueue.length
    };
  }
 
  terminate() {
    for (const worker of this.#workers) {
      worker.terminate();
    }
    this.#workers = [];
    this.#workerStatus = [];
 
    // Reject pending tasks
    for (const [, resolver] of this.#resolvers) {
      resolver.reject(new Error("Worker pool terminated"));
    }
    this.#resolvers.clear();
  }
}
 
// === pool-worker.js ===
// self.onmessage = (event) => {
//   const { taskId, type, data } = event.data;
//
//   try {
//     let result;
//     switch (type) {
//       case "sort":
//         result = data.slice().sort((a, b) => a - b);
//         break;
//       case "prime-check":
//         result = isPrime(data);
//         break;
//       case "hash":
//         result = simpleHash(data);
//         break;
//     }
//     self.postMessage({ taskId, result });
//   } catch (err) {
//     self.postMessage({ taskId, error: err.message });
//   }
// };
 
// USAGE
// const pool = new WorkerPool("pool-worker.js", 4);
//
// // Single task
// const sorted = await pool.execute({ type: "sort", data: [5, 3, 8, 1] });
//
// // Parallel batch
// const results = await pool.map([
//   { type: "prime-check", data: 997 },
//   { type: "prime-check", data: 998 },
//   { type: "prime-check", data: 991 },
//   { type: "prime-check", data: 1009 }
// ]);
//
// console.log(pool.stats); // { total: 4, idle: 4, busy: 0, queued: 0 }
// pool.terminate();

RPC with Proxy

Raw postMessage plumbing gets tedious for request/response calls. Wrapping a worker in a Proxy (the approach libraries like Comlink take) lets you invoke worker functions as ordinary async methods.

javascript
// Expose worker functions as if they were local async functions
 
class WorkerRPC {
  #worker;
  #pending = new Map();
  #callId = 0;
 
  constructor(worker) {
    this.#worker = worker;
 
    worker.onmessage = (event) => {
      const { id, result, error } = event.data;
      const pending = this.#pending.get(id);
 
      if (pending) {
        if (error) pending.reject(new Error(error));
        else pending.resolve(result);
        this.#pending.delete(id);
      }
    };
  }
 
  call(method, ...args) {
    return new Promise((resolve, reject) => {
      const id = this.#callId++;
      this.#pending.set(id, { resolve, reject });
 
      // Transfer ArrayBuffers if present
      const transferables = args.filter(a => a instanceof ArrayBuffer);
      this.#worker.postMessage({ id, method, args }, transferables);
    });
  }
 
  // Create a proxy that turns method calls into RPC
  createProxy() {
    return new Proxy({}, {
      get: (_, method) => {
        return (...args) => this.call(method, ...args);
      }
    });
  }
 
  terminate() {
    this.#worker.terminate();
    for (const [, { reject }] of this.#pending) {
      reject(new Error("Worker terminated"));
    }
    this.#pending.clear();
  }
}
 
// EXPOSE HELPER (for worker side)
function expose(api) {
  self.onmessage = async (event) => {
    const { id, method, args } = event.data;
 
    try {
      if (typeof api[method] !== "function") {
        throw new Error(`Method "${method}" not found`);
      }
 
      const result = await api[method](...args);
 
      const transferables = [];
      if (result instanceof ArrayBuffer) transferables.push(result);
 
      self.postMessage({ id, result }, transferables);
    } catch (err) {
      self.postMessage({ id, error: err.message });
    }
  };
}
 
// === math-worker.js ===
// expose({
//   fibonacci(n) {
//     if (n <= 1) return n;
//     let a = 0, b = 1;
//     for (let i = 2; i <= n; i++) [a, b] = [b, a + b];
//     return b;
//   },
//
//   primeFactors(n) {
//     const factors = [];
//     for (let d = 2; d * d <= n; d++) {
//       while (n % d === 0) { factors.push(d); n /= d; }
//     }
//     if (n > 1) factors.push(n);
//     return factors;
//   },
//
//   async processImage(buffer) {
//     // Heavy image processing
//     const view = new Uint8Array(buffer);
//     for (let i = 0; i < view.length; i += 4) {
//       const avg = (view[i] + view[i+1] + view[i+2]) / 3;
//       view[i] = view[i+1] = view[i+2] = avg; // Grayscale
//     }
//     return buffer; // Transfer back
//   }
// });
 
// === main.js ===
// const rpc = new WorkerRPC(new Worker("math-worker.js"));
// const math = rpc.createProxy();
//
// const fib = await math.fibonacci(40);      // 102334155
// const factors = await math.primeFactors(360); // [2, 2, 2, 3, 3, 5]

| Communication | Mechanism | Copy Cost | Shared Access | Best For |
| --- | --- | --- | --- | --- |
| postMessage (clone) | Structured cloning | O(n) | No (copy) | Small to medium data |
| postMessage (transfer) | Ownership transfer | O(1) | No (moved) | Large buffers, round-trip |
| SharedArrayBuffer | Shared memory | O(1) | Yes (concurrent) | High-throughput pipelines |
| Atomics | Lock-free operations | O(1) | Yes (atomic) | Counters, flags, synchronization |
| MessageChannel | Dedicated port pair | O(n) | No | Worker-to-worker communication |
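The MessageChannel row deserves a sketch, since nothing above shows it: a channel is an entangled pair of ports, and transferring one port to each worker lets them message each other directly without hopping through the main thread. The worker names here are hypothetical:

```javascript
const { port1, port2 } = new MessageChannel();

// Hand one port to each worker (ports are transferable)
// workerA.postMessage({ port: port1 }, [port1]);
// workerB.postMessage({ port: port2 }, [port2]);

// Inside each worker:
// self.onmessage = (event) => {
//   const port = event.data.port;
//   port.onmessage = (msg) => console.log("peer said:", msg.data);
//   port.postMessage("hello, other worker");
// };
```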

Key Insights

  • Web Workers run JavaScript on separate OS threads, enabling true parallelism for CPU-bound tasks that would block the main thread. Each worker has its own V8 isolate, event loop, and heap.
  • Transferable objects provide zero-copy data movement between threads by transferring ownership rather than cloning. The sender's ArrayBuffer becomes detached (byteLength 0) after transfer.
  • SharedArrayBuffer enables true shared memory between threads and requires Cross-Origin Isolation headers for security. Use Atomics for thread-safe operations on shared memory.
  • Worker pools manage a fixed number of workers with a task queue, distributing work across available threads. This avoids the overhead of creating and destroying workers per task.
  • RPC patterns via Proxy make worker methods callable as local async functions, hiding the postMessage plumbing. The Proxy intercepts method calls and routes them through the message channel.

Frequently Asked Questions

When should I use Web Workers vs async/await?

Web Workers provide true parallelism on separate OS threads. Use them for CPU-bound tasks: image processing, data parsing, sorting large datasets, cryptographic operations, physics simulations. Async/await provides concurrency on a single thread using the event loop. Use it for I/O-bound tasks: network requests, file operations, timer-based delays. If a task blocks the main thread for more than 50ms (causing UI jank), move it to a worker.
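One way to apply that 50ms rule during development is a rough timing probe (a hypothetical helper, not a standard API):

```javascript
// Time a task once; if it blows the ~50ms responsiveness budget,
// it is a candidate for a worker
function shouldOffload(task, budgetMs = 50) {
  const start = performance.now();
  task();
  return performance.now() - start > budgetMs;
}

shouldOffload(() => [1, 2, 3].map((n) => n * n)); // false: trivial, keep on main thread
```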

How do I share data between workers efficiently?

For read-heavy sharing, clone the data to each worker once during initialization. For write-heavy sharing, use SharedArrayBuffer with Atomics for lock-free or mutex-based coordination. For large one-time transfers, use Transferable Objects (zero-copy, but sender loses access). For complex objects, serialize to JSON or use structured cloning. Avoid frequently cloning large objects across the message boundary as this creates GC pressure.
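For the clone-once strategy, a small splitter (hypothetical helper) ensures each element crosses the message boundary exactly once:

```javascript
// Split a dataset into one chunk per worker
function chunkForWorkers(items, workerCount) {
  const chunkSize = Math.ceil(items.length / workerCount);
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

chunkForWorkers([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 4);
// → [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Each chunk then goes to a different worker (for example via the pool's execute method), so no element is serialized twice.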

What are the limitations of Web Workers?

Workers cannot access the DOM, window object, document, or any UI APIs directly. They have their own global scope (DedicatedWorkerGlobalScope) with limited APIs: fetch, WebSocket, IndexedDB, Cache API, crypto, timers, and importScripts. Workers cannot share JavaScript objects or closures (only structured-cloneable data). Module workers support ES import/export. Workers add memory overhead (each worker has its own V8 isolate with its own heap).
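You can probe the structured-clone boundary directly with structuredClone, which uses the same algorithm as postMessage:

```javascript
// Plain data, Dates, Maps, Sets, and typed arrays survive the clone
const clone = structuredClone({ nums: [1, 2], when: new Date(0), tags: new Map([["a", 1]]) });

// Functions (and DOM nodes) do not
let errorName = null;
try {
  structuredClone({ handler: () => {} });
} catch (err) {
  errorName = err.name;
}
console.log(errorName); // "DataCloneError"
```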

How does SharedArrayBuffer relate to Spectre mitigations?

SharedArrayBuffer was temporarily disabled in browsers after the Spectre vulnerability disclosure in 2018, because shared memory makes it possible to build high-resolution timers that turn Spectre-style speculative reads into practical attacks. Browsers re-enabled it under Cross-Origin Isolation (COOP + COEP headers), which prevents cross-origin resources from sharing the page's process. You must serve your page with `Cross-Origin-Opener-Policy: same-origin` and `Cross-Origin-Embedder-Policy: require-corp` to use SharedArrayBuffer. This isolation keeps cross-origin data out of your page's address space unless the resource explicitly opts in, so a speculative read cannot reach anything sensitive.
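A minimal, framework-agnostic sketch of attaching those headers (the withIsolation wrapper is a hypothetical helper; res is any Node-style response object):

```javascript
const isolationHeaders = {
  "Cross-Origin-Opener-Policy": "same-origin",
  "Cross-Origin-Embedder-Policy": "require-corp",
};

// Wrap any (req, res) handler so every response carries the headers
function withIsolation(handler) {
  return (req, res) => {
    for (const [name, value] of Object.entries(isolationHeaders)) {
      res.setHeader(name, value);
    }
    return handler(req, res);
  };
}

// In the served page, feature-detect before touching SharedArrayBuffer:
// if (!crossOriginIsolated) {
//   console.warn("Not cross-origin isolated; SharedArrayBuffer is unavailable");
// }
```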

Conclusion

Web Workers unlock true parallelism in JavaScript through dedicated threads, transferable objects, and shared memory. Worker pools and RPC patterns make concurrent architectures practical and maintainable. For OffscreenCanvas rendering in workers, continue to OffscreenCanvas API in JS for UI Performance. For the event loop model that workers extend, see Understanding libuv and JS Asynchronous I/O.