How to Measure JavaScript Execution Time Accurately

A complete guide to measuring JavaScript execution time accurately. Covers performance.now(), Date.now() limitations, User Timing API, statistical benchmarking, microbenchmark pitfalls, JIT warmup, GC interference, and building a reliable benchmarking framework.

JavaScript · Advanced · 16 min read

Measuring JavaScript execution time seems straightforward, but getting accurate results requires understanding timer resolution, JIT compilation warmup, garbage collection interference, and statistical analysis. This guide builds reliable measurement techniques from timing basics to a full benchmarking framework.

For profiling tools and workflows, see JavaScript Profiling: Advanced Performance Guide.

Timer Comparison

// Date.now(): millisecond resolution, affected by clock drift
const d1 = Date.now();
doWork();
const d2 = Date.now();
console.log(`Date.now: ${d2 - d1}ms`);
// Problem: 0ms for fast operations (resolution is 1ms)
 
// performance.now(): microsecond resolution, monotonic
const p1 = performance.now();
doWork();
const p2 = performance.now();
console.log(`performance.now: ${(p2 - p1).toFixed(3)}ms`);
// Much better: shows sub-millisecond precision
 
// console.time/timeEnd: convenient but limited
console.time("work");
doWork();
console.timeEnd("work"); // Prints: work: 1.234ms
// Cannot capture the value programmatically
 
// process.hrtime.bigint() (Node.js): nanosecond resolution
const n1 = process.hrtime.bigint();
doWork();
const n2 = process.hrtime.bigint();
console.log(`hrtime: ${Number(n2 - n1) / 1e6}ms`);
| Timer | Resolution | Monotonic | Environment | Use Case |
| --- | --- | --- | --- | --- |
| `Date.now()` | 1ms | No (clock drift) | All | Timestamps, not profiling |
| `performance.now()` | ~5us (may be reduced) | Yes | Browser + Node.js | General profiling |
| `console.time()` | ~5us | Yes | All | Quick debugging |
| `process.hrtime.bigint()` | 1ns | Yes | Node.js only | High-precision benchmarks |
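
The intro also mentions the User Timing API: `performance.mark()` and `performance.measure()` give you named measurements that show up in DevTools and, unlike `console.time()`, can be read back programmatically. A minimal sketch (works in browsers and Node 16+, where the User Timing methods are on the global `performance`; `doWork` is a placeholder workload):

```javascript
// User Timing API: named marks and a measure between them,
// retrievable via performance.getEntriesByName().
function doWork() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += Math.sqrt(i);
  return sum;
}

performance.mark("work-start");
doWork();
performance.mark("work-end");
performance.measure("work", "work-start", "work-end");

const [measure] = performance.getEntriesByName("work");
console.log(`work took ${measure.duration.toFixed(3)}ms`);

// Clean up so repeated runs do not accumulate entries
performance.clearMarks();
performance.clearMeasures();
```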

Timer Precision Reduction

Browsers reduce performance.now() precision (typically to 100us) to mitigate Spectre/timing attacks:

// Check effective timer resolution
function measureTimerResolution() {
  const samples = [];
  let prev = performance.now();
 
  for (let i = 0; i < 1000; i++) {
    const now = performance.now();
    if (now !== prev) {
      samples.push(now - prev);
      prev = now;
    }
  }
 
  const minDelta = Math.min(...samples);
  const avgDelta = samples.reduce((a, b) => a + b, 0) / samples.length;
 
  return {
    effectiveResolution: `${(minDelta * 1000).toFixed(0)}us`,
    averageDelta: `${(avgDelta * 1000).toFixed(0)}us`,
    samples: samples.length,
  };
}
 
console.log(measureTimerResolution());
// Typically: { effectiveResolution: "100us", ... }
 
// Impact: operations under 100us cannot be timed individually
// Solution: run many iterations and divide
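
As the comment above suggests, an operation faster than the timer's resolution can still be timed by batching many calls and dividing. A minimal sketch (the `timePerCall` helper and its workload are illustrative):

```javascript
// Batch many calls so the total is well above timer resolution,
// then divide to get an average per-call time.
function timePerCall(fn, iterations = 1_000_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const totalMs = performance.now() - start;
  return totalMs / iterations; // average ms per call
}

let sink = 0; // keep the result live so the call is not optimized away
const perCallMs = timePerCall(() => { sink += Math.sqrt(sink + 1); });
console.log(`${(perCallMs * 1e6).toFixed(1)}ns per call`);
```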

JIT Warmup

V8 compiles JavaScript in tiers: code first runs in the Ignition interpreter (or a baseline compiler), and hot functions are later recompiled to optimized machine code by TurboFan:

function benchmarkWithWarmup(fn, iterations = 1000, warmupRuns = 100) {
  // WARMUP: let V8 optimize the function
  for (let i = 0; i < warmupRuns; i++) {
    fn();
  }
 
  // MEASURE: time the optimized version
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  const elapsed = performance.now() - start;
 
  return {
    totalMs: elapsed.toFixed(3),
    perCallUs: ((elapsed / iterations) * 1000).toFixed(3),
    iterations,
  };
}
 
// Without warmup: first runs are slow (interpreted/baseline compiled)
// With warmup: measures the optimized (TurboFan) version
 
const result = benchmarkWithWarmup(() => {
  const arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5];
  return arr.slice().sort((a, b) => a - b);
});
 
console.log(result);

Garbage Collection Interference

GC pauses can spike individual measurements:

function benchmarkWithGCControl(fn, iterations = 1000) {
  // Trigger GC before measuring (Node.js with --expose-gc only)
  if (typeof globalThis.gc === "function") {
    globalThis.gc();
  }
 
  const times = [];
 
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    fn();
    const elapsed = performance.now() - start;
    times.push(elapsed);
  }
 
  // Remove outliers (likely GC pauses)
  times.sort((a, b) => a - b);
  const trimCount = Math.floor(iterations * 0.05);
  const trimmed = times.slice(trimCount, times.length - trimCount);
 
  const mean = trimmed.reduce((a, b) => a + b, 0) / trimmed.length;
 
  return {
    meanMs: mean.toFixed(4),
    medianMs: trimmed[Math.floor(trimmed.length / 2)].toFixed(4),
    p95Ms: times[Math.floor(iterations * 0.95)].toFixed(4),
    p99Ms: times[Math.floor(iterations * 0.99)].toFixed(4),
    minMs: times[0].toFixed(4),
    maxMs: times[times.length - 1].toFixed(4),
    gcOutliers: trimCount * 2,
  };
}
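
To see why the trimming step helps, here is a deterministic sketch with synthetic samples and one injected 50ms "GC pause" (the `stats` helper is illustrative, not part of the framework above):

```javascript
// One injected 50ms outlier inflates the mean but barely moves
// the trimmed mean or the median.
function stats(times, trimFraction = 0.05) {
  const sorted = [...times].sort((a, b) => a - b);
  const trim = Math.floor(sorted.length * trimFraction);
  const trimmed = sorted.slice(trim, sorted.length - trim);
  const mean = (arr) => arr.reduce((a, b) => a + b, 0) / arr.length;
  return {
    mean: mean(sorted),
    trimmedMean: mean(trimmed),
    median: sorted[Math.floor(sorted.length / 2)],
  };
}

// 100 synthetic "normal" samples near 1ms, plus one simulated GC pause
const samples = Array.from({ length: 100 }, (_, i) => 1 + (i % 10) * 0.01);
samples.push(50);

const s = stats(samples);
console.log(s);
// mean is dragged up by the spike; trimmedMean and median are not
```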

Statistical Benchmarking

class Benchmark {
  constructor(name, fn, options = {}) {
    this.name = name;
    this.fn = fn;
    this.warmupRuns = options.warmup || 100;
    this.minIterations = options.minIterations || 100;
    this.minTimeMs = options.minTimeMs || 1000;
    this.maxIterations = options.maxIterations || 100000;
  }
 
  run() {
    // Warmup
    for (let i = 0; i < this.warmupRuns; i++) {
      this.fn();
    }
 
    // Calibrate: determine how many iterations to run
    const calibrateStart = performance.now();
    let calibrateCount = 0;
    while (performance.now() - calibrateStart < 100) {
      this.fn();
      calibrateCount++;
    }
    const calibrateTime = performance.now() - calibrateStart;
    const estimatedPerCall = calibrateTime / calibrateCount;
 
    // Calculate iterations to fill minTimeMs
    let iterations = Math.ceil(this.minTimeMs / estimatedPerCall);
    iterations = Math.max(iterations, this.minIterations);
    iterations = Math.min(iterations, this.maxIterations);
 
    // Collect samples (multiple rounds)
    const rounds = 10;
    const iterPerRound = Math.ceil(iterations / rounds);
    const roundTimes = [];
 
    for (let r = 0; r < rounds; r++) {
      const start = performance.now();
      for (let i = 0; i < iterPerRound; i++) {
        this.fn();
      }
      const elapsed = performance.now() - start;
      roundTimes.push(elapsed / iterPerRound);
    }
 
    return this.analyze(roundTimes);
  }
 
  analyze(roundTimes) {
    roundTimes.sort((a, b) => a - b);
 
    const mean = roundTimes.reduce((a, b) => a + b, 0) / roundTimes.length;
    const median = roundTimes[Math.floor(roundTimes.length / 2)];
 
    // Standard deviation
    const variance = roundTimes.reduce(
      (sum, t) => sum + Math.pow(t - mean, 2), 0
    ) / roundTimes.length;
    const stdDev = Math.sqrt(variance);
 
    // Margin of error (95% confidence)
    const marginOfError = (1.96 * stdDev) / Math.sqrt(roundTimes.length);
 
    // Operations per second
    const opsPerSec = 1000 / mean;
 
    return {
      name: this.name,
      meanMs: mean.toFixed(4),
      medianMs: median.toFixed(4),
      stdDevMs: stdDev.toFixed(4),
      marginOfError: marginOfError.toFixed(4),
      opsPerSec: opsPerSec.toFixed(0),
      // Relative margin of error (±%), not a formal confidence level
      relativeMoE: `±${((marginOfError / mean) * 100).toFixed(1)}%`,
    };
  }
}
 
// Usage
const bench = new Benchmark("array-sort", () => {
  const arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9];
  return arr.slice().sort((a, b) => a - b);
});
 
console.table([bench.run()]);

Benchmark Suite for Comparisons

class BenchmarkSuite {
  constructor(name) {
    this.name = name;
    this.benchmarks = [];
  }
 
  add(name, fn, options = {}) {
    this.benchmarks.push(new Benchmark(name, fn, options));
    return this;
  }
 
  run() {
    console.log(`\nBenchmark Suite: ${this.name}`);
    console.log("=".repeat(60));
 
    const results = this.benchmarks.map((b) => b.run());
 
    // Sort by ops/sec descending
    results.sort((a, b) => parseInt(b.opsPerSec) - parseInt(a.opsPerSec));
 
    // Add relative speed
    const fastest = parseInt(results[0].opsPerSec);
    results.forEach((r) => {
      r.relative = `${((parseInt(r.opsPerSec) / fastest) * 100).toFixed(1)}%`;
    });
 
    console.table(results);
    return results;
  }
}
 
// Usage: compare different approaches
const suite = new BenchmarkSuite("String Concatenation");
 
suite
  .add("template-literal", () => {
    const name = "Alice";
    const age = 30;
    return `Hello ${name}, you are ${age} years old`;
  })
  .add("plus-operator", () => {
    const name = "Alice";
    const age = 30;
    return "Hello " + name + ", you are " + age + " years old";
  })
  .add("array-join", () => {
    const name = "Alice";
    const age = 30;
    return ["Hello ", name, ", you are ", age, " years old"].join("");
  })
  .add("concat-method", () => {
    const name = "Alice";
    const age = 30;
    return "Hello ".concat(name, ", you are ", age, " years old");
  });
 
suite.run();

Common Microbenchmark Pitfalls

// PITFALL 1: Dead code elimination
// The engine may optimize away code whose result is unused
function badBenchmark() {
  const start = performance.now();
  for (let i = 0; i < 1000000; i++) {
    Math.sqrt(i); // Result unused -> engine may skip this entirely
  }
  return performance.now() - start; // Measures nothing
}
 
// FIX: Use the result
function goodBenchmark() {
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < 1000000; i++) {
    sum += Math.sqrt(i); // Result used -> cannot be eliminated
  }
  const elapsed = performance.now() - start;
  if (sum < 0) console.log(sum); // Prevent further optimization
  return elapsed;
}
 
// PITFALL 2: Constant folding
// The engine may precompute constant expressions
function badConstant() {
  const start = performance.now();
  for (let i = 0; i < 1000; i++) {
    const result = 42 * 13 + 7; // Computed once at compile time
  }
  return performance.now() - start;
}
 
// FIX: Use dynamic inputs
function goodConstant() {
  const start = performance.now();
  let result = 0;
  for (let i = 0; i < 1000; i++) {
    result += i * 13 + 7; // Different every iteration
  }
  return { elapsed: performance.now() - start, result };
}
 
// PITFALL 3: Measuring the wrong thing
// Array creation dominates over the operation you want to measure
function badSort() {
  const start = performance.now();
  for (let i = 0; i < 1000; i++) {
    const arr = Array.from({ length: 1000 }, () => Math.random());
    arr.sort((a, b) => a - b);
  }
  return performance.now() - start; // Includes array creation time!
}
 
// FIX: Separate setup from measurement
function goodSort() {
  const arrays = Array.from({ length: 1000 }, () =>
    Array.from({ length: 1000 }, () => Math.random())
  );
 
  const start = performance.now();
  for (const arr of arrays) {
    arr.sort((a, b) => a - b);
  }
  return performance.now() - start; // Measures only sort
}
| Pitfall | Symptom | Fix |
| --- | --- | --- |
| Dead code elimination | Suspiciously fast results | Use computed results |
| Constant folding | All iterations take same time | Use variable inputs |
| Measuring setup + work | Inconsistent with expectations | Separate setup from measurement |
| No warmup | First iterations much slower | Run warmup iterations before timing |
| GC spikes | Random slow iterations | Trim outliers, use median |
| Timer resolution | 0ms results for fast operations | Increase iteration count |

Key Insights

  • performance.now() over Date.now(): Use the high-resolution monotonic timer for profiling; Date.now() lacks precision and can drift with system clock adjustments
  • JIT warmup is mandatory: Run 100+ warmup iterations before measuring to ensure V8 has compiled the function with TurboFan optimizations
  • Statistical analysis over single measurements: Collect multiple rounds, compute mean/median/stdDev, and report confidence intervals to distinguish real differences from noise
  • Prevent dead code elimination: Always use computed results so the engine cannot optimize away the code you are trying to measure
  • Benchmark suites for fair comparison: Run competing implementations under identical conditions with calibrated iteration counts and report relative ops/sec

Frequently Asked Questions

Why does performance.now() have reduced precision in browsers?

Browsers reduce `performance.now()` precision (typically to 100 microseconds) as a mitigation against Spectre-class timing attacks, which exploit high-resolution timers to infer data from CPU cache states. In cross-origin isolated contexts (with `Cross-Origin-Opener-Policy` and `Cross-Origin-Embedder-Policy` headers), browsers may restore full precision.
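
A quick runtime check for that state (a sketch; `crossOriginIsolated` is a browser global, so in Node the fallback branch applies):

```javascript
// crossOriginIsolated is true only when the page is served with
// COOP/COEP headers; in that state browsers may restore finer timers.
const isolated =
  typeof crossOriginIsolated !== "undefined" ? crossOriginIsolated : false;

console.log(
  isolated
    ? "Cross-origin isolated: finer timer resolution may be available"
    : "Not isolated: expect coarsened performance.now() in browsers"
);
```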

How many iterations do I need for reliable results?

Enough that the total measurement time exceeds at least 1 second. For fast operations (nanoseconds each), that could mean millions of iterations. For slow operations (milliseconds each), hundreds may suffice. Use the calibration technique where you first estimate per-call time, then calculate iterations to fill your target duration.
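
The calibration idea can be sketched as a standalone helper (the name `calibrateIterations` and the pilot-window size are illustrative):

```javascript
// Estimate per-call cost in a short pilot run, then size the main
// run so it fills the target measurement window.
function calibrateIterations(fn, targetMs = 1000, pilotMs = 100) {
  const start = performance.now();
  let calls = 0;
  while (performance.now() - start < pilotMs) {
    fn();
    calls++;
  }
  const perCallMs = (performance.now() - start) / calls;
  return Math.max(1, Math.ceil(targetMs / perCallMs));
}

let acc = 0; // keep the workload's result live
const iterations = calibrateIterations(() => { acc += Math.sqrt(acc + 1); });
console.log(`Run ${iterations} iterations to fill ~1s`);
```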

Should I benchmark in the browser or in Node.js?

It depends on where the code runs. Browser benchmarks include DOM overhead and browser-specific optimizations. Node.js benchmarks are cleaner (no DOM, less noise) and have access to `process.hrtime.bigint()` for nanosecond precision. For library code, benchmark in both. For browser-specific code, benchmark in the browser with CPU throttling.
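
For code that must run in both environments, a portable clock can prefer `process.hrtime.bigint()` when it exists and fall back to `performance.now()` elsewhere. A sketch (the `nowNs` name is illustrative; the fallback only has `performance.now()`'s resolution, scaled to nanoseconds):

```javascript
// Portable nanosecond-ish clock: hrtime in Node, performance.now elsewhere.
const nowNs =
  typeof process !== "undefined" && process.hrtime?.bigint
    ? () => process.hrtime.bigint()
    : () => BigInt(Math.round(performance.now() * 1e6));

const t1 = nowNs();
let sum = 0;
for (let i = 0; i < 100000; i++) sum += i;
const t2 = nowNs();

console.log(`elapsed: ${Number(t2 - t1) / 1e6}ms (sum=${sum})`);
```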

How do I prevent V8 from optimizing away my benchmark?

V8 eliminates code with unused results. Always accumulate results into a variable and use that variable after the loop (even a fake `if (sum < 0) console.log(sum)` is enough). Return the result from your benchmark function. Never benchmark in a way where all inputs and outputs are constant.
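
One common variant of this is a "sink" (sometimes called a black hole): an outer-scope variable the result escapes into, so the engine cannot prove it unused. A sketch (`blackhole` and `benchSqrt` are illustrative names):

```javascript
// Keep results observable: the accumulated value escapes the function,
// so the loop body cannot be eliminated as dead code.
let blackhole = 0;

function benchSqrt(iterations = 1_000_000) {
  const start = performance.now();
  let acc = 0;
  for (let i = 0; i < iterations; i++) {
    acc += Math.sqrt(i); // accumulated, not discarded
  }
  const elapsed = performance.now() - start;
  blackhole += acc; // escape: the result outlives the function
  return elapsed;
}

const elapsed = benchSqrt();
console.log(`sqrt loop: ${elapsed.toFixed(3)}ms`);
```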

What is the best way to compare two implementations?

Run both in the same `BenchmarkSuite`, using the same input data and iteration count. Report ops/sec with confidence intervals. Run the suite multiple times and check that results are consistent. A difference under 5% is generally noise; only differences over 10-20% with high confidence are meaningful.
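
Those thresholds can be encoded directly. A sketch (the `compareOps` helper and its cutoffs follow the rule of thumb above and are illustrative, not a statistical test):

```javascript
// Classify a difference in ops/sec: <5% noise, 5-10% uncertain,
// above that meaningful (assuming consistent repeated runs).
function compareOps(aOpsPerSec, bOpsPerSec) {
  const faster = Math.max(aOpsPerSec, bOpsPerSec);
  const slower = Math.min(aOpsPerSec, bOpsPerSec);
  const diffPct = ((faster - slower) / slower) * 100;
  if (diffPct < 5) return { diffPct, verdict: "noise" };
  if (diffPct < 10) return { diffPct, verdict: "uncertain" };
  return { diffPct, verdict: "meaningful" };
}

console.log(compareOps(1_000_000, 980_000)); // ~2% -> noise
console.log(compareOps(1_000_000, 800_000)); // 25% -> meaningful
```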

Conclusion

Accurate JavaScript timing requires performance.now() for sub-millisecond precision, JIT warmup to measure optimized code, outlier trimming to remove GC spikes, and statistical analysis with confidence intervals. Avoid microbenchmark pitfalls like dead code elimination and constant folding. Use a benchmark suite for fair comparisons with ops/sec and relative percentages. For profiling tools, see JavaScript Profiling: Advanced Performance Guide. For DevTools analysis, see Using Chrome DevTools for JS Performance Tuning.