How to Measure JavaScript Execution Time Accurately
A complete guide to measuring JavaScript execution time accurately. Covers performance.now(), Date.now() limitations, User Timing API, statistical benchmarking, microbenchmark pitfalls, JIT warmup, GC interference, and building a reliable benchmarking framework.
Measuring JavaScript execution time seems straightforward, but getting accurate results requires understanding timer resolution, JIT compilation warmup, garbage collection interference, and statistical analysis. This guide builds reliable measurement techniques from timing basics to a full benchmarking framework.
For profiling tools and workflows, see JavaScript Profiling: Advanced Performance Guide.
Timer Comparison
// Date.now(): millisecond resolution, affected by clock drift
const d1 = Date.now();
doWork();
const d2 = Date.now();
console.log(`Date.now: ${d2 - d1}ms`);
// Problem: 0ms for fast operations (resolution is 1ms)
// performance.now(): microsecond resolution, monotonic
const p1 = performance.now();
doWork();
const p2 = performance.now();
console.log(`performance.now: ${(p2 - p1).toFixed(3)}ms`);
// Much better: shows sub-millisecond precision
// console.time/timeEnd: convenient but limited
console.time("work");
doWork();
console.timeEnd("work"); // Prints: work: 1.234ms
// Cannot capture the value programmatically
// process.hrtime.bigint() (Node.js): nanosecond resolution
const n1 = process.hrtime.bigint();
doWork();
const n2 = process.hrtime.bigint();
console.log(`hrtime: ${Number(n2 - n1) / 1e6}ms`);

| Timer | Resolution | Monotonic | Environment | Use Case |
|---|---|---|---|---|
| Date.now() | 1ms | No (clock drift) | All | Timestamps, not profiling |
| performance.now() | ~5us (may be reduced) | Yes | Browser + Node.js | General profiling |
| console.time() | ~5us | Yes | All | Quick debugging |
| process.hrtime.bigint() | 1ns | Yes | Node.js only | High-precision benchmarks |
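The environment differences in the table can be smoothed over with a small helper that prefers the finest timer available. This is a sketch; `nowNs` is an illustrative name, not a standard API:

```javascript
// Sketch: return a nanosecond timestamp using the best timer available.
// nowNs is an illustrative helper, not a standard API.
function nowNs() {
  // Node.js: process.hrtime.bigint() gives true nanosecond resolution
  if (typeof process !== "undefined" && typeof process.hrtime?.bigint === "function") {
    return process.hrtime.bigint();
  }
  // Browsers: fall back to performance.now() (fractional milliseconds)
  return BigInt(Math.round(performance.now() * 1e6));
}

const t1 = nowNs();
let sum = 0;
for (let i = 0; i < 100000; i++) sum += Math.sqrt(i);
const t2 = nowNs();
console.log(`elapsed: ${(Number(t2 - t1) / 1e6).toFixed(3)}ms`);
```

Because both branches return a BigInt of nanoseconds, downstream code can subtract timestamps without caring which environment it runs in.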
Timer Precision Reduction
Browsers reduce performance.now() precision (typically to 100us) to mitigate Spectre/timing attacks:
// Check effective timer resolution
function measureTimerResolution() {
const samples = [];
let prev = performance.now();
for (let i = 0; i < 1000; i++) {
const now = performance.now();
if (now !== prev) {
samples.push(now - prev);
prev = now;
}
}
const minDelta = Math.min(...samples);
const avgDelta = samples.reduce((a, b) => a + b, 0) / samples.length;
return {
effectiveResolution: `${(minDelta * 1000).toFixed(0)}us`,
averageDelta: `${(avgDelta * 1000).toFixed(0)}us`,
samples: samples.length,
};
}
console.log(measureTimerResolution());
// Typically: { effectiveResolution: "100us", ... }
// Impact: operations under 100us cannot be timed individually
// Solution: run many iterations and divide

JIT Warmup
V8 compiles JavaScript in tiers: the first runs execute in the Ignition interpreter, and functions that become hot are recompiled into optimized machine code by TurboFan:
function benchmarkWithWarmup(fn, iterations = 1000, warmupRuns = 100) {
// WARMUP: let V8 optimize the function
for (let i = 0; i < warmupRuns; i++) {
fn();
}
// MEASURE: time the optimized version
const start = performance.now();
for (let i = 0; i < iterations; i++) {
fn();
}
const elapsed = performance.now() - start;
return {
totalMs: elapsed.toFixed(3),
perCallUs: ((elapsed / iterations) * 1000).toFixed(3),
iterations,
};
}
// Without warmup: first runs are slow (interpreted/baseline compiled)
// With warmup: measures the optimized (TurboFan) version
const result = benchmarkWithWarmup(() => {
const arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5];
return arr.slice().sort((a, b) => a - b);
});
console.log(result);

Garbage Collection Interference
GC pauses can spike individual measurements:
function benchmarkWithGCControl(fn, iterations = 1000) {
// Trigger GC before measuring (Node.js with --expose-gc only)
if (typeof globalThis.gc === "function") {
globalThis.gc();
}
const times = [];
for (let i = 0; i < iterations; i++) {
const start = performance.now();
fn();
const elapsed = performance.now() - start;
times.push(elapsed);
}
// Remove outliers (likely GC pauses)
times.sort((a, b) => a - b);
const trimCount = Math.floor(iterations * 0.05);
const trimmed = times.slice(trimCount, times.length - trimCount);
const mean = trimmed.reduce((a, b) => a + b, 0) / trimmed.length;
return {
meanMs: mean.toFixed(4),
medianMs: trimmed[Math.floor(trimmed.length / 2)].toFixed(4),
p95Ms: times[Math.floor(iterations * 0.95)].toFixed(4),
p99Ms: times[Math.floor(iterations * 0.99)].toFixed(4),
minMs: times[0].toFixed(4),
maxMs: times[times.length - 1].toFixed(4),
gcOutliers: trimCount * 2,
};
}

Statistical Benchmarking
class Benchmark {
constructor(name, fn, options = {}) {
this.name = name;
this.fn = fn;
this.warmupRuns = options.warmup || 100;
this.minIterations = options.minIterations || 100;
this.minTimeMs = options.minTimeMs || 1000;
this.maxIterations = options.maxIterations || 100000;
}
run() {
// Warmup
for (let i = 0; i < this.warmupRuns; i++) {
this.fn();
}
// Calibrate: determine how many iterations to run
const calibrateStart = performance.now();
let calibrateCount = 0;
while (performance.now() - calibrateStart < 100) {
this.fn();
calibrateCount++;
}
const calibrateTime = performance.now() - calibrateStart;
const estimatedPerCall = calibrateTime / calibrateCount;
// Calculate iterations to fill minTimeMs
let iterations = Math.ceil(this.minTimeMs / estimatedPerCall);
iterations = Math.max(iterations, this.minIterations);
iterations = Math.min(iterations, this.maxIterations);
// Collect samples (multiple rounds)
const rounds = 10;
const iterPerRound = Math.ceil(iterations / rounds);
const roundTimes = [];
for (let r = 0; r < rounds; r++) {
const start = performance.now();
for (let i = 0; i < iterPerRound; i++) {
this.fn();
}
const elapsed = performance.now() - start;
roundTimes.push(elapsed / iterPerRound);
}
return this.analyze(roundTimes);
}
analyze(roundTimes) {
roundTimes.sort((a, b) => a - b);
const mean = roundTimes.reduce((a, b) => a + b, 0) / roundTimes.length;
const median = roundTimes[Math.floor(roundTimes.length / 2)];
// Standard deviation
const variance = roundTimes.reduce(
(sum, t) => sum + Math.pow(t - mean, 2), 0
) / roundTimes.length;
const stdDev = Math.sqrt(variance);
// Margin of error (95% confidence, normal approximation)
const marginOfError = (1.96 * stdDev) / Math.sqrt(roundTimes.length);
// Operations per second
const opsPerSec = 1000 / mean;
return {
name: this.name,
meanMs: mean.toFixed(4),
medianMs: median.toFixed(4),
stdDevMs: stdDev.toFixed(4),
marginOfError: marginOfError.toFixed(4),
opsPerSec: opsPerSec.toFixed(0),
relativeError: `±${((marginOfError / mean) * 100).toFixed(1)}%`,
};
}
}
// Usage
const bench = new Benchmark("array-sort", () => {
const arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9];
return arr.slice().sort((a, b) => a - b);
});
console.table([bench.run()]);

Benchmark Suite for Comparisons
class BenchmarkSuite {
constructor(name) {
this.name = name;
this.benchmarks = [];
}
add(name, fn, options = {}) {
this.benchmarks.push(new Benchmark(name, fn, options));
return this;
}
run() {
console.log(`\nBenchmark Suite: ${this.name}`);
console.log("=".repeat(60));
const results = this.benchmarks.map((b) => b.run());
// Sort by ops/sec descending
results.sort((a, b) => parseInt(b.opsPerSec) - parseInt(a.opsPerSec));
// Add relative speed
const fastest = parseInt(results[0].opsPerSec);
results.forEach((r) => {
r.relative = `${((parseInt(r.opsPerSec) / fastest) * 100).toFixed(1)}%`;
});
console.table(results);
return results;
}
}
// Usage: compare different approaches
const suite = new BenchmarkSuite("String Concatenation");
suite
.add("template-literal", () => {
const name = "Alice";
const age = 30;
return `Hello ${name}, you are ${age} years old`;
})
.add("plus-operator", () => {
const name = "Alice";
const age = 30;
return "Hello " + name + ", you are " + age + " years old";
})
.add("array-join", () => {
const name = "Alice";
const age = 30;
return ["Hello ", name, ", you are ", age, " years old"].join("");
})
.add("concat-method", () => {
const name = "Alice";
const age = 30;
return "Hello ".concat(name, ", you are ", age, " years old");
});
suite.run();

Common Microbenchmark Pitfalls
// PITFALL 1: Dead code elimination
// The engine may optimize away code whose result is unused
function badBenchmark() {
const start = performance.now();
for (let i = 0; i < 1000000; i++) {
Math.sqrt(i); // Result unused -> engine may skip this entirely
}
return performance.now() - start; // Measures nothing
}
// FIX: Use the result
function goodBenchmark() {
const start = performance.now();
let sum = 0;
for (let i = 0; i < 1000000; i++) {
sum += Math.sqrt(i); // Result used -> cannot be eliminated
}
const elapsed = performance.now() - start;
if (sum < 0) console.log(sum); // Prevent further optimization
return elapsed;
}
// PITFALL 2: Constant folding
// The engine may precompute constant expressions
function badConstant() {
const start = performance.now();
for (let i = 0; i < 1000; i++) {
const result = 42 * 13 + 7; // Computed once at compile time
}
return performance.now() - start;
}
// FIX: Use dynamic inputs
function goodConstant() {
const start = performance.now();
let result = 0;
for (let i = 0; i < 1000; i++) {
result += i * 13 + 7; // Different every iteration
}
return { elapsed: performance.now() - start, result };
}
// PITFALL 3: Measuring the wrong thing
// Array creation dominates over the operation you want to measure
function badSort() {
const start = performance.now();
for (let i = 0; i < 1000; i++) {
const arr = Array.from({ length: 1000 }, () => Math.random());
arr.sort((a, b) => a - b);
}
return performance.now() - start; // Includes array creation time!
}
// FIX: Separate setup from measurement
function goodSort() {
const arrays = Array.from({ length: 1000 }, () =>
Array.from({ length: 1000 }, () => Math.random())
);
const start = performance.now();
for (const arr of arrays) {
arr.sort((a, b) => a - b);
}
return performance.now() - start; // Measures only sort
}

| Pitfall | Symptom | Fix |
|---|---|---|
| Dead code elimination | Suspiciously fast results | Use computed results |
| Constant folding | All iterations take same time | Use variable inputs |
| Measuring setup + work | Inconsistent with expectations | Separate setup from measurement |
| No warmup | First iterations much slower | Run warmup iterations before timing |
| GC spikes | Random slow iterations | Trim outliers, use median |
| Timer resolution | 0ms results for fast operations | Increase iteration count |
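The last fix in the table — amplifying operations too fast to time individually — can be made concrete with a minimal iterate-and-divide sketch. `timeFastOp` and its arguments are illustrative names, not part of any library:

```javascript
// Sketch: amplify an operation that is faster than the timer's
// resolution by looping many times and dividing the total time.
function timeFastOp(fastOp, iterations = 100000) {
  let sink = 0; // consume results so the loop cannot be eliminated
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    sink += fastOp(i);
  }
  const totalMs = performance.now() - start;
  if (sink === Infinity) console.log(sink); // keep sink observable
  return (totalMs / iterations) * 1000; // microseconds per call
}

const usPerCall = timeFastOp((i) => Math.sqrt(i + 1));
console.log(`~${usPerCall.toFixed(4)}us per call`);
```

Note that the sketch also applies the dead-code-elimination fix from Pitfall 1: the `sink` accumulator keeps the loop body live.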
Key Insights
- performance.now() over Date.now(): Use the high-resolution monotonic timer for profiling; Date.now() lacks precision and can drift with system clock adjustments
- JIT warmup is mandatory: Run 100+ warmup iterations before measuring to ensure V8 has compiled the function with TurboFan optimizations
- Statistical analysis over single measurements: Collect multiple rounds, compute mean/median/stdDev, and report confidence intervals to distinguish real differences from noise
- Prevent dead code elimination: Always use computed results so the engine cannot optimize away the code you are trying to measure
- Benchmark suites for fair comparison: Run competing implementations under identical conditions with calibrated iteration counts and report relative ops/sec
Frequently Asked Questions
Why does performance.now() have reduced precision in browsers?
How many iterations do I need for reliable results?
Should I benchmark in the browser or in Node.js?
How do I prevent V8 from optimizing away my benchmark?
What is the best way to compare two implementations?
Conclusion
Accurate JavaScript timing requires performance.now() for sub-millisecond precision, JIT warmup to measure optimized code, outlier trimming to remove GC spikes, and statistical analysis with confidence intervals. Avoid microbenchmark pitfalls like dead code elimination and constant folding. Use a benchmark suite for fair comparisons with ops/sec and relative percentages. For profiling tools, see JavaScript Profiling: Advanced Performance Guide. For DevTools analysis, see Using Chrome DevTools for JS Performance Tuning.