Ignition Interpreter and JS Bytecode Tutorial
Master V8's Ignition interpreter and its bytecode execution model. Covers the dispatch loop, handler table, register file layout, stack frame structure, exception handling, generator support, and how Ignition feeds data to TurboFan for optimization.
Ignition is V8's bytecode interpreter. It compiles JavaScript to compact bytecodes and executes them through a dispatch loop, collecting runtime type information that feeds into TurboFan's optimization pipeline. This guide covers how Ignition works internally.
For the bytecode format itself, see JavaScript Bytecode Explained: Complete Guide.
The Dispatch Loop
Ignition executes bytecode through a dispatch loop. For each instruction, it reads the opcode, jumps to the handler for that opcode, executes it, and advances to the next instruction.
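As a minimal sketch of table-based dispatch (hypothetical opcodes, and stack-based for brevity, whereas Ignition itself is register-based), a handler table indexed by opcode can drive the loop without a `switch`:

```javascript
// Hypothetical 3-opcode machine: a handler table maps each opcode to a
// function, mirroring how Ignition jumps through its dispatch table.
const OP_PUSH = 0, OP_ADD = 1, OP_HALT = 2;

function run(bytecode) {
  const stack = [];
  let pc = 0;
  let halted = false;
  const handlers = [
    () => { stack.push(bytecode[++pc]); pc++; },            // OP_PUSH imm
    () => { stack.push(stack.pop() + stack.pop()); pc++; }, // OP_ADD
    () => { halted = true; },                               // OP_HALT
  ];
  while (!halted) handlers[bytecode[pc]](); // indexed dispatch, no switch
  return stack.pop();
}

// 2 + 3
const result = run([OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT]); // 5
```

The fuller register-and-accumulator model below adds the pieces this sketch omits: a register file, an accumulator, a constant pool, and feedback recording.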
// Simplified model of Ignition's dispatch loop
class IgnitionInterpreter {
  #bytecode;
  #pc = 0; // Program counter
  #accumulator = undefined;
  #registers = [];
  #constantPool = [];
  #feedbackVector = [];

  constructor(bytecodeArray, constantPool, registerCount) {
    this.#bytecode = bytecodeArray;
    this.#constantPool = constantPool;
    this.#registers = new Array(registerCount).fill(undefined);
  }

  run() {
    while (this.#pc < this.#bytecode.length) {
      const opcode = this.#bytecode[this.#pc];
      this.#dispatch(opcode);
    }
    return this.#accumulator;
  }

  #dispatch(opcode) {
    // Each opcode has a handler
    // Real V8 uses computed goto or tail calls for speed
    switch (opcode) {
      case 0x01: this.#handleLdaSmi(); break;
      case 0x02: this.#handleStar(); break;
      case 0x03: this.#handleLdar(); break;
      case 0x04: this.#handleAdd(); break;
      case 0x05: this.#handleReturn(); break;
      case 0x06: this.#handleJumpIfFalse(); break;
      case 0x07: this.#handleLdaConstant(); break;
      default: throw new Error(`Unknown opcode: ${opcode}`);
    }
  }

  #handleLdaSmi() {
    // LdaSmi [immediate]: Load small integer into accumulator
    this.#pc++;
    this.#accumulator = this.#bytecode[this.#pc];
    this.#pc++;
  }

  #handleStar() {
    // Star rN: Store accumulator to register N
    this.#pc++;
    const regIndex = this.#bytecode[this.#pc];
    this.#registers[regIndex] = this.#accumulator;
    this.#pc++;
  }

  #handleLdar() {
    // Ldar rN: Load register N into accumulator
    this.#pc++;
    const regIndex = this.#bytecode[this.#pc];
    this.#accumulator = this.#registers[regIndex];
    this.#pc++;
  }

  #handleAdd() {
    // Add rN, [slot]: accumulator = register[N] + accumulator
    this.#pc++;
    const regIndex = this.#bytecode[this.#pc];
    this.#pc++;
    const feedbackSlot = this.#bytecode[this.#pc];
    this.#pc++;
    const left = this.#registers[regIndex];
    const right = this.#accumulator;
    // Record type feedback
    this.#recordFeedback(feedbackSlot, typeof left, typeof right);
    this.#accumulator = left + right;
  }

  #handleReturn() {
    // Return: stop execution, accumulator holds the result
    this.#pc = this.#bytecode.length; // End the loop
  }

  #handleJumpIfFalse() {
    this.#pc++;
    const offset = this.#bytecode[this.#pc];
    this.#pc++;
    if (!this.#accumulator) {
      this.#pc += offset;
    }
  }

  #handleLdaConstant() {
    this.#pc++;
    const index = this.#bytecode[this.#pc];
    this.#accumulator = this.#constantPool[index];
    this.#pc++;
  }

  #recordFeedback(slot, ...types) {
    if (!this.#feedbackVector[slot]) {
      this.#feedbackVector[slot] = new Set();
    }
    types.forEach((t) => this.#feedbackVector[slot].add(t));
  }
}

Register File Layout
// Ignition's register file is a contiguous block on the stack
// Registers are accessed by index relative to the frame pointer
// STACK FRAME LAYOUT for a function call:
//
// ┌────────────────────────────┐ High address
// │ Return address │
// │ Caller's frame pointer │
// ├────────────────────────────┤ <- Frame pointer (fp)
// │ Bytecode array pointer │ fp - 8
// │ Feedback vector pointer │ fp - 16
// │ Context pointer │ fp - 24
// │ Parameter count │ fp - 32
// ├────────────────────────────┤
// │ Register r0 │ fp - 40
// │ Register r1 │ fp - 48
// │ Register r2 │ fp - 56
// │ ... │
// └────────────────────────────┘ Low address (stack grows down)
//
// Parameters (a0, a1...) are above the frame pointer
// Registers (r0, r1...) are below the frame pointer
// Example: function with 3 locals and 2 parameters
function example(x, y) { // a0=x, a1=y
  const a = x + 1; // r0
  const b = y * 2; // r1
  const c = a + b; // r2
  return c;
}
// Register allocation:
// a0 (parameter x) = fp + 16
// a1 (parameter y) = fp + 24
// r0 (local a) = fp - 40
// r1 (local b) = fp - 48
// r2 (local c) = fp - 56
// V8 uses a special "register allocator" during bytecode generation
// that maps source-level variables to register indices
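As a minimal sketch of that mapping (names and API are illustrative, not V8's actual allocator, which also recycles temporaries as scopes close), each newly seen local simply receives the next free register index:

```javascript
// Toy register allocator: hands out r0, r1, ... to locals in the order
// they first appear, the way bytecode generation assigns register indices.
class RegisterAllocator {
  #next = 0;
  #map = new Map();
  registerFor(name) {
    if (!this.#map.has(name)) this.#map.set(name, this.#next++);
    return this.#map.get(name);
  }
}

const alloc = new RegisterAllocator();
const rA = alloc.registerFor("a");      // 0 -> r0
const rB = alloc.registerFor("b");      // 1 -> r1
const rAAgain = alloc.registerFor("a"); // still 0: same variable, same slot
```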
// ACCUMULATOR OPTIMIZATION
// The accumulator is a dedicated CPU register (not on the stack)
// Most bytecodes implicitly use the accumulator, saving
// explicit register operands and reducing bytecode size
// Without accumulator (hypothetical):
// Add r0, r1, r2 // r2 = r0 + r1 (3 operands = more bytes)
// With accumulator (actual Ignition):
// Ldar r0 // acc = r0
// Add r1, [slot] // acc = acc + r1 (2 operands = fewer bytes)
// Star r2 // r2 = acc
// The accumulator pattern reduces bytecode size by ~25%

Exception Handling
// Ignition implements try-catch-finally through handler tables
function safeParse(json) {
  try {
    const result = JSON.parse(json);
    return { ok: true, data: result };
  } catch (error) {
    return { ok: false, error: error.message };
  } finally {
    console.log("parse attempt complete");
  }
}
// HANDLER TABLE for safeParse:
//
// Range [start, end) -> handler_offset, context, prediction
// [10, 25) -> 30 (catch handler), ctx2, CAUGHT
// [10, 40) -> 45 (finally handler), ctx2, UNCAUGHT
//
// When an exception occurs:
// 1. V8 checks the handler table for the current bytecode offset
// 2. If offset falls in [start, end), jump to handler_offset
// 3. The exception object is placed in the accumulator
// 4. The catch block receives it as a local variable
// Bytecode (simplified):
// 10: LdaGlobal "JSON" // try block starts
// 13: LdaNamedProperty <acc>, "parse"
// 16: CallProperty1 <acc>, a0 // JSON.parse(json)
// 20: Star0 // result = ...
// ... // Build return object
// 25: Return // Normal exit
//
// 30: Star1 // catch: r1 = error
// 32: LdaNamedProperty r1, "message" // error.message
// ... // Build error object
// 38: Return
//
// 45: LdaGlobal "console" // finally block
// 48: CallProperty1 <acc>, "log", <msg>
// 52: ReThrow // Re-throw if uncaught
// The handler table enables efficient try-catch without
// runtime overhead when no exception occurs
// Cost: only a table entry, no extra bytecodes in the happy path
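The lookup step itself is simple enough to sketch. Here the entries and offsets are made up to match the example above; ranges are half-open `[start, end)` bytecode offsets, ordered innermost-first:

```javascript
// Sketch of the handler-table lookup performed when a throw occurs.
const handlerTable = [
  { start: 10, end: 25, handler: 30, prediction: "CAUGHT" },   // catch
  { start: 10, end: 40, handler: 45, prediction: "UNCAUGHT" }, // finally
];

function findHandler(table, throwOffset) {
  // First matching entry wins; inner handlers precede outer ones
  return table.find((e) => throwOffset >= e.start && throwOffset < e.end) ?? null;
}

const hit = findHandler(handlerTable, 16);  // throw inside the try block
const miss = findHandler(handlerTable, 50); // throw outside any range -> null
```

Because the table is only consulted on a throw, straight-line execution pays nothing for it.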
// NESTED TRY-CATCH
function nested() {
  try {
    try {
      riskyOp1();
    } catch (e1) {
      riskyOp2(); // This can also throw
    }
  } catch (e2) {
    handleError(e2);
  }
}
// Handler table has entries for both levels:
// riskyOp1's range -> inner catch handler
// riskyOp2's range -> outer catch handler
// Exceptions bubble outward through the handler table

Generator and Async Support
// Ignition handles generators with SuspendGenerator and ResumeGenerator
function* counter(start) {
  let current = start;
  while (true) {
    yield current;
    current++;
  }
}
// Bytecode for generator body (simplified):
//
// RESUME POINT 0 (initial):
// Ldar a0 // Load 'start' parameter
// Star0 // r0 = current = start
//
// LOOP:
// Ldar r0 // Load current
// SuspendGenerator r<gen>, [0], 1 // Yield current; save state
// // At this point, execution pauses
// // The generator's registers are saved to the heap
// // Control returns to the caller
//
// RESUME POINT 1 (after yield):
// ResumeGenerator r<gen> // Restore registers from heap
// Ldar r0 // Load current
// Inc [1] // current++
// Star0 // Store back
// Jump [LOOP] // Loop back
// GENERATOR STATE MACHINE
// Each yield point creates a resume point
// SuspendGenerator saves: registers, accumulator, bytecode offset
// ResumeGenerator restores them and continues from the saved offset
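The save/restore round trip can be modeled as copying the live interpreter state to a plain heap object and back (field names here are illustrative, not V8's actual generator object layout):

```javascript
// Model of SuspendGenerator / ResumeGenerator: suspension snapshots the
// register file, accumulator, and bytecode offset; resumption restores them.
function suspend(frame) {
  return {
    registers: [...frame.registers], // copy, so later writes don't leak in
    accumulator: frame.accumulator,
    bytecodeOffset: frame.pc,        // where to resume
  };
}

function resume(saved) {
  return {
    registers: [...saved.registers],
    accumulator: saved.accumulator,
    pc: saved.bytecodeOffset,
  };
}

const live = { registers: [7, 8], accumulator: 42, pc: 19 };
const heapState = suspend(live);
live.registers[0] = 999;            // caller keeps running...
const restored = resume(heapState); // ...but resume sees the snapshot: [7, 8]
```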
// ASYNC/AWAIT is implemented as generators + promises
async function fetchData(url) {
  const response = await fetch(url);
  const data = await response.json();
  return data;
}
// V8 desugars this to something like:
function fetchData(url) {
  return new Promise((resolve, reject) => {
    const gen = (function* () {
      const response = yield fetch(url); // Suspend, wait for fetch
      const data = yield response.json(); // Suspend, wait for json
      return data;
    })();
    function step(value) {
      const result = gen.next(value);
      if (result.done) resolve(result.value);
      else result.value.then(step, reject);
    }
    step();
  });
}
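To check that this pattern behaves like the async original without touching the network, the same driver can be run over locally resolved promises (the yielded values here are hypothetical stand-ins for `fetch` results):

```javascript
// Self-contained run of the generator-plus-promises pattern.
function runAsync(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(value) {
      const result = gen.next(value);
      if (result.done) resolve(result.value);
      else Promise.resolve(result.value).then(step, reject);
    }
    step();
  });
}

const done = runAsync(function* () {
  const a = yield Promise.resolve(2); // stands in for `await fetch(url)`
  const b = yield Promise.resolve(3); // stands in for `await response.json()`
  return a + b;
});
// done resolves to 5
```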
// The bytecodes are the same suspend/resume pattern
// but V8 wraps them in promise resolution machinery

Ignition to TurboFan Handoff
// Ignition collects profiling data that tells TurboFan what to optimize
// The handoff process:
// 1. Function reaches "hot" threshold (call count + loop iterations)
// 2. V8 passes the bytecodes + feedback vector to TurboFan
// 3. TurboFan compiles in background while Ignition continues
// 4. When done, V8 patches the function to use TurboFan code
// 5. Next call executes optimized machine code
function hotFunction(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i]; // Feedback: always Smi + Smi -> Smi
  }
  return sum;
}
// Ignition execution timeline:
//
// Call 1-100: Ignition interprets bytecode
// Feedback vector accumulates type data:
// - arr[i]: always Smi (PACKED_SMI_ELEMENTS)
// - sum += arr[i]: always Smi Add
// - arr.length: monomorphic IC on Array
//
// Call ~100: V8 triggers TurboFan compilation (background)
// Ignition CONTINUES executing while TurboFan works
//
// Call ~120: TurboFan code ready
// V8 patches hotFunction to point to machine code
//
// Call 121+: Executes optimized machine code
// - No type checks for Smi (speculative)
// - Loop unrolled
// - Bounds check eliminated
// - Direct memory access for arr[i]
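The widening of a feedback slot over those calls can be sketched as a small state machine (the state names echo V8's inline-cache lattice; the class itself is illustrative, not a V8 API):

```javascript
// Sketch of how a feedback slot widens as it observes operand types:
// uninitialized -> monomorphic -> polymorphic.
class FeedbackSlot {
  #types = new Set();
  record(type) { this.#types.add(type); }
  get state() {
    if (this.#types.size === 0) return "uninitialized";
    if (this.#types.size === 1) return "monomorphic";
    return "polymorphic";
  }
}

const slot = new FeedbackSlot();
const s0 = slot.state; // "uninitialized": nothing observed yet
slot.record("Smi");
slot.record("Smi");
const s1 = slot.state; // "monomorphic": TurboFan can fully specialize
slot.record("HeapNumber");
const s2 = slot.state; // "polymorphic": specialization gets harder
```

A slot that stays monomorphic is what lets TurboFan emit the speculative, check-free code described above; a slot that goes polymorphic forces more general code or a deoptimization.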
// ON-STACK REPLACEMENT (OSR)
// If Ignition is inside a long-running loop, V8 can replace
// the interpreter frame with a TurboFan frame mid-loop
function longLoop() {
  let total = 0;
  for (let i = 0; i < 10_000_000; i++) {
    total += i * i;
    // After ~10,000 iterations, V8 triggers OSR
    // The JumpLoop bytecode at the loop's back edge increments an OSR counter
    // When it exceeds the threshold, TurboFan compiles this function
    // V8 replaces the Ignition stack frame with a TurboFan frame
    // Loop continues from the current iteration in optimized code
  }
  return total;
}

| Ignition Feature | Purpose | Performance Impact |
|---|---|---|
| Dispatch loop | Execute bytecodes sequentially | Baseline execution speed |
| Register file | Store local variables on stack | Fast fp-relative slot access |
| Accumulator | Implicit operand for most operations | Reduces bytecode size by ~25% |
| Handler table | Map exception ranges to handlers | Zero cost on happy path |
| Feedback vector | Profile types at each bytecode | Enables TurboFan optimization |
| Suspend/Resume | Support generators and async/await | Heap-allocated state on suspend |
Key Insights
- The dispatch loop reads opcodes and jumps to handlers, advancing the program counter after each instruction: This is the core execution mechanism, trading peak execution speed for fast compilation and compact bytecode
- The register file stores local variables on the stack with an implicit accumulator reducing bytecode size: Most instructions use the accumulator implicitly, saving operand bytes in the instruction encoding
- Handler tables map bytecode offset ranges to exception handlers with zero performance cost on the normal path: V8 only consults the handler table when an exception actually occurs
- Generators and async functions use SuspendGenerator/ResumeGenerator to save and restore execution state: Register contents are copied to the heap on suspension and restored on resumption
- Feedback vectors collected during interpretation are the critical input to TurboFan optimization: Every operation profiles its operand types, enabling TurboFan to generate type-specialized machine code
Conclusion
Ignition is V8's bytecode interpreter that provides fast startup and runtime type profiling. The dispatch loop executes compact bytecodes efficiently. The register file stores locals on the stack. Handler tables implement exception handling with zero overhead on the happy path. Feedback vectors bridge the gap to TurboFan optimization. For the bytecode instruction set, see JavaScript Bytecode Explained: Complete Guide. For how TurboFan uses Ignition's feedback data, explore TurboFan Compiler and JS Optimization Guide.