JavaScript Web Streams API: A Complete Tutorial
A complete tutorial on the JavaScript Web Streams API. Covers ReadableStream, WritableStream, TransformStream, piping streams together, creating custom readable sources, backpressure management, streaming fetch responses, text decoding streams, and building a real-time log viewer with chunked data.
The Web Streams API processes data incrementally as it arrives, rather than waiting for the entire payload. This is critical for large files, real-time data feeds, and network responses where memory efficiency and time-to-first-byte matter. Streams let you start processing data before the full download completes.
Stream Types Overview
| Stream Type | Purpose | Key Class |
|---|---|---|
| Readable | Source of data you consume | ReadableStream |
| Writable | Destination for data you produce | WritableStream |
| Transform | Modifies data passing through | TransformStream |
Reading a Fetch Response as a Stream
```javascript
async function streamFetchResponse(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // value is a Uint8Array chunk
    const text = decoder.decode(value, { stream: true });
    result += text;
    console.log(`Received ${value.length} bytes`);
  }
  // Flush any bytes the decoder is still buffering (e.g. a split multi-byte character)
  result += decoder.decode();
  return result;
}
```

The `response.body` property is a ReadableStream. Each `reader.read()` call resolves with the next chunk as it arrives from the network. See how to use the JS Fetch API complete tutorial for Fetch fundamentals.
Creating a Custom ReadableStream
```javascript
function createCounterStream(limit) {
  let count = 0;
  return new ReadableStream({
    start(controller) {
      console.log("Stream started");
    },
    pull(controller) {
      if (count >= limit) {
        controller.close();
        return;
      }
      const data = { count: count++, timestamp: Date.now() };
      controller.enqueue(JSON.stringify(data) + "\n");
    },
    cancel(reason) {
      console.log("Stream canceled:", reason);
    },
  });
}

// Read the stream (top-level await requires an ES module)
const stream = createCounterStream(5);
const reader = stream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(value); // '{"count":0,"timestamp":...}\n'
}
```

ReadableStream Controller Methods
| Method | Description |
|---|---|
| controller.enqueue(chunk) | Push a chunk into the stream's queue |
| controller.close() | Signal that no more data will be produced |
| controller.error(e) | Put the stream into an errored state |
| controller.desiredSize | Remaining capacity of the internal queue; zero or negative means the consumer has fallen behind (backpressure) |
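Backpressure is easiest to observe from the writable side. The sketch below (assuming an environment where the stream classes are global, such as Node 18+ or a modern browser) gives a slow sink a queue capacity of just 2 chunks and awaits `writer.ready`, so the producer pauses whenever the queue is full:

```javascript
// A deliberately slow sink with room for only 2 queued chunks.
const written = [];
const slowSink = new WritableStream(
  {
    async write(chunk) {
      await new Promise((resolve) => setTimeout(resolve, 20)); // simulate slow I/O
      written.push(chunk);
    },
  },
  new CountQueuingStrategy({ highWaterMark: 2 })
);

const writer = slowSink.getWriter();
for (let i = 0; i < 5; i++) {
  await writer.ready; // resolves only once the queue has capacity
  writer.write(`chunk ${i}`); // not awaited; writer.ready handles the pacing
}
await writer.close(); // resolves after all queued chunks have been written
console.log(written); // all five chunks, in order
```

Awaiting each `write()` promise instead would also work, but `writer.ready` lets the producer stay up to `highWaterMark` chunks ahead, which is usually what you want.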
WritableStream
```javascript
function createLogWriter() {
  const logs = [];
  const stream = new WritableStream({
    write(chunk) {
      const entry = `[${new Date().toISOString()}] ${chunk}`;
      logs.push(entry);
      console.log(entry);
    },
    close() {
      console.log(`Log closed. Total entries: ${logs.length}`);
    },
    abort(reason) {
      console.error("Log aborted:", reason);
    },
  });
  return { stream, getLogs: () => [...logs] };
}

// Write to the stream
const { stream, getLogs } = createLogWriter();
const writer = stream.getWriter();
await writer.write("Application started");
await writer.write("User logged in");
await writer.write("Data loaded");
await writer.close();
console.log(getLogs());
```

TransformStream
A TransformStream sits between a readable and writable stream, modifying data as it passes through:
```javascript
function createUpperCaseTransform() {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase());
    },
  });
}

function createJsonParseTransform() {
  return new TransformStream({
    transform(chunk, controller) {
      try {
        const parsed = JSON.parse(chunk);
        controller.enqueue(parsed);
      } catch {
        // Skip invalid JSON lines
      }
    },
  });
}
```

Piping Streams Together
```javascript
// Pipe: ReadableStream -> TransformStream -> WritableStream
async function processStream() {
  const source = createCounterStream(10);
  const transform = createUpperCaseTransform();
  const { stream: sink, getLogs } = createLogWriter();
  await source
    .pipeThrough(transform)
    .pipeTo(sink);
  console.log("Pipeline complete:", getLogs().length, "entries");
}
```

Pipeline Methods
| Method | Description |
|---|---|
| readable.pipeThrough(transform) | Pipe through a TransformStream; returns the readable side |
| readable.pipeTo(writable) | Pipe directly to a WritableStream; returns a promise |
| readable.tee() | Split a stream into two independent branches |
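Both `pipeTo` and `pipeThrough` also accept an options object with an AbortSignal, which lets you cancel an entire pipeline from the outside. A minimal sketch, assuming a Node 18+ or browser environment with global stream classes:

```javascript
// An endless source: enqueues a chunk every time the queue has room.
const endless = new ReadableStream({
  pull(controller) {
    controller.enqueue("tick");
  },
});

const received = [];
const sink = new WritableStream({
  async write(chunk) {
    await new Promise((resolve) => setTimeout(resolve, 5)); // simulate per-chunk work
    received.push(chunk);
  },
});

const aborter = new AbortController();
setTimeout(() => aborter.abort(), 50); // cancel the pipeline after 50 ms

let caught;
try {
  await endless.pipeTo(sink, { signal: aborter.signal });
} catch (err) {
  // pipeTo rejects with an AbortError once the signal fires
  caught = err;
  console.log("Pipeline cancelled:", err.name);
}
```

When the signal fires, the pipeline cancels the source and aborts the sink for you, so both ends are cleaned up without manual bookkeeping.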
Streaming a Large File Download With Progress
```javascript
async function downloadWithProgress(url, onProgress) {
  const response = await fetch(url);
  const contentLength = parseInt(response.headers.get("Content-Length") || "0", 10);
  let receivedBytes = 0;
  const progressStream = new TransformStream({
    transform(chunk, controller) {
      receivedBytes += chunk.length;
      if (contentLength > 0) {
        const percent = Math.round((receivedBytes / contentLength) * 100);
        onProgress({ receivedBytes, contentLength, percent });
      }
      controller.enqueue(chunk);
    },
  });
  const reader = response.body
    .pipeThrough(progressStream)
    .getReader();
  const chunks = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  // Combine chunks into a single Uint8Array
  const totalLength = chunks.reduce((sum, c) => sum + c.length, 0);
  const result = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) {
    result.set(chunk, offset);
    offset += chunk.length;
  }
  return result;
}

// Usage
const data = await downloadWithProgress("/api/large-file", (progress) => {
  console.log(`${progress.percent}% (${progress.receivedBytes}/${progress.contentLength})`);
});
```

Line-by-Line Text Stream Processing
```javascript
function createLineTransform() {
  let buffer = "";
  return new TransformStream({
    transform(chunk, controller) {
      buffer += chunk;
      const lines = buffer.split("\n");
      // Keep the last incomplete line in the buffer
      buffer = lines.pop() || "";
      for (const line of lines) {
        if (line.trim()) {
          controller.enqueue(line);
        }
      }
    },
    flush(controller) {
      // Emit any remaining buffered content
      if (buffer.trim()) {
        controller.enqueue(buffer);
      }
    },
  });
}

// Stream a text file line by line
async function processLines(url) {
  const response = await fetch(url);
  const reader = response.body
    .pipeThrough(new TextDecoderStream())
    .pipeThrough(createLineTransform())
    .getReader();
  let lineNumber = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(`Line ${++lineNumber}: ${value}`);
  }
}
```

Tee: Splitting a Stream
```javascript
async function streamAndCache(url) {
  const response = await fetch(url);
  const [branch1, branch2] = response.body.tee();
  // Branch 1: display to user
  const displayPromise = branch1
    .pipeThrough(new TextDecoderStream())
    .pipeTo(new WritableStream({
      write(chunk) {
        document.getElementById("output").textContent += chunk;
      },
    }));
  // Branch 2: cache the raw bytes
  const cachePromise = (async () => {
    const reader = branch2.getReader();
    const chunks = [];
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      chunks.push(value);
    }
    return new Blob(chunks);
  })();
  await Promise.all([displayPromise, cachePromise]);
}
```
Key Insights
- Streams process data incrementally: no need to buffer the entire payload in memory before processing begins
- Three stream types compose into pipelines: ReadableStream produces, TransformStream modifies, WritableStream consumes, all connected via pipeThrough and pipeTo
- Backpressure prevents memory overflow: the consumer controls the pace; producers automatically slow down when the consumer falls behind
- tee() enables fan-out: split one stream into two independent branches for parallel processing such as display + cache
- TextDecoderStream handles encoding: use it as the first transform in text pipelines to convert raw bytes to strings
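Streams and async generators are close cousins, and bridging between them takes only a few lines. The sketch below uses a hypothetical helper name, streamFromAsyncIterable; newer runtimes also offer ReadableStream.from for the same purpose, but support for it still varies:

```javascript
// Wrap any async iterable (e.g. an async generator) as a ReadableStream.
function streamFromAsyncIterable(iterable) {
  const iterator = iterable[Symbol.asyncIterator]();
  return new ReadableStream({
    async pull(controller) {
      const { done, value } = await iterator.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
    cancel(reason) {
      iterator.return?.(reason); // let the generator run its finally blocks
    },
  });
}

async function* countTo(n) {
  for (let i = 1; i <= n; i++) yield i;
}

const reader = streamFromAsyncIterable(countTo(3)).getReader();
const values = [];
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  values.push(value);
}
console.log(values); // [1, 2, 3]
```

Because `pull` is only called when the consumer has capacity, the generator is driven lazily, so backpressure falls out of the bridge for free.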
Conclusion
The Web Streams API enables incremental data processing with ReadableStream (source), WritableStream (sink), and TransformStream (processor). Piping chains streams into pipelines, teeing splits a stream into two branches, and backpressure prevents memory overflow. For the Fetch API that produces these streams, see how to use the JS Fetch API complete tutorial. For the event loop scheduling stream reads, see the JS event loop architecture complete guide.