Handling Async Flows with JS Generator Functions

Master async control flow using JavaScript generator functions. Covers coroutine runners, sequential and parallel execution, error propagation, cancellation tokens, retry logic, resource cleanup with finally, saga patterns, and building your own async/await with generators.

JavaScript · Advanced
18 min read

Generator functions can model asynchronous workflows by yielding promises and letting a runner drive execution. This pattern was the foundation for async/await and remains valuable for building cancellable workflows, saga orchestration, and complex control flow that async/await alone cannot express.

For generator fundamentals, see JavaScript Generators Deep Dive Full Guide.

Building a Coroutine Runner

The basic idea is simple: a runner takes a generator function, calls .next() to get each yielded promise, waits for it to resolve, then feeds the result back in. This loop continues until the generator is done. The enhanced version below also supports typed effects like CALL, ALL, RACE, and DELAY, which gives you more control over how each yielded value gets handled.
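Before looking at the full runner, the handshake can be seen in isolation by driving a generator by hand. The values here are arbitrary placeholders; any promise-returning step works the same way:

```javascript
// The core mechanic: a generator yields a promise, and the caller
// awaits it and feeds the resolved value back in via .next().
function* twoSteps() {
  const first = yield Promise.resolve(10);        // suspends here
  const second = yield Promise.resolve(first + 5); // resumes with 10
  return second * 2;
}

async function driveByHand() {
  const gen = twoSteps();
  let { value } = gen.next();   // value is the first yielded promise
  const a = await value;        // a === 10
  ({ value } = gen.next(a));    // feed 10 back in; get the second promise
  const b = await value;        // b === 15
  return gen.next(b).value;     // generator returns 30
}

driveByHand().then(r => console.log(r)); // 30
```

The runner below automates exactly this loop, so generator code reads top-to-bottom while the awaiting happens outside it.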

javascript
// A coroutine runner takes a generator, drives it by resolving yielded promises,
// and feeds results back in via .next()
 
function run(generatorFn, ...args) {
  const generator = generatorFn(...args);
 
  return new Promise((resolve, reject) => {
    function step(method, value) {
      let result;
      try {
        result = generator[method](value);
      } catch (err) {
        return reject(err);
      }
 
      if (result.done) {
        return resolve(result.value);
      }
 
      // Handle yielded promises
      Promise.resolve(result.value)
        .then(
          resolved => step("next", resolved),
          rejected => step("throw", rejected)
        );
    }
 
    step("next", undefined);
  });
}
 
// Usage: looks like async/await but uses yield
function* fetchUserData(userId) {
  const user = yield fetch(`/api/users/${userId}`).then(r => r.json());
  const posts = yield fetch(`/api/users/${userId}/posts`).then(r => r.json());
  const comments = yield fetch(`/api/posts/${posts[0].id}/comments`).then(r => r.json());
 
  return { user, posts, comments };
}
 
// run(fetchUserData, 42).then(data => console.log(data));
 
// ENHANCED RUNNER WITH EFFECT TYPES
const EFFECTS = {
  CALL: "CALL",
  ALL: "ALL",
  RACE: "RACE",
  DELAY: "DELAY",
  FORK: "FORK"
};
 
function call(fn, ...args) {
  return { type: EFFECTS.CALL, fn, args };
}
 
function all(effects) {
  return { type: EFFECTS.ALL, effects };
}
 
function race(effects) {
  return { type: EFFECTS.RACE, effects };
}
 
function delay(ms) {
  return { type: EFFECTS.DELAY, ms };
}
 
function runEnhanced(generatorFn, ...args) {
  const gen = generatorFn(...args);
 
  return new Promise((resolve, reject) => {
    function step(method, value) {
      let result;
      try {
        result = gen[method](value);
      } catch (err) {
        return reject(err);
      }
 
      if (result.done) return resolve(result.value);
 
      const effect = result.value;
 
      if (effect && effect.type) {
        handleEffect(effect).then(
          val => step("next", val),
          err => step("throw", err)
        );
      } else {
        Promise.resolve(effect).then(
          val => step("next", val),
          err => step("throw", err)
        );
      }
    }
 
    function handleEffect(effect) {
      switch (effect.type) {
        case EFFECTS.CALL:
          return Promise.resolve(effect.fn(...effect.args));
        case EFFECTS.ALL:
          return Promise.all(effect.effects.map(e =>
            e.type ? handleEffect(e) : Promise.resolve(e)
          ));
        case EFFECTS.RACE:
          return Promise.race(Object.entries(effect.effects).map(
            ([key, e]) => (e.type ? handleEffect(e) : Promise.resolve(e))
              .then(val => ({ [key]: val }))
          ));
        case EFFECTS.DELAY:
          return new Promise(r => setTimeout(r, effect.ms));
        default:
          return Promise.resolve(effect);
      }
    }
 
    step("next", undefined);
  });
}

Sequential and Parallel Execution

Once you have a runner that handles effect types, you can control whether operations run one after another or all at once. Yielding a single call effect waits for that request before moving on. Wrapping multiple calls in all fires them concurrently. You can also mix both strategies in a single workflow, running some steps in sequence and batching others in parallel where it makes sense.

javascript
// SEQUENTIAL: yield one promise at a time
function* sequential() {
  const results = [];
 
  for (const url of ["/api/a", "/api/b", "/api/c"]) {
    const data = yield call(fakeFetch, url);
    results.push(data);
  }
 
  return results; // Each request waits for the previous
}
 
// PARALLEL: yield an ALL effect
function* parallel() {
  const [a, b, c] = yield all([
    call(fakeFetch, "/api/a"),
    call(fakeFetch, "/api/b"),
    call(fakeFetch, "/api/c")
  ]);
 
  return { a, b, c }; // All three run concurrently
}
 
// RACE: first to settle wins
function* withTimeout() {
  const result = yield race({
    data: call(fakeFetch, "/api/slow-endpoint"),
    // Resolve (rather than reject) on timeout so the race yields a
    // { timeout: ... } result that the check below can branch on
    timeout: call(() => new Promise(resolve =>
      setTimeout(() => resolve(true), 5000)
    ))
  });
 
  if (result.timeout) {
    throw new Error("Operation timed out");
  }
 
  return result.data;
}
 
// MIXED: sequential steps with parallel sub-tasks
function* mixedFlow(userId) {
  // Step 1: fetch user (sequential)
  const user = yield call(fakeFetch, `/api/users/${userId}`);
 
  // Step 2: fetch posts and profile in parallel
  const [posts, profile] = yield all([
    call(fakeFetch, `/api/users/${userId}/posts`),
    call(fakeFetch, `/api/users/${userId}/profile`)
  ]);
 
  // Step 3: fetch comments for each post (parallel within sequential)
  const postsWithComments = yield all(
    posts.map(post => call(fakeFetch, `/api/posts/${post.id}/comments`))
  );
 
  return { user, profile, postsWithComments };
}
 
function fakeFetch(url) {
  return new Promise(resolve =>
    setTimeout(() => resolve({ url, data: "mock" }), 100)
  );
}

Cancellation Patterns

Async/await has no built-in cancellation mechanism: AbortController can cancel APIs that accept a signal, such as fetch, but it cannot stop an async function at an arbitrary await point. With generators, you can pass a cancellation token into the workflow and check it between steps. When cancelled, the runner calls .return() on the generator, which triggers finally blocks for cleanup. The token pattern below tracks cancellation state, notifies listeners, and gives each step a way to bail out early.

javascript
// Cancellation token pattern for generator-based workflows
 
class CancellationToken {
  #cancelled = false;
  #reason = null;
  #listeners = [];
 
  cancel(reason = "Cancelled") {
    if (this.#cancelled) return;
    this.#cancelled = true;
    this.#reason = reason;
    for (const listener of this.#listeners) {
      listener(reason);
    }
    this.#listeners = [];
  }
 
  get isCancelled() {
    return this.#cancelled;
  }
 
  get reason() {
    return this.#reason;
  }
 
  onCancel(listener) {
    if (this.#cancelled) {
      listener(this.#reason);
    } else {
      this.#listeners.push(listener);
    }
  }
 
  throwIfCancelled() {
    if (this.#cancelled) {
      const err = new Error(this.#reason);
      err.name = "CancellationError";
      throw err;
    }
  }
}
 
function runCancellable(generatorFn, token, ...args) {
  const gen = generatorFn(token, ...args);
 
  return new Promise((resolve, reject) => {
    // Listen for cancellation
    token.onCancel(reason => {
      try {
        gen.return(undefined); // Trigger finally blocks
      } catch (e) { /* ignore */ }
      reject(new Error(reason));
    });
 
    function step(method, value) {
      if (token.isCancelled) return;
 
      let result;
      try {
        result = gen[method](value);
      } catch (err) {
        return reject(err);
      }
 
      if (result.done) return resolve(result.value);
 
      Promise.resolve(result.value).then(
        val => step("next", val),
        err => step("throw", err)
      );
    }
 
    step("next", undefined);
  });
}
 
// Usage
function* downloadFiles(token, fileList) {
  const downloaded = [];
 
  try {
    for (const file of fileList) {
      token.throwIfCancelled();
 
      console.log(`Downloading: ${file}`);
      const data = yield fakeFetch(`/files/${file}`);
      downloaded.push({ file, data });
 
      console.log(`Completed: ${file} (${downloaded.length}/${fileList.length})`);
    }
 
    return downloaded;
  } finally {
    if (token.isCancelled) {
      console.log(`Download cancelled. ${downloaded.length} files completed.`);
      // Cleanup partial downloads
    }
  }
}
 
const token = new CancellationToken();
const promise = runCancellable(downloadFiles, token, ["a.txt", "b.txt", "c.txt"]);
 
// Cancel after 200ms
setTimeout(() => token.cancel("User pressed cancel"), 200);
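The token's listener semantics can be checked synchronously. This standalone sketch re-declares a stripped-down token (same behavior as CancellationToken above, minus the private fields) so it runs on its own:

```javascript
// Stripped-down cancellation token: listeners registered before cancel()
// fire on cancellation; listeners registered after fire immediately.
class MiniToken {
  cancelled = false;
  reason = null;
  listeners = [];

  cancel(reason = "Cancelled") {
    if (this.cancelled) return;        // cancelling twice is a no-op
    this.cancelled = true;
    this.reason = reason;
    this.listeners.forEach(fn => fn(reason));
    this.listeners = [];
  }

  onCancel(fn) {
    this.cancelled ? fn(this.reason) : this.listeners.push(fn);
  }

  throwIfCancelled() {
    if (this.cancelled) throw new Error(this.reason);
  }
}

const t = new MiniToken();
const seen = [];
t.onCancel(reason => seen.push(reason)); // registered before cancel
t.cancel("user abort");
t.onCancel(reason => seen.push(reason)); // fires immediately after cancel

console.log(seen);        // ["user abort", "user abort"]
console.log(t.cancelled); // true
```

Late-registered listeners firing immediately is what lets the runner subscribe at any point without racing the cancel call.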

Retry and Error Recovery

Generators give you a clean way to wrap retry logic around any async operation. The retryable generator below catches failures, waits with exponential backoff, and tries again up to a limit. Further down, you will find a circuit breaker that tracks failure counts and stops calling a broken service entirely, plus a saga-style compensation pattern that rolls back multi-step transactions when something goes wrong partway through.

javascript
// Generator-based retry with exponential backoff
 
function* retryable(operation, maxRetries = 3, baseDelay = 1000) {
  let lastError;
 
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const result = yield call(operation);
      return result;
    } catch (err) {
      lastError = err;
      console.log(`Attempt ${attempt}/${maxRetries} failed: ${err.message}`);
 
      if (attempt < maxRetries) {
        const backoff = baseDelay * Math.pow(2, attempt - 1);
        const jitter = Math.random() * backoff * 0.1;
        yield delay(backoff + jitter);
      }
    }
  }
 
  throw lastError;
}
 
// CIRCUIT BREAKER WITH GENERATORS
// (designed to be driven manually: each iteration attempts one operation
// and yields a status object, so failure state persists across calls)
function* circuitBreaker(operation, options = {}) {
  const { failureThreshold = 5, resetTimeout = 30000 } = options;
  let failures = 0;
  let circuitOpen = false;
  let lastFailure = 0;
 
  while (true) {
    // Check if circuit should reset
    if (circuitOpen && Date.now() - lastFailure > resetTimeout) {
      console.log("Circuit half-open: attempting reset");
      circuitOpen = false;
      failures = 0;
    }
 
    if (circuitOpen) {
      yield { success: false, error: "Circuit open", retryAfter: resetTimeout };
      continue;
    }
 
    try {
      const result = yield call(operation);
      failures = 0;
      yield { success: true, data: result };
    } catch (err) {
      failures++;
      lastFailure = Date.now();
 
      if (failures >= failureThreshold) {
        circuitOpen = true;
        console.log(`Circuit opened after ${failures} failures`);
      }
 
      yield { success: false, error: err.message, failures };
    }
  }
}
 
// SAGA-STYLE ERROR COMPENSATION
function* bookingTransaction(details) {
  const compensations = [];
 
  try {
    // Step 1: Reserve flight
    const flight = yield call(reserveFlight, details.flight);
    compensations.push(() => cancelFlight(flight.id));
 
    // Step 2: Reserve hotel
    const hotel = yield call(reserveHotel, details.hotel);
    compensations.push(() => cancelHotel(hotel.id));
 
    // Step 3: Reserve car
    const car = yield call(reserveCar, details.car);
    compensations.push(() => cancelCar(car.id));
 
    // Step 4: Charge payment
    const payment = yield call(chargePayment, details.payment);
    compensations.push(() => refundPayment(payment.id));
 
    return { flight, hotel, car, payment, status: "confirmed" };
 
  } catch (err) {
    // Compensate in reverse order (undo completed steps)
    console.log(`Booking failed: ${err.message}. Running compensations...`);
 
    for (let i = compensations.length - 1; i >= 0; i--) {
      try {
        yield call(compensations[i]);
      } catch (compErr) {
        console.error(`Compensation ${i} failed: ${compErr.message}`);
      }
    }
 
    throw new Error(`Booking failed and rolled back: ${err.message}`);
  }
}
 
function reserveFlight(d) { return Promise.resolve({ id: "FL-001" }); }
function reserveHotel(d) { return Promise.resolve({ id: "HT-001" }); }
function reserveCar(d) { return Promise.resolve({ id: "CR-001" }); }
function chargePayment(d) { return Promise.resolve({ id: "PY-001" }); }
function cancelFlight(id) { return Promise.resolve(); }
function cancelHotel(id) { return Promise.resolve(); }
function cancelCar(id) { return Promise.resolve(); }
function refundPayment(id) { return Promise.resolve(); }
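The retry flow above can be exercised end to end. This self-contained demo re-declares the `call`/`delay` effect creators and a minimal effect runner (so it runs standalone, with a shortened base delay), then recovers from a hypothetical flaky operation that fails twice:

```javascript
const call = (fn, ...args) => ({ type: "CALL", fn, args });
const delay = ms => ({ type: "DELAY", ms });

// Minimal effect runner: resolves CALL and DELAY effects, feeds results back
function runEffects(generatorFn, ...args) {
  const gen = generatorFn(...args);
  return new Promise((resolve, reject) => {
    function step(method, value) {
      let result;
      try { result = gen[method](value); } catch (err) { return reject(err); }
      if (result.done) return resolve(result.value);
      const e = result.value;
      const promise =
        e.type === "CALL"  ? Promise.resolve(e.fn(...e.args)) :
        e.type === "DELAY" ? new Promise(r => setTimeout(r, e.ms)) :
        Promise.resolve(e);
      promise.then(v => step("next", v), err => step("throw", err));
    }
    step("next", undefined);
  });
}

// Same shape as retryable above, with a 10ms base delay for the demo
function* retryable(operation, maxRetries = 3, baseDelay = 10) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return yield call(operation);
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) yield delay(baseDelay * 2 ** (attempt - 1));
    }
  }
  throw lastError;
}

// Flaky operation: rejects twice, then resolves
let attempts = 0;
function flaky() {
  attempts++;
  return attempts < 3
    ? Promise.reject(new Error(`boom #${attempts}`))
    : Promise.resolve("recovered");
}

const demo = runEffects(retryable, flaky);
demo.then(result => console.log(result, `after ${attempts} attempts`));
// "recovered after 3 attempts"
```

Because the generator yields a DELAY effect between attempts, the backoff policy lives in the workflow while the actual timer lives in the runner, which is what makes the retry logic testable without waiting in real time.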

Resource Management with Finally

Generators have a useful property: their finally blocks run whether the generator completes normally, gets an error thrown into it, or is terminated early with .return(). This makes them a good fit for managing resources like database connections or file handles that need guaranteed cleanup. The patterns here show single-resource management, multi-resource acquisition with reverse-order release, and a disposable pattern similar to what you would find in languages with using statements.

javascript
// Generators guarantee finally blocks run on .return() or completion
 
function* managedConnection(config) {
  const connection = yield call(createConnection, config);
  console.log("Connection established");
 
  try {
    // Yield the connection for the caller to use
    while (true) {
      const query = yield { connection, status: "ready" };
      if (!query) break;
 
      const result = yield call(executeQuery, connection, query);
      yield { result, status: "completed" };
    }
  } finally {
    // Always runs: normal completion, .return(), or .throw()
    console.log("Closing connection");
    yield call(closeConnection, connection);
    console.log("Connection closed");
  }
}
 
function createConnection(config) {
  return Promise.resolve({ id: Date.now(), config });
}
function executeQuery(conn, query) {
  return Promise.resolve({ rows: [], query });
}
function closeConnection(conn) {
  return Promise.resolve();
}
 
// MULTIPLE RESOURCE MANAGEMENT
function* withResources(resourceFactories) {
  const resources = [];
 
  try {
    // Acquire all resources
    for (const factory of resourceFactories) {
      const resource = yield call(factory.acquire);
      resources.push({ resource, release: factory.release });
    }
 
    // Yield resources for use
    return yield resources.map(r => r.resource);
 
  } finally {
    // Release in reverse order
    for (let i = resources.length - 1; i >= 0; i--) {
      try {
        yield call(resources[i].release, resources[i].resource);
      } catch (err) {
        console.error(`Failed to release resource ${i}: ${err.message}`);
      }
    }
  }
}
 
// DISPOSABLE PATTERN
function* using(resourceGen, bodyFn) {
  const resource = yield* resourceGen;
 
  try {
    const result = yield* bodyFn(resource);
    return result;
  } finally {
    if (resource && typeof resource.dispose === "function") {
      yield call(() => resource.dispose());
    }
  }
}

Saga Pattern Implementation

The saga pattern, popularized by Redux Saga, uses generators to manage side effects as plain data. Instead of calling fetch directly, you yield an effect object like call(fetch, url). The runtime interprets these effects, and your tests can inspect them without making real HTTP requests. The SagaRuntime class below drives generators, resolves effects, and supports take (wait for a dispatched action), put (dispatch an action), and select (read from state).

javascript
// Redux saga-inspired pattern using generators for side effect management
 
class SagaRuntime {
  #store;
  #sagas = new Map();
  #running = new Set();
 
  constructor(store) {
    this.#store = store;
  }
 
  run(saga, ...args) {
    const gen = saga(this.#store.getState, ...args);
    const id = Symbol("saga");
    this.#running.add(id);
 
    // Expose the id alongside the completion promise so callers can cancel()
    return { id, done: this.#drive(gen, id) };
  }
 
  async #drive(gen, id) {
    let input = undefined;
    let method = "next";
 
    while (true) {
      // Cancelled externally: close the generator so its finally blocks run
      if (!this.#running.has(id)) {
        gen.return(undefined);
        return;
      }
 
      let result;
      try {
        result = gen[method](input);
      } catch (err) {
        // The saga did not handle a thrown error; propagate the rejection
        this.#running.delete(id);
        throw err;
      }
      method = "next";
 
      if (result.done) {
        this.#running.delete(id);
        return result.value;
      }
 
      try {
        input = await this.#resolve(result.value);
      } catch (err) {
        // Re-enter the saga with the error; if its catch block yields
        // another effect, the next loop iteration resolves it normally
        method = "throw";
        input = err;
      }
    }
  }
 
  async #resolve(effect) {
    if (!effect || !effect.type) {
      return effect;
    }
 
    switch (effect.type) {
      case "CALL":
        return effect.fn(...effect.args);
      case "SELECT":
        return effect.selector(this.#store.getState());
      case "PUT":
        return this.#store.dispatch(effect.action);
      case "TAKE":
        return this.#waitForAction(effect.actionType);
      case "ALL":
        return Promise.all(effect.effects.map(e => this.#resolve(e)));
      case "DELAY":
        return new Promise(r => setTimeout(r, effect.ms));
      default:
        return effect;
    }
  }
 
  #waitForAction(actionType) {
    return new Promise(resolve => {
      const unsub = this.#store.subscribe(() => {
        const action = this.#store.getLastAction();
        if (action && action.type === actionType) {
          unsub();
          resolve(action);
        }
      });
    });
  }
 
  cancel(id) {
    this.#running.delete(id);
  }
}
 
// Saga effect creators
function select(selector) {
  return { type: "SELECT", selector };
}
 
function put(action) {
  return { type: "PUT", action };
}
 
function take(actionType) {
  return { type: "TAKE", actionType };
}
 
// Example saga
function* loginSaga(getState) {
  while (true) {
    const action = yield take("LOGIN_REQUEST");
 
    try {
      yield put({ type: "LOGIN_LOADING" });
 
      const response = yield call(
        fakeFetch,
        `/api/login?user=${action.payload.username}`
      );
 
      yield put({ type: "LOGIN_SUCCESS", payload: response });
    } catch (err) {
      yield put({ type: "LOGIN_FAILURE", payload: err.message });
    }
  }
}
Pattern             | Use Case                  | Advantage Over async/await
--------------------|---------------------------|-------------------------------------------
Coroutine runner    | Custom promise handling   | Intercept and transform yielded values
Cancellation token  | User-initiated abort      | Generator cleanup via .return() + finally
Saga effects        | Side effect management    | Testable, declarative side effects
Circuit breaker     | Fault tolerance           | Stateful retry logic across calls
Compensation        | Distributed transactions  | Automatic rollback on failure
Resource management | Connection/file lifecycle | Guaranteed cleanup via finally blocks

Key Insights

  • Coroutine runners intercept yielded promises to implement custom async semantics like cancellation, retry, and effect handling: This pattern was the historical basis for async/await and remains more flexible
  • Cancellation tokens combined with generator .return() guarantee cleanup via finally blocks: this is one of the few ways in JavaScript to combine cooperative cancellation with deterministic resource cleanup, since AbortController can signal cancellation but cannot stop an async function at an arbitrary await point
  • Saga-style effects yield plain objects that describe side effects rather than executing them directly: This makes workflows fully testable by driving the generator step-by-step with mock data
  • Compensation patterns use a stack of undo functions to automatically rollback multi-step transactions on failure: Each successful step pushes its compensating action, and on error they execute in reverse
  • Generators provide natural backpressure because producers only generate the next value when the consumer calls .next(): This prevents memory overflow in data processing pipelines

Frequently Asked Questions

Why use generators for async flow when we have async/await?

Generators give the caller (the runner) control over how yielded values are handled. With async/await, the runtime automatically resolves promises. With generators, you can intercept effects, implement cancellation, add logging, inject test doubles, or transform the flow. This is why Redux Saga uses generators: each yielded effect is a plain object that can be inspected and tested without executing real side effects. Generators also pair cooperative cancellation with guaranteed cleanup via .return() and finally, something async/await cannot express on its own.

How do I test generator-based sagas?

Generator sagas yield plain effect objects rather than executing code directly. Testing is straightforward: call `.next()` on the generator and assert the yielded effect matches expectations. Feed mock results back via `.next(mockValue)` or inject errors via `.throw(mockError)`. No HTTP mocking, no timers, no async waits. The test drives the generator step by step, verifying each effect declaration. This makes saga tests fast, deterministic, and easy to write.
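A concrete sketch of that testing style, using plain throw-based assertions. The `userSaga` and `fetchUser` names are hypothetical, and `call`/`put` are re-declared so the snippet runs standalone:

```javascript
const call = (fn, ...args) => ({ type: "CALL", fn, args });
const put = action => ({ type: "PUT", action });

// Hypothetical production code: fetchUser would do real HTTP,
// but the test never actually invokes it
function fetchUser(id) { /* real HTTP in production */ }

function* userSaga(id) {
  const user = yield call(fetchUser, id);
  yield put({ type: "USER_LOADED", payload: user });
}

// Drive the generator by hand: no HTTP mocking, no timers, no async
const gen = userSaga(42);

const first = gen.next().value;
if (first.type !== "CALL" || first.fn !== fetchUser || first.args[0] !== 42) {
  throw new Error("expected a CALL effect for fetchUser(42)");
}

// Feed a fake result back in; the saga cannot tell the difference
const fakeUser = { id: 42, name: "Ada" };
const second = gen.next(fakeUser).value;
if (second.type !== "PUT" || second.action.payload !== fakeUser) {
  throw new Error("expected a PUT of USER_LOADED with the fake user");
}

if (!gen.next().done) throw new Error("saga should be finished");
console.log("saga test passed");
```

The assertions compare plain objects, so any test framework's equality helpers work just as well here.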

Can generators handle backpressure in data streams?

Yes. Because generators are pull-based (the consumer calls `.next()` to request the next value), they naturally implement backpressure. The producer does not generate the next value until the consumer is ready. For async generators, `for await...of` calls `.next()` sequentially, processing one value before requesting the next. This prevents buffer overflow and memory issues that push-based systems (like event emitters) can suffer from when producers outpace consumers.
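The pull-based behavior is easy to observe with a counter on the producer side (a minimal sketch with an unbounded generator):

```javascript
// Pull-based backpressure: the producer only advances when the consumer asks
let produced = 0;

function* numbers() {
  while (true) {
    produced++;
    yield produced;
  }
}

const taken = [];
for (const n of numbers()) {
  taken.push(n);
  if (taken.length === 3) break; // consumer stops pulling
}

console.log(taken);    // [1, 2, 3]
console.log(produced); // 3 -- nothing was generated beyond what was consumed
```

An infinite push-based source would have no equivalent stopping point; here the `break` both stops production and (via for...of calling .return()) closes the generator.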

What happens if I forget to consume all values from a generator?

If you abandon a generator before it completes (stop calling `.next()` and lose the reference), the generator object will eventually be garbage collected. However, any `finally` blocks inside the generator will NOT run unless you explicitly call `.return()` or the generator is consumed by `for...of` (which calls `.return()` on break/return). Always call `.return()` on generators that manage resources (connections, file handles, subscriptions) if you stop consuming early. For `for...of` and `for await...of`, this is handled automatically.
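A quick demonstration of the difference, with a log array standing in for real cleanup work:

```javascript
// finally runs on .return(), but not when a generator is simply abandoned
const log = [];

function* withCleanup() {
  try {
    yield "resource in use";
  } finally {
    log.push("cleaned up");
  }
}

// Abandoned: suspended at the yield, finally never runs
const abandoned = withCleanup();
abandoned.next();
// ...reference dropped here; no cleanup happens

// Explicitly closed: .return() resumes execution inside finally
const closed = withCleanup();
closed.next();
closed.return();

console.log(log); // ["cleaned up"] -- only the closed generator ran finally
```

This is exactly why for...of and for await...of call .return() on early exit: without it, the abandoned generator's cleanup would silently never happen.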

Conclusion

Generator-based async flow provides cancellation, compensation, and testability that async/await cannot match. Coroutine runners, saga patterns, and resource management via finally blocks give you fine-grained control over asynchronous workflows. For the iterator protocol that generators implement, see Advanced JavaScript Iterators Complete Guide. For understanding how generators interact with the event loop, see Call Stack vs Task Queue vs Microtask Queue.