cloudflare-parallel v0.3 · live demo

4N parallel V8 isolates per Worker request.

Burn CPU across real parallel isolates on Cloudflare Workers. Hash, transform, simulate, render, evolve — work that single-threaded JavaScript can't fan out, this library can. Pick a size below and watch the topology selector decide whether to run in-DO, hybrid, or tree.

This library is for CPU work. If you're awaiting fetch / KV / AI / R2, plain Promise.all on one isolate already gives you that for free. When to use →

① Hero · pool.map over a CPU-bound function

view source

Each item renders one Mandelbrot tile: a horizontal slab of an image at 20,000-iteration depth around the cardioid. Roughly half a second to a few seconds of CPU per tile depending on intensity. Pick a fan-out size and an intensity preset; the library picks the topology.

[Interactive panel — controls: fan-out size, intensity. Readouts: topology, per-tile CPU (ms, measured single-tile), parallel wall (ms), sequential extrapolated (ms, N × per-tile), speedup (×).]
Why doesn't every size beat sequential?

The library's per-call dispatch floor is roughly 50–150 ms (Durable Object RPC + leaf coordination) plus the time to ship N task envelopes across the wire. Speedup is real only when per-task CPU ≫ dispatch floor. At small N (4–16) with the medium preset, per-tile CPU is in the 200–400 ms band — comparable to dispatch — so speedup can land near 1×. At N≥64, per-task work starts dominating and speedup climbs sharply. The heavy preset crosses 5× at N=64 and 50×+ at N=512; extreme lands ~100× at N=512.
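The arithmetic above can be sketched directly. This is a deliberately simplified model — it charges the dispatch floor once and ignores the cost of shipping N task envelopes, which also grows with N; that omission is exactly why measured small-N speedups land lower (near 1×) than this model predicts:

```javascript
// Back-of-envelope speedup model: sequential cost is N × per-task CPU;
// parallel wall is one per-task slice plus the per-call dispatch floor.
// Optimistic on purpose — real dispatch also scales with N.
function estimatedSpeedup(n, perTaskMs, dispatchFloorMs = 100) {
  const sequentialMs = n * perTaskMs;
  const parallelWallMs = dispatchFloorMs + perTaskMs;
  return sequentialMs / parallelWallMs;
}

// Per-task CPU ≫ dispatch floor is what makes the ratio climb:
estimatedSpeedup(8, 300);    // small N, medium preset: floor still visible
estimatedSpeedup(512, 2000); // large N, heavy preset: floor amortized away
```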

Tip: the first request after a quiet period pays a one-time ~300–400 ms DO cold-start. The library's autoWarm: true default fires that off in parallel with the first dispatch, so you'll see it absorbed automatically — but the cold curve is still measurably slower than the warm curve. Click a size or intensity tab twice to see the warm-path numbers.

await pool.map(renderMandelbrotTile, slabs)
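The demo's renderMandelbrotTile and its slab format aren't shown here, so the following is a hypothetical stand-in: a self-contained escape-time loop over one horizontal slab, the shape of pure-CPU function that pool.map fans out:

```javascript
// Escape-time Mandelbrot for one horizontal slab. Pure CPU work — no I/O —
// so it parallelizes cleanly across isolates. Simplified stand-in; the
// live demo renders at 20,000-iteration depth around the cardioid.
function renderTile(slab) {
  const { yStart, yEnd, width, maxIter } = slab;
  const counts = [];
  for (let py = yStart; py < yEnd; py++) {
    for (let px = 0; px < width; px++) {
      // Map the pixel to a small window of the complex plane near the cardioid.
      const cx = -0.75 + (px / width) * 0.5;
      const cy = (py / width) * 0.5;
      let x = 0, y = 0, i = 0;
      while (x * x + y * y <= 4 && i < maxIter) {
        const xt = x * x - y * y + cx;
        y = 2 * x * y + cy;
        x = xt;
        i++;
      }
      counts.push(i); // iterations until escape (or maxIter if bounded)
    }
  }
  return counts;
}
```

One such function per slab is exactly what `pool.map(renderTile, slabs)` would distribute.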

② Topology · how the selector picks

view source

The auto-selector reads items.length and picks one of three shapes. Hover any row to see the math; click Run to fan out and update the live numbers.

Size   Topology   Math                       V8 isolates   Live
4      in-do      1 DO × 4 loaders           4
32     hybrid     ⌈32/4⌉ = 8 leaves × 4      32
128    hybrid     ⌈128/4⌉ = 32 leaves × 4    128 (4N)
256    tree       K=2 tiers, F=8             4·F²
512    tree       K=3 tiers, F=8             4·F³
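The shape of that decision can be sketched from the table rows alone. The cutoffs and leaf width below are read off the rows above and are illustrative — not the library's exact thresholds:

```javascript
// Pick a fan-out shape from items.length, mirroring the table:
// tiny jobs stay in one DO; mid sizes go hybrid (⌈N/4⌉ leaves × 4
// loaders); large sizes go tree, with tier count growing like log_F.
function pickTopology(n, { leafWidth = 4, hybridMax = 128 } = {}) {
  if (n <= leafWidth) {
    return { shape: "in-do", loaders: n };
  }
  if (n <= hybridMax) {
    return { shape: "hybrid", leaves: Math.ceil(n / leafWidth) };
  }
  const F = 8; // branching factor from the tree rows above
  const tiers = Math.ceil(Math.log(n / leafWidth) / Math.log(F));
  return { shape: "tree", tiers, fanout: F };
}
```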

③ Every primitive, hands-on

view source

Each card runs against the live test worker. The code shown is the actual call being made.

④ Scheduler · reactive job queue

view source

Enqueue a CPU-bound burst (each job: 1M LCG iterations). Reactive dispatch starts work the moment a slot frees — no alarm-batched delay. Fair round-robin across tenantId.

[Live counters: queued · in-flight · completed · failed · cancelled]
await scheduler.enqueue({ fn, args, tenantId, idempotencyKey })
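The fair round-robin across tenantId can be sketched as a plain in-memory structure — a minimal model only, since the real scheduler persists jobs and dispatches reactively as slots free:

```javascript
// Minimal fair round-robin dequeue across tenantId: one FIFO per tenant,
// with a rotating cursor so a single noisy tenant can't starve the rest.
class FairQueue {
  constructor() {
    this.byTenant = new Map(); // tenantId → FIFO of jobs
    this.cursor = 0;           // index of the next tenant to serve
  }
  enqueue(job) {
    const q = this.byTenant.get(job.tenantId) ?? [];
    q.push(job);
    this.byTenant.set(job.tenantId, q);
  }
  dequeue() {
    const tenants = [...this.byTenant.keys()];
    for (let i = 0; i < tenants.length; i++) {
      const t = tenants[(this.cursor + i) % tenants.length];
      const q = this.byTenant.get(t);
      if (q.length > 0) {
        this.cursor = (this.cursor + i + 1) % tenants.length;
        return q.shift();
      }
    }
    return undefined; // nothing queued anywhere
  }
}
```

With jobs from tenants "a" and "b" interleaved, dequeue alternates tenants instead of draining "a" first.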

⑤ Actor · pinned state across submits

view source

A counter Actor whose state lives in a Coordinator DO's SQLite. Each submit mutates the state in place; the runtime structured-clones it after each call. Close the Actor; the state is gone.

Counter
await actor.submit((state) => ++state.count)
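The mutate-then-clone semantics can be sketched with the standard structuredClone. This MiniActor is a hypothetical local model of the behavior described above, not the library's Actor class:

```javascript
// Each submit mutates state in place; a structured clone is taken after
// the call, so nothing the callback retained stays aliased to live state.
class MiniActor {
  constructor(initial) {
    this.state = structuredClone(initial);
  }
  submit(fn) {
    const result = fn(this.state);             // mutate in place
    this.state = structuredClone(this.state);  // snapshot after the call
    return result;
  }
}

const counter = new MiniActor({ count: 0 });
counter.submit((state) => ++state.count); // → 1
counter.submit((state) => ++state.count); // → 2
```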

⑥ VM · sandboxed user code over HTTP

view source

Paste a function expression. It runs in a fresh sandboxed V8 isolate with globalOutbound: null (no fetch) and zero bindings exposed. Bearer-auth required. Don't try to bypass the sandbox — it's load-bearing.

click Submit to run

⑦ Cancel · live AbortSignal

view source

Start a 1M-iteration SHA chain inside an isolate. Hit Cancel; the request closes; env.signal.aborted trips inside the loaded isolate; the loop returns early. Watch the iteration counter stop.

idle
const cancel = new CancelToken();
await pool.submit(longLoop, iters, { cancel });
// hit /demo/cancel-start; close the SSE to fire cancel.
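The early return inside the isolate is cooperative — the loop has to poll. A sketch with a standard AbortSignal, using an LCG as a stand-in for the demo's SHA chain and an illustrative polling interval:

```javascript
// Long CPU loop that checks signal.aborted every 1024 iterations and
// bails early. Cancellation is cooperative: no check, no early return.
function longLoop(iters, signal) {
  let acc = 0;
  for (let i = 0; i < iters; i++) {
    if ((i & 0x3ff) === 0 && signal?.aborted) {
      return { done: i, aborted: true }; // early return on cancel
    }
    acc = (acc * 1103515245 + 12345) >>> 0; // stand-in for SHA chaining
  }
  return { done: iters, aborted: false };
}

const ctl = new AbortController();
ctl.abort();                     // fire cancel before the loop starts
longLoop(1_000_000, ctl.signal); // returns { done: 0, aborted: true }
```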

⑧ Bench · live edge measurements

bench-results-live.json

The honest curve. Each row: Mandelbrot tile renders over N parallel V8 isolates vs the sequential per-tile baseline ×N. Speedup grows with size as the per-call dispatch floor amortizes; tree topology kicks in at 256.

Size   Topology   Sequential   Parallel   Speedup   Run live
(rows populate live from bench-results-live.json)