Durable workflow execution for TypeScript

Keep your background work alive when the process dies.

Reflow is a durable workflow engine for TypeScript. It checkpoints every step to SQLite and resumes from the last completed step after a crash - no repeated LLM calls, no wasted tokens, no duplicate side effects. One package. Zero infrastructure.

bun add reflow-ts
  • Completed steps are never re-executed - even after a crash
  • Per-step retry with backoff for flaky APIs and rate limits
  • Typed inputs, outputs, and prev chaining between steps
  • Standard Schema support - bring Zod, Valibot, ArkType, or any compatible library
ai-pipeline.ts
const pipeline = createWorkflow({
  name: 'process-content',
  input: z.object({ url: z.string() }),
})
  .step('scrape', async ({ input }) => {
    const page = await fetchPage(input.url)
    return { content: page.text }
  })
  .step('summarize', {
    retry: { maxAttempts: 3, backoff: 'exponential' },
    handler: async ({ prev }) => {
      const summary = await llm(prev.content)
      return { summary }
    },
  })
  .step('store', async ({ input, prev }) => {
    await db.insert({ url: input.url, ...prev })
  })

await engine.enqueue('process-content', {
  url: 'https://example.com/article',
})
No re-runs

Each step is checkpointed. Completed LLM calls, API requests, and payments are never repeated.
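The skip-completed-steps idea can be sketched in a few lines of plain TypeScript. This is an illustration only, not Reflow's actual internals: the `runStep` helper and the in-memory `Map` are hypothetical stand-ins for the SQLite-backed checkpoint store.

```typescript
// Hypothetical sketch of step checkpointing - not Reflow's real implementation.
// Completed results are stored by step name; a re-run returns the cached
// value instead of executing the handler again.
type Checkpoints = Map<string, unknown>

async function runStep<T>(
  checkpoints: Checkpoints,
  name: string,
  handler: () => Promise<T>,
): Promise<T> {
  if (checkpoints.has(name)) {
    // Step already completed in a previous run - skip the handler entirely.
    return checkpoints.get(name) as T
  }
  const result = await handler()
  checkpoints.set(name, result) // durable storage in the real engine
  return result
}

// Simulate a crash-and-resume: the same checkpoint store survives both runs.
const store: Checkpoints = new Map()
let llmCalls = 0

async function run() {
  return runStep(store, 'summarize', async () => {
    llmCalls++ // a paid LLM call in a real pipeline
    return 'a summary'
  })
}

await run() // first run: handler executes, llmCalls === 1
await run() // "restart": handler is skipped, llmCalls is still 1
```

The real engine persists each result before moving on, which is what makes the guarantee hold across process crashes rather than just across calls.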

Typed

Workflow names, validated inputs, and chained step outputs stay aligned at compile time.
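How `prev` can stay typed from one step to the next is sketchable with a small generic builder. This is a simplified model with a hypothetical `Flow` class and no engine behind it, not Reflow's actual type machinery:

```typescript
// Hypothetical sketch: each .step() call threads its handler's return type
// into the next step's `prev` parameter, so mismatches fail at compile time.
class Flow<Input, Prev> {
  constructor(
    private steps: Array<(ctx: { input: Input; prev: any }) => any> = [],
  ) {}

  step<Out>(fn: (ctx: { input: Input; prev: Prev }) => Out): Flow<Input, Out> {
    return new Flow<Input, Out>([...this.steps, fn])
  }

  run(input: Input): Prev {
    let prev: any = undefined
    for (const fn of this.steps) prev = fn({ input, prev })
    return prev as Prev
  }
}

const flow = new Flow<{ url: string }, undefined>()
  .step(({ input }) => ({ content: input.url.length }))
  // `prev` is typed as { content: number } here - prev.missing would not compile
  .step(({ prev }) => ({ doubled: prev.content * 2 }))

const out = flow.run({ url: 'https://example.com' })
```

The pattern is the same fluent-builder trick many typed libraries use: each call returns a new builder type carrying the accumulated output type forward.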

SQLite

Durable state in a single file. Easy to ship in CLIs, small SaaS apps, and agent tools.
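The single-file durability idea can be sketched with a JSON file standing in for the SQLite database. The `FileCheckpoints` class below is hypothetical, not Reflow's storage layer:

```typescript
import { readFileSync, writeFileSync, existsSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// Hypothetical sketch: durable state in one file, with JSON standing in
// for Reflow's SQLite database. Any process that opens the same file
// sees every checkpoint written before a crash.
class FileCheckpoints {
  constructor(private path: string) {}

  private load(): Record<string, unknown> {
    return existsSync(this.path)
      ? JSON.parse(readFileSync(this.path, 'utf8'))
      : {}
  }

  get(step: string): unknown {
    return this.load()[step]
  }

  set(step: string, result: unknown): void {
    const all = this.load()
    all[step] = result
    writeFileSync(this.path, JSON.stringify(all)) // flushed to disk each step
  }
}

const file = join(tmpdir(), 'reflow-sketch.json')
const checkpoints = new FileCheckpoints(file)
checkpoints.set('scrape', { content: 'page text' })

// A "second process" opening the same file resumes with the step intact.
const resumed = new FileCheckpoints(file)
```

SQLite buys the same property plus transactions, concurrent readers, and leases, while still shipping as a single file next to your app.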

0 infra

No brokers, no control plane, no workflow cluster. Start with a package and a database file.

Why It Matters

Same crash. Only one wastes your money.

Both pipelines scrape a page, run two LLM calls, then store results. Both crash during storage. Watch what happens next.

Without Reflow Restarting from scratch
  1. scrape running...
  2. summarize $0.12 pending - paid for again
  3. extract $0.08 pending - paid for again
  4. store pending
With Reflow Resuming at the failed step
  1. scrape completed
  2. summarize $0.12 completed - not re-run
  3. extract $0.08 completed - not re-run
  4. store running...

Quickstart

Three steps. One file. Zero infrastructure.

1

Define your pipeline

Scrape a page, summarize with an LLM, extract entities, store results. Each step's return value flows to the next via prev. LLM calls have automatic retry with exponential backoff.

const pipeline = createWorkflow({
  name: 'process-content',
  input: z.object({ url: z.string() }),
})
  // Fetch the page content
  .step('scrape', async ({ input }) => {
    const page = await fetchPage(input.url)
    return { content: page.text }
  })
  // LLM call with retry for rate limits
  .step('summarize', {
    retry: { maxAttempts: 3, backoff: 'exponential' },
    handler: async ({ prev }) => {
      const summary = await llm(prev.content)
      return { summary }
    },
  })
  // Extract entities from the summary
  .step('extract', async ({ prev }) => {
    const entities = await llm(prev.summary)
    return { ...prev, entities }
  })
  // Store results - if this fails, the LLM calls won't re-run
  .step('store', async ({ input, prev }) => {
    await db.insert({ url: input.url, ...prev })
  })
2

Connect storage

Point at a SQLite file. Reflow handles schema, checkpoints, and lease management.

// One file. start() handles schema automatically.
const storage = new SQLiteStorage('./reflow.db')
const engine = createEngine({
  storage,
  workflows: [pipeline],
})
3

Start and enqueue

Start the engine, then submit work as it arrives. If the process crashes, the next engine picks up where it left off.

// Starts polling. Survives crashes.
await engine.start()

await engine.enqueue('process-content', {
  url: 'https://example.com/article',
})

Where Reflow Fits

Best when you need durable local workflows, not distributed orchestration.

Use Reflow for

  • AI and LLM pipelines where repeated calls waste tokens and money
  • Multi-step agent workflows that need crash recovery
  • SaaS onboarding, billing, and provisioning flows
  • CLI imports and long-running local automation

Do not use Reflow for

  • Distributed workers across many machines
  • Sub-second workflow dispatch guarantees
  • Massive event fan-out and queue throughput
  • Teams that already need a full workflow platform