Each step is checkpointed. Completed LLM calls, API requests, and payments are never repeated.
Workflow names, validated inputs, and chained step outputs stay aligned at compile time.
Durable state in a single file. Easy to ship in CLIs, small SaaS apps, and agent tools.
No brokers, no control plane, no workflow cluster. Start with a package and a database file.
Why It Matters
Same crash. Only one wastes your money.
Both pipelines scrape a page, run two LLM calls, then store results. Both crash during storage. Watch what happens next.
Both pipelines run the same steps at the same cost:

- scrape: completed
- summarize ($0.12): running...
- extract ($0.08): pending
- store: pending

After the crash, the durable pipeline resumes at store without repeating the LLM calls; the naive one re-runs summarize and extract and pays the $0.20 again.
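The difference comes down to one rule: a step's output is checkpointed the moment it completes, and a completed step is replayed from its checkpoint instead of being re-run. A minimal sketch of that rule, with the crash scenario above (all names here are illustrative, not Reflow's actual internals):

```typescript
// Sketch of checkpoint-on-complete replay. Hypothetical, not Reflow's real implementation.
type Step = { name: string; run: (prev: unknown) => unknown };

// Persisted map of stepName -> saved output for one workflow run.
type Checkpoints = Map<string, unknown>;

function executeRun(steps: Step[], checkpoints: Checkpoints): void {
  let prev: unknown = undefined;
  for (const step of steps) {
    if (checkpoints.has(step.name)) {
      // Already paid for this step on a previous attempt: reuse the saved output.
      prev = checkpoints.get(step.name);
      continue;
    }
    prev = step.run(prev); // may throw (e.g. the db insert in `store`)
    checkpoints.set(step.name, prev); // checkpoint before moving on
  }
}

// Simulate the scenario above: two LLM calls, then a store that crashes once.
let llmCalls = 0;
let storeAttempts = 0;
const steps: Step[] = [
  { name: 'scrape', run: () => ({ content: 'page text' }) },
  { name: 'summarize', run: (p: any) => { llmCalls++; return { summary: 'summary of ' + p.content }; } },
  { name: 'extract', run: (p: any) => { llmCalls++; return { ...p, entities: ['x'] }; } },
  { name: 'store', run: (p) => { storeAttempts++; if (storeAttempts === 1) throw new Error('db down'); return p; } },
];

const checkpoints: Checkpoints = new Map();
try { executeRun(steps, checkpoints); } catch { /* crash during storage */ }
executeRun(steps, checkpoints); // retry after the crash

console.log(llmCalls);      // 2: each LLM step ran exactly once
console.log(storeAttempts); // 2: the failed attempt plus the successful retry
```

The retry re-enters the loop, skips the three checkpointed steps, and only repeats the one that never completed.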
Quickstart
Three steps. One file. Zero infrastructure.
Define your pipeline
Scrape a page, summarize with an LLM, extract entities, store results. Each step's return value flows to the next via prev. LLM calls have automatic retry with exponential backoff.
const pipeline = createWorkflow({
  name: 'process-content',
  input: z.object({ url: z.string() }),
})
  // Fetch the page content
  .step('scrape', async ({ input }) => {
    const page = await fetchPage(input.url)
    return { content: page.text }
  })
  // LLM call with retry for rate limits
  .step('summarize', {
    retry: { maxAttempts: 3, backoff: 'exponential' },
    handler: async ({ prev }) => {
      const summary = await llm(prev.content)
      return { summary }
    },
  })
  // Extract entities from the summary
  .step('extract', async ({ prev }) => {
    const entities = await llm(prev.summary)
    return { ...prev, entities }
  })
  // Store results - if this fails, the LLM calls won't re-run
  .step('store', async ({ input, prev }) => {
    await db.insert({ url: input.url, ...prev })
  })
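The retry option on summarize asks for up to three attempts with exponentially growing delays between them. A plausible sketch of that policy (the helper name and base delay are assumptions, not Reflow's documented defaults):

```typescript
// Sketch of retry with exponential backoff. Names and defaults are illustrative.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts: number,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // budget exhausted, surface the error
      // 100ms, 200ms, 400ms, ... the delay doubles after each failed attempt
      await sleep(baseDelayMs * 2 ** (attempt - 1));
    }
  }
}

// A flaky call that fails twice (like a rate-limited LLM), then succeeds.
let attempts = 0;
const flaky = async () => {
  attempts++;
  if (attempts < 3) throw new Error('429 rate limited');
  return 'summary';
};

const result = await withRetry(flaky, 3, 1);
console.log(result, attempts); // 'summary' on the third attempt
```

Doubling the delay gives a rate-limited provider progressively more room to recover, while maxAttempts keeps a genuinely broken call from retrying forever.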
Connect storage
Point at a SQLite file. Reflow handles schema, checkpoints, and lease management.
// One file. start() handles schema automatically.
const storage = new SQLiteStorage('./reflow.db')
const engine = createEngine({
  storage,
  workflows: [pipeline],
})
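Lease management is what lets one engine (or its replacement after a crash) work on a run exclusively: a worker takes a short-lived lease on a run, and once that lease expires without renewal, the run becomes claimable again. A minimal in-memory sketch of those semantics (the field names are assumptions; Reflow would persist this state in the SQLite file, where the claim is a single guarded UPDATE):

```typescript
// Sketch of lease-based run claiming. In-memory here for illustration only.
interface RunRow {
  id: string;
  leaseOwner: string | null;
  leaseExpiresAt: number; // epoch ms; 0 = never leased
}

// Claim succeeds only if the run is unleased or its previous lease has expired.
function tryClaim(run: RunRow, worker: string, ttlMs: number, now: number): boolean {
  if (run.leaseOwner !== null && run.leaseExpiresAt > now) return false; // held by a live worker
  run.leaseOwner = worker;
  run.leaseExpiresAt = now + ttlMs;
  return true;
}

const run: RunRow = { id: 'run-1', leaseOwner: null, leaseExpiresAt: 0 };

const t0 = 1_000;
const claimedByA = tryClaim(run, 'engine-a', 5_000, t0);         // true: run was unleased
const claimedByB = tryClaim(run, 'engine-b', 5_000, t0 + 1_000); // false: lease still live
// engine-a crashes and never renews; after the TTL the lease expires.
const reclaimedByB = tryClaim(run, 'engine-b', 5_000, t0 + 6_000); // true: lease expired

console.log(claimedByA, claimedByB, reclaimedByB); // true false true
```

Expiring leases are what turn a crashed process into a recoverable one: nothing has to detect the crash explicitly, the silence does it.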
Start and enqueue
Start the engine, then submit work as it arrives. If the process crashes, the next engine picks up where it left off.
// Starts polling. Survives crashes.
await engine.start()
await engine.enqueue('process-content', {
  url: 'https://example.com/article',
})
Where Reflow Fits
Best when you need durable local workflows, not distributed orchestration.
Use Reflow for
- AI and LLM pipelines where repeated calls waste tokens and money
- Multi-step agent workflows that need crash recovery
- SaaS onboarding, billing, and provisioning flows
- CLI imports and long-running local automation
Do not use Reflow for
- Distributed workers across many machines
- Sub-second workflow dispatch guarantees
- Massive event fan-out and queue throughput
- Teams that already need a full workflow platform