{"slug":"pause-a-workflow-until-a-human-clicks-approve","url":"https://tako.sh/blog/pause-a-workflow-until-a-human-clicks-approve/","canonical":"https://tako.sh/blog/pause-a-workflow-until-a-human-clicks-approve/","title":"Pause a Workflow Until a Human Clicks Approve","date":"2026-04-21T01:27","description":"A walkthrough of step.waitFor + signal — an order-fulfillment workflow that parks for days waiting on admin approval, then resumes exactly where it left off.","author":null,"image":"eaf36ad099f6","imageAlt":null,"headings":[{"depth":2,"slug":"the-setup","text":"The setup"},{"depth":2,"slug":"the-signal","text":"The signal"},{"depth":2,"slug":"what-for-days-actually-means","text":"What “for days” actually means"}],"markdown":"Some workflows can't finish on their own. An order over a certain amount needs a human to eyeball it. A new vendor needs compliance to sign off. A refund above some threshold needs a manager. The work is half-done, the rest depends on a click that might land in two minutes or two days.\n\nThe naïve answer is to poll a database column from a cron job. The slightly less naïve answer is to split the workflow into two and wire them together with a webhook. Both are awful — the first burns CPU, the second turns one logical process into three and loses you all your local variables.\n\nTako's [durable workflow engine](/blog/durable-workflows-are-here) gives you a primitive that's just better: park the run on a named event, sleep the worker, and wake up exactly where you left off when the event fires.\n\n## The setup\n\nImagine an order-fulfillment workflow. 
Charge the card, run a fraud check, **wait for an admin to approve high-value orders**, then ship.\n\n```ts\n// workflows/fulfill-order.ts\nimport { defineWorkflow } from \"tako.sh\";\n\nexport default defineWorkflow<{ orderId: string }>(\"fulfill-order\", {\n  retries: 4,\n  handler: async (payload, step) => {\n    const order = await step.run(\"load-order\", () => db.orders.find(payload.orderId));\n\n    await step.run(\"charge\", () =>\n      stripe.charges.create({ amount: order.total, source: order.token, idempotencyKey: order.id }),\n    );\n\n    if (order.total > 50_000) {\n      const decision = await step.waitFor<{ approved: boolean; by: string }>(\n        `approval:order-${order.id}`,\n        { timeout: 7 * 24 * 3600 * 1000 }, // 7 days\n      );\n\n      if (decision === null) return step.bail(\"approval timed out — order held\");\n      if (!decision.approved) return step.bail(`rejected by ${decision.by}`);\n    }\n\n    await step.run(\"ship\", () => easypost.shipments.create({ to: order.address }));\n    await step.run(\"notify\", () => mailer.send(order.email, { orderId: order.id }));\n  },\n});\n```\n\nThe interesting line is `step.waitFor`. When the run hits it, the worker doesn't sit and spin — it serializes the run state, marks the row as parked in the per-app SQLite queue, inserts an `event_waiters` row keyed by the event name, and exits the handler. If nothing else is in flight, the worker subprocess itself shuts down. 
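\n\nThat parking step is easy to picture as plain bookkeeping. Here is a hypothetical in-memory sketch, with the `runs` and `eventWaiters` maps standing in for the SQLite tables (none of these names are Tako's actual schema):\n\n```ts\n// Hypothetical sketch of what parking a run records (not Tako's real schema).\ntype Run = { id: string; status: \"running\" | \"parked\" | \"pending\"; state: string };\n\nconst runs = new Map<string, Run>();\nconst eventWaiters = new Map<string, string>(); // event name -> waiting run id\n\n// Roughly what the worker does when a run hits step.waitFor:\nfunction park(runId: string, event: string, localState: unknown) {\n  runs.set(runId, {\n    id: runId,\n    status: \"parked\", // no process stays alive for this run\n    state: JSON.stringify(localState), // serialized so resume can restore it\n  });\n  eventWaiters.set(event, runId); // indexed so signal() can find it later\n  // ...then the handler exits and the worker is free to shut down.\n}\n\npark(\"run-42\", \"approval:order-42\", { orderId: \"42\", charged: true });\nconsole.log(runs.get(\"run-42\")?.status); // \"parked\"\n```\n\nEverything a future resume needs lives in those two rows.\n\n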
Zero CPU, zero memory, zero open connections — just a row in a file at `{tako_data_dir}/apps/<app>/runs.db`.\n\n## The signal\n\nAnywhere else in your code — an HTTP handler, a webhook receiver, an admin button — fire the matching signal:\n\n```ts\n// app/admin/approve.ts\nimport { signal } from \"tako.sh\";\n\nexport default async function fetch(req: Request) {\n  const { orderId, approverId } = await req.json();\n\n  await signal(`approval:order-${orderId}`, {\n    approved: true,\n    by: approverId,\n  });\n\n  return Response.json({ ok: true });\n}\n```\n\nThe signal lands on tako-server's [internal unix socket](/docs/tako-toml), the matching `event_waiters` row is consumed, the payload is stored as the result of the `waitFor` step, and the run flips back to `pending`. The supervisor wakes the worker, the worker re-claims the run, and execution resumes — `decision` is now `{ approved: true, by: \"...\" }` and the workflow ships the order.\n\nNotice what _doesn't_ happen on resume: the `load-order` and `charge` steps don't re-run. Their results are already in the `steps` table, keyed by `(run_id, name)`, so on the next claim they return cached values instantly. 
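\n\nThat caching is easy to sketch. A hypothetical version of the memoization, with a `stepResults` map standing in for the `steps` table (the names here are illustrative, not Tako's internals):\n\n```ts\n// Hypothetical sketch of step.run's checkpoint-and-replay behavior.\nconst stepResults = new Map<string, unknown>(); // key: `${runId}:${name}`\n\nasync function stepRun<T>(runId: string, name: string, fn: () => Promise<T> | T): Promise<T> {\n  const key = `${runId}:${name}`;\n  if (stepResults.has(key)) {\n    return stepResults.get(key) as T; // replay: fn never re-executes\n  }\n  const result = await fn(); // first execution: actually do the side effect\n  stepResults.set(key, result); // checkpoint the result before moving on\n  return result;\n}\n\n// First pass performs the charge; the resumed pass replays it from the cache.\nlet charges = 0;\nawait stepRun(\"run-42\", \"charge\", () => { charges++; return \"ch_1\"; });\nawait stepRun(\"run-42\", \"charge\", () => { charges++; return \"ch_1\"; });\nconsole.log(charges); // 1\n```\n\nA crash after `fn()` returns but before the checkpoint is written is the one window where a step can run twice, which is why the charge in the workflow above carries an `idempotencyKey`.\n\n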
That's the [`step.run`](/docs/tako-toml) checkpoint contract — at-least-once for the in-flight step, exactly-once for everything before it.\n\n## What \"for days\" actually means\n\n```d2\ndirection: right\n\nenq: \"POST /orders\\n(enqueue run)\" {style.fill: \"#9BC4B6\"; style.font-size: 14}\nworker1: \"Worker\\nclaims, runs steps,\\nhits waitFor\" {style.fill: \"#E88783\"; style.font-size: 14}\npark: \"Run parked\\n(row in runs.db)\" {style.fill: \"#FFF9F4\"; style.stroke: \"#2F2A44\"; style.font-size: 14}\nsignal: \"Admin clicks Approve\\n→ signal()\" {style.fill: \"#9BC4B6\"; style.font-size: 14}\nworker2: \"Worker re-spawns,\\nresumes after waitFor,\\nships order\" {style.fill: \"#E88783\"; style.font-size: 14}\n\nenq -> worker1\nworker1 -> park: \"exit\"\npark -> signal: \"...3 days later...\"\nsignal -> worker2: \"wake\"\n```\n\nWhile the run is parked, your VPS isn't holding anything open for it. The worker process is gone. tako-server can restart, the host can reboot, you can [redeploy](/blog/what-happens-when-you-run-tako-deploy) — the row stays in SQLite, the event waiter stays indexed, and `signal` will still find it three days from now. The 7-day `timeout` is just a safety valve; if it fires first, `waitFor` returns `null` and the workflow takes the cleanup path via `step.bail`.\n\nThe same primitive covers webhook callbacks, multi-step onboarding flows that wait on user input, payment-confirmation hops, and anything else where the next step is \"the world tells us something happened.\" One file, one default export, no external queue, no cron polling. Drop it in `workflows/`, run [`tako dev`](/docs/development), and the [embedded scale-to-zero worker](/blog/workflow-workers-scale-to-zero) wires up the rest."}