{"slug":"named-worker-groups-for-tako-workflows","url":"https://tako.sh/blog/named-worker-groups-for-tako-workflows/","canonical":"https://tako.sh/blog/named-worker-groups-for-tako-workflows/","title":"Named Worker Groups for Tako Workflows","date":"2026-04-29T04:29","description":"Tako workflows now support named worker pools, so a slow image job in the media group can't starve auth-critical email or default work.","author":null,"image":"dfeabc4221aa","imageAlt":null,"headings":[{"depth":2,"slug":"pools-named-after-what-they-do","text":"Pools, named after what they do"},{"depth":2,"slug":"sized-independently-in-takotoml","text":"Sized independently in tako.toml"},{"depth":2,"slug":"why-isolation-matters","text":"Why isolation matters"},{"depth":2,"slug":"per-server-tuning","text":"Per-server tuning"}],"markdown":"Workflow queues have one classic failure mode: a slow job clogs the pipe and everything else waits behind it. A 30-second image resize lands in the queue, every worker grabs one, and the password-reset email that should have gone out in 200ms sits in `pending` while your users refresh their inbox.\n\nThat's head-of-line blocking. The fix is the same one every queue ends up shipping eventually: separate pools for separate kinds of work. As of today, [Tako workflows](/blog/durable-workflows-are-here) have it built in.\n\n## Pools, named after what they do\n\nYou assign a workflow to a named pool with one option:\n\n```ts\n// workflows/process-image.ts\nimport { defineWorkflow } from \"tako.sh\";\n\nexport default defineWorkflow<{ key: string }>(\"process-image\", {\n  worker: \"media\",\n  retries: 4,\n  handler: async (payload, step) => {\n    const buf = await step.run(\"download\", () => s3.get(payload.key));\n    await step.run(\"resize\", () => sharp(buf).resize(1024).toBuffer());\n    await step.run(\"upload\", () => s3.put(`thumb/${payload.key}`, buf));\n  },\n});\n```\n\nWorkflows without `worker:` belong to the `default` group, so existing apps keep working unchanged. Add `worker: \"email\"` to your transactional sender, `worker: \"media\"` to anything CPU-heavy, and the runtime takes care of routing each enqueue to the right pool.\n\n## Sized independently in `tako.toml`\n\nEach named group is its own row in the config, with the same two knobs as the base block — `workers` (always-on processes) and `concurrency` (parallel runs per worker). The base `[workflows]` block sets defaults that named groups inherit and override:\n\n```toml\n[workflows]\nworkers = 0          # scale-to-zero default for everything\nconcurrency = 10\n\n[workflows.email]\nworkers = 1          # one always-on worker for fast, light jobs\nconcurrency = 20     # plenty of parallelism per worker\n\n[workflows.media]\nworkers = 2          # two workers for heavy, CPU-bound jobs\nconcurrency = 4      # but keep per-worker fan-out low\n\n[servers.lax.workflows.media]\nworkers = 4          # bump it up on the box that has more cores\n```\n\nThe precedence chain reads top-down — built-in defaults, then `[workflows]`, then `[workflows.<group>]`, then any `[servers.<name>.workflows.<group>]` override on a specific host. The full table is in [`tako.toml`](/docs/tako-toml).\n\n## Why isolation matters\n\nWithout separate pools, every worker is a generalist. One image job lands, every worker grabs an image job, and the queue depth for `send-email` climbs while CPU is pinned by `sharp`. 
With named groups, the runtime spawns a separate subprocess per group, each loading only the workflows assigned to it. The email worker picks up `send-email` runs and ignores `process-image` entirely; the media worker does the inverse. They contend for CPU at the OS scheduler, not at the queue.\n\n```d2\ndirection: right\n\nent1: \"enqueue send-email\" {style.fill: \"#9BC4B6\"; style.font-size: 14}\nent2: \"enqueue process-image\" {style.fill: \"#9BC4B6\"; style.font-size: 14}\nserver: \"tako-server\" {style.fill: \"#E88783\"; style.font-size: 14}\nemail: \"email worker\\n(workers = 1)\" {style.fill: \"#FFF9F4\"; style.stroke: \"#2F2A44\"; style.font-size: 14}\nmedia: \"media worker\\n(workers = 2)\" {style.fill: \"#FFF9F4\"; style.stroke: \"#2F2A44\"; style.font-size: 14}\ndef: \"default worker\\n(scale-to-zero)\" {style.fill: \"#FFF9F4\"; style.stroke: \"#2F2A44\"; style.font-size: 14}\n\nent1 -> server -> email\nent2 -> server -> media\nserver -> def: \"everything else\"\n```\n\nEach pool keeps its own [scale-to-zero](/blog/scale-to-zero-without-containers) lifecycle: a group with `workers = 0` doesn't spawn until the first matching enqueue or cron tick lands, and idles back out when there's nothing to do. So the `media` group can sit at zero overnight and your VPS doesn't pay rent on it; the `email` group can stay warm because paying a worker cold start on every 200ms email isn't free.\n\n## Per-server tuning\n\nThe same precedence rules cascade into per-server blocks. If your `lax` box has more cores than your `cdg` box, give `media` four workers there and one elsewhere — same `tako.toml`, [different defaults per host](/blog/one-config-many-servers), no fork in the workflow code.\n\nDrop `worker: \"name\"` into your handlers, add a `[workflows.<name>]` block to `tako.toml`, and `tako deploy`. The slow jobs get their own lane, the fast jobs stay fast, and your password resets stop waiting in line behind a thumbnail render.\n
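One last footnote for the multi-server case: the \"four workers there, one elsewhere\" setup from the tuning section is, in full, just this, with `lax` standing in for your bigger host:\n\n```toml\n[workflows.media]\nworkers = 1          # one media worker everywhere by default\n\n[servers.lax.workflows.media]\nworkers = 4          # the box with more cores gets four\n```"}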