{"slug":"one-config-many-servers","url":"https://tako.sh/blog/one-config-many-servers/","canonical":"https://tako.sh/blog/one-config-many-servers/","title":"One Config, Many Servers","date":"2026-04-05T14:04","description":"One tako.toml, two environments, three servers across regions — how Tako takes a side project all the way to a real production setup.","author":null,"image":"e12303175ea2","imageAlt":null,"headings":[{"depth":2,"slug":"one-config-many-environments","text":"One config, many environments"},{"depth":2,"slug":"deploying-to-an-environment","text":"Deploying to an environment"},{"depth":2,"slug":"scaling-per-environment-per-server","text":"Scaling per environment, per server"},{"depth":2,"slug":"adding-a-region-later","text":"Adding a region later"}],"markdown":"Your side project works. It's deployed on one VPS, it has real HTTPS, you're happy. Now someone actually depends on it, and the questions start piling up: where do I test changes before prod? Where does the second server go when the first one falls over? How do I point staging at a different database without duplicating the whole config?\n\nTako's answer is: add a few lines to your `tako.toml`. One config file describes every environment you run and every server each environment lives on. Deploys know the difference, rolling updates happen in parallel, and rollback is a single command.\n\n## One config, many environments\n\nEnvironments are declared under `[envs.<name>]`. Each gets its own routes, its own server list, and its own idle-scaling policy. 
Here's a real-shaped [`tako.toml`](/docs/tako-toml):\n\n```toml\nname = \"myapp\"\n\n[build]\nrun = \"bun run build\"\n\n[vars]\nLOG_FORMAT = \"json\"\n\n[vars.production]\nAPI_URL = \"https://api.myapp.com\"\n\n[vars.staging]\nAPI_URL = \"https://api.staging.myapp.com\"\n\n[envs.production]\nroute = \"myapp.com\"\nservers = [\"la\", \"nyc\", \"fra\"]\nidle_timeout = 600\n\n[envs.staging]\nroute = \"staging.myapp.com\"\nservers = [\"staging\"]\nidle_timeout = 60\n```\n\n`[vars]` is the base, `[vars.<env>]` layers on top. Staging gets aggressive idle timeouts so it scales to zero almost immediately — [no resources wasted on code nobody is looking at](/blog/scale-to-zero-without-containers). Production gets longer warm windows and a three-server fleet.\n\nThe same server name can host multiple environments of the same app. `staging.myapp.com` and `myapp.com` could both live on one box if you want — Tako keeps them separated on disk under `/opt/tako/apps/myapp/staging` and `/opt/tako/apps/myapp/production`, with independent processes, secrets, and release histories.\n\n## Deploying to an environment\n\n`tako deploy` defaults to `production`. To ship staging instead:\n\n```bash\ntako deploy --env staging\n```\n\nTako builds the artifact once, then uploads and starts it on every server listed under `[envs.staging].servers` **in parallel**. Each server runs its own [rolling update](/docs/deployment): start a new instance, wait for the SDK readiness signal, drain the old one, repeat. 
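\n\nThe per-server loop is easy to picture in code. Here's an illustrative TypeScript sketch (the `startNew` and `waitReady` hooks are invented for the example; this is not Tako's actual implementation):\n\n```typescript\n// Illustrative sketch of one server's rolling update; not Tako's real internals.\ninterface Instance { stop(): Promise<void> }\n\nasync function rollingUpdate(\n  running: Instance[],                        // instances on the old release\n  startNew: () => Promise<Instance>,          // boots an instance of the new release\n  waitReady: (i: Instance) => Promise<void>,  // resolves on the SDK readiness signal\n): Promise<Instance[]> {\n  const fresh: Instance[] = [];\n  for (const old of running) {\n    const next = await startNew();  // start the replacement first\n    await waitReady(next);          // proceed only once it signals ready\n    await old.stop();               // then drain the old instance\n    fresh.push(next);\n  }\n  return fresh;\n}\n```\n\nThe ordering is the whole point: capacity never dips, because an old instance is drained only after its replacement is ready.\n\n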
If `fra` falls behind because of network weather, `la` and `nyc` don't wait for it — partial failures are reported at the end and successful servers stay on the new release.\n\n```d2\ndirection: right\n\nlaptop: tako deploy --env production {\n  shape: rectangle\n}\n\nartifact: Build artifact {\n  shape: document\n}\n\nla: la {shape: hexagon}\nnyc: nyc {shape: hexagon}\nfra: fra {shape: hexagon}\n\nlaptop -> artifact: build once\nartifact -> la: SFTP + rolling update\nartifact -> nyc: SFTP + rolling update\nartifact -> fra: SFTP + rolling update\n```\n\nSecrets follow the same model. They're [encrypted locally](/docs/cli), keyed per environment, and pushed to each server over the management socket. Tako hashes the local secrets and asks each server whether they match before sending anything — if they do, the deploy skips the secrets payload entirely. New servers and drifted ones are caught automatically.\n\n## Scaling per environment, per server\n\nInstance counts are runtime state, not config. You set them with [`tako scale`](/docs/cli):\n\n```bash\n# two warm instances on every production server\ntako scale 2 --env production\n\n# but LA is the big one — bump it to six\ntako scale 6 --server la --env production\n\n# staging stays on-demand (scale to zero)\ntako scale 0 --env staging\n```\n\nThese counts persist across deploys, rollbacks, and server restarts, stored on each server rather than baked into `tako.toml`. That means a production hotfix can't accidentally undo last night's scale-up decision.\n\n## Adding a region later\n\nThe nice thing about declaring servers per environment is that growing is just a list edit. Register the new server globally once:\n\n```bash\ntako servers add <host>\n```\n\nAdd its name to `[envs.production].servers`, run `tako deploy`, and your app is now serving from the new region alongside the existing ones. 
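\n\nAssuming the new box was registered under the name `fra` (matching the example config at the top of this post), the list edit is a single entry:\n\n```toml\n[envs.production]\nroute = \"myapp.com\"\nservers = [\"la\", \"nyc\", \"fra\"]  # \"fra\" is the newly added region\nidle_timeout = 600\n```\n\n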
Point the DNS record for `myapp.com` at all three IPs (or front them with Cloudflare and let smart routing pick the nearest), and you've got your own edge network running on commodity VPS boxes. No Kubernetes, no orchestrator, no control plane to babysit.\n\nTako is aiming to be more than a deploy tool — the same `tako.toml` that describes your fleet today will describe [channels, queues, and workflows](/blog/why-tako-ships-an-sdk) tomorrow. Environments and multi-server deploys are the floor, not the ceiling.\n\nRead the [deployment docs](/docs/deployment) for the full story, or [how Tako works](/docs/how-tako-works) for the architecture underneath."}