Deployment
This guide covers the production path: installing tako-server, registering servers, mapping environments, deploying releases, scaling instances, syncing secrets, and day-two operations.
Install the Server
Run the server installer as root on each target host:
sudo sh -c "$(curl -fsSL https://tako.sh/install-server.sh)"
The installer:
- creates the `tako` service/SSH user
- creates `tako-app` for process-separation setups
- installs `tako-server` to `/usr/local/bin/tako-server`
- installs systemd or OpenRC service files
- configures privileged bind support for ports 80 and 443
- creates `/opt/tako` and `/var/run/tako`
- starts and verifies the service
- installs helpers needed for graceful reload and upgrade
For GitHub-hosted release downloads, the installer uses GH_TOKEN when set, falling back to GITHUB_TOKEN.
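The token precedence can be expressed in one line of POSIX shell; the helper name below is illustrative:

```shell
# Token precedence used for GitHub-hosted downloads:
# GH_TOKEN first, then GITHUB_TOKEN, else empty (unauthenticated).
gh_token() {
  printf '%s' "${GH_TOKEN:-${GITHUB_TOKEN:-}}"
}
```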
Set TAKO_SSH_PUBKEY to install an SSH public key non-interactively:
curl -fsSL https://tako.sh/install-server.sh | sudo TAKO_SSH_PUBKEY="$(cat ~/.ssh/id_ed25519.pub)" sh
Register the Server Locally
Add each server to your local global config:
tako servers add 203.0.113.10 --name la
tako servers add 203.0.113.11 --name nyc --description "New York"
The add command verifies SSH, detects the server target (arch and libc), and stores it in config.toml. Deploy requires that target metadata so it can choose the correct artifact.
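As a point of reference, a stored server entry might look like the following. This is a hypothetical sketch, not Tako's actual schema; the field names are illustrative:

```toml
# Hypothetical shape of a server entry in config.toml.
[[servers]]
name = "la"
host = "203.0.113.10"
target = "x86_64-linux-gnu"  # detected arch and libc
```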
List configured servers:
tako servers ls
Configure the Project
Map an environment to one or more servers:
name = "dashboard"
runtime = "bun"
preset = "tanstack-start"
[envs.production]
route = "dashboard.example.com"
servers = ["la", "nyc"]
Each non-development environment must define route or routes.
Routes can be exact hosts, wildcard hosts, or host plus path:
[envs.production]
routes = [
"dashboard.example.com",
"example.com/app/*",
"*.example.com/admin/*"
]
Deploy
tako deploy
tako deploy --env staging
tako deploy --env production --yes
--env defaults to production. Interactive production deploys require confirmation unless --yes / -y is set.
Deploy builds locally, ships artifacts to every server in the environment, prepares the release, runs the release command if configured, then rolls traffic to the new build.
If [envs.production].servers is empty and exactly one global server is configured, deploy can select it and write it into tako.toml. Otherwise, declare servers explicitly.
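Putting the pieces together, a project that deploys staging to one server and production to both might look like this (a sketch using only the keys shown above; hostnames are placeholders):

```toml
name = "dashboard"
runtime = "bun"
preset = "tanstack-start"

[envs.staging]
route = "staging.dashboard.example.com"
servers = ["la"]

[envs.production]
routes = ["dashboard.example.com", "example.com/app/*"]
servers = ["la", "nyc"]
```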
Build and Artifact Contract
The deploy source root is the git root when available, otherwise the selected config file’s parent directory.
Tako copies the source bundle into .tako/build, respecting .gitignore, symlinks local node_modules for build tools, runs configured build stages, verifies the runtime main, and archives the result.
Always excluded from deploy artifacts:
- `.git/`
- `.tako/`
- `.env*`
- `node_modules/`
Additional excludes come from [build].exclude, per-stage exclude, and .gitignore.
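Assuming `[build].exclude` takes an array of glob patterns (the exact value shape is not specified here), an excludes sketch might look like:

```toml
[build]
exclude = ["docs/", "fixtures/", "*.test.ts"]
```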
Target artifacts are cached under .tako/artifacts/ and validated by checksum and size before reuse.
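The cache-validation idea can be sketched in shell. This is illustrative, not Tako's implementation:

```shell
# Reuse a cached artifact only when both its size and checksum match
# the values recorded when it was built.
artifact_is_valid() {
  local file="$1" want_size="$2" want_sha="$3"
  [ -f "$file" ] || return 1
  local size sha
  size=$(wc -c < "$file")
  sha=$(sha256sum "$file" | awk '{print $1}')
  [ "$size" -eq "$want_size" ] && [ "$sha" = "$want_sha" ]
}
```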
Release Commands
Use release for work that must happen once before traffic shifts:
release = "bun run db:migrate"
Override or clear it per environment:
[envs.staging]
release = ""
The release command runs only on the leader server, inside the new release directory, after production dependency install and before rolling update. It receives app env, secrets, TAKO_BUILD, and TAKO_DATA_DIR.
If the command fails or times out after 10 minutes, deploy aborts on every server. The old release keeps serving.
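Conceptually, the timeout behavior resembles wrapping the command in coreutils `timeout`. Tako enforces this internally; the helper below is only a sketch:

```shell
# Run a release command with a deadline; a non-zero exit or a timeout
# aborts the deploy and leaves the old release serving.
run_release() {
  local cmd="$1" deadline="${2:-600}"  # 600 s = 10 minutes
  if timeout "$deadline" sh -c "$cmd"; then
    echo "release ok"
  else
    echo "release failed or timed out; aborting deploy" >&2
    return 1
  fi
}
```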
Rolling Updates
On each server, Tako:
- starts a new instance
- waits for health
- adds it to the load balancer
- drains an old instance
- repeats until the target count is replaced
- updates the `current` symlink
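The wait-for-health step above can be sketched as a generic polling loop. This is illustrative; the health-check command is a stand-in for Tako's real probe:

```shell
# Poll a health-check command until it succeeds or attempts run out.
wait_for_health() {
  local check="$1" attempts="${2:-30}" delay="${3:-1}" i=1
  while [ "$i" -le "$attempts" ]; do
    if sh -c "$check" >/dev/null 2>&1; then
      return 0  # healthy: safe to add to the load balancer
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1  # never became healthy: keep old instances serving
}
```

In practice the check would be something like a `curl -fsS` against the new instance's local port.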
The rolling target count comes from server-side desired instance state. Deploy does not reset it.
If desired instances are 0, deploy still keeps one warm instance of the new build so the app is reachable immediately after deploy; it can idle down later.
Scaling
Scale every server in an environment:
tako scale 2 --env production
tako scale 0 --env production
Scale one server:
tako scale 3 --env production --server la
Outside a project directory:
tako scale 2 --app dashboard/production --server la
Desired counts persist across deploys, rollbacks, and server restarts.
Secrets
Set local encrypted secrets:
tako secrets set DATABASE_URL --env production
tako secrets set API_KEY --env staging
Sync them to servers:
tako secrets sync
tako secrets sync --env production
Deploy compares a local secrets hash with the server’s current hash. If unchanged, secrets are not resent. Fresh HTTP instances and workflow workers receive secrets through fd 3 at spawn time. Secret sync also refreshes workflow runtime and rolling-restarts HTTP instances so new processes receive updated values.
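The unchanged-secrets check can be illustrated with a deterministic, order-independent hash over KEY=VALUE pairs. This is an assumption for illustration; Tako's exact hashing scheme is not specified here:

```shell
# Hash a secrets file in an order-independent way, so syncing can be
# skipped when the set of KEY=VALUE pairs is unchanged.
secrets_hash() {
  sort "$1" | sha256sum | awk '{print $1}'
}
```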
TLS
Public routes use Let’s Encrypt automatically. Certificates renew 30 days before expiry.
Private and local hostnames use self-signed certificates:
- `localhost`
- `*.localhost`
- single-label hosts
- `.local`, `.test`, `.invalid`, `.example`, `.home.arpa`
Wildcard routes require DNS-01 configuration:
tako servers setup-wildcard --env production
If a wildcard route is deployed without DNS provider configuration, deploy fails with guidance.
Logs and Status
tako logs --env production
tako logs --env production --tail
tako logs --env production --json
tako servers status
tako releases ls --env production
servers status works from any directory and reports all configured servers.
logs includes app output plus server lifecycle, health, and proxy diagnostics for the app’s deployed routes. JS/TS production HTTP entrypoints route console.*, uncaught exceptions, and unhandled rejections into the same app log stream. Use --json for compact JSONL in agents and automation.
Rollback
tako releases ls --env production
tako releases rollback abc1234 --env production --yes
Rollback reuses the selected release, current routes, env, secrets, and desired scaling state, then performs the standard rolling-update flow.
Server Maintenance
Graceful reload:
tako servers restart la
Full restart:
tako servers restart la --force
Upgrade all servers or one server:
tako servers upgrade
tako servers upgrade la
Upgrade uses temporary process overlap and the management socket handoff so clients connect to the ready process.
GitHub-backed upgrade metadata and remote archive downloads use GH_TOKEN when set, falling back to GITHUB_TOKEN.
Data Layout
Production data lives under /opt/tako:
/opt/tako/
config.json
tako.db
runtimes/
certs/
apps/
{app}/{env}/
current -> releases/{version}
data/
logs/
releases/{version}/
The management socket lives at:
/var/run/tako/tako.sock
Common Failure Behavior
- insufficient disk space fails before upload
- missing server target metadata fails before deploy
- concurrent deploys for the same app fail immediately
- failed release commands abort before traffic shifts
- failed warm startup keeps old instances serving
- failed partial releases are cleaned up automatically
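The disk-space pre-flight can be sketched as follows; Tako performs its own check before upload, so this is illustrative only:

```shell
# Fail fast when the target path's filesystem has less free space
# than the artifact requires (sizes in kilobytes).
ensure_free_space() {
  local path="$1" need_kb="$2"
  local avail_kb
  avail_kb=$(df -Pk "$path" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge "$need_kb" ]
}
```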