How Tako Works
Tako is a development and deployment platform made of three pieces:
- tako: the local CLI
- tako-server: the process running on each deployment host
- tako.sh: the app SDK for JavaScript/TypeScript and Go
The CLI builds and ships your app. The server terminates TLS, routes traffic, manages processes, runs health checks, stores runtime state, and performs rolling updates. The SDK adapts your app to Tako’s runtime protocol.
Management Path
Management actions start on your machine:
tako deploy --env production
tako scale 2 --env production
tako secrets sync
tako releases rollback abc1234 --env production
The CLI connects to each target server over SSH and sends JSON commands to the server’s Unix management socket:
/var/run/tako/tako.sock
During a graceful server reload, the stable socket path is a symlink to a PID-specific socket. The new server process swaps the symlink atomically when it is ready, so management clients connect to the active process.
Traffic Path
Production traffic enters tako-server through Pingora:
- TLS is selected by SNI.
- HTTP is redirected to HTTPS except ACME challenge paths.
- The request host and path are matched against deployed app routes.
- Static assets are served directly from the app’s deployed public/ directory when possible.
- Dynamic requests go to an app instance over loopback TCP.
Apps bind to 127.0.0.1 on an OS-assigned port. The SDK writes the actual port to fd 4 when the app is ready. Tako then routes traffic to that private endpoint.
App Identity
Remote apps are identified as:
{app}/{env}
The app name comes from name in tako.toml, or from the selected config file’s parent directory when name is omitted.
This lets one server host the same app in multiple environments:
dashboard/production
dashboard/staging
Deploy Flow
tako deploy targets one environment, defaulting to production.
At a high level it:
- validates tako.toml, secrets, routes, and server target metadata
- resolves runtime, package manager, preset, entrypoint, and build stages
- copies the source bundle into .tako/build
- runs build commands in the build directory
- merges static assets into public/
- writes app.json
- creates a target-specific artifact
- uploads and extracts it on each server
- runs production dependency install on each server
- runs the release command on the leader server when configured
- performs rolling update on each server
Servers receive prebuilt artifacts. They do not run app build steps.
Rolling Updates
Rolling update happens per server:
- start one new instance
- wait for health checks
- add it to the load balancer
- drain and stop one old instance
- repeat until the release is current
- update the current symlink
If startup or health checks fail, Tako kills new instances and keeps the old release serving.
Deploys are serialized per deployed app id. A second deploy for the same app and environment on the same server fails immediately with a retry message.
Health Checks
Tako actively probes each instance:
GET /status
Host: tako.internal
X-Tako-Internal-Token: <instance-token>
The SDK implements this endpoint. Probes use the private loopback endpoint, not the public proxy.
Health checks run every second during steady state and every 100ms while an instance is starting. A single failure after a successful probe marks the instance dead and triggers replacement.
Scaling
Desired instance count is server-side runtime state, not tako.toml config.
New deployments start with one desired instance per server. Change it with:
tako scale 2 --env production
tako scale 0 --env production
tako scale 1 --server la
Desired count persists across deploys, rollbacks, and server restarts.
When desired count is 0, the app scales to zero after its idle timeout. The next request triggers a cold start. If startup succeeds, the queued request continues. If startup times out, the proxy returns a 504 "App startup timed out" response; diagnostics include captured startup stdout/stderr when available.
Routing
Routes are configured per environment:
[envs.production]
routes = ["api.example.com", "example.com/app/*", "*.example.com/admin/*"]
Tako supports exact hosts, wildcard hosts, and path-prefixed routes. The most specific matching route wins. Unknown hosts return 404.
For static assets, Tako checks the deployed public/ directory. Path-prefixed routes also try prefix-stripped asset lookup, so /app/assets/main.js can serve /assets/main.js.
TLS
Tako uses SNI to choose certificates:
- exact certificate match
- wildcard fallback
- default self-signed fallback so HTTPS can complete
Public hostnames use ACME. Private or local hostnames such as localhost, .local, .test, .invalid, .example, and .home.arpa use self-signed certificates.
Wildcard routes require DNS-01 support. Configure it with:
tako servers setup-wildcard --env production
Secrets
Local secret source of truth is .tako/secrets.json. Values are encrypted per environment with AES-256-GCM, and keys live under .tako/keys/{env}.
Deploy sends secrets only when the server-side hash differs. Long-running app and worker processes receive secrets through fd 3 at spawn time, not through env vars. Release commands are one-shot and receive secrets as env vars.
Runtime Data
Each deployed app gets persistent data directories under:
/opt/tako/apps/{app}/{env}/data/
The app-owned path is exposed as TAKO_DATA_DIR.
Workflows and Channels
Workflow queues and channel storage are owned by tako-server. SDKs communicate with the server through a per-app internal Unix socket using TAKO_INTERNAL_SOCKET and TAKO_APP_NAME.
Workflows run in separate worker processes from HTTP instances. Workers can be always-on or scale-to-zero. Runs, steps, schedules, and event waiters are stored in SQLite under the app data directory.
Channels are durable pub-sub streams available at:
GET /channels/<name>
For JavaScript/TypeScript apps, <name> is the explicit name property passed
to defineChannel({ name: "<name>", ... }). SSE and WebSocket transports are supported,
with bounded replay for reconnects. Browser clients retry indefinitely across
network loss, laptop sleep, server restarts, and stream rotation, then resume
from the last received message id while it remains inside the replay window.
Local Development
tako dev talks to a persistent tako-dev-server daemon. The daemon owns local HTTPS, .test DNS routing, app process lifecycle, logs, wake-on-request, and workflow workers. Development routes can also include external hostnames when another tool points those hosts at the dev proxy.
Local app URLs are based on the app name:
https://dashboard.test/
On macOS, Tako uses a launchd-managed loopback proxy for portless :80 and :443 URLs. On Linux, it uses a loopback alias and redirect rules.
The local CA is generated once, stored under Tako’s home directory, and trusted through the system trust store after a one-time privileged setup.