Self-hosting
Local-only, relay/runtime, cloud control plane.
Codencer is local-first by design. The execution layer always runs on the operator's own machine, near the code. The relay layer can run on the operator's infrastructure too. Three deployment modes ship today: local-only, self-host relay, and self-host cloud control plane.
Local-only
The simplest deployment. One machine, one user, one daemon.
[ Planner (any chat) ] ── HTTP ──> [ orchestratord (localhost) ] ── adapter ──> [ Local executor ] ── git ──> [ Code ]
│
└──> SQLite, artifacts
Use this when:
- You're a solo operator working on your own machines
- You want the lowest-friction path to proof
- Your planner is a desktop chat client (ChatGPT, Claude Desktop) or a CLI
- You don't need cross-machine access
Setup:
git clone https://github.com/lookmanrays/codencer
cd codencer
make build-supported
make smoke
Runs in a few minutes. The daemon binds to localhost and is not internet-facing. The repo's SETUP.md is the canonical reference.
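To double-check the localhost-only posture after the smoke run, you can confirm orchestratord is bound to loopback. The port and health path below are placeholders, not documented defaults; SETUP.md has the real values.
# List listening sockets owned by orchestratord; the daemon should bind 127.0.0.1, not 0.0.0.0
ss -tlnp | grep orchestratord
# Hypothetical loopback probe; replace 4100 and /healthz with your configured port and endpoint
curl -sf http://127.0.0.1:4100/healthz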
Self-host relay + runtime
Adds a relay that the planner talks to over HTTP/MCP. The connector opens an outbound websocket from the operator's network to the relay; the daemon stays local and never accepts inbound traffic. This is the path that makes remote planners work without exposing a raw remote shell.
[ Planner ] ── HTTP/MCP ──> [ codencer-relayd ] ↔ [ codencer-connectord ] ── localhost ──> [ orchestratord ] ──> [ Executor ]
                                     │                                                             │
                                     └──> Audit log                                                └──> SQLite
Use this when:
- You're running a team of operators, each with their own machine
- The planner needs to be remote (cloud-hosted ChatGPT, Claude.ai web)
- Your security model says no inbound ports to engineer machines
- You want a single audit log across the team
Setup:
make build
PLANNER_TOKEN=<planner-token> make self-host-smoke-mcp
The smoke target verifies the full chain: planner → relay → connector → daemon → adapter → result. The repo's SELF_HOST_REFERENCE.md is the deep reference.
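From the planner side, the relay speaks MCP over HTTP, so a quick connectivity check is a single authenticated JSON-RPC request. The relay hostname, path, and bearer-token header below are illustrative assumptions; only the PLANNER_TOKEN value comes from the setup above.
# Hypothetical MCP reachability check against a self-hosted relay; adjust host and path to your deployment
curl -sf https://relay.example.internal/mcp \
  -H "Authorization: Bearer $PLANNER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'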
Self-host cloud control plane
Multi-tenant. Adds the cloud control plane (codencer-cloudd, codencer-cloudworkerd, codencer-cloudctl) that bootstraps tenants, provisions runtime workers, and brokers between relay instances and tenant runtimes.
[ Planner ] ──> [ codencer-cloudd ] ──> [ Tenant runtime ] ──> [ engineer's daemon ] ──> [ Executor ]
                         │
                         └──> Tenant store, audit, quotas
Use this when:
- You're running across teams or business units inside one org
- You need tenant isolation, quotas, and central audit
- Your enterprise architecture requires the control plane inside your VPC
- You're piloting Codencer with the intent of moving to managed Cloud Preview later
Setup:
make build-cloud
make cloud-smoke
The repo's CLOUD_SELF_HOST.md covers the full bootstrap, tenant provisioning, and the SSO/SCIM-front-door pattern.
NOTE —
blocker-for-use: Cloud runtime HTTP and cloud MCP are useful only in composed mode. A relay connector must be claimed into org/workspace/project scope before cloud can use it. Start codencer-cloudd with --relay-config or relay_config_path, then claim the runtime connector into tenant scope.
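A minimal sketch of the composed-mode startup the note describes. The --relay-config flag comes from the note; the config file path, the cloudctl subcommand, and its flags are assumptions showing the shape of the flow, not documented syntax. CLOUD_SELF_HOST.md is the reference.
# Start the control plane with relay configuration (the path is an example only)
codencer-cloudd --relay-config /etc/codencer/relay.yaml
# Claim the runtime connector into org/workspace/project scope (subcommand and flags are hypothetical)
codencer-cloudctl connector claim --org acme --workspace platform --project api <connector-id>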
NOTE —
friction: Connector enrollment-token issuance remains relay-config backed at v0.2.0-beta. Cloud hosts connector ingress in composed mode, but a cloud-native enrollment-token lifecycle is deferred. Issue enrollment tokens through the relay-backed self-host flow today.
Docker baseline
Every self-host mode has a Docker-baselined verification path:
make verify-beta-docker
This builds the supported binaries inside a Docker image and runs the canonical smoke targets inside it. Use this when you want to verify Codencer works on a known-clean image before deploying to your real infrastructure.
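In CI this is typically a single gating step. Only the make target comes from the docs; the rest is generic shell scaffolding.
# Fail fast if Docker is unavailable, then run the canonical Docker-baselined verification
docker version >/dev/null 2>&1 || { echo "docker daemon not reachable"; exit 1; }
make verify-beta-docker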
NOTE —
friction: make cloud-stack-smoke is the Docker baseline only. The Docker Compose stack does not create a usable runtime instance by itself, and the broader runtime/MCP/SDK proof lives elsewhere. Use binary-native make cloud-smoke with composed-mode inputs when you need claimed runtime visibility, cloud runtime HTTP, cloud MCP, or official Go SDK proof.
Provider connectors
Beyond executor adapters, Codencer ships provider connectors — integrations with collaboration tools that planners use as part of their workflow. At v0.2.0-beta:
- Slack is the strongest provider-shaped operator path today.
- Jira is polling-first. Use codencer-cloudworkerd for Jira polling. Webhook ingest is explicitly deferred at this tag.
- GitHub, GitLab, Linear are narrower operator/package surfaces. Use the generic cloud install/action routes and the provider-focused tests in docs/CLOUD_CONNECTORS.md.
NOTE —
blocker-for-use: Jira webhook ingest returns 501 webhook_deferred at v0.2.0-beta. Do not treat /webhook as the supported Jira ingest path. Configure config.jql or config.project_key with cloudworkerd polling instead.
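A sketch of what the polling configuration might look like. The config.jql and config.project_key keys come from the note above; the file name, JSON shape, and values are assumptions, and docs/CLOUD_CONNECTORS.md has the real schema.
# Illustrative Jira connector config for cloudworkerd polling (structure assumed)
cat > jira-connector.json <<'EOF'
{
  "config": {
    "project_key": "ENG",
    "jql": "project = ENG AND status != Done"
  }
}
EOF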
What "code never leaves the customer's network" means
In every self-host mode above, the executor runs locally on the operator's machine. Code never leaves the operator's network. The relay sees task specs and result evidence — structured metadata, not source. The connector proxies a narrow allowlisted API; it does not tunnel arbitrary network traffic. The cloud control plane sees tenant state and audit metadata; it does not see source.
This is the trust model that makes Codencer pilotable in regulated industries. See Security & audit for the deeper trust analysis.
What's deferred from the beta promise
NOTE —
friction: WSL / Windows / agent-broker topology is operator guidance, not an automated smoke-proof matrix. Keep the repo checkout, daemon, connector, worktrees, and artifacts in WSL/Linux; keep agent-broker on Windows only when needed; inspect results through APIs and CLI rather than raw cross-side paths.
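In practice the simplest check is that the checkout (and therefore the daemon's worktrees and artifacts) lives on the Linux filesystem inside WSL, not on a mounted Windows drive. The snippet below is generic shell, not a Codencer command.
# Inside WSL, from the repo checkout: warn if the working copy sits on /mnt (a Windows mount)
case "$PWD" in /mnt/*) echo "move the checkout off the Windows mount";; *) echo "checkout is on the Linux filesystem";; esac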
NOTE —
friction: Managed Cloud Preview is on the roadmap. Until then, Codencer is self-host-only. The architecture for the managed Cloud is the same as the cloudd/cloudworkerd/cloudctl you can run yourself today; the managed offering adds a hosted relay and tenant provisioning we operate.