Architecture

Planner → bridge → executor, the run lifecycle.

Codencer's architecture is three roles, two contracts, one record. The roles are planner, bridge, and executor. The contracts are the planner-side task spec and the executor-side result evidence. The record is the run — a structured, append-only, SQLite-backed history of every step, attempt, artifact, validation, and gate the bridge executed.

High-level model

Planner / Chat
   |
   | Relay HTTP API or Relay MCP
   v
Relay
   |
   | Authenticated outbound websocket
   v
Connector
   |
   | Narrow allowlisted local API proxy
   v
Local Codencer Daemon
   |
   +--> SQLite state and settings
   +--> Artifact store
   +--> Workspace / git manager
   +--> Validation runner
   +--> Adapter dispatch
   +--> Gate and recovery services

Execution stays local. Planning stays outside Codencer. Each runtime role above maps to a real binary in the repo: orchestratord (daemon), codencer-connectord (connector), and codencer-relayd (relay), plus orchestratorctl (CLI), with codencer-cloudd, codencer-cloudworkerd, and codencer-cloudctl for the cloud control plane.

Core runtime roles

Local daemon (orchestratord)

The daemon is the local system of record. It owns the state machine. It dispatches adapters. It never accepts inbound traffic from the public internet.

Responsibilities:

  • manage run, step, attempt, and gate lifecycle
  • persist state to SQLite
  • dispatch adapters to executors
  • collect artifacts and validations
  • expose local /api/v1 and local compatibility/admin /mcp/call

The daemon binds to localhost. It is not the public-facing MCP server.
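
The lifecycle the daemon manages can be pictured as a small state machine over an append-only history. Everything below is an illustrative sketch: the state names, transitions, and `Run` type are assumptions for exposition, not Codencer's actual state machine.

```python
# Sketch of a run's lifecycle as the daemon might track it.
# State names and transitions are HYPOTHETICAL, not Codencer's real ones.
from dataclasses import dataclass, field

# Allowed transitions for a run record (illustrative).
TRANSITIONS = {
    "pending":   {"running"},
    "running":   {"gated", "succeeded", "failed"},
    "gated":     {"running", "failed"},   # a gate can resume or abort the run
    "succeeded": set(),                   # terminal
    "failed":    set(),                   # terminal
}

@dataclass
class Run:
    run_id: str
    state: str = "pending"
    # Append-only, in the spirit of the SQLite-backed record.
    history: list = field(default_factory=list)

    def advance(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state

run = Run("run-001")
run.advance("running")
run.advance("gated")      # a gate pauses the run for approval
run.advance("running")
run.advance("succeeded")
print(run.state)          # -> succeeded
print(len(run.history))   # -> 4
```

The point of the terminal states and the explicit transition table is the same as the daemon's: illegal transitions are rejected rather than silently recorded, so the history stays a trustworthy record.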

NOTE — blocker-for-use: Daemon-local /mcp/call is compatibility-only. It is not the public remote planner MCP contract. Remote planners belong on relay /mcp or cloud /api/cloud/v1/mcp. See docs/mcp/integrations.md.

Connector (codencer-connectord)

The connector is the outbound bridge between relay and local daemon. It opens an authenticated outbound websocket from the operator's network to the relay (no inbound port on the operator's network), then proxies a narrow allowlisted set of operations from relay to daemon. Discovery does not imply exposure — the config file is the allowlist.
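
The allowlist idea reduces to a single guard in front of the proxy. The operation names and the shape of the allowlist below are hypothetical; in the real connector the allowlist comes from its config file:

```python
# Sketch of the connector's allowlist guard. Operation names are
# HYPOTHETICAL illustrations; the real allowlist is the connector's config.
ALLOWLIST = {"run.open", "step.dispatch", "result.query"}

def proxy(operation: str, allowlist=ALLOWLIST) -> str:
    """Refuse anything not explicitly allowlisted: discovery != exposure."""
    if operation not in allowlist:
        raise PermissionError(f"operation not allowlisted: {operation}")
    return f"forwarded {operation} to local daemon"

print(proxy("run.open"))
# proxy("shell.exec") raises PermissionError: there is no raw shell surface.
```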

Relay (codencer-relayd)

The relay is the public planner-facing surface. It speaks HTTP and MCP. It exposes only instance-scoped orchestration operations — open a run, dispatch a step, query a result. It does not expose raw shell, raw filesystem, or arbitrary network tunneling. Authentication is bearer-token; the relay authenticates planners and connectors and routes requests to the right shared local instance.
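
Bearer-token authentication at the relay might look like the following sketch. The token values, the planner/connector role split, and the header parsing are assumptions for illustration, not the relay's actual scheme:

```python
# Illustrative bearer-token check; tokens and roles are HYPOTHETICAL.
import hmac

KNOWN_TOKENS = {"tok-planner-123": "planner", "tok-connector-456": "connector"}

def authenticate(authorization_header: str):
    """Return the caller's role for a valid 'Bearer <token>' header, else None."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Bearer" or not token:
        return None
    for known, role in KNOWN_TOKENS.items():
        # Constant-time comparison avoids leaking token bytes via timing.
        if hmac.compare_digest(token, known):
            return role
    return None

print(authenticate("Bearer tok-planner-123"))   # -> planner
print(authenticate("Basic dXNlcg=="))           # -> None
```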

Cloud control plane (codencer-cloudd, codencer-cloudworkerd, codencer-cloudctl)

For multi-tenant deployments. cloudd is the API. cloudworkerd runs scheduled work (provider polling, audit aggregation). cloudctl is the CLI. Self-host today; managed Cloud Preview launches when the public beta promise extends to it.

NOTE — blocker-for-use: Cloud runtime HTTP and cloud MCP are useful only in composed mode. A relay connector must be claimed into org/workspace/project scope before cloud can use it. Start codencer-cloudd with --relay-config or relay_config_path, then claim the runtime connector into tenant scope.

Public surfaces

Remote / public

  • relay HTTP API under /api/v2
  • relay MCP under /mcp
  • relay MCP compatibility path /mcp/call
  • connector websocket under /ws/connectors

Local / private by default

  • daemon HTTP API under /api/v1
  • daemon-local /mcp/call (compatibility/admin bridge — not for remote planners)

State and evidence

The current authoritative state lives in the daemon:

  • runs
  • steps
  • attempts
  • gates
  • artifacts
  • validations
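
A minimal sketch of that record, assuming one table per entity in SQLite. Table and column names here are illustrative, not Codencer's actual schema:

```python
# HYPOTHETICAL schema for the daemon's SQLite-backed record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE runs        (id TEXT PRIMARY KEY, state TEXT NOT NULL);
CREATE TABLE steps       (id TEXT PRIMARY KEY, run_id TEXT REFERENCES runs(id));
CREATE TABLE attempts    (id TEXT PRIMARY KEY, step_id TEXT REFERENCES steps(id));
CREATE TABLE gates       (id TEXT PRIMARY KEY, run_id TEXT REFERENCES runs(id), verdict TEXT);
CREATE TABLE artifacts   (id TEXT PRIMARY KEY, attempt_id TEXT REFERENCES attempts(id), path TEXT);
CREATE TABLE validations (id TEXT PRIMARY KEY, attempt_id TEXT REFERENCES attempts(id), passed INTEGER);
""")

# Append-only in spirit: rows are inserted, never rewritten.
conn.execute("INSERT INTO runs VALUES ('run-001', 'running')")
conn.execute("INSERT INTO steps VALUES ('step-001', 'run-001')")
conn.execute("INSERT INTO attempts VALUES ('att-001', 'step-001')")
(count,) = conn.execute("SELECT count(*) FROM runs").fetchone()
print(count)  # -> 1
```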

The relay stores only the remote control-plane state it needs:

  • connector identity
  • instance advertisement records
  • resource routing hints
  • audit events
  • enrollment / challenge state

Trust boundaries

  • planner decides what to do
  • relay authenticates and routes
  • connector limits remote reach to a narrow allowlist
  • daemon executes and records truth locally
  • adapters do local work only

There is no raw shell or arbitrary filesystem browsing surface in the relay or connector. See Security & audit for the deeper trust analysis.

WSL / Windows model

The practical default is:

  • daemon, connector, repo, worktrees, and artifacts in WSL/Linux
  • IDE and Antigravity broker on Windows when needed
  • relay wherever the operator hosts the remote control plane

For the deeper architectural detail, see the repo's docs/02_architecture.md. It tracks current implementation status.
