How it works

The picture before the prose.

Codencer's architecture is three roles, two contracts, one record. The roles are planner, bridge, and executor. The contracts are TaskSpec (planner → bridge) and ResultSpec (bridge → planner). The record is the run.

If you want the deep prose, the docs are at /docs. This page is the picture.
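The two contracts can be sketched as plain data types. The field names below are illustrative assumptions for this page, not Codencer's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """Planner -> bridge: what to do next (fields are illustrative)."""
    run_id: str
    step: int
    instruction: str   # e.g. "implement the new billing endpoint"
    repo: str          # path to the repository on the operator's machine

@dataclass
class ResultSpec:
    """Bridge -> planner: what happened (fields are illustrative)."""
    run_id: str
    step: int
    status: str        # "ok" or "failed"
    summary: str       # executor's summary of the attempt
    artifacts: list = field(default_factory=list)  # captured diffs, logs, etc.

# The run is the record tying the two contracts together:
task = TaskSpec(run_id="r1", step=1, instruction="scaffold billing endpoint", repo=".")
result = ResultSpec(run_id=task.run_id, step=task.step, status="ok",
                    summary="endpoint scaffolded", artifacts=["diff.patch"])
```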

How this changes your day

Your AI coding workflow today, then with Codencer.

If you use more than one AI tool to ship code, a chat to plan and a coding agent to execute, you are the bridge between them. Codencer removes that copy-paste loop without taking either tool away.

Today

You are the bridge.

  1. You ask ChatGPT (or Claude, Gemini, DeepSeek) to scope a new feature.
  2. It replies with a 6-step plan.
  3. You copy step 1 into Claude Code (or Codex, Qwen) in your terminal.
  4. The coding agent works on it — edits files, runs its own checks, returns a summary.
  5. You read the summary, decide if it looks right, then go back to the chat.
  6. You paste the result, ask the chat what to do for step 2.
  7. Repeat for every step. You hold the plan in your head and shuttle context between two windows.

# In your terminal:
$ git checkout -b feature/new-billing
$ # paste step 1 from chat
$ claude "implement the new billing endpoint ..."
$ # wait, read summary, copy to chat
$ # chat replies with step 2
$ # paste step 2 ... repeat ...
$ # you hold the plan across windows

With Codencer

Codencer is the bridge.

  1. You ask the same chat to scope the same feature — Codencer is connected.
  2. The chat sends step 1 to Codencer. Codencer dispatches to your local coding agent.
  3. The coding agent runs in an isolated git worktree, on your machine, near the code.
  4. Codencer captures every artifact and returns a structured result to the chat.
  5. The chat reads the result and decides what's next. Codencer dispatches step 2.
  6. The loop continues without you holding state between two windows.
  7. You watch the run tree, review when it matters, work on something else when it doesn't.

# In your terminal, just once:
$ orchestratord &
$ # connect your chat to Codencer's MCP endpoint
$ # then talk to the chat normally:
$ # "scope and ship the new billing feature"
$ # Codencer dispatches each step, captures artifacts, returns results.
$ # The chat reads each result and decides next.
$ # You watch the run:
$ orchestratorctl runs list --json

The bridge layer

Planner → Codencer → executor

Planner: ChatGPT, Claude, Gemini, human
Codencer: runs, steps, attempts, artifacts, validations, gates
Executor: Codex, Claude Code, Antigravity, OpenClaw, Qwen
Bridge, not brain. State, not chat.

Planners decide what to do. Executors do it. Codencer sits between them as state, contract, and audit trail.
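That division of labor can be sketched as a loop. `plan_next`, `execute`, and `record` here are hypothetical stand-ins for the planner, the executor, and Codencer's record keeping, not Codencer's actual API:

```python
def bridge(plan_next, execute, record):
    """Shuttle tasks and results between planner and executor,
    recording every exchange. A sketch of the role split only."""
    history = []
    while True:
        task = plan_next(history)   # planner decides what to do
        if task is None:            # planner says the run is complete
            break
        result = execute(task)      # executor does it, near the code
        record(task, result)        # bridge keeps the durable record
        history.append(result)
    return history

# Toy planner: three fixed steps, then stop.
steps = iter(["scaffold endpoint", "add tests", "wire billing"])
def plan_next(history):
    return next(steps, None)

def execute(task):
    return {"task": task, "status": "ok"}

log = []
history = bridge(plan_next, execute, lambda t, r: log.append((t, r)))
# history holds one structured result per step; log is the audit trail
```

The point of the sketch: neither `plan_next` nor `execute` ever talks to the other directly, and the only shared state is the record the bridge keeps.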

The run lifecycle

Runs → steps → attempts → artifacts → validations → gates

Run lifecycle: run → step → attempt → artifact → validation → gate. Every node is recorded, including the failed ones.

Every TaskSpec opens a run. Every run is a tree. Steps under runs, attempts under steps, artifacts under attempts. Validations and gates close the tree. Every node is recorded, append-only, SQLite-backed.

The state model is the thing other vendors' architectures flatten away. Codencer keeps it, because it's the asset.
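The tree of nodes can be sketched with a single append-only table. The schema below is an assumption for illustration, not Codencer's actual one:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE nodes (
    id     INTEGER PRIMARY KEY,
    parent INTEGER REFERENCES nodes(id),
    kind   TEXT NOT NULL,  -- run | step | attempt | artifact | validation | gate
    detail TEXT
)""")

def append(kind, parent=None, detail=""):
    """Append-only: nodes are inserted, never updated or deleted."""
    cur = db.execute("INSERT INTO nodes (parent, kind, detail) VALUES (?, ?, ?)",
                     (parent, kind, detail))
    return cur.lastrowid

run     = append("run", detail="new billing feature")
step    = append("step", run, "implement endpoint")
attempt = append("attempt", step, "first try")
append("artifact", attempt, "diff.patch")
append("validation", attempt, "tests passed")
append("gate", step, "human approved")

# The whole tree, failed attempts included, stays queryable:
kinds = [k for (k,) in db.execute("SELECT kind FROM nodes ORDER BY id")]
```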

Three deployment modes

Local-only. Self-host relay. Self-host cloud.

Local-only: single machine, single user, full proof. Planner → Codencer daemon → local executor, with the code and SQLite on one box.
Self-host relay/runtime: remote planner without a raw remote shell. Planner → relay → connector → daemon → executor, with an audit log and SQLite.
Self-host cloud: multi-tenant control plane. Planner → cloud control plane → relay bridge → daemon → executor, with a tenant store.

Codencer is local-first by design. Execution always runs on the operator's own machine, near the code. The relay layer can run on the operator's infrastructure too.

Local-only is the path of least resistance — single machine, single user, full proof in under a minute. Self-host relay adds a remote planner without exposing a raw remote shell. Self-host cloud adds the multi-tenant control plane.

What this is, what this isn't

Comparison matrix

| Product | Planner | Executor | Cross-vendor | AI coding semantics | Local exec | Self-host | Durable run record | Adapters shipped | Worktree isolation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ChatGPT → Codex | OpenAI | OpenAI | chat only | | | | chat history | 1 | |
| Claude → CC | Anthropic | Anthropic | tool calls only | | | partial | tool-call log | 1 | |
| Cursor self-hosted | Cursor | Cursor | partial | | | partial | | 1 | |
| Copilot cloud | GitHub | GitHub | partial | | | partial | | 1 | |
| DIY MCP glue | any | any | partial | you build it | varies | varies | you build it | 0 | you build it |
| Codencer | any | any | | | | | | 5 | |

Vendors deepen their own stacks. The neutral cross-vendor bridge with executor adapters and durable run records is still missing.
