Overview

What Codencer is and what it isn't.

Codencer is the missing operational layer between AI planners and AI coding executors. Planners — ChatGPT, Claude, Gemini, a human lead — decide what to do. Executors — Codex, Claude Code, Antigravity, Qwen, OpenClaw — do it. Codencer sits between them as state, contract, and audit trail. It does not run inference. It does not host the LLM. It does not plan. It records, surfaces, and structures every run, step, attempt, and artifact, across any planner and any executor.

What Codencer is

Codencer is a planner-to-executor bridge — a deterministic control plane between LLM-driven planning and local code execution. The planner sends a structured task. Codencer dispatches the work to a coding executor through an adapter. The executor runs locally, in an isolated git worktree, against the operator's repo. Artifacts come back. Validations run. Gates evaluate. The result is a structured record the planner can read and decide on.

The planner can be any LLM chat. The executor can be any of the supported adapters. Codencer is neutral on both sides — that's the point.

What Codencer is not

We say no to a long list of things, on purpose:

  • Not a coding agent. Codencer does not write code. The executor does.
  • Not a planner. Codencer does not decide what to do. The planner does.
  • Not an LLM host. Codencer does not call OpenAI, Anthropic, or Google. The planner and executor do, on their own credentials.
  • Not a chat UI. Codencer is a daemon, a CLI, a relay, and a connector. There is no Codencer chat window.
  • Not a generic orchestration service. Codencer is shaped specifically for the planner-executor-validator loop in AI-driven coding work.
  • Not a remote shell. The relay does not expose raw shell, raw filesystem browsing, or arbitrary network tunneling. Operations are bounded by an explicit allowlist.
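The allowlist boundary can be sketched in a few lines. This is an illustration only: the operation names (`run.create`, `step.dispatch`, `artifact.fetch`) and the `relay_dispatch` function are invented for the example, not Codencer's actual relay vocabulary.

```python
# Hypothetical sketch of allowlist-bounded dispatch. Operation names are
# invented for illustration; they are not Codencer's real relay API.
ALLOWED_OPS = {"run.create", "step.dispatch", "artifact.fetch"}

def relay_dispatch(op: str) -> dict:
    """Reject anything outside the explicit allowlist: no raw shell,
    no filesystem browsing, no arbitrary network tunneling."""
    if op not in ALLOWED_OPS:
        raise PermissionError(f"operation not allowlisted: {op}")
    return {"op": op, "accepted": True}

accepted = relay_dispatch("step.dispatch")      # bounded operation: passes
try:
    relay_dispatch("shell.exec")                # raw shell: out of scope
except PermissionError as err:
    denied = str(err)
```

The point of the shape: the boundary is a closed set of named operations, so anything not explicitly listed fails by default rather than by policy lookup.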

The architectural slogan: we record, we surface, we structure. We do not train, we do not infer.

The three roles

Every run involves three actors:

  • Planner — decides what step is next. Holds the high-level plan and the criteria. Outside Codencer's process boundary by design.
  • Bridge (Codencer) — accepts the task, dispatches the step, captures artifacts, runs validations, evaluates gates, returns the structured result. This is what we ship.
  • Adapter / executor — does the actual code change. codex, claude, qwen, antigravity*, openclaw-acpx. Runs locally on the operator's machine.

The contract between planner and bridge is the task spec. The contract between bridge and planner is the result evidence. The contract between bridge and executor is the adapter interface. Three roles, three contracts.
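The three contracts above can be sketched as data shapes around a single dispatch function. All names here (`TaskSpec`, `ResultEvidence`, `run_step`) are hypothetical stand-ins for the example, not Codencer's actual types or API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical shapes, invented for illustration only.

@dataclass
class TaskSpec:                 # contract: planner -> bridge
    step_id: str
    instruction: str

@dataclass
class ResultEvidence:           # contract: bridge -> planner
    step_id: str
    artifacts: list[str]
    gates_passed: bool

def run_step(
    task: TaskSpec,
    executor: Callable[[str], list[str]],   # contract: bridge -> executor
    gates: list[Callable[[list[str]], bool]],
) -> ResultEvidence:
    """Bridge role only: dispatch the step, capture artifacts, evaluate
    gates. No inference happens here; the executor is opaque."""
    artifacts = executor(task.instruction)
    passed = all(gate(artifacts) for gate in gates)
    return ResultEvidence(task.step_id, artifacts, passed)

# Stub executor standing in for a real adapter (codex, claude, ...).
def stub_executor(instruction: str) -> list[str]:
    return [f"patch for: {instruction}"]

result = run_step(
    TaskSpec("step-1", "rename foo to bar"),
    stub_executor,
    gates=[lambda arts: len(arts) > 0],
)
```

Note what is absent: `run_step` has no LLM client and no plan. The planner and executor own those; the bridge only records and structures what crosses the boundary.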

Why a bridge is the right shape

The vendors are deepening their own stacks. ChatGPT now dispatches work to Codex. Claude Desktop now dispatches work to Claude Code. Cursor self-hosted runs Cursor's planner over Cursor's executor. The pattern is the same: vendor-locked planning, vendor-locked execution.

Operators who run more than one AI tool — and most serious operators do — are still the bridge. They copy a plan from one chat, paste it into another tool, wait, and copy the result back. They cannot prove what happened across attempts. They cannot replay a successful run. They cannot govern across vendors.

A neutral cross-vendor bridge is the gap. Codencer is that bridge.

Where to start

If you want the proof in your terminal:

→ Read Quickstart. Local daemon and CLI in under a minute.

If you want the model in your head before the prose:

→ Read How it works. Three diagrams and a comparison matrix.

If you want the deep technical references — the operator-grade detail this site does not duplicate:

→ The canonical source is github.com/lookmanrays/codencer/tree/main/docs. Every claim there is labeled by its current beta truth and known limitation. The site holds the model. The repo holds the truth.
