The open source AI coding agent.
A fork of opencode, powered by CAR.
English | 简体中文 | 繁體中文 | 한국어 | Deutsch | Español | Français | Italiano | Dansk | 日本語 | Polski | Русский | Bosanski | العربية | Norsk | Português (Brasil) | ไทย | Türkçe | Українська | বাংলা | Ελληνικά | Tiếng Việt
Note
This is the Matt Liotta fork of opencode, rebuilt on top of CAR — the Common Agent Runtime. CAR is a deterministic Rust runtime for AI agents. The opencode TUI, CLI, configuration, MCP, LSP, and provider integrations stay; the agent engine is replaced. See Powered by CAR for what that brings.
This is a personal fork distributed as source. Build it yourself:
```sh
git clone https://github.com/mliotta/opencode.git
cd opencode
bun install   # requires Bun (https://bun.sh)
bun run dev   # run from source
```

Note
The package-manager commands you may have seen for upstream opencode (`curl | bash`, `npm i -g opencode-ai`, `brew install opencode`, scoop, choco, pacman, mise, nixpkgs) all install sst/opencode, not this fork. To run this fork's CAR-powered engine, build from source.
OpenCode includes two built-in agents you can switch between with the Tab key.
- build - Default, full-access agent for development work
- plan - Read-only agent for analysis and code exploration
  - Denies file edits by default
  - Asks permission before running bash commands
  - Ideal for exploring unfamiliar codebases or planning changes
Also included is a general subagent for complex searches and multi-step tasks. It's used internally and can be invoked with `@general` in messages.
Learn more about agents.
For more info on how to configure OpenCode, head over to our docs.
If you're interested in contributing to OpenCode, please read our contributing docs before submitting a pull request.
If you are working on a project that's related to OpenCode and is using "opencode" as part of its name, for example "opencode-dashboard" or "opencode-mobile", please add a note to your README to clarify that it is not built by the OpenCode team and is not affiliated with us in any way.
This fork runs opencode's agent engine on top of CAR (Common Agent Runtime), embedded in-process via the car-runtime napi bindings. CAR is a deterministic execution layer that sits between the model and tools: the model proposes, CAR validates, schedules, and executes.
Live in this fork today:
- CAR-routed tool execution — every built-in opencode tool and every MCP tool flows through `verifyProposal` → `executeProposal` before reaching its host implementation. Plugin hooks, permission gating, snapshots, and bus events are preserved unchanged.
- Pre-flight verification — invalid proposals (unknown tools, malformed actions) are rejected before any side effect.
- Graph memory with persistence — each project gets a per-instance memgine that loads from `$XDG_DATA_HOME/opencode/car/<projectID>.json` on startup and persists on shutdown. User messages, completed assistant turns, and successful tool calls are all ingested as facts, so the graph has real signal across sessions.
- CAR-grounded system prompt — every LLM call appends `rt.buildContext` output for the latest user query as a `<car_context>` block, additive to the existing system prompt and cache-friendly (it sits in a non-cached element).
- Native skills — opencode's `SKILL.md` files are auto-ingested into CAR's graph at startup and are available for `findSkill` matching.
- Tool-parameter validation — every tool registers a JSON Schema via `registerToolSchema`; `verifyProposal` type-checks the model's parameters before dispatch, catching shape mismatches like `{path: 42}` for a `path: string` tool before any side effect.
- CAR-mediated inference — every model call routes through `inferStream` against a `ModelSource::Delegated` model and back into a registered `InferenceRunner` (v0.7.0, closes Parslee-ai/car-releases#24). opencode's AI-SDK provider stack stays as the wire (Anthropic, OpenAI, Google, GitLab Workflow, opencode-zen — all unchanged); CAR sits in the lifecycle path. A JS side-channel keyed by `callId` carries the rich AI-SDK chunks back to the TUI without lossy translation; CAR receives a parallel stream of `text`/`tool_start`/`usage` events for replay, policy, and fact ingestion.
- Inspectable runtime — `opencode debug car` prints a per-instance state summary (fact count, registered tools, ingested skills, memory path).
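The tool-parameter validation step can be illustrated with a minimal, self-contained type check. `registerToolSchema` is the name used above; the schema shape and the `checkParams` helper are hypothetical stand-ins for CAR's real JSON Schema verification.

```typescript
// Minimal sketch of schema-based parameter checking, illustrating the idea
// behind registerToolSchema + verifyProposal; not CAR's actual implementation.
type PropSchema = { type: "string" | "number" | "boolean" };
type ToolSchema = { properties: Record<string, PropSchema>; required: string[] };

const schemas = new Map<string, ToolSchema>();

function registerToolSchema(tool: string, schema: ToolSchema): void {
  schemas.set(tool, schema);
}

// Returns a list of problems; an empty list means the proposal passes pre-flight.
function checkParams(tool: string, params: Record<string, unknown>): string[] {
  const schema = schemas.get(tool);
  if (!schema) return [`unknown tool: ${tool}`];
  const errors: string[] = [];
  for (const key of schema.required)
    if (!(key in params)) errors.push(`missing required param: ${key}`);
  for (const [key, value] of Object.entries(params)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected param: ${key}`);
    else if (typeof value !== prop.type)
      errors.push(`${key}: expected ${prop.type}, got ${typeof value}`);
  }
  return errors;
}

registerToolSchema("read", { properties: { path: { type: "string" } }, required: ["path"] });

console.log(checkParams("read", { path: "README.md" })); // passes: no errors
console.log(checkParams("read", { path: 42 })); // shape mismatch caught pre-flight
```

This is the `{path: 42}` case from the list above: the mismatch is reported before dispatch, so the tool body never runs.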
Coming next:
- DAG-parallel tool execution (batch parallel tool calls into multi-action proposals so CAR's scheduler runs them concurrently with full retry/rollback)
- Multi-agent dispatch via `runSwarm`/`runPipeline`
(Declarative permission policies are deferred by design: CAR's recommended pattern for session scoping is per-runtime isolation, but opencode benefits more from cross-session memory continuity than from CAR-side permission enforcement. Permissions stay inline via `ctx.ask`.)
The opencode TUI, CLI, config, MCP client, LSP, providers, and storage are untouched. The engine is what changes.
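The `<car_context>` grounding described above can be sketched as a prompt-assembly step. `buildContext`'s real signature and relevance logic live in CAR; the naive keyword filter and the `groundedSystemPrompt` helper below are illustrative assumptions.

```typescript
// Hypothetical sketch of assembling a <car_context> block from graph-memory
// facts; rt.buildContext's actual behavior in car-runtime may differ.
type Fact = { subject: string; text: string };

function buildContext(facts: Fact[], query: string): string {
  // Naive relevance: keep facts whose subject appears in the query.
  const q = query.toLowerCase();
  return facts
    .filter((f) => q.includes(f.subject.toLowerCase()))
    .map((f) => f.text)
    .join("\n");
}

// Appended after the existing system prompt, in a non-cached element,
// so provider-side caching of the base prompt is unaffected.
function groundedSystemPrompt(base: string, facts: Fact[], query: string): string {
  const ctx = buildContext(facts, query);
  return ctx ? `${base}\n\n<car_context>\n${ctx}\n</car_context>` : base;
}

const facts: Fact[] = [
  { subject: "bun", text: "This project builds with Bun." },
  { subject: "lsp", text: "LSP support is opt-in." },
];
console.log(groundedSystemPrompt("You are a coding agent.", facts, "how do I build with bun?"));
```

When no fact matches, the base prompt passes through unchanged, which keeps the addition strictly additive.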
This fork rebuilds opencode's agent engine on top of CAR. The user-facing surface — TUI, CLI, configuration, providers, MCP, LSP — matches upstream, so existing setups continue to work. Internally, sessions are scheduled by CAR, which adds pre-flight proposal verification, graph-based memory, and replayable execution, with DAG-parallel tool execution coming next. See Powered by CAR for details.
It's very similar to Claude Code in terms of capability. Here are the key differences:
- 100% open source
- Not coupled to any provider. Although we recommend the models we provide through OpenCode Zen, OpenCode can be used with Claude, OpenAI, Google, or even local models. As models evolve, the gaps between them will close and pricing will drop, so being provider-agnostic is important.
- Built-in opt-in LSP support
- A focus on TUI. OpenCode is built by neovim users and the creators of terminal.shop; we are going to push the limits of what's possible in the terminal.
- A client/server architecture. This, for example, can allow OpenCode to run on your computer while you drive it remotely from a mobile app, meaning that the TUI frontend is just one of the possible clients.
