mliotta/opencode

 
 


OpenCode logo

The open source AI coding agent.

A fork of opencode, powered by CAR.

Discord npm Build status

English | 简体中文 | 繁體中文 | 한국어 | Deutsch | Español | Français | Italiano | Dansk | 日本語 | Polski | Русский | Bosanski | العربية | Norsk | Português (Brasil) | ไทย | Türkçe | Українська | বাংলা | Ελληνικά | Tiếng Việt

OpenCode Terminal UI


Note

This is the Matt Liotta fork of opencode, rebuilt on top of CAR — the Common Agent Runtime. CAR is a deterministic Rust runtime for AI agents. The opencode TUI, CLI, configuration, MCP, LSP, and provider integrations stay; the agent engine is replaced. See Powered by CAR for what that brings.

Installation

This is a personal fork distributed as source. Build it yourself:

```sh
git clone https://github.com/mliotta/opencode.git
cd opencode
bun install                # requires Bun (https://bun.sh)
bun run dev                # run from source
```

Note

The package-manager commands you may have seen for upstream opencode (curl | bash, npm i -g opencode-ai, brew install opencode, scoop, choco, pacman, mise, nixpkgs) all install sst/opencode, not this fork. To run this fork's CAR-powered engine, build from source.

Agents

OpenCode includes two built-in agents you can switch between with the Tab key.

  • build - Default, full-access agent for development work
  • plan - Read-only agent for analysis and code exploration
    • Denies file edits by default
    • Asks permission before running bash commands
    • Ideal for exploring unfamiliar codebases or planning changes

Also included is a general subagent for complex searches and multi-step tasks. It is used internally and can be invoked with @general in messages.

Learn more about agents.

Documentation

For more info on how to configure OpenCode, head over to our docs.

Contributing

If you're interested in contributing to OpenCode, please read our contributing docs before submitting a pull request.

Building on OpenCode

If you are working on a project related to OpenCode that uses "opencode" in its name, for example "opencode-dashboard" or "opencode-mobile", please add a note to your README clarifying that it is not built by the OpenCode team and is not affiliated with us in any way.

Powered by CAR

This fork runs opencode's agent engine on top of CAR (Common Agent Runtime), embedded in-process via the car-runtime napi bindings. CAR is a deterministic execution layer that sits between the model and tools: the model proposes; CAR validates, schedules, and executes.

Live in this fork today:

  • CAR-routed tool execution — every built-in opencode tool and every MCP tool flows through verifyProposal → executeProposal before reaching its host implementation. Plugin hooks, permission gating, snapshots, and bus events are preserved unchanged.
  • Pre-flight verification — invalid proposals (unknown tools, malformed actions) are rejected before any side effect.
  • Graph memory with persistence — each project gets a per-instance memgine that loads from $XDG_DATA_HOME/opencode/car/<projectID>.json on startup and persists on shutdown. User messages, completed assistant turns, and successful tool calls are all ingested as facts so the graph has real signal across sessions.
  • CAR-grounded system prompt — every LLM call appends rt.buildContext output for the latest user query as a <car_context> block, additive to the existing system prompt and cache-friendly (sits in a non-cached element).
  • Native skills — opencode's SKILL.md files are auto-ingested into CAR's graph at startup, available for findSkill matching.
  • Tool-parameter validation — every tool registers a JSON Schema via registerToolSchema; verifyProposal type-checks the model's parameters before dispatch (catches shape mismatches like {path: 42} for a path: string tool before any side effect).
  • CAR-mediated inference — every model call routes through inferStream against a ModelSource::Delegated model and back into a registered InferenceRunner (v0.7.0, closes Parslee-ai/car-releases#24). opencode's AI-SDK provider stack stays as the wire (Anthropic, OpenAI, Google, GitLab Workflow, opencode-zen — all unchanged); CAR sits in the lifecycle path. A JS side-channel keyed by callId carries the rich AI-SDK chunks back to the TUI without lossy translation; CAR receives a parallel stream of text / tool_start / usage events for replay, policy, and fact ingestion.
  • Inspectable runtime — opencode debug car prints a per-instance state summary (fact count, registered tools, ingested skills, memory path).
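The tool-routing and schema-validation steps above can be sketched in miniature. This is an illustrative TypeScript model, not the actual car-runtime API: the real registerToolSchema, verifyProposal, and executeProposal live behind napi bindings and accept full JSON Schemas, whereas this sketch uses a toy one-level schema.

```typescript
// Toy one-level schema: parameter name -> expected typeof result.
type Schema = { [param: string]: "string" | "number" | "boolean" };

interface Proposal {
  tool: string;
  params: Record<string, unknown>;
}

const schemas = new Map<string, Schema>();
const impls = new Map<string, (p: Record<string, unknown>) => unknown>();

// registerToolSchema: declare a tool's parameter shape up front.
function registerToolSchema(
  tool: string,
  schema: Schema,
  impl: (p: Record<string, unknown>) => unknown,
): void {
  schemas.set(tool, schema);
  impls.set(tool, impl);
}

// verifyProposal: reject unknown tools and shape mismatches
// before any side effect happens.
function verifyProposal(p: Proposal): { ok: boolean; reason?: string } {
  const schema = schemas.get(p.tool);
  if (!schema) return { ok: false, reason: `unknown tool: ${p.tool}` };
  for (const [key, expected] of Object.entries(schema)) {
    if (typeof p.params[key] !== expected) {
      return { ok: false, reason: `param ${key}: expected ${expected}` };
    }
  }
  return { ok: true };
}

// executeProposal: only verified proposals reach the host implementation.
function executeProposal(p: Proposal): unknown {
  const verdict = verifyProposal(p);
  if (!verdict.ok) throw new Error(verdict.reason);
  return impls.get(p.tool)!(p.params);
}

registerToolSchema("read", { path: "string" }, (p) => `contents of ${p.path}`);

// {path: 42} for a path: string tool is rejected pre-flight.
const bad = verifyProposal({ tool: "read", params: { path: 42 } });
const good = executeProposal({ tool: "read", params: { path: "README.md" } });
```

The key property is ordering: verification is a pure check over the registered schema, so a malformed proposal never touches the host implementation.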
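The <car_context> grounding above can be sketched like this. buildContext here is a stand-in for rt.buildContext (assumed behavior: return facts relevant to the latest user query), and PromptElement is an invented type for illustration; the real prompt assembly lives in opencode's session code.

```typescript
interface PromptElement {
  text: string;
  cached: boolean; // whether this element participates in prompt caching
}

// Stand-in for rt.buildContext: return graph facts relevant to the query.
function buildContext(query: string, facts: string[]): string {
  return facts
    .filter((f) => f.toLowerCase().includes(query.toLowerCase()))
    .join("\n");
}

// Append a <car_context> block additively: the existing system prompt
// (and its cache) is untouched; the new element is marked non-cached.
function groundedPrompt(
  system: PromptElement[],
  query: string,
  facts: string[],
): PromptElement[] {
  const ctx = buildContext(query, facts);
  if (!ctx) return system;
  return [
    ...system,
    { text: `<car_context>\n${ctx}\n</car_context>`, cached: false },
  ];
}

const base: PromptElement[] = [{ text: "You are a coding agent.", cached: true }];
const prompt = groundedPrompt(base, "bun", [
  "project uses bun as its runtime",
  "TUI written in Go",
]);
```

Keeping the context in a trailing non-cached element is what makes this cache-friendly: the cached prefix stays byte-identical across turns while the graph context varies per query.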

Coming next:

  • DAG-parallel tool execution (batch parallel tool calls into multi-action proposals so CAR's scheduler runs them concurrently with full retry/rollback)
  • Multi-agent dispatch via runSwarm / runPipeline
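The planned DAG-parallel execution can be sketched as a level-by-level scheduler: actions whose dependencies are all satisfied run concurrently, then unblock their dependents. The Action type and dependency model here are illustrative, not CAR's real proposal format, and retry/rollback is omitted.

```typescript
interface Action {
  id: string;
  deps: string[]; // ids that must complete before this action runs
  run: () => Promise<string>;
}

// Execute a DAG of actions: each pass gathers every action whose
// dependencies are done and runs that whole level in parallel.
async function runDag(actions: Action[]): Promise<Map<string, string>> {
  const done = new Map<string, string>();
  let pending = [...actions];
  while (pending.length > 0) {
    const ready = pending.filter((a) => a.deps.every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("cycle or missing dependency");
    const results = await Promise.all(ready.map((a) => a.run()));
    ready.forEach((a, i) => done.set(a.id, results[i]));
    pending = pending.filter((a) => !done.has(a.id));
  }
  return done;
}

// Two independent reads run concurrently; the merge waits for both.
const demo = runDag([
  { id: "readA", deps: [], run: async () => "contents of a" },
  { id: "readB", deps: [], run: async () => "contents of b" },
  { id: "merge", deps: ["readA", "readB"], run: async () => "merged" },
]);
```

Batching the model's parallel tool calls into one such multi-action proposal is what would let CAR's scheduler, rather than the JS host, decide the concurrency.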

(Declarative permission policies are deferred by design: CAR's recommended pattern for session scoping is per-runtime isolation, but opencode benefits more from cross-session memory continuity than from CAR-side permission enforcement. Permissions stay inline via ctx.ask.)

The opencode TUI, CLI, config, MCP client, LSP, providers, and storage are untouched. The engine is what changes.

CAR · Matt Liotta

FAQ

How is this different from upstream opencode?

This fork rebuilds opencode's agent engine on top of CAR. The user-facing surface — TUI, CLI, configuration, providers, MCP, LSP — matches upstream, so existing setups continue to work. Internally, sessions are scheduled by CAR, which adds pre-flight proposal verification, schema-validated tool calls, graph-based memory with cross-session persistence, and replayable CAR-mediated inference, with DAG-parallel tool execution and multi-agent dispatch coming next. See Powered by CAR for details.

How is this different from Claude Code?

It's very similar to Claude Code in terms of capability. Here are the key differences:

  • 100% open source
  • Not coupled to any provider. Although we recommend the models we provide through OpenCode Zen, OpenCode can be used with Claude, OpenAI, Google, or even local models. As models evolve, the gaps between them will close and pricing will drop, so being provider-agnostic is important.
  • Built-in opt-in LSP support
  • A focus on TUI. OpenCode is built by neovim users and the creators of terminal.shop; we are going to push the limits of what's possible in the terminal.
  • A client/server architecture. This, for example, can allow OpenCode to run on your computer while you drive it remotely from a mobile app, meaning that the TUI frontend is just one of the possible clients.

Join our community Discord | X.com
