feat: Add tools dialog accessible via F9 #24
Merged
Conversation
Contributor (Author):
Open to have it be some other key but …
xywsxp pushed a commit to xywsxp/opencode that referenced this pull request on Apr 24, 2026:
* Add tools dialog
* Remove sorting and double items
* Update key handling
sorted-ai-bot pushed a commit to sorted-ai/opencode that referenced this pull request on Apr 25, 2026:
…mpliance) **NOT Phase B completion.** Visual skeleton only. 23/39 Designer spec items PASS. Continuation required per CTO/STATE.md — char corruption + Core Mention override.

Architecture
- Internal plugin: packages/opencode/src/cli/cmd/tui/feature-plugins/home/cockpit/
- Same-process session multiplexer (Option a, CEO approved 2026-04-23)
- Phase B scope: Roster + Stage + Overlays UI skeleton + 1 live @vega seat + 3 fixtures
- Phase C scope: true 4-session parallel substrate, session router

Core changes (CTO-D-041)
- TuiThemeCurrent + 7 keys: backgroundInner/textHeadline/textDim/textGhost/roleBuild/roleQa/rolePlan/roleExplore (all 33 theme JSONs + generateSystem)
- home.tsx maxWidth moved inside slot (Option A', proven in Session anomalyco#24)

Cockpit components (23 new files)
- roster/: ticker-bar, footer-bar, session-seat, stage-monitor, tower-control, session-log, hints-panel, roster
- stage/: stage, navigator, transcript (with InlinePrompt), inspector, status-dot (with usePulse)
- overlays/: palette (fuzzy filter), handoff, help-legend (all via Portal)
- substrate/: session-roster, ipc-bridge (Phase B minimum — session.updated tap)
- state, helpers (statusAccent/roleTint/usePulse/useBreakpoint with hysteresis), fixtures (4 callsigns: vega/altair/orion/rigel)

Session anomalyco#25 iterations
- nested <text> → <span> × 3 (TextNodeRenderable crash fix)
- useTerminalDimensions moved from registerCockpit to component ("No renderer" fix)
- Portal overlays (vs sibling stack)
- Column width: width="25%" → flexBasis/minWidth=0
- Real Prompt embedded in @tower (workspaceId + ref from slot props)
- TickerBar/FooterBar: height={1} + flexShrink={0} (prevent collapse)
- Session type + callsign field (mockup "@vega PM-01 · sonnet-4.6" format)
- Category B: Shift+Tab cycle, gate_state [IN_PROGRESS] display, coffer UNLOCKED/LOCKED on FooterBar, breakpoint hysteresis ±2 col

Known gaps (next Session scope — see P5-RST_PhaseB_Continuation_Brief_2026-04-23.md)
- Character corruption on all panels (gap / padEnd / wrapMode interaction)
- 12 MISS spec items (Palette/Handoff/Help shortcut, Roster→Stage transition, /login OAuth, crash fallback, fallback breakpoint verify, PC-RST mapping)
- 4 PARTIAL items (hysteresis verify, Session type gateState/frozen usage)

Build note
- OOM killer hit multiple times during full build. Use:
  NODE_OPTIONS="--max-old-space-size=1024" bun script/build.ts --single --skip-embed-web-ui
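The "breakpoint hysteresis ±2 col" item above can be sketched as a pure transition rule (a minimal sketch; the real `useBreakpoint` is a React hook, and the names and the 80-column threshold here are assumptions, not from the actual cockpit code): a breakpoint only flips once the terminal width crosses the threshold by more than the hysteresis margin, so widths oscillating around the boundary do not cause layout flicker.

```typescript
// Hypothetical sketch of breakpoint hysteresis (±2 columns).
type Breakpoint = "narrow" | "wide";

const THRESHOLD = 80; // columns (assumed value, not from the commit)
const HYSTERESIS = 2; // ±2 col, per the commit message

function nextBreakpoint(current: Breakpoint, width: number): Breakpoint {
  // Only switch once we are clearly past the threshold, so a terminal
  // resizing between e.g. 79 and 81 columns never flips the layout.
  if (current === "narrow" && width >= THRESHOLD + HYSTERESIS) return "wide";
  if (current === "wide" && width <= THRESHOLD - HYSTERESIS) return "narrow";
  return current;
}
```

A hook wrapping this would keep `current` in state and feed it the live terminal width on each resize event.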
mliotta pushed a commit to mliotta/opencode that referenced this pull request on May 9, 2026:
Closes the loop opened by Parslee-ai/car-releases#24 (shipped in CAR v0.7.0): every opencode LLM call now routes through `inferStream` against a `ModelSource::Delegated` model and back into a registered `InferenceRunner`. opencode's AI-SDK provider stack stays the wire — every Anthropic / OpenAI / Google / GitLab-Workflow / opencode-zen integration keeps working unchanged — and CAR sits in the lifecycle path, observing every event for replay, policy, and fact ingestion.

Architecture (the "JS owns the wire, CAR owns the policy" pattern from issue anomalyco#24's Option B):

- `packages/opencode/src/car/inference-bridge.ts` — module-scoped side channel: a `Map<callId, {params, queue}>` plus a process-global `registerInferenceRunner` callback. The runner pulls the `streamText` params from the side channel by `_opencode_call_id`, runs `streamText`, and fans each chunk out two ways:
  - to the JS queue (rich AI-SDK fullStream chunks → opencode's processor, which already handles 18 event variants from `text-delta` to `tool-input-delta` to `finish-step`);
  - to CAR via `inferenceRunnerEmitEvent` (simplified `text` / `tool_start` / `usage` per the v0.7.0 event taxonomy → CAR's bus / replay / policy).
- `packages/opencode/src/car/index.ts` — exposes `runInference` on the `Car` service; registers the delegated model schema once per runtime via `registerModel`.
- `packages/opencode/src/session/llm.ts` — replaces the direct `streamText({...})` call with `yield* car.runInference({...})`. Identical surface (returns `{fullStream}`), so `processor.ts` needs no change.

Lossless rich streaming is the explicit reason for the side channel: CAR's runner event taxonomy is intentionally a smaller surface than AI-SDK's fullStream (5 vs 18 event types), so a pure event-translation round trip would degrade the TUI. The side channel keeps the rich path alive while CAR sees a parallel summary stream.

`bun run tsc --noEmit -p packages/opencode/tsconfig.json` is clean. Runtime is untested on this machine — `bun install`'s postinstall, which fetches the car-runtime native binary, fails due to the broken homebrew node 25.9.0 / libllhttp 9.4.1 mismatch (unrelated to this change). Verify with a working node before relying on it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
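The module-scoped side channel described in that commit can be sketched roughly as follows. This is a simplified illustration, not the real bridge: the names `registerInferenceRunner` and `runInference` appear in the commit text, but the shapes, the synchronous queue, and `emitChunk` are assumptions made here for brevity (the real path is streaming and async, and the CAR fan-out is elided to a comment).

```typescript
// Sketch of the side-channel pattern: the JS side keeps the rich stream,
// while a registered runner looks the call up by id to get its params.
type Chunk = { type: string; [key: string]: unknown };

interface CallEntry {
  params: Record<string, unknown>; // the original streamText params
  queue: Chunk[];                  // rich AI-SDK chunks → opencode's processor
}

const calls = new Map<string, CallEntry>();

// Process-global runner registration (one per runtime).
let runner: ((callId: string) => void) | undefined;

function registerInferenceRunner(r: (callId: string) => void): void {
  runner = r;
}

// Called where streamText used to be invoked directly: stash the params
// in the side channel and hand control to the registered runner.
function runInference(callId: string, params: Record<string, unknown>): CallEntry {
  const entry: CallEntry = { params, queue: [] };
  calls.set(callId, entry);
  runner?.(callId); // runner pulls `params` back out via the side channel
  return entry;
}

// Runner side: fan each chunk out. The rich chunk feeds the TUI queue;
// a simplified text/tool_start/usage event would go to CAR here.
function emitChunk(callId: string, chunk: Chunk): void {
  calls.get(callId)?.queue.push(chunk);
}
```

The point of the pattern is that the translation to CAR's smaller 5-event taxonomy happens on a parallel copy of each chunk, so nothing is lost on the path the TUI actually renders.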
bussard76 pushed a commit to bussard76/openwork that referenced this pull request on May 12, 2026:
Summary
Test plan
🤖 Generated with opencode