Problem
Paged metadata reconciliation in DMC v0.33.0 prevents one giant timeout, but each page can still redo expensive repo fetches, git probes, and GitHub PR lookups.
On a large workspace (~580 worktrees), page-by-page reconciliation is still slower than it should be. The first page found useful cleanup eligibility promotions, but draining the inventory remained operationally slow.
Desired outcome
Persist/cache reconciliation probe results across pages/runs where safe.
Acceptance criteria
- Repo fetch/prune happens at most once per repo per reconciliation run, not once per page, wherever avoidable.
- GitHub PR lookup results are cached by repo/PR number with a sensible TTL.
- Cached evidence is included in output so operators can tell whether a decision came from a fresh probe or cache.
- Cache invalidation is conservative: dirty/unpushed/current worktree safety checks still run fresh at apply time.
- Works for direct apply, job-backed drain, and time-budgeted drain modes.
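The first three criteria above can be sketched in one small session object: dedupe repo fetch/prune per run, TTL-cache PR lookups by (repo, PR number), and report whether each answer came from a fresh probe or the cache. `ReconcileSession`, the injected `fetch_repo`/`lookup_pr` callables, and the 900-second default TTL are all hypothetical names and values for illustration.

```python
import time


class ReconcileSession:
    """Hypothetical sketch of one reconciliation run's caching behavior."""

    def __init__(self, fetch_repo, lookup_pr, pr_ttl_s: float = 900.0):
        self._fetch_repo = fetch_repo        # callable(repo) -> None
        self._lookup_pr = lookup_pr          # callable(repo, number) -> state
        self._pr_ttl_s = pr_ttl_s
        self._fetched_repos = set()          # fetch/prune once per repo per run
        self._pr_cache = {}                  # (repo, number) -> (timestamp, state)

    def ensure_fetched(self, repo: str) -> bool:
        """Fetch/prune a repo at most once per run; return True if it ran."""
        if repo in self._fetched_repos:
            return False
        self._fetch_repo(repo)
        self._fetched_repos.add(repo)
        return True

    def pr_state(self, repo: str, number: int):
        """Return (state, evidence_source) so output can distinguish
        a fresh probe from a cache hit."""
        key = (repo, number)
        cached = self._pr_cache.get(key)
        if cached and time.time() - cached[0] <= self._pr_ttl_s:
            return cached[1], "cache"
        state = self._lookup_pr(repo, number)
        self._pr_cache[key] = (time.time(), state)
        return state, "fresh"
```

The apply-time safety checks (dirty/unpushed/current worktree) would deliberately bypass this object and always probe fresh.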
Related
AI assistance
- AI assistance: Yes
- Tool(s): OpenCode (GPT-5.5)
- Used for: Drafting this issue from post-release deployment and cleanup-attempt evidence reviewed by Chris.