`docs/proposals/01-agent-smart-routing.md` (34 additions)
- User satisfaction rating increases (via post-plan survey)

- No increase in pipeline failure rate

## Detailed Implementation Plan

### Phase A — Routing Contract and Registry

1. Define an explicit routing contract in `run_plan_pipeline.py` with:
- stage name
- routing signal inputs
- selected agent class
- fallback class
2. Build an agent registry file (YAML/JSON) mapping capabilities to stages.
3. Add deterministic routing mode for reproducible runs.
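The routing contract and registry above can be sketched as follows. This is a minimal illustration, not existing code: `RouteDecision`, `AGENT_REGISTRY`, and the agent class names are all hypothetical placeholders for whatever `run_plan_pipeline.py` actually defines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RouteDecision:
    stage: str       # pipeline stage name
    signals: dict    # routing signal inputs used for the decision
    agent: str       # selected agent class name
    fallback: str    # class to use if the selected agent fails

# Registry mapping stages to capable agents; in practice this would be
# loaded from the YAML/JSON registry file described in step 2.
AGENT_REGISTRY = {
    "research": {"default": "DeepResearchAgent", "fallback": "BasicResearchAgent"},
    "outline":  {"default": "OutlineAgent",      "fallback": "BasicOutlineAgent"},
}

def route(stage: str, signals: dict, deterministic: bool = False) -> RouteDecision:
    entry = AGENT_REGISTRY[stage]
    # Deterministic mode ignores dynamic signals so reruns are reproducible.
    agent = entry["default"] if deterministic else signals.get("preferred", entry["default"])
    return RouteDecision(stage, signals, agent, entry["fallback"])
```

Making the decision a single immutable object keeps the contract easy to log and to replay in deterministic mode.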

### Phase B — Dynamic Selection Engine

1. Implement router scoring using:
- stage complexity
- domain type
- latency/cost budget
2. Add weighted scoring for each candidate agent and choose top-ranked.
3. Add confidence threshold to trigger fallback routing when uncertain.
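The weighted scoring and confidence fallback could look like the sketch below. The weights, the `[0, 1]` fit scores, and the `min_confidence` threshold are illustrative assumptions.

```python
def select_agent(candidates, ctx, weights=(0.5, 0.3, 0.2), min_confidence=0.4):
    """Pick the top-scoring candidate; fall back when confidence is low.

    candidates: dicts with precomputed fit scores in [0, 1]
    ctx: routing context, including the safe fallback agent name
    """
    w_cplx, w_dom, w_cost = weights
    scored = []
    for c in candidates:
        s = (w_cplx * c["complexity_fit"]
             + w_dom * c["domain_fit"]
             + w_cost * c["cost_fit"])
        scored.append((s, c["name"]))
    scored.sort(reverse=True)
    best_score, best_name = scored[0]
    # Below the confidence threshold, trigger fallback routing.
    if best_score < min_confidence:
        return ctx["fallback"], best_score
    return best_name, best_score
```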

### Phase C — Observability and Controls

1. Emit route decisions as structured events.
2. Track route success/failure by stage.
3. Add policy overrides for forced agent selection in sensitive flows.

### Validation Checklist

- Deterministic routing under fixed seeds
- Correct fallback activation under low confidence
- Route-quality lift vs static baseline

---

`docs/proposals/02-plans-as-LLM-templates.md` (30 additions)
- Jinja2 documentation: https://jinja.palletsprojects.com/

- Similar pattern: Terraform modules, Helm charts, AWS CloudFormation templates

## Detailed Implementation Plan

### Phase A — Template Spec

1. Define template schema with:
- variables
- defaults
- required constraints
- output contract
2. Add validation to reject unresolved variables at render time.
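Rejecting unresolved variables at render time can be sketched with a minimal stdlib renderer; in the actual pipeline, Jinja2 (already cited in this proposal) with `StrictUndefined` gives the same failure behavior. The `{{ name }}` placeholder syntax and function names here are illustrative.

```python
import re

def render_template(template: str, variables: dict, defaults: dict = None) -> str:
    """Render {{ name }} placeholders; raise on any unresolved variable."""
    merged = {**(defaults or {}), **variables}

    def substitute(match):
        name = match.group(1)
        if name not in merged:
            # Fail fast instead of emitting a plan with holes in it.
            raise ValueError(f"unresolved template variable: {name}")
        return str(merged[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
```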

### Phase B — Render Pipeline

1. Convert plan sections into parameterized templates.
2. Support profile-specific render presets (investor, technical, compliance).
3. Add preview endpoint to inspect rendered output before execution.

### Phase C — Governance

1. Version templates and freeze approved revisions.
2. Add compatibility checker between template versions and old plans.
3. Log rendered parameter values for auditability.

### Validation Checklist

- No unresolved placeholders in final render
- Backward compatibility checks pass
- Render latency within interactive SLA

---

`docs/proposals/03-distributed-plan-execution.md` (27 additions)
- Railway multi-service deploys: https://docs.railway.app/

- DAG scheduling patterns: Apache Airflow, Prefect, Temporal

## Detailed Implementation Plan

### Phase A — Distributed Runtime Topology

1. Define coordinator + worker architecture.
2. Partition execution graph into shardable task groups.
3. Add worker heartbeat and lease ownership semantics.

### Phase B — Queue and Retry Semantics

1. Introduce queue topics by task class and priority.
2. Implement idempotent workers with attempt counters.
3. Add dead-letter queues and replay tooling.
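The idempotency, attempt-counter, and dead-letter semantics above can be sketched as a single worker loop step. The `state` dict stands in for whatever durable store (queue broker, database) the real system would use; all names are hypothetical.

```python
def run_task(task, handler, state, max_attempts=3):
    """Idempotent task execution with retries and a dead-letter queue.

    state: {"done": set, "attempts": dict, "dead_letter": list}
    """
    tid = task["id"]
    if tid in state["done"]:                    # already processed: no-op
        return "skipped"
    state["attempts"][tid] = state["attempts"].get(tid, 0) + 1
    try:
        handler(task)
    except Exception:
        if state["attempts"][tid] >= max_attempts:
            state["dead_letter"].append(task)   # exhausted: park for replay
            return "dead-lettered"
        return "retry"
    state["done"].add(tid)
    return "ok"
```

Replay tooling then simply re-enqueues entries from `dead_letter` with their attempt counters reset.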

### Phase C — Consistency and Recovery

1. Persist checkpoint snapshots per milestone.
2. Implement coordinator failover strategy.
3. Add exactly-once/at-least-once mode selection by task type.

### Validation Checklist

- Throughput scaling under worker expansion
- Recovery time after worker/node failure
- No duplicate side effects for idempotent tasks

---

`docs/proposals/04-plan-explain-as-API-service.md` (30 additions)
- Prompt engineering best practices: Anthropic prompt guide

- Caching strategies: Redis best practices

## Detailed Implementation Plan

### Phase A — Explainability Contract

1. Define explanation schema:
- summary
- rationale
- assumptions
- caveats
2. Add response styles (executive, technical, regulator).

### Phase B — API + Caching

1. Implement explanation endpoint with plan version hash keying.
2. Add cache layer with invalidation on plan updates.
3. Add token/cost controls for explanation generation.
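Keying the cache by a plan version hash, as in step 1, might look like the sketch below. Hashing the content means any plan update changes the key, so stale explanations are never served even without explicit invalidation; the key format is an assumption.

```python
import hashlib
import json

def explanation_cache_key(plan_id: str, plan_content: dict, style: str) -> str:
    """Cache key that self-invalidates whenever the plan content changes."""
    digest = hashlib.sha256(
        json.dumps(plan_content, sort_keys=True).encode()  # canonical form
    ).hexdigest()[:16]
    return f"explain:{plan_id}:{style}:{digest}"
```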

### Phase C — Quality and Safety

1. Add hallucination guards using evidence references.
2. Add sensitivity filters for confidential sections.
3. Include confidence labels and uncertainty notes.

### Validation Checklist

- Explanation consistency across reruns
- Evidence reference coverage thresholds
- Low hallucination rate in review samples

---

`docs/proposals/05-semantic-plan-search-graph.md` (27 additions)
- sentence-transformers: https://www.sbert.net/

- Semantic search best practices: https://www.pinecone.io/learn/semantic-search/

## Detailed Implementation Plan

### Phase A — Index Foundation

1. Build embedding pipeline for plan sections and metadata.
2. Store vectors in pgvector with namespace partitioning.
3. Define hybrid retrieval (semantic + keyword + metadata filters).

### Phase B — Graph Layer

1. Create plan similarity edges with confidence scores.
2. Add relation types (similar-risk, similar-finance, similar-domain).
3. Expose neighborhood exploration APIs.

### Phase C — Ranking and Feedback

1. Rank results with blended score (similarity + quality + freshness).
2. Capture click/selection feedback to tune ranking.
3. Add dedup and near-duplicate suppression.
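The blended ranking and duplicate suppression from Phase C can be sketched as follows, assuming each hit carries `similarity`, `quality`, and `freshness` scores in `[0, 1]` plus a content `fingerprint` for near-duplicate detection; weights and field names are illustrative.

```python
def rank(hits, w_sim=0.6, w_quality=0.25, w_fresh=0.15):
    """Blend similarity, quality, and freshness; suppress near-duplicates."""
    scored = sorted(
        hits,
        key=lambda h: (w_sim * h["similarity"]
                       + w_quality * h["quality"]
                       + w_fresh * h["freshness"]),
        reverse=True,
    )
    seen, out = set(), []
    for h in scored:
        if h["fingerprint"] in seen:   # near-duplicate of a better-ranked hit
            continue
        seen.add(h["fingerprint"])
        out.append(h["id"])
    return out
```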

### Validation Checklist

- Retrieval precision@k
- Latency under index growth
- Duplicate suppression effectiveness

---

`docs/proposals/06-adopt-on-the-fly.md` (27 additions)
- Do not touch `open_dir_server` allowlist/path validation unless explicitly asked.

- Do not change MCP to advertise tasks protocol ("Run as task") - tools-only stays.

## Detailed Implementation Plan

### Phase A — Focus Classification Runtime

1. Add pre-planning classifier stage for business/software/hybrid focus.
2. Emit confidence and missing-info flags.
3. Support explicit user override with trace logging.

### Phase B — Track-Specific Prompting and Levers

1. Build track prompt packs for business and software tracks.
2. Route lever generation using track-aware templates.
3. Enforce mandatory lever coverage per selected track.

### Phase C — Track-Specific Gates

1. Define no-go gate sets by track.
2. Add auto-fail conditions for missing critical artifacts.
3. Add hybrid sequencing logic for mixed plans.

### Validation Checklist

- Classification accuracy benchmark
- Gate relevance by plan type
- User override frequency and satisfaction

---

`docs/proposals/07-elo-ranking.md` (27 additions)
**Last updated:** 2026-02-08
**Maintainer:** OpenClaw team
**Feedback:** Open issues at https://github.com/VoynichLabs/PlanExe2026/issues

## Detailed Implementation Plan

### Phase A — Pairwise Ranking Core

1. Implement candidate sampling strategy.
2. Run pairwise comparisons with structured KPI outputs.
3. Apply Elo updates with configurable K-factor profiles.
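Step 3 is the standard Elo update; a minimal version is below. The configurable K-factor profile (e.g. a higher K for newly added plans, a lower K once a rating stabilizes) plugs in through the `k` argument. The 400-point scale is the conventional Elo constant.

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """One pairwise comparison: return updated (winner, loser) ratings."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)   # surprise-weighted adjustment
    return r_winner + delta, r_loser - delta
```

An upset (a low-rated plan beating a high-rated one) moves both ratings more than an expected result, which is what drives convergence.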

### Phase B — Data Products

1. Store per-comparison details and reasons.
2. Generate percentile tiers and confidence bands.
3. Add per-user and global leaderboard views.

### Phase C — Calibration and Governance

1. Calibrate ranking against real outcomes (where available).
2. Add anti-gaming heuristics and anomaly detection.
3. Add periodic re-ranking for drift control.

### Validation Checklist

- Ranking stability across reruns
- Predictive value vs downstream outcomes
- Fairness checks across domains

---

`docs/proposals/08-ui-for-editing-plan.md` (27 additions)
- As execution reveals surprises, incorporate them into the existing plan.

- Maintain topological ordering so downstream parts update correctly.

## Detailed Implementation Plan

### Phase A — Editor Data Model

1. Define editable plan document schema and version nodes.
2. Add section-level locking and optimistic concurrency controls.
3. Persist edit history with reversible diffs.
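The optimistic concurrency and reversible-diff ideas in steps 2 and 3 can be sketched together: each edit carries the section version it was based on, and the pre-edit text is recorded before the write. Document shape and names here are hypothetical.

```python
class ConflictError(Exception):
    """Raised when an edit is based on a stale section version."""

def apply_edit(doc: dict, section: str, new_text: str, expected_version: int) -> dict:
    current = doc["sections"][section]
    # Optimistic concurrency: reject rather than silently overwrite.
    if current["version"] != expected_version:
        raise ConflictError(
            f"{section}: expected v{expected_version}, found v{current['version']}"
        )
    # Record a reversible entry (old text + version) before applying.
    doc["history"].append(
        {"section": section, "version": current["version"], "old": current["text"]}
    )
    current["text"] = new_text
    current["version"] += 1
    return doc
```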

### Phase B — Collaboration UX

1. Build section editor with structured side panels (assumptions, risks, costs).
2. Add inline validation and warning badges.
3. Add comparison view for baseline vs edited variants.

### Phase C — Workflow Integration

1. Trigger downstream recalculations on critical edits.
2. Add approval flow for high-impact changes.
3. Sync edits to audit pack and evidence ledger references.

### Validation Checklist

- Conflict resolution correctness
- Edit-to-recompute latency
- Usability score in editor sessions

---

`docs/proposals/11-investor-thesis-matching-engine.md` (27 additions)
## Why This Matters

This proposal shifts fundraising from persuasion-first to evidence-first. It helps credible, high-upside plans get surfaced even when founders are not exceptional marketers, improving capital allocation efficiency for everyone.

## Detailed Implementation Plan

### Phase A — Thesis Schema and Intake

1. Define investor thesis schema (sector, ticket size, geography, stage, constraints).
2. Ingest and normalize investor profile records.
3. Add confidence labels for inferred thesis signals.

### Phase B — Matching Engine

1. Compute thesis-plan alignment with weighted feature scoring.
2. Add exclusion filters (hard constraints).
3. Produce explainable match reasons and mismatch flags.
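Phase B can be sketched as: apply hard exclusion filters first, then weighted soft scoring with per-feature reasons for explainability. The schema fields and weights are illustrative assumptions.

```python
def match_score(thesis: dict, plan: dict, weights: dict):
    """Return (score, reasons), or (None, reasons) on a hard-constraint miss."""
    # Hard exclusion filters: any failure disqualifies the match outright.
    if plan["sector"] not in thesis["sectors"]:
        return None, [f"sector {plan['sector']} outside thesis"]
    if not (thesis["ticket_min"] <= plan["ask"] <= thesis["ticket_max"]):
        return None, ["ask outside ticket size range"]
    # Soft weighted features for surviving candidates, with reasons attached.
    score, reasons = 0.0, []
    for feature, w in weights.items():
        contribution = w * plan["features"].get(feature, 0.0)
        score += contribution
        reasons.append(f"{feature}: +{contribution:.2f}")
    return score, reasons
```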

### Phase C — Feedback Loop

1. Capture investor response outcomes.
2. Tune matching weights with outcome data.
3. Add cold-start defaults by investor archetype.

### Validation Checklist

- Precision of top matches
- Response-rate uplift vs baseline outreach
- Explainability quality review

---

`docs/proposals/12-evidence-based-founder-execution-index.md` (27 additions)
## Why This Matters

A transparent execution index gives investors a stronger ROI signal and gives disciplined builders a fairer path to capital, independent of pitch theatrics.

## Detailed Implementation Plan

### Phase A — Signal Definition

1. Define founder execution signals (delivery cadence, milestone completion, evidence quality).
2. Add normalization across project sizes and stages.
3. Set anti-manipulation controls for self-reported metrics.

### Phase B — Index Calculation

1. Compute composite index with transparent weights.
2. Attach confidence intervals based on data completeness.
3. Version index formulas for auditability.
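A minimal sketch of the composite index with completeness-driven confidence intervals, assuming signals and the index live on a 0–100 scale and weights sum to 1; the interval formula is purely illustrative.

```python
def execution_index(signals: dict, weights: dict, completeness: float) -> dict:
    """Composite FEI point estimate plus a completeness-based interval."""
    point = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    # Illustrative interval: half-width shrinks as completeness -> 1.0,
    # so sparse data yields a wide, honest band rather than false precision.
    half_width = 0.5 * (1.0 - completeness) * 100.0
    return {
        "index": point,
        "low": max(0.0, point - half_width),
        "high": min(100.0, point + half_width),
    }
```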

### Phase C — Product Surfaces

1. Show index trendline over time.
2. Expose driver-level breakdown for coaching actions.
3. Feed index into investor matching and readiness gates.

### Validation Checklist

- Correlation with independent execution outcomes
- Stability under sparse data
- Resistance to metric gaming

---

`docs/proposals/13-portfolio-aware-capital-allocation.md` (28 additions, 1 deletion)

## Why This Matters

Investors care about total portfolio outcomes, not isolated deal quality. Portfolio-aware matching improves capital allocation quality and makes ROI predictions more actionable.

## Detailed Implementation Plan

### Phase A — Portfolio Model

1. Define portfolio objective functions (return, risk, diversification).
2. Add constraint model (sector caps, stage caps, geographic limits).
3. Ingest candidate plan opportunities as allocatable units.

### Phase B — Allocation Solver

1. Implement optimizer (heuristic + optional convex optimization mode).
2. Support scenario-based allocation stress tests.
3. Output recommended allocations with rationale and alternatives.
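The heuristic mode of the solver can be sketched as a greedy loop: fund the highest expected-return-per-dollar opportunities first while respecting the total budget and per-sector caps. A convex-optimization mode would replace this loop with a proper solver; field names and the objective are assumptions.

```python
def greedy_allocate(opportunities, budget, sector_caps):
    """Greedy allocation under a total budget and per-sector caps."""
    allocations, spent_by_sector = {}, {}
    remaining = budget
    # Rank by expected return per dollar asked (a simple efficiency proxy).
    ranked = sorted(opportunities,
                    key=lambda o: o["expected_return"] / o["ask"],
                    reverse=True)
    for o in ranked:
        cap = sector_caps.get(o["sector"], budget)
        sector_spent = spent_by_sector.get(o["sector"], 0.0)
        if o["ask"] <= remaining and sector_spent + o["ask"] <= cap:
            allocations[o["id"]] = o["ask"]
            spent_by_sector[o["sector"]] = sector_spent + o["ask"]
            remaining -= o["ask"]
    return allocations
```

Greedy is not optimal in general (it can miss better combinations), which is exactly why the proposal keeps an optional optimization mode.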

### Phase C — Monitoring and Rebalancing

1. Track realized vs expected performance.
2. Trigger rebalance suggestions on drift.
3. Log decision history for governance review.

### Validation Checklist

- Constraint satisfaction rate
- Risk-adjusted return vs baseline policy
- Rebalance action quality over time

---

`docs/proposals/14-confidence-weighted-funding-auctions.md` (28 additions, 1 deletion)

## Why This Matters

Structured auctions create better price discovery and better ROI alignment while reducing dependence on personal charisma and closed-door negotiation dynamics.

## Detailed Implementation Plan

### Phase A — Auction Mechanism Design

1. Define bid object with confidence and evidence support fields.
2. Set auction rules (sealed/open, rounds, reserve conditions).
3. Add anti-collusion and identity integrity checks.

### Phase B — Confidence Weighting Engine

1. Compute confidence-adjusted bid utility score.
2. Penalize high-claim, low-evidence bids.
3. Expose explainable ranking to participants.
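One way to realize the confidence weighting: scale the raw bid by bidder confidence, then penalize the gap between stated confidence and supporting evidence coverage. The formula and parameter names are illustrative assumptions, not the final mechanism.

```python
def bid_utility(amount: float, confidence: float, evidence_coverage: float,
                claim_penalty: float = 0.5) -> float:
    """Confidence-adjusted bid utility score.

    amount: raw bid size
    confidence: bidder-stated confidence in [0, 1]
    evidence_coverage: fraction of claims backed by evidence, in [0, 1]
    """
    # High claim, low evidence: the positive gap is what gets penalized.
    gap = max(0.0, confidence - evidence_coverage)
    return amount * confidence * (1.0 - claim_penalty * gap)
```

Two equal bids at equal confidence then rank by evidence coverage, which is the incentive the auction is trying to create.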

### Phase C — Settlement and Post-Auction Analytics

1. Finalize winners with compliance checks.
2. Record auction telemetry for mechanism tuning.
3. Add dispute workflow and audit exports.

### Validation Checklist

- Bid quality improvement over rounds
- Reduction of winner’s-curse outcomes
- Fairness and manipulation resistance tests
