Transform ad-hoc Claude sessions into reproducible development pipelines with parallel execution, automatic retry, and full state management.
- Features
- Installation
- Quick Start
- Usage
- Examples
- Documentation
- Troubleshooting
- Contributing
- License
- Acknowledgments
- ✨ Workflow Orchestration - Define complex development workflows in simple YAML
- ⚡ Parallel Execution - Run multiple Claude agents simultaneously with MapReduce
- 🔄 Automatic Retry - Smart retry strategies with exponential backoff and circuit breakers
- 💾 Full State Management - Checkpoint and resume interrupted workflows exactly where they left off
- 🌳 Git Integration - Automatic worktree isolation for every workflow execution with commit tracking
- 🛡️ Error Recovery - Comprehensive failure handling with on-failure handlers
- 📊 Analytics - Cost tracking, performance metrics, and optimization recommendations
- 🔧 Extensible - Custom validators, handlers, and workflow composition
- 📚 Documentation - Comprehensive man pages and built-in help system
cargo install prodigy

# Clone the repository
git clone https://github.com/iepathos/prodigy
cd prodigy
# Build and install
cargo build --release
cargo install --path .
# Optional: Install man pages
./scripts/install-man-pages.sh

Get up and running in under 5 minutes with these simple examples.
- Initialize Prodigy in your project:
prodigy init

- Create a simple workflow (fix-tests.yml):
name: fix-failing-tests
steps:
  - shell: "cargo test"
    on_failure:
      claude: "/fix-test-failures"
      max_attempts: 3

- Run the workflow:
prodigy run fix-tests.yml

Process multiple files simultaneously with MapReduce:
name: add-documentation
mode: mapreduce
setup:
  - shell: "find src -name '*.rs' -type f > files.json"
map:
  input: files.json
  agent_template:
    - claude: "/add-rust-docs ${item}"
  max_parallel: 10
reduce:
  - claude: "/summarize Documentation added to ${map.successful} files"

Run with:

prodigy run add-documentation.yml

# Run a workflow
prodigy run workflow.yml
# Execute a single command with retries
prodigy exec "claude: /refactor main.rs" --retry 3
# Process files in parallel
prodigy batch "*.py" --command "claude: /add-types" --parallel 5
# Resume an interrupted workflow
prodigy resume workflow-123
# View analytics and costs
prodigy analytics --session abc123
# Manage worktrees (all workflow executions use isolated git worktrees by default)
prodigy worktree ls # List active worktrees
prodigy worktree ls --detailed # Show enhanced session information
prodigy worktree ls --json # Output in JSON format
prodigy worktree ls --detailed --json # Combine detailed info with JSON output
prodigy worktree clean                 # Clean up inactive worktrees

Prodigy supports reusable workflow templates that can be registered, shared, and invoked with parameters. This enables building a library of common workflows and patterns.
# Initialize a template directory
prodigy template init templates/
# Register a template from a file
prodigy template register my-workflow.yml \
--name refactor-pipeline \
--description "Refactoring workflow with tests" \
--version 1.0.0 \
--tags refactor,testing \
--author "Your Name"
# List all registered templates
prodigy template list
prodigy template list --long # Show detailed information
prodigy template list --tag refactor # Filter by tag
# Show template details
prodigy template show refactor-pipeline
# Search for templates
prodigy template search "refactor"
prodigy template search testing --by-tag
# Validate a template file
prodigy template validate my-workflow.yml
# Delete a template
prodigy template delete refactor-pipeline --force

Templates are standard workflow files with parameter definitions:
name: refactor-with-tests
description: Refactor code and ensure tests pass
version: 1.0.0
parameters:
  target:
    description: File or directory to refactor
    type: string
    required: true
  test_command:
    description: Command to run tests
    type: string
    default: "cargo test"
commands:
  - claude: "/refactor ${target}"
  - shell: "${test_command}"
    on_failure:
      claude: "/fix-test-failures"
      max_attempts: 3

Pass parameters via CLI flags or parameter files:
# Pass parameters individually
prodigy run refactor-pipeline.yml \
--param target=src/main.rs \
--param test_command="cargo test --all"
# Pass parameters from a JSON file
prodigy run refactor-pipeline.yml \
--param-file params.json
# Pass parameters from a YAML file
prodigy run refactor-pipeline.yml \
--param-file params.yaml

Parameter file example (params.json):
{
  "target": "src/main.rs",
  "test_command": "cargo test --all",
  "timeout": 300
}

Parameter file example (params.yaml):
target: src/main.rs
test_command: cargo test --all
timeout: 300

Parameter Type Inference:
- Numbers (integer and float) are automatically detected
- Booleans (true/false) are parsed correctly
- Strings are the default type
- CLI parameters override file parameters
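As a rough illustration of these inference rules (a sketch, not Prodigy's actual parser), a value can be classified like this:

```shell
# Hypothetical sketch of the documented inference rules: booleans and
# numbers are detected, everything else defaults to a string.
infer() {
  case "$1" in
    true|false)    echo "$1 -> boolean" ;;
    ''|*[!0-9.]*)  echo "$1 -> string" ;;
    *)             echo "$1 -> number" ;;
  esac
}
infer 300           # 300 -> number
infer true          # true -> boolean
infer src/main.rs   # src/main.rs -> string
```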
Parameter Substitution:
Parameters are available in commands using ${parameter_name} syntax:
- ${target} - Direct parameter reference
- ${test_command} - Parameter with a default value
- CLI and file parameters are merged (CLI takes precedence)
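The retry configuration that follows defaults to exponential backoff. As a sketch of what that schedule implies (assuming the delay doubles from initial_delay and is capped at max_delay; jitter would randomize each value):

```shell
# Sketch of an exponential backoff schedule: the delay doubles on each
# attempt and is capped at a maximum (jitter omitted for clarity).
initial=2; max=30
delay=$initial
for attempt in 1 2 3 4 5; do
  echo "attempt $attempt: wait ${delay}s"
  delay=$(( delay * 2 ))
  [ "$delay" -gt "$max" ] && delay=$max
done
```

With initial_delay: 2s and max_delay: 30s this yields waits of 2s, 4s, 8s, 16s, then 30s.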
retry_defaults:
  attempts: 3
  backoff: exponential
  initial_delay: 2s
  max_delay: 30s
  jitter: true
steps:
  - shell: "deploy.sh"
    retry:
      attempts: 5
      backoff:
        fibonacci:
          initial: 1s
      retry_on: [network, timeout]
      retry_budget: 5m

env:
  NODE_ENV: production
  WORKERS:
    command: "nproc"
    cache: true
secrets:
  API_KEY: ${vault:api/keys/production}
steps:
  - shell: "npm run build"
    env:
      BUILD_TARGET: production
    working_dir: ./frontend

imports:
  - path: ./common/base.yml
    alias: base
templates:
  test-suite:
    parameters:
      - name: language
        type: string
    steps:
      - shell: "${language} test"
workflows:
  main:
    extends: base.default
    steps:
      - use: test-suite
        with:
          language: cargo

Prodigy automatically tracks git changes during workflow execution and provides context variables for accessing file changes, commits, and statistics:
- ${step.files_added} - Files added in the current step
- ${step.files_modified} - Files modified in the current step
- ${step.files_deleted} - Files deleted in the current step
- ${step.files_changed} - All files changed (added + modified + deleted)
- ${step.commits} - Commit hashes created in the current step
- ${step.commit_count} - Number of commits in the current step
- ${step.insertions} - Lines inserted in the current step
- ${step.deletions} - Lines deleted in the current step
- ${workflow.files_added} - All files added across the workflow
- ${workflow.files_modified} - All files modified across the workflow
- ${workflow.files_deleted} - All files deleted across the workflow
- ${workflow.files_changed} - All files changed across the workflow
- ${workflow.commits} - All commit hashes across the workflow
- ${workflow.commit_count} - Total commits across the workflow
- ${workflow.insertions} - Total lines inserted across the workflow
- ${workflow.deletions} - Total lines deleted across the workflow
Variables support pattern filtering using glob patterns:
# Get only markdown files added
- shell: "echo '${step.files_added:*.md}'"
# Get only Rust source files modified
- claude: "/review ${step.files_modified:*.rs}"
# Get specific directory changes
- shell: "echo '${workflow.files_changed:src/*}'"

Control output format with modifiers:
# JSON array format
- shell: "echo '${step.files_added:json}'" # ["file1.rs", "file2.rs"]
# Newline-separated (for scripts)
- shell: "echo '${step.files_added:lines}'" # file1.rs\nfile2.rs
# Comma-separated
- shell: "echo '${step.files_added:csv}'" # file1.rs,file2.rs
# Space-separated (default)
- shell: "echo '${step.files_added}'"      # file1.rs file2.rs

name: code-review-workflow
steps:
  # Make changes
  - claude: "/implement feature X"
    commit_required: true
  # Review only the changed Rust files
  - claude: "/review-code ${step.files_modified:*.rs}"
  # Generate changelog for markdown files
  - shell: "echo 'Changed docs:' && echo '${step.files_added:*.md:lines}'"
  # Conditional execution based on changes
  - shell: "cargo test"
    when: "${step.files_modified:*.rs}"  # Only run if Rust files changed
  # Summary at the end
  - claude: |
      /summarize-changes
      Total files changed: ${workflow.files_changed:json}
      Commits created: ${workflow.commit_count}
      Lines added: ${workflow.insertions}
      Lines removed: ${workflow.deletions}

The write_file command allows workflows to create files with content, supporting multiple formats with validation and automatic formatting.
Basic Syntax:
- write_file:
    path: "output/results.txt"
    content: "Processing complete!"
    format: text         # text, json, or yaml
    mode: "0644"         # Unix permissions (default: 0644)
    create_dirs: false   # Create parent directories (default: false)

Supported Formats:
- Text (default) - Plain text with no processing:
- write_file:
    path: "logs/build.log"
    content: "Build started at ${timestamp}"
    format: text

- JSON - Validates and pretty-prints JSON:
- write_file:
    path: "output/results.json"
    content: '{"status": "success", "items_processed": ${map.total}}'
    format: json
    create_dirs: true

- YAML - Validates and formats YAML:
- write_file:
    path: "config/settings.yml"
    content: |
      environment: production
      server:
        port: 8080
        host: localhost
    format: yaml

Variable Interpolation:
All fields support variable interpolation:
# In MapReduce map phase
- write_file:
    path: "output/${item.name}.json"
    content: '{"id": "${item.id}", "processed": true}'
    format: json
    create_dirs: true

# In reduce phase
- write_file:
    path: "summary.txt"
    content: "Processed ${map.total} items, ${map.successful} successful"
    format: text

Security Features:
- Path traversal protection (rejects paths containing ..)
- JSON/YAML validation before writing
- Configurable file permissions (Unix systems only)
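The path-traversal rule can be pictured with a small sketch (an illustration of the documented behavior, not Prodigy's actual implementation):

```shell
# Any path containing ".." is rejected before writing, per the
# documented path traversal protection.
check_path() {
  case "$1" in
    *..*) echo "rejected: $1" ;;
    *)    echo "ok: $1" ;;
  esac
}
check_path "output/results.json"   # ok: output/results.json
check_path "../etc/passwd"         # rejected: ../etc/passwd
```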
Common Use Cases:
- Aggregating MapReduce results:
reduce:
  - write_file:
      path: "results/summary.json"
      content: '{"total": ${map.total}, "successful": ${map.successful}, "failed": ${map.failed}}'
      format: json

- Generating configuration files:
- write_file:
    path: ".config/app.yml"
    content: |
      name: ${PROJECT_NAME}
      version: ${VERSION}
      features:
        - authentication
        - caching
    format: yaml

- Creating executable scripts:
- write_file:
    path: "scripts/deploy.sh"
    content: |
      #!/bin/bash
      echo "Deploying ${APP_NAME}"
      ./deploy.sh --env production
    mode: "0755"
    create_dirs: true

Prodigy supports multi-step validation and error recovery with two formats:
Array Format (for simple command sequences):
validate:
  - shell: "prep-command-1"
  - shell: "prep-command-2"
  - claude: "/validate-result"

Object Format (when you need metadata like threshold, max_attempts, etc.):
validate:
  commands:
    - shell: "prep-command-1"
    - shell: "prep-command-2"
    - claude: "/validate-result"
  result_file: "validation-results.json"
  threshold: 75  # Validation must score at least 75/100
  on_incomplete:
    commands:
      - claude: "/fix-gaps --gaps ${validation.gaps}"
      - shell: "rebuild-and-revalidate.sh"
    max_attempts: 3
    fail_workflow: false

Key Points:
- Use the array format when you only need to run commands
- Use the object format when you need to set threshold, result_file, max_attempts, or fail_workflow
- Fields like threshold and max_attempts belong at the config level, not on individual commands
- on_incomplete supports the same two formats (array, or object with commands:)
Example: Multi-step validation workflow
- claude: "/implement-feature spec.md"
  commit_required: true
  validate:
    commands:
      - shell: "cargo test"
      - shell: "cargo clippy"
      - claude: "/validate-implementation spec.md"
    result_file: ".prodigy/validation.json"
    threshold: 90
    on_incomplete:
      commands:
        - claude: "/fix-issues --gaps ${validation.gaps}"
        - shell: "cargo test"
      max_attempts: 5
      fail_workflow: true

Prodigy looks for configuration in these locations (in order):
- .prodigy/config.yml - Project-specific configuration
- ~/.config/prodigy/config.yml - User configuration
- /etc/prodigy/config.yml - System-wide configuration
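As a sketch of the lookup order (assuming, for illustration, that the first existing file wins; Prodigy may instead merge the layers), the search can be pictured like this:

```shell
# Simulated lookup: walk the documented locations in order and report
# the first one that "exists" (here we pretend only the project-level
# file is present).
exists() { [ "$1" = ".prodigy/config.yml" ]; }  # hypothetical stand-in
for cfg in ".prodigy/config.yml" "$HOME/.config/prodigy/config.yml" "/etc/prodigy/config.yml"; do
  if exists "$cfg"; then
    echo "using $cfg"
    break
  fi
done
```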
Example configuration:
# .prodigy/config.yml
claude:
  model: claude-3-opus
  max_tokens: 4096
worktree:
  max_parallel: 20
  cleanup_policy:
    idle_timeout: 300
    max_age: 3600
retry:
  default_attempts: 3
  default_backoff: exponential
storage:
  events_dir: ~/.prodigy/events
  state_dir: ~/.prodigy/state

Fix all test failures automatically with intelligent retry:
name: test-pipeline
steps:
  - shell: "cargo test"
    on_failure:
      - claude: "/analyze-test-failure ${shell.output}"
      - claude: "/fix-test-failure"
      - shell: "cargo test"
    retry:
      attempts: 3
      backoff: exponential
  - shell: "cargo fmt -- --check"
    on_failure: "cargo fmt"
  - shell: "cargo clippy -- -D warnings"
    on_failure:
      claude: "/fix-clippy-warnings"

Analyze and improve multiple files concurrently:
name: parallel-analysis
mode: mapreduce
setup:
  - shell: |
      find . -name "*.rs" -exec wc -l {} + |
      sort -rn |
      head -20 |
      awk '{print $2}' > complex-files.json
map:
  input: complex-files.json
  agent_template:
    - claude: "/analyze-complexity ${item}"
    - claude: "/suggest-refactoring ${item}"
    - shell: "cargo test --lib $(basename ${item} .rs)"
  max_parallel: 10
reduce:
  - claude: "/generate-refactoring-report ${map.results}"
  - shell: "echo 'Analyzed ${map.total} files, ${map.successful} successful'"

📚 Full documentation is available at https://iepathos.github.io/prodigy
Quick links:
# Install mdBook
cargo install mdbook
# Serve with live reload
mdbook serve book --open

- 📝 Workflow Syntax (Single Page) - Complete syntax reference in one file
- 🏗️ Architecture - System design and internals
- 🤝 Contributing Guide - How to contribute to Prodigy
- 📚 Man Pages - Unix-style manual pages for all commands
| Command | Description |
|---|---|
| `prodigy run <workflow>` | Execute a workflow |
| `prodigy exec <command>` | Run a single command |
| `prodigy batch <pattern>` | Process files in parallel |
| `prodigy resume <id>` | Resume an interrupted workflow |
| `prodigy analytics` | View session analytics |
| `prodigy worktree` | Manage git worktrees |
| `prodigy init` | Initialize Prodigy in a project |
Performance: Workflows running slowly
- Check parallel execution limits:
prodigy run workflow.yml --max-parallel 20

- Enable verbose mode to identify bottlenecks:

prodigy run workflow.yml -v

Note: The -v flag also enables Claude streaming JSON output for debugging Claude interactions.
- Review analytics for optimization opportunities:
prodigy analytics --session <session-id>

Resume: How to recover from interrupted workflows
Prodigy automatically creates checkpoints. To resume:
# List available checkpoints
prodigy checkpoints list
# Resume from latest checkpoint
prodigy resume
# Resume specific workflow
prodigy resume workflow-abc123

MapReduce: Jobs failing with "DLQ not empty"
Review and reprocess failed items:
# View failed items
prodigy dlq view <job-id>
# Reprocess failed items
prodigy dlq retry <job-id> --max-parallel 5

Configuration: Settings not being applied
Check configuration precedence:
# Show effective configuration
prodigy config show
# Validate configuration
prodigy config validate

Installation: Man pages not available
Install man pages manually:
cd prodigy
./scripts/install-man-pages.sh
# Or install to user directory
./scripts/install-man-pages.sh --userDebugging: Need more information about failures
Enable debug logging:
# Set log level
export RUST_LOG=debug
prodigy run workflow.yml -vv
# View detailed events
prodigy events --job-id <job-id> --verbose

Verbosity: Controlling Claude streaming output
Prodigy provides fine-grained control over Claude interaction visibility:
Default behavior (no flags):
prodigy run workflow.yml
# Shows progress and results, but no Claude JSON streaming output

Verbose mode (-v):
prodigy run workflow.yml -v
# Shows Claude streaming JSON output for debugging interactions

Debug mode (-vv) and trace mode (-vvv):
prodigy run workflow.yml -vv
prodigy run workflow.yml -vvv
# Also shows Claude streaming output plus additional internal logs

Force Claude output (environment override):
PRODIGY_CLAUDE_CONSOLE_OUTPUT=true prodigy run workflow.yml
# Shows Claude streaming output regardless of verbosity level

This allows you to keep normal runs clean while enabling detailed debugging when needed.
We welcome contributions! Please see our Contributing Guide for details.
# Fork and clone the repository
git clone https://github.com/YOUR-USERNAME/prodigy
cd prodigy
# Set up development environment
cargo build
cargo test
# Run with verbose output
RUST_LOG=debug cargo run -- run test.yml
# Before submitting PR
cargo fmt
cargo clippy -- -D warnings
cargo test

- 📦 Package manager distributions (brew, apt, yum)
- 🌍 Internationalization and translations
- 📚 Documentation and examples
- 🧪 Testing and bug reports
- ⚡ Performance optimizations
- 🎨 UI/UX improvements
Prodigy is licensed under MIT. See LICENSE for details.
Prodigy builds on the shoulders of giants:
- Claude Code CLI - The AI pair programmer that powers Prodigy
- Tokio - Async runtime for Rust
- Clap - Command-line argument parsing
- Serde - Serialization framework
Special thanks to all contributors who have helped make Prodigy better!
Made with ❤️ by developers, for developers
Features • Quick Start • Docs • Contributing