mcda is a JSON-first command-line tool for multi-criteria decision analysis. The current
implementation supports two analysis methods:

- weighted-sum: normalizes each criterion, applies resolved weights, and returns a clear score-based ranking.
- electre-iii: uses thresholds, vetoes, and outranking credibility to expose cases where options are not cleanly comparable.

The core workflow:

1. Create an MCDA project.
2. Add participants, alternatives, and criteria.
3. Record participant weights, thresholds, and performance scores.
4. Aggregate participant inputs.
5. Run weighted-sum or ELECTRE III analysis.
6. Inspect the resulting candidate ranking.
Generated project data is stored under a project-local .mcda/ directory. The surrounding
project directory stays available for notes, spreadsheets, source files, and other working
materials.
This repository is early-stage. The implemented workflow is useful and tested, but import commands, reports, briefing generation, and sensitivity analysis are still planned work.
From this repository:

```shell
pip install -e .
```

You can also run the CLI without installing the entrypoint:

```shell
python -m mcda.cli --help
```

After installation, use:

```shell
mcda --help
```

An MCDA project is a normal directory containing .mcda/meta.json.
```
office_lease_selection/
  .mcda/
    meta.json
    alternatives/
    criteria/
    participants/
    weights/
    thresholds/
    perf/
    policies/
    sessions/
    results/
```
Commands find the project by walking upward from your current directory until they find
.mcda/meta.json. You can also pass --project <path> from anywhere.
Commands return JSON by default:

```shell
mcda info
```

The response shape is:

```json
{
  "data": {},
  "warnings": []
}
```

Use --human when you want concise human-readable output:

```shell
mcda --human info
```

Errors are structured JSON by default:

```json
{
  "error": {
    "code": "missing_project",
    "message": "No .mcda directory found at /path",
    "details": {}
  }
}
```

IDs are used in filenames, so they must be simple identifiers:

```
^[a-zA-Z_][a-zA-Z0-9_]*$
```
Good:

```
downtown_loft
annual_cost
alice
```

Bad:

```
downtown-loft
annual cost
2026_option
```
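A quick way to pre-check IDs against that rule before calling the CLI (a sketch, not part of the tool):

```python
import re

ID_PATTERN = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")


def is_valid_id(candidate: str) -> bool:
    # IDs become filenames, so they must match the pattern exactly.
    return ID_PATTERN.fullmatch(candidate) is not None


assert is_valid_id("downtown_loft")
assert not is_valid_id("downtown-loft")  # hyphen not allowed
assert not is_valid_id("annual cost")    # no spaces
assert not is_valid_id("2026_option")    # may not start with a digit
```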
Alternatives are the options being evaluated.
Two types are supported:
- candidate: eligible for the final recommendation.
- reference: included for comparison, but excluded from the default candidate ranking.
Reference alternatives are useful for baselines such as the current supplier, current office, or status quo. They help answer “is switching worth it?” without letting the baseline accidentally become the recommended new option.
Criteria describe what matters.
A criterion can be:

- a leaf criterion, with a direction of min or max
- a group criterion, created with crit add-group

Only leaf criteria receive performance values and thresholds. Groups are used to organize criteria and receive local weights.
Participants submit raw local weights. Raw weights do not need to sum to 1. The analyzer normalizes sibling weights and then computes global leaf weights.
For example:

```
financial = 30
commute_time = 25
space_quality = 30
lease_flexibility = 15
```
If annual_cost is the only child under financial, then annual_cost inherits the
global weight of financial.
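That resolution can be sketched as follows, assuming a simple parent/child map; the function and data-structure names are illustrative, not the analyzer's internals:

```python
def resolve_global_weights(
    raw: dict[str, float], children: dict[str, list[str]]
) -> dict[str, float]:
    # raw: criterion id -> raw local weight.
    # children: group id (or "root") -> direct child ids.
    # Normalize sibling weights at each level, then multiply down to the leaves.
    global_weights: dict[str, float] = {}

    def descend(group: str, inherited: float) -> None:
        ids = children.get(group, [])
        total = sum(raw[i] for i in ids)
        for i in ids:
            share = inherited * raw[i] / total
            if i in children:            # group: keep descending
                descend(i, share)
            else:                        # leaf: record global weight
                global_weights[i] = share

    descend("root", 1.0)
    return global_weights


# Alice's raw weights from the example:
raw = {"financial": 30, "commute_time": 25, "space_quality": 30,
       "lease_flexibility": 15, "annual_cost": 1}
tree = {"root": ["financial", "commute_time", "space_quality", "lease_flexibility"],
        "financial": ["annual_cost"]}
weights = resolve_global_weights(raw, tree)
# annual_cost, as the only child of financial, inherits its full 0.30 share.
```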
ELECTRE III uses three thresholds per leaf criterion:

- q: indifference threshold
- p: preference threshold
- v: veto threshold

Use --no-veto for criteria where no veto should apply.
Performance values score each alternative on each leaf criterion. For max criteria,
higher values are better. For min criteria, lower values are better.
The current analyzer aggregates across all project participants.
Default strategies:

- weights: median
- performance values: confidence-weighted-mean
- thresholds: median

You can override these in analyze run.
Use weighted-sum when you want a clear score-based recommendation:

```shell
mcda analyze run --method weighted-sum
```

Weighted-sum normalizes each criterion to a 0-1 scale, reverses min criteria so higher
normalized values are better, multiplies by resolved weights, and sums the contributions.
It is easy to explain and always produces a total score.
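That recipe can be sketched in a few lines; this is an illustrative re-implementation, not the analyzer's actual code:

```python
def weighted_sum_scores(
    perf: dict[str, dict[str, float]],
    weights: dict[str, float],
    directions: dict[str, str],
) -> dict[str, float]:
    # perf: alternative -> {criterion -> raw value}
    # weights: criterion -> resolved global weight
    # directions: criterion -> "min" or "max"
    scores = {alt: 0.0 for alt in perf}
    for criterion, weight in weights.items():
        values = [perf[alt][criterion] for alt in perf]
        low, high = min(values), max(values)
        span = (high - low) or 1.0
        for alt in perf:
            normalized = (perf[alt][criterion] - low) / span
            if directions[criterion] == "min":
                normalized = 1.0 - normalized  # reverse so higher is better
            scores[alt] += weight * normalized
    return scores
```

An alternative that is best on every criterion scores 1.0; one that is worst on every criterion scores 0.0.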
Use ELECTRE III when thresholds, vetoes, and incomparability matter:

```shell
mcda analyze run --method electre-iii
```

ELECTRE III asks whether one alternative credibly outranks another. It may produce a tie or incomparability when the evidence does not support a clean ordering. That is often a feature, not a failure.
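For intuition, the textbook ELECTRE III partial concordance for a single max criterion looks like this; the tool's exact formulas may differ:

```python
def partial_concordance(g_a: float, g_b: float, q: float, p: float) -> float:
    # How strongly does a's value g_a support "a outranks b" against g_b,
    # given indifference threshold q and preference threshold p (q < p)?
    diff = g_b - g_a             # b's advantage over a
    if diff <= q:                # within indifference: full support
        return 1.0
    if diff >= p:                # b clearly preferred: no support
        return 0.0
    return (p - diff) / (p - q)  # linear in between
```

With the space_quality thresholds used later (q=5, p=15), a 4-point deficit still counts as full concordance, while a 20-point deficit contributes nothing.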
```shell
mcda --project <path> ...
mcda --human ...
mcda --quiet ...
```

- --project <path> points to a project root directory containing .mcda/.
- --human switches from default JSON output to a simpler human-readable presentation.
- --quiet is reserved for suppressing non-data human output.
Create a project:

```shell
mcda init <name> [--description "..."]
```

Show project status:

```shell
mcda info
```

Add participants:

```shell
mcda participant add <id> "<name>" [--bio "..."]
```

List participants:

```shell
mcda participant list
```

Show one participant:

```shell
mcda participant show <id>
```

Set a trait:

```shell
mcda participant set-trait <id> <key> <json-value>
```

Examples:

```shell
mcda participant set-trait alice role '"operations"'
mcda participant set-trait alice years_experience 10
```

Set scope:

```shell
mcda participant set-scope <id> --may-weight true
mcda participant set-scope <id> --may-weight false
```

The current implementation supports may_weight; criterion-specific evaluation scope is
planned but not yet implemented.
Add an alternative:

```shell
mcda alt add <id> "<name>" [--type candidate|reference]
```

List alternatives:

```shell
mcda alt list
mcda alt list --type candidate
mcda alt list --type reference
```

Show an alternative:

```shell
mcda alt show <id>
```

Retag an alternative:

```shell
mcda alt tag <id> --as candidate
mcda alt tag <id> --as reference
```

Add a leaf criterion:

```shell
mcda crit add <id> "<name>" --direction min|max --unit "<unit>"
```

Add a criterion under a group:

```shell
mcda crit add <id> "<name>" --direction min|max --unit "<unit>" --parent <group-id>
```

Add a group:

```shell
mcda crit add-group <id> "<name>"
```

List criteria:

```shell
mcda crit list
```

Show one criterion:

```shell
mcda crit show <id>
```

Set a participant's local weight:
```shell
mcda weights set <participant-id> <criterion-id> <value> [--confidence 0.0-1.0]
```

Examples:

```shell
mcda weights set alice financial 30 --confidence 0.9
mcda weights set alice commute_time 25 --confidence 0.8
```

Show weight records:

```shell
mcda weights show
```

Set thresholds for a leaf criterion:

```shell
mcda thresholds set <participant-id> <criterion-id> --q <value> --p <value> --v <value>
```

Set thresholds with no veto:

```shell
mcda thresholds set <participant-id> <criterion-id> --q <value> --p <value> --no-veto
```

Show threshold records:

```shell
mcda thresholds show
```

Set a performance value:

```shell
mcda perf set <participant-id> <alternative-id> <criterion-id> <value> [--confidence 0.0-1.0]
```

Record an abstention:

```shell
mcda perf abstain <participant-id> <alternative-id> <criterion-id> --reason "..."
```

Show performance records:

```shell
mcda perf show
```

Set a missing-data policy:

```shell
mcda policy set <key> <value> --by <participant-id> [--rationale "..."]
```

Show current policies:

```shell
mcda policy list
```

Current defaults:

```
perf-missing = exclude-participant
perf-abstention = exclude-participant
weights-missing = exclude-participant
thresholds-missing = exclude-participant
```
Start a session:

```shell
mcda session start --id <session-id> --participants <id> --participants <id>
```

Check session status:

```shell
mcda session status
```

List sessions:

```shell
mcda session list
```

Close a session:

```shell
mcda session close --notes "..."
```

When a session is active, new append-style records include a session field.
Run weighted-sum:

```shell
mcda analyze run --method weighted-sum
```

Run ELECTRE III:

```shell
mcda analyze run --method electre-iii
```

electre-iii is the default method, so mcda analyze run is equivalent to
mcda analyze run --method electre-iii.

Override aggregation:

```shell
mcda analyze run --method weighted-sum --weights-from median --perf-from confidence-weighted-mean
```

Use one participant for all values:

```shell
mcda analyze run --method weighted-sum --participant alice
```

Override lambda for ELECTRE III:

```shell
mcda analyze run --method electre-iii --lambda 0.8
```

Show the latest candidate ranking:

```shell
mcda analyze ranking
```

Include reference alternatives:

```shell
mcda analyze ranking --include-references
```

This example walks through a complete decision. The team needs to choose a new office lease for a 35-person software company.
There are three candidate offices:

- downtown_loft
- midtown_suite
- suburban_campus

There is also one reference alternative:

- current_office
The current office is included so the team can compare the new options to the status quo, but it should not be selected as the default recommendation.
```shell
mcda init office_lease_selection --description "Select the best office lease for the next three years."
cd office_lease_selection
```

Check the project:

```shell
mcda info
```

The project data now lives under office_lease_selection/.mcda/.
```shell
mcda participant add alice "Alice Rivera"
mcda participant add bob "Bob Chen"
mcda participant add carol "Carol Singh"
```

Add a few traits:

```shell
mcda participant set-trait alice role '"operations"'
mcda participant set-trait alice years_experience 10
mcda participant set-trait bob role '"engineering"'
mcda participant set-trait bob years_experience 7
mcda participant set-trait carol role '"finance"'
mcda participant set-trait carol years_experience 12
```

Add the alternatives:

```shell
mcda alt add downtown_loft "Downtown Loft" --type candidate
mcda alt add midtown_suite "Midtown Suite" --type candidate
mcda alt add suburban_campus "Suburban Campus" --type candidate
mcda alt add current_office "Current Office" --type reference
```

The team will evaluate each office by:
- annual cost
- commute time
- space quality
- lease flexibility
annual_cost sits under a financial group, which exercises hierarchical weighting.
```shell
mcda crit add-group financial "Financial"
mcda crit add annual_cost "Annual cost" --direction min --unit "thousands USD per year" --parent financial
mcda crit add commute_time "Commute time" --direction min --unit "average minutes"
mcda crit add space_quality "Space quality" --direction max --unit "score 0-100"
mcda crit add lease_flexibility "Lease flexibility" --direction max --unit "score 0-100"
```

Interpretation:

- annual_cost is min: lower cost is better.
- commute_time is min: shorter commute is better.
- space_quality is max: higher quality is better.
- lease_flexibility is max: more flexibility is better.
Participants submit local sibling weights. Root-level siblings are:
financial, commute_time, space_quality, lease_flexibility
Because annual_cost is the only child of financial, everyone gives it local weight
1 inside that group.
Alice:

```shell
mcda weights set alice financial 30 --confidence 0.9
mcda weights set alice commute_time 25 --confidence 0.8
mcda weights set alice space_quality 30 --confidence 0.9
mcda weights set alice lease_flexibility 15 --confidence 0.7
mcda weights set alice annual_cost 1 --confidence 1.0
```

Bob:

```shell
mcda weights set bob financial 20 --confidence 0.8
mcda weights set bob commute_time 35 --confidence 0.9
mcda weights set bob space_quality 30 --confidence 0.8
mcda weights set bob lease_flexibility 15 --confidence 0.7
mcda weights set bob annual_cost 1 --confidence 1.0
```

Carol:

```shell
mcda weights set carol financial 40 --confidence 0.95
mcda weights set carol commute_time 20 --confidence 0.8
mcda weights set carol space_quality 25 --confidence 0.8
mcda weights set carol lease_flexibility 15 --confidence 0.8
mcda weights set carol annual_cost 1 --confidence 1.0
```

With median aggregation, the expected global weights are:
| Criterion | Expected weight |
|---|---|
| annual_cost | 0.30 |
| commute_time | 0.25 |
| space_quality | 0.30 |
| lease_flexibility | 0.15 |
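The table can be double-checked with plain Python. Normalizing each participant's root-level weights and then taking the per-criterion median is one plausible order of operations that reproduces these numbers:

```python
from statistics import median

# Raw root-level weights as entered above.
raw = {
    "alice": {"financial": 30, "commute_time": 25, "space_quality": 30, "lease_flexibility": 15},
    "bob":   {"financial": 20, "commute_time": 35, "space_quality": 30, "lease_flexibility": 15},
    "carol": {"financial": 40, "commute_time": 20, "space_quality": 25, "lease_flexibility": 15},
}

# Normalize each participant's weights, then take the per-criterion median.
normalized = {
    person: {c: w / sum(ws.values()) for c, w in ws.items()}
    for person, ws in raw.items()
}
expected = {c: median(normalized[p][c] for p in normalized) for c in raw["alice"]}
# annual_cost is the only child of financial, so it inherits financial's 0.30.
```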
Use the same thresholds for all participants in this example.
Annual cost:

```shell
mcda thresholds set alice annual_cost --q 25 --p 75 --v 175
mcda thresholds set bob annual_cost --q 25 --p 75 --v 175
mcda thresholds set carol annual_cost --q 25 --p 75 --v 175
```

Commute time:

```shell
mcda thresholds set alice commute_time --q 3 --p 8 --v 20
mcda thresholds set bob commute_time --q 3 --p 8 --v 20
mcda thresholds set carol commute_time --q 3 --p 8 --v 20
```

Space quality:

```shell
mcda thresholds set alice space_quality --q 5 --p 15 --v 35
mcda thresholds set bob space_quality --q 5 --p 15 --v 35
mcda thresholds set carol space_quality --q 5 --p 15 --v 35
```

Lease flexibility has no veto:

```shell
mcda thresholds set alice lease_flexibility --q 5 --p 15 --no-veto
mcda thresholds set bob lease_flexibility --q 5 --p 15 --no-veto
mcda thresholds set carol lease_flexibility --q 5 --p 15 --no-veto
```

Interpretation:
- Cost differences within 25k USD are mostly indifferent.
- A commute difference over 20 minutes can veto an otherwise attractive option.
- Lease flexibility matters, but it cannot veto an alternative by itself.
For this example, each participant records the same performance values. This keeps the example focused on the decision model rather than disagreement over source data.
The values are:
| Alternative | annual_cost | commute_time | space_quality | lease_flexibility |
|---|---|---|---|---|
| downtown_loft | 620 | 28 | 92 | 55 |
| midtown_suite | 500 | 35 | 80 | 70 |
| suburban_campus | 390 | 52 | 72 | 88 |
| current_office | 540 | 38 | 68 | 45 |
Alice records the values:

```shell
mcda perf set alice downtown_loft annual_cost 620 --confidence 1
mcda perf set alice downtown_loft commute_time 28 --confidence 1
mcda perf set alice downtown_loft space_quality 92 --confidence 1
mcda perf set alice downtown_loft lease_flexibility 55 --confidence 1
mcda perf set alice midtown_suite annual_cost 500 --confidence 1
mcda perf set alice midtown_suite commute_time 35 --confidence 1
mcda perf set alice midtown_suite space_quality 80 --confidence 1
mcda perf set alice midtown_suite lease_flexibility 70 --confidence 1
mcda perf set alice suburban_campus annual_cost 390 --confidence 1
mcda perf set alice suburban_campus commute_time 52 --confidence 1
mcda perf set alice suburban_campus space_quality 72 --confidence 1
mcda perf set alice suburban_campus lease_flexibility 88 --confidence 1
mcda perf set alice current_office annual_cost 540 --confidence 1
mcda perf set alice current_office commute_time 38 --confidence 1
mcda perf set alice current_office space_quality 68 --confidence 1
mcda perf set alice current_office lease_flexibility 45 --confidence 1
```

Bob records the values:
```shell
mcda perf set bob downtown_loft annual_cost 620 --confidence 1
mcda perf set bob downtown_loft commute_time 28 --confidence 1
mcda perf set bob downtown_loft space_quality 92 --confidence 1
mcda perf set bob downtown_loft lease_flexibility 55 --confidence 1
mcda perf set bob midtown_suite annual_cost 500 --confidence 1
mcda perf set bob midtown_suite commute_time 35 --confidence 1
mcda perf set bob midtown_suite space_quality 80 --confidence 1
mcda perf set bob midtown_suite lease_flexibility 70 --confidence 1
mcda perf set bob suburban_campus annual_cost 390 --confidence 1
mcda perf set bob suburban_campus commute_time 52 --confidence 1
mcda perf set bob suburban_campus space_quality 72 --confidence 1
mcda perf set bob suburban_campus lease_flexibility 88 --confidence 1
mcda perf set bob current_office annual_cost 540 --confidence 1
mcda perf set bob current_office commute_time 38 --confidence 1
mcda perf set bob current_office space_quality 68 --confidence 1
mcda perf set bob current_office lease_flexibility 45 --confidence 1
```

Carol records the values:
```shell
mcda perf set carol downtown_loft annual_cost 620 --confidence 1
mcda perf set carol downtown_loft commute_time 28 --confidence 1
mcda perf set carol downtown_loft space_quality 92 --confidence 1
mcda perf set carol downtown_loft lease_flexibility 55 --confidence 1
mcda perf set carol midtown_suite annual_cost 500 --confidence 1
mcda perf set carol midtown_suite commute_time 35 --confidence 1
mcda perf set carol midtown_suite space_quality 80 --confidence 1
mcda perf set carol midtown_suite lease_flexibility 70 --confidence 1
mcda perf set carol suburban_campus annual_cost 390 --confidence 1
mcda perf set carol suburban_campus commute_time 52 --confidence 1
mcda perf set carol suburban_campus space_quality 72 --confidence 1
mcda perf set carol suburban_campus lease_flexibility 88 --confidence 1
mcda perf set carol current_office annual_cost 540 --confidence 1
mcda perf set carol current_office commute_time 38 --confidence 1
mcda perf set carol current_office space_quality 68 --confidence 1
mcda perf set carol current_office lease_flexibility 45 --confidence 1
```

Run ELECTRE III with default aggregation:
```shell
mcda analyze run --method electre-iii
```

Because electre-iii is currently the default, mcda analyze run gives the same result.

Run weighted-sum for a score-based comparison:

```shell
mcda analyze run --method weighted-sum
```

Both methods include:

- resolved_weights
- resolved_perf
- candidate_ranking
- reference_ranking

Weighted-sum also includes:

- normalized_perf
- weighted_contributions
- scores

ELECTRE III also includes:

- resolved_thresholds
- concordance
- credibility
- relations
- distillation

To inspect just the default candidate ranking:

```shell
mcda analyze ranking
```

To inspect the full ranking including the reference alternative:

```shell
mcda analyze ranking --include-references
```

Use the output as decision support, not as an automatic verdict.
In this scenario:

- downtown_loft is strongest on commute and space quality, but expensive.
- suburban_campus is strongest on cost and flexibility, but has a much worse commute.
- midtown_suite is the compromise option.
- current_office is shown as a reference baseline, not as a default candidate.
The final decision should consider:
- the candidate ranking
- any incomparability or close credibility values
- whether the reference office suggests staying put is still reasonable
- stakeholder judgment about tradeoffs not captured in the criteria
If the ranking is surprising, rerun with a different lambda or aggregation strategy:

```shell
mcda analyze run --lambda 0.8
mcda analyze run --weights-from confidence-weighted-mean
mcda analyze run --participant alice
```

The repository includes runnable Python demos that execute complete decisions, run analysis, save result JSON, and generate visualizations.
Run both demos from the repository root:

```shell
python examples/vendor_selection_demo.py
python examples/office_lease_demo.py
```

The two examples show different kinds of output:

- Vendor selection uses weighted-sum to produce a clear recommendation.
- Office lease selection uses ELECTRE III to show a genuinely ambiguous tradeoff.

The vendor selection demo chooses a cloud vendor for production hosting. It evaluates three candidate vendors and one reference vendor:

- balanced_vendor
- budget_vendor
- premium_vendor
- current_vendor
Run it:

```shell
python examples/vendor_selection_demo.py
```

The reusable outputs are:

```
docs/vendor_selection_result.json
docs/vendor_selection_outranking_graph.mmd
docs/figures/vendor_selection_weights.png
docs/figures/vendor_selection_normalized_performance.png
docs/figures/vendor_selection_score_contributions.png
docs/figures/vendor_selection_candidate_ranking.png
docs/figures/vendor_selection_credibility.png
```
The resolved weights are:
| Criterion | Weight |
|---|---|
| Annual cost | 0.20 |
| Migration effort | 0.15 |
| Security | 0.35 |
| Uptime | 0.30 |
Security and uptime dominate the decision. Cost matters, but it is not enough for the cheapest vendor to win if reliability and security are weak.
The normalized performance heatmap shows the tradeoff:
- Balanced Vendor is strong on uptime, security, and migration effort while staying reasonably priced.
- Budget Vendor wins on cost but performs poorly on security and uptime.
- Premium Vendor is excellent on uptime and security but is expensive and harder to migrate to.
- Current Vendor is easy to keep but weak on security.
Weighted-sum is useful because it can explain the score. Each stacked bar shows how much each criterion contributes to the final score.
The score table from docs/vendor_selection_result.json is:
| Alternative | Weighted-sum score | Candidate rank |
|---|---|---|
| Balanced Vendor | 0.810 | 1 |
| Premium Vendor | 0.650 | 2 |
| Budget Vendor | 0.225 | 3 |
The reference vendor, Current Vendor, scores 0.488. It is included in the result JSON
for comparison but excluded from the default candidate ranking.
The candidate ranking is:
[
{
"rank": 1,
"alternatives": ["balanced_vendor"],
"score": 0.809699
},
{
"rank": 2,
"alternatives": ["premium_vendor"],
"score": 0.65
},
{
"rank": 3,
"alternatives": ["budget_vendor"],
"score": 0.225
}
]This is the clean recommendation case: Balanced Vendor is the best overall choice under the stated weights and normalized performance values.
The same vendor data can be cross-checked with ELECTRE III. A network view is often clearer than a heatmap because ELECTRE credibility is directional: an arrow means the source alternative credibly outranks the target at the selected lambda.
```mermaid
flowchart LR
    balanced_vendor[Balanced Vendor]
    budget_vendor[Budget Vendor]
    premium_vendor[Premium Vendor]
    current_vendor[(Current Vendor<br/>reference)]
    classDef candidate fill:#eef6ff,stroke:#4C78A8,stroke-width:1px;
    classDef reference fill:#f7f7f7,stroke:#666,stroke-dasharray:4 3;
    class balanced_vendor,budget_vendor,premium_vendor candidate;
    class current_vendor reference;
    balanced_vendor -->|0.80| budget_vendor
    balanced_vendor -->|1.00| premium_vendor
    balanced_vendor -->|0.85| current_vendor
    current_vendor -->|0.80| budget_vendor
```
The network shows that Balanced Vendor credibly outranks Budget Vendor, Premium Vendor, and Current Vendor at the default cutoff. That makes the recommendation more robust than a score table alone. The heatmap preserves the underlying credibility values for readers who want the full pairwise detail.
The office lease demo executes the office lease example from above. It produces both a weighted-sum score and an ELECTRE III outranking analysis.
Run it from the repository root:

```shell
python examples/office_lease_demo.py
```

The script writes a disposable project here:

```
examples/output/office_lease_selection/
```

That project is ignored by Git. The reusable outputs are written here:

```
docs/office_lease_result.json
docs/office_lease_weighted_sum_result.json
docs/office_lease_lambda_sweep.json
docs/office_lease_outranking_graph.mmd
docs/figures/
```
The demo script deliberately calls the CLI through subprocess, so it exercises the same
workflow a user would run in a shell. The helper function is:
```python
import json
import subprocess
import sys

# PROJECT and ROOT are module-level Path constants in the demo script.
def mcda(args: list[str], project: bool = True) -> dict:
    command = [sys.executable, "-m", "mcda.cli"]
    if project:
        command.extend(["--project", str(PROJECT)])
    command.extend(args)
    completed = subprocess.run(command, cwd=ROOT, text=True, capture_output=True, check=False)
    if completed.returncode != 0:
        raise RuntimeError(f"Command failed: {' '.join(command)}\n{completed.stdout}\n{completed.stderr}")
    return json.loads(completed.stdout)["data"]
```

The result is ordinary JSON, so downstream analysis can use standard Python:

```python
from pathlib import Path

result = json.loads(Path("docs/office_lease_result.json").read_text())
weights = result["resolved_weights"]
credibility = result["credibility"]
ranking = result["candidate_ranking"]
```

For the weighted-sum comparison:

```python
weighted = json.loads(Path("docs/office_lease_weighted_sum_result.json").read_text())
scores = weighted["scores"]
```

The group's median normalized weights are:
| Criterion | Weight |
|---|---|
| Annual cost | 0.30 |
| Commute time | 0.25 |
| Space quality | 0.30 |
| Lease flexibility | 0.15 |
The important point is that the tool records raw participant weights, then resolves them into normalized global leaf weights for analysis. In this case, cost and quality carry the most influence, commute is close behind, and flexibility matters but has less leverage.
The plotting code is straightforward:
```python
import matplotlib.pyplot as plt

weights = result["resolved_weights"]
criteria = list(weights)
values = [weights[criterion] for criterion in criteria]
fig, ax = plt.subplots(figsize=(8, 4.5))
ax.bar([DISPLAY_NAMES[c] for c in criteria], values)
ax.set_ylabel("Resolved global weight")
ax.set_title("What the group decided matters most")
fig.tight_layout()
fig.savefig("docs/figures/office_lease_weights.png", dpi=160)
```

Raw criterion values have different units: thousands of dollars, commute minutes, and 0-100 scores. For visualization, the demo normalizes each criterion so:

- 1.0 means the best observed value on that criterion
- 0.0 means the worst observed value on that criterion
- min criteria are reversed so lower raw values become higher normalized scores
This makes the tradeoffs visible:
- Downtown Loft is excellent on commute and space quality, but poor on cost.
- Suburban Campus is excellent on cost and flexibility, but poor on commute.
- Midtown Suite is the compromise option.
- Current Office is useful as a baseline, but is weak on space quality and flexibility.
The normalization code is:
```python
def normalized_performance(result: dict) -> dict[str, dict[str, float]]:
    perf = result["resolved_perf"]
    normalized = {alt: {} for alt in perf}
    for criterion_id, spec in CRITERIA.items():
        values = [perf[alt][criterion_id] for alt in perf]
        low, high = min(values), max(values)
        span = high - low or 1
        for alternative_id in perf:
            raw = perf[alternative_id][criterion_id]
            if spec["direction"] == "max":
                score = (raw - low) / span
            else:
                score = (high - raw) / span
            normalized[alternative_id][criterion_id] = score
    return normalized
```

The best summary of the office ELECTRE result is the outranking network:
```mermaid
flowchart LR
    downtown_loft[Downtown Loft]
    midtown_suite[Midtown Suite]
    suburban_campus[Suburban Campus]
    current_office[(Current Office<br/>reference)]
    classDef candidate fill:#eef6ff,stroke:#4C78A8,stroke-width:1px;
    classDef reference fill:#f7f7f7,stroke:#666,stroke-dasharray:4 3;
    class downtown_loft,midtown_suite,suburban_campus candidate;
    class current_office reference;
    midtown_suite -->|1.00| current_office
    suburban_campus -->|0.75| current_office
```
There are no arrows among the three candidate offices, which means ELECTRE III did not
find a decisive outranking relation between them at lambda 0.75. That communicates the
core result more directly than the numeric matrix: two candidate offices beat the current
office, but the new-office choice remains unresolved.
The credibility matrix is still useful as the detailed diagnostic view. Each cell answers:
How credible is it that the row alternative outranks the column alternative?
At the default lambda of 0.75, a row alternative only outranks a column alternative when
its credibility is at least 0.75.
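The cutoff rule can be made concrete with a small sketch that classifies a pair of alternatives from its two directed credibility values. This is a simplification of the full distillation procedure, and the credibility numbers in the example are hypothetical:

```python
def relation(cred_ab: float, cred_ba: float, lam: float = 0.75) -> str:
    # Classify the pair (a, b) from the credibility that a outranks b
    # and the credibility that b outranks a, at cutoff lambda.
    a_out = cred_ab >= lam
    b_out = cred_ba >= lam
    if a_out and b_out:
        return "indifferent"
    if a_out:
        return "outranks"
    if b_out:
        return "outranked by"
    return "incomparable"


# Hypothetical credibility values:
assert relation(1.00, 0.20) == "outranks"
assert relation(0.60, 0.50) == "incomparable"
```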
In the generated result:

- Midtown Suite strongly outranks Current Office with credibility 1.00.
- Suburban Campus outranks Current Office at exactly 0.75.
- Downtown Loft does not clearly outrank the other candidates because its cost and flexibility weaknesses matter.
- Suburban Campus does not clearly outrank the other candidates because the commute veto pressure is severe.
This is where ELECTRE differs from a simple weighted average: it can say “these options are not cleanly comparable” instead of forcing a fragile total ordering.
The relation matrix converts credibility values into ELECTRE relations:

- outranks
- outranked by
- indifferent
- incomparable
For this scenario, the most important relation is that both Midtown Suite and Suburban Campus outrank the Current Office reference. That supports the practical conclusion that moving is plausibly better than staying put.
Among the three candidates, however, the default model leaves them incomparable. That is a useful result: the model is telling the facilitator that the final decision depends on a real tradeoff, not a hidden arithmetic winner.
The default candidate ranking is:
```json
[
  {
    "rank": 1,
    "alternatives": [
      "downtown_loft",
      "midtown_suite",
      "suburban_campus"
    ]
  }
]
```

This does not mean the three offices are identical. It means that, under the default lambda and thresholds, none of the candidates decisively outranks the others.
That gives the decision-maker a concrete next step:
- revisit the commute veto threshold
- test a different lambda
- ask whether cost or quality should receive more weight
- collect more evidence on space quality or flexibility
- make an explicit managerial judgment among the unresolved candidates
Weighted-sum does produce a total order for the same office data:
| Alternative | Weighted-sum score | Candidate rank |
|---|---|---|
| Downtown Loft | 0.585 | 1 |
| Midtown Suite | 0.571 | 2 |
| Suburban Campus | 0.500 | 3 |
This is useful, but the tiny gap between Downtown Loft and Midtown Suite is a warning. The ELECTRE analysis explains why the total order should not be overinterpreted: the options make different tradeoffs, and no candidate decisively outranks the others at the chosen thresholds and lambda.
The demo also reruns the analysis over several lambda values:
```python
rows = []
for lambda_value in [0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90]:
    result = mcda(["analyze", "run", "--lambda", f"{lambda_value:.2f}"])
    relations = [
        relation
        for source, targets in result["relations"].items()
        for target, relation in targets.items()
        if source != target
    ]
    rows.append(
        {
            "lambda": lambda_value,
            "outranks": relations.count("outranks"),
            "incomparable": relations.count("incomparable"),
            "top_candidate_group_size": len(result["candidate_ranking"][0]["alternatives"]),
        }
    )
```

The sweep shows how demanding the outranking standard is:
| Lambda | Outranking directions | Incomparable directions | Top candidate group size |
|---|---|---|---|
| 0.55 | 3 | 2 | 2 |
| 0.60 | 3 | 6 | 3 |
| 0.65 | 3 | 6 | 3 |
| 0.70 | 3 | 6 | 3 |
| 0.75 | 2 | 8 | 3 |
| 0.80 | 1 | 10 | 3 |
| 0.85 | 1 | 10 | 3 |
| 0.90 | 1 | 10 | 3 |
As lambda rises, the model requires stronger evidence before saying one option outranks another. Incomparability increases. This is useful in deliberation because it separates robust conclusions from conclusions that depend on an aggressive cutoff.
For this example, the tool does not produce a simplistic “winner.” It produces a structured decision picture:
- The group’s resolved priorities are explicit.
- The performance tradeoffs are visible criterion by criterion.
- The current office is beaten by plausible new options, so moving looks justified.
- The three new candidate offices remain meaningfully hard to compare.
- The final choice should focus on the unresolved tradeoff: pay more for Downtown Loft’s commute and quality, accept Midtown Suite’s compromise, or choose Suburban Campus for cost and flexibility despite the commute burden.
That is the point of the approach: it narrows the decision, exposes the tradeoffs, and shows where judgment is still needed.
Sessions are useful when you want to group a round of elicitation.
```shell
mcda session start --id round_1 --participants alice --participants bob --participants carol
```

Any new weight, threshold, or performance record written while the session is open will include:

```json
{
  "session": "round_1"
}
```

Close the session:

```shell
mcda session close --notes "Initial elicitation complete."
```

Sessions are administrative state, not a formal audit log.
Implemented:

- .mcda/ project initialization and discovery
- JSON-first CLI
- participants
- alternatives
- criteria and criterion groups
- weight records
- threshold records
- performance records and abstentions
- sessions
- policy listing and setting
- weighted-sum analysis
- ELECTRE III analysis
- latest-result candidate ranking
Planned:
- CSV and JSON imports
- report generation
- briefing generation
- sensitivity sweeps
- richer human-readable tables
- stricter all-issues-at-once validation
- removal commands
- more complete policy behavior
- additional MCDA methods
Run tests:

```shell
pytest -q
```

The current test suite exercises the first vertical slice:

- project creation under .mcda/
- ID validation
- session stamping
- office lease scenario setup
- hierarchical median weight aggregation
- weighted-sum analysis
- ELECTRE III analysis
- candidate/reference ranking separation