
Show configured missing dashboard services#173

Open
kfiramar wants to merge 5 commits into main from broken_dashboard_configured_missing_service_visibility-2

Conversation

kfiramar (Collaborator) commented May 2, 2026

Summary

This PR updates dashboard state tracking and rendering so the dashboard can show and restart services that are configured for a project but are missing from the active service records. This specifically makes project-scoped configured backend/frontend services visible as “stopped” when they are expected for a project but not currently running or previously recorded as stopped.

Functional Changes

  • Adds project-scoped dashboard metadata for configured services:

    • Successful startup state now records configured app service types per selected project under dashboard_project_configured_services.
    • Only supported dashboard app service types are tracked: backend and frontend.
  • Updates dashboard rendering to include configured-but-missing services:

    • If a project is configured for a backend or frontend and that service is absent from active services and stopped-service metadata, the dashboard now shows it as not running [Stopped].
    • The dashboard service totals now count these configured missing services as “not running.”
    • Frontend-only project configuration no longer causes an unconfigured backend row to appear.
  • Updates dashboard restart behavior:

    • Interactive restart now offers configured missing services as restartable options.
    • Restart selection can target synthetic service names such as Main Backend even when there is no prior service record.
    • Restart logic avoids offering unconfigured missing services.
  • Centralizes dashboard metadata handling:

    • Adds shared helpers for normalizing configured service types, reading stopped services, resolving canonical dashboard service names, and detecting configured missing services.
  • Improves project discovery filtering:

    • .omx and .envctl-state are ignored during planning discovery.
    • Tree project candidates that only contain stale .omx state are skipped.
    • Discovery now avoids treating empty/stale iteration directories as valid project roots.
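The rendering rule above can be sketched in a few lines, assuming the metadata is a plain project-to-types mapping and canonical service names follow the `Main Backend` pattern used elsewhere in this PR (the variable names here are illustrative, not the codebase's actual helpers):

```python
# Hypothetical shape of the new run-state metadata.
state_metadata = {
    "project_roots": {"Main": "/work/repo"},
    "dashboard_project_configured_services": {"Main": ["backend", "frontend"]},
}

active_service_names = {"Main Frontend"}   # services with live ServiceRecords
stopped_by_project = {}                    # previously recorded stopped services

rows = []
for project, types in state_metadata["dashboard_project_configured_services"].items():
    for service_type in types:
        name = f"{project} {service_type.capitalize()}"  # e.g. "Main Backend"
        # Configured, but absent from both active services and stopped metadata:
        if name not in active_service_names and service_type not in stopped_by_project.get(project, {}):
            rows.append(f"{service_type.capitalize()}: not running [Stopped]")

print(rows)  # ['Backend: not running [Stopped]']
```

The frontend is skipped because an active `Main Frontend` record exists; only the configured-but-missing backend is rendered as stopped.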

Tests Added/Updated

  • Added coverage for:

    • Dashboard rendering of project-configured missing services.
    • Ensuring frontend-only configuration does not imply a backend.
    • Interactive restart offering configured missing backend services.
    • Restarting a configured missing backend without a prior service record.
    • Writing project-scoped configured service metadata during startup finalization.
    • Ignoring stale .omx-only tree iteration directories.
  • Updated existing discovery tests to use non-stale project roots and reflect the new filtering behavior.


Summary

This PR updates dashboard service metadata so that only services that are actually configured for a selected project are recorded as dashboard-configured services. This ensures missing-service visibility is based on project-specific service availability instead of only runtime mode enablement.

Changes

  • Added project-level dashboard service configuration detection for supported dashboard app service types:

    • Checks explicit ENVCTL_BACKEND_START_CMD / ENVCTL_FRONTEND_START_CMD values from environment or raw config.
    • Falls back to command auto-detection via suggest_service_start_command.
    • Ignores unsupported service types.
  • Updated startup finalization metadata generation to:

    • Evaluate configured dashboard services per selected project.
    • Include only backend/frontend services that are both enabled for the runtime mode and actually configured for that project.
    • Skip projects without a valid name or root.
    • Omit unconfigured service types from dashboard_project_configured_services.
  • Updated startup orchestrator profile tests to reflect the new configuration-aware behavior:

    • Existing metadata tests now provide explicit start commands where needed.
    • Added coverage for a frontend-only project layout to verify backend is not marked configured when only frontend can be detected.

Files Changed

  • python/envctl_engine/shared/dashboard_metadata.py

    • Added dashboard_project_service_configured.
  • python/envctl_engine/startup/finalization.py

    • Changed dashboard project configured-service metadata creation to use per-project service configuration checks.
  • tests/python/startup/test_startup_orchestrator_profiles.py

    • Updated test fixtures and added frontend-only layout coverage.
  • .sisyphus/ralph-loop.local.md

    • Added local workflow/session metadata.

kfiramar and others added 4 commits April 29, 2026 22:23
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

)
if nested_iters:
    for iter_dir in nested_iters:
        if not _looks_like_tree_project_root(iter_dir):
kody code-review Kody Rules high

Missing test coverage for new planning filter logic. The new _looks_like_tree_project_root path filtering changes which feature projects are appended, but the PR does not include tests for success paths or edge cases such as directories containing only .omx and unreadable paths.

Kody Rule violation: Tests Required for New Business Logic

def test_looks_like_tree_project_root_rejects_directory_containing_only_omx(tmp_path):
    project_dir = tmp_path / "feature"
    project_dir.mkdir()
    (project_dir / ".omx").mkdir()

    assert _looks_like_tree_project_root(project_dir) is False


def test_looks_like_tree_project_root_accepts_directory_with_project_content(tmp_path):
    project_dir = tmp_path / "feature"
    project_dir.mkdir()
    (project_dir / ".omx").mkdir()
    (project_dir / "README.md").write_text("content")

    assert _looks_like_tree_project_root(project_dir) is True


def test_looks_like_tree_project_root_rejects_unreadable_path(monkeypatch, tmp_path):
    project_dir = tmp_path / "feature"
    project_dir.mkdir()

    def raise_os_error(self):
        raise OSError("permission denied")

    # Path instances use __slots__, so patch iterdir on the concrete class instead.
    monkeypatch.setattr(type(project_dir), "iterdir", raise_os_error)

    assert _looks_like_tree_project_root(project_dir) is False


DASHBOARD_APP_SERVICE_TYPES = frozenset({"backend", "frontend"})


def normalize_dashboard_service_types(raw_types: object) -> list[str]:
kody code-review Kody Rules high

Missing test coverage for new dashboard metadata business logic. The added normalization, serialization, stopped-service derivation, and missing-service computation logic introduces data transformation behavior without tests for success paths and edge cases.

Kody Rule violation: Tests Required for New Business Logic

Add unit tests for the new dashboard metadata behavior, covering at least:

- normalize_dashboard_service_types returns sorted supported service types and ignores unsupported values.
- normalize_dashboard_service_types returns an empty list for invalid metadata shapes.
- serialize_dashboard_project_configured_services skips empty project names and projects with no valid service types.
- dashboard_project_configured_services returns an empty mapping for invalid metadata and normalizes valid mappings.
- dashboard_stopped_services_by_project ignores non-mapping entries, unsupported service types, and empty projects, and falls back to canonical service names when name is missing.
- dashboard_configured_missing_services_by_project excludes services already active in state.services and services already present in stopped metadata.
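A sketch of the normalization behavior the first two bullets describe, assuming the function simply filters against the supported-type set (the real implementation may differ):

```python
DASHBOARD_APP_SERVICE_TYPES = frozenset({"backend", "frontend"})

def normalize_dashboard_service_types(raw_types: object) -> list[str]:
    """Return the sorted supported service types found in raw_types, else []."""
    if not isinstance(raw_types, (list, tuple, set, frozenset)):
        return []  # invalid metadata shapes normalize to an empty list
    found = {str(t).strip().lower() for t in raw_types}
    return sorted(found & DASHBOARD_APP_SERVICE_TYPES)

print(normalize_dashboard_service_types(["frontend", "backend", "worker"]))  # ['backend', 'frontend']
print(normalize_dashboard_service_types("backend")) 
```

Note that a bare string is rejected as an invalid shape rather than iterated character by character.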


Comment thread python/envctl_engine/ui/dashboard/orchestrator.py
def _has_dashboard_stopped_services(state: RunState) -> bool:
    return bool(DashboardOrchestrator._dashboard_stopped_services_by_project(state))

@staticmethod
kody code-review Kody Rules high

Missing tests for new dashboard restart business logic: the PR changes restart eligibility, project ordering, configured-missing service offering, available service types, and service-name resolution without adding test coverage. Add tests covering configured missing services, stopped services, active-service de-duplication, project ordering, and requested service-type filtering edge cases.

Kody Rule violation: Tests Required for New Business Logic

Add or update tests for the new restart/dashboard behavior, including:
- `_has_restartable_inactive_services` returns true when configured missing services exist even when no stopped services exist.
- `_restart_project_order` includes projects from configured dashboard services without duplicating existing projects case-insensitively.
- `_restart_services_by_project` offers configured missing services, emits `dashboard.restart.configured_missing_offered`, and skips service types already represented by stopped services.
- `_available_service_types_for_projects` includes project-level configured service types before falling back to global configured service types.
- `_service_names_for_projects_and_types` adds canonical configured service names only when requested, inactive, not stopped, and not already selected.
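The eligibility gates in the last bullet can be sketched as follows; the private method's real signature is not visible here, so the parameter names are hypothetical:

```python
def service_names_for_restart(
    requested_types: set,
    configured: dict,       # project -> configured service types
    active: set,            # names of services with live records
    stopped_names: set,     # names recorded in stopped-service metadata
    selected: list,         # names already chosen by earlier resolution steps
) -> list:
    """Append canonical configured-missing names that pass every eligibility gate."""
    names = list(selected)
    for project, types in configured.items():
        for service_type in types:
            if service_type not in requested_types:
                continue  # only offer service types the caller requested
            name = f"{project} {service_type.capitalize()}"  # e.g. "Main Backend"
            if name in active or name in stopped_names or name in names:
                continue  # skip active, stopped, or already-selected services
            names.append(name)
    return names
```

This is the path that lets restart target a synthetic name such as `Main Backend` even though no prior `ServiceRecord` exists.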
Prompt for LLM

File python/envctl_engine/ui/dashboard/orchestrator.py:

Line 1603:

Add test coverage for the new dashboard restart business logic in `python/envctl_engine/ui/dashboard/orchestrator.py`. The codebase rule requires tests for any PR that adds or changes business logic, data transformation, validation, authorization, background jobs, or API contracts, including success cases and important edge cases. This PR changes restart eligibility to include configured missing services, adds configured-service projects to restart ordering, offers configured missing services during restart, changes available service-type resolution, and adds canonical configured service-name resolution. Create focused tests for these behaviors, including de-duplication against active and stopped services, case-insensitive project ordering, emitted telemetry for configured missing services, requested service-type filtering, and fallback from project-level configured service types to global configured service types.



Comment on lines +1618 to +1622
for project_name in dashboard_project_configured_services(state):
    if project_name.casefold() in seen:
        continue
    seen.add(project_name.casefold())
    names.append(project_name)
kody code-review Kody Rules low

Duplicated de-duplication logic in _restart_project_order: the new configured-services loop repeats the same case-insensitive membership check, seen update, and append sequence used by the preceding loop. Extract the repeated add-if-unseen behavior into a small helper and reuse it for both project sources.

Kody Rule violation: Extract duplicated logic into functions

def add_project_if_unseen(project_name: str) -> None:
    normalized = project_name.casefold()
    if normalized in seen:
        return
    seen.add(normalized)
    names.append(project_name)

for project_name in self._tree_project_names(state):
    add_project_if_unseen(project_name)
for project_name in dashboard_project_configured_services(state):
    add_project_if_unseen(project_name)
return names
Prompt for LLM

File python/envctl_engine/ui/dashboard/orchestrator.py:

Line 1618 to 1622:

Refactor `_restart_project_order` in `python/envctl_engine/ui/dashboard/orchestrator.py` to comply with the rule that duplicate sequences of operations must be moved into named helpers or utilities and reused. The new loop over configured dashboard services repeats the same case-insensitive project de-duplication sequence already used immediately above: check whether `project_name.casefold()` exists in `seen`, continue or return when present, add the normalized name to `seen`, and append the original project name to `names`. Extract that repeated behavior into a small helper and call it from both loops without changing ordering semantics.



return dashboard_stopped_services_by_project(state)


def _dashboard_visible_stopped_service_count(
kody code-review Kody Rules high

New dashboard business logic lacks test coverage. The configured-missing-services path changes projection membership, event emission, service row visibility, and stopped-service counting with duplicate suppression, but the PR does not add tests for success paths or edge cases.

Kody Rule violation: Tests Required for New Business Logic

Add tests covering the configured-missing-services dashboard behavior, including: configured missing services are counted when absent from state.services; services already present in state.services are not counted; duplicate service names across stopped and configured-missing metadata are counted once; configured missing projects are added to the projection; and dashboard.configured_missing_services is emitted with sorted service types.
Prompt for LLM

File python/envctl_engine/ui/dashboard/rendering.py:

Line 490:

Review this PR for missing tests. The code adds dashboard business logic for configured-but-missing services, including changes to stopped-service counting, duplicate suppression, projection updates, event emission, and backend/frontend row visibility. Our codebase requires tests for any new or changed business logic, including success behavior and important edge cases. Add tests that verify configured missing services are counted only when absent from state.services, duplicate names are counted once, configured missing projects are included in the dashboard projection, service rows render as not running, and the dashboard.configured_missing_services event is emitted with sorted service data.

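The duplicate-suppression rule being asked for reduces to a set union over canonical service names, assuming the counter operates on names (a hypothetical simplification of `_dashboard_visible_stopped_service_count`):

```python
def visible_not_running_count(
    stopped_names: set,
    configured_missing_names: set,
    active_names: set,
) -> int:
    # A name appearing in both stopped and configured-missing metadata counts once,
    # and anything already active is never counted as "not running".
    return len((stopped_names | configured_missing_names) - active_names)

print(visible_not_running_count({"Main Backend"}, {"Main Backend"}, set()))  # 1
```

A test along these lines would pin down both the once-only counting and the exclusion of active services.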


if isinstance(metadata_roots, dict):
    projection = {str(project).strip(): {} for project in metadata_roots if str(project).strip()}
stopped_services = _dashboard_stopped_services_by_project(state)
configured_missing_services = dashboard_configured_missing_services_by_project(state)
kody code-review Kody Rules low

User-facing dashboard behavior changed without documentation or changelog coverage. The dashboard now emits configured-missing-service metadata and renders configured-but-missing backend/frontend services as not running, which requires an end-user documentation/runbook update and a concise changelog entry.

Kody Rule violation: Update docs and changelog when user-facing

Add a concise changelog entry and update the relevant dashboard user/operator documentation or runbook to describe configured-but-missing services being displayed as not running and the new dashboard.configured_missing_services emitted metadata.
Prompt for LLM

File python/envctl_engine/ui/dashboard/rendering.py:

Line 57:

Review this PR for a user-facing dashboard behavior change. The code now reads configured missing services from dashboard metadata, emits a dashboard.configured_missing_services event, includes missing configured services in the dashboard projection, and displays configured-but-missing backend/frontend services as not running. Our codebase requires end-user docs or runbooks and a concise changelog entry whenever a change affects users or operators. Create the appropriate documentation and changelog update for this behavior.



Comment on lines +921 to +923
def fake_restart_selector(**kwargs):  # noqa: ANN001
    selector_calls.append(kwargs)
    return ["__RESTART__:service:Main Backend"]
kody code-review Kody Rules low

Missing type annotations in the nested fake_restart_selector function signature. The untyped **kwargs parameter and missing return annotation reduce readability and weaken static type checking.

Kody Rule violation: Use Type Annotations for Better Readability

def fake_restart_selector(**kwargs: Any) -> list[str]:
    selector_calls.append(kwargs)
    return ["__RESTART__:service:Main Backend"]


self.assertIn("Frontend: not running [Stopped]", output)
self.assertNotIn("Frontend: n/a [Unknown]", output)

def test_dashboard_shows_project_configured_missing_backend_as_stopped(self) -> None:
kody code-review Kody Rules low

Duplicated test setup spans both added dashboard tests, including temporary repo/runtime creation, runtime construction, state construction, stdout capture, and dashboard rendering. Extract the repeated setup/rendering sequence into a named helper and reuse it with only the configured services differing per test.

Kody Rule violation: Extract duplicated logic into functions

def _render_dashboard_for_configured_services(self, configured_services: list[str]) -> str:
    with tempfile.TemporaryDirectory() as tmpdir:
        repo = Path(tmpdir) / "repo"
        runtime = Path(tmpdir) / "runtime"
        (repo / ".git").mkdir(parents=True, exist_ok=True)
        engine = PythonEngineRuntime(load_config(self._config(repo, runtime)), env={"NO_COLOR": "1"})
        engine._reconcile_state_truth = lambda _state: []  # type: ignore[method-assign]

        state = RunState(
            run_id="run-1",
            mode="main",
            services={
                "Main Frontend": ServiceRecord(
                    name="Main Frontend",
                    type="frontend",
                    cwd=str(repo),
                    requested_port=9000,
                    actual_port=9000,
                    pid=1234,
                    status="running",
                ),
            },
            metadata={
                "project_roots": {"Main": str(repo)},
                "dashboard_project_configured_services": {"Main": configured_services},
            },
        )

        buffer = io.StringIO()
        with redirect_stdout(buffer):
            engine._print_dashboard_snapshot(state)
        return buffer.getvalue()

def test_dashboard_shows_project_configured_missing_backend_as_stopped(self) -> None:
    output = self._render_dashboard_for_configured_services(["backend", "frontend"])

    self.assertIn("services: 2 total | 1 running | 1 not running | 0 starting/unknown | 0 issues", output)
    self.assertIn("Backend: not running [Stopped]", output)
    self.assertIn("Frontend: http://localhost:9000", output)
    self.assertNotIn("Backend: not running [Configured]", output)

def test_dashboard_does_not_infer_backend_for_project_configured_frontend_only(self) -> None:
    output = self._render_dashboard_for_configured_services(["frontend"])

    self.assertIn("services: 1 total | 1 running | 0 starting/unknown | 0 issues", output)
    self.assertIn("Frontend: http://localhost:9000", output)
    self.assertNotIn("Backend:", output)
Prompt for LLM

File tests/python/ui/test_dashboard_rendering_parity.py:

Line 205:

Refactor the added dashboard rendering parity tests to comply with the rule that duplicated sequences of statements and repeated operations must be extracted into named helper functions or utilities. The two tests repeat the same temporary directory setup, repo/runtime path creation, .git directory creation, PythonEngineRuntime construction, reconcile-state override, RunState construction, stdout capture, and dashboard rendering; only the configured services and assertions differ. Extract the repeated setup/rendering logic into a helper method and update both tests to call it while preserving the current assertions and behavior.



Scope: dashboard rendering, restart selection, startup run-state metadata, and regression coverage for configured-but-missing app services.

Key behavior changes:
- Add `dashboard_project_configured_services` as project-scoped dashboard metadata for active run states.
- Gate that metadata by per-project service start capability so frontend-only layouts do not record backend as configured just because the runtime mode enables backend globally.
- Render configured-missing backend/frontend rows as `not running [Stopped]` only when project-scoped metadata says that app type is configured.
- Include configured-missing services in dashboard totals and restart resource selection without auto-starting them on render/resume.
- Synthesize canonical restart service names such as `Main Backend` so restart can start a configured app service even when no prior `ServiceRecord` exists.
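The restart-selector tests in this PR dispatch tokens shaped like `__RESTART__:service:Main Backend`; a hypothetical parser for that token shape (the real selector plumbing is not shown on this page) could look like:

```python
def parse_restart_token(token: str):
    """Split "__RESTART__:service:Main Backend" into ("service", "Main Backend")."""
    prefix = "__RESTART__:"
    if not token.startswith(prefix):
        return None  # not a restart selector entry
    kind, _, name = token[len(prefix):].partition(":")
    return (kind, name) if kind and name else None

print(parse_restart_token("__RESTART__:service:Main Backend"))  # ('service', 'Main Backend')
```

Because the name portion is taken whole after the second separator, synthesized names containing spaces survive round-tripping through the selector.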

Files/modules touched:
- `python/envctl_engine/shared/dashboard_metadata.py`
- `python/envctl_engine/startup/finalization.py`
- `python/envctl_engine/ui/dashboard/rendering.py`
- `python/envctl_engine/ui/dashboard/orchestrator.py`
- `tests/python/startup/test_startup_orchestrator_profiles.py`
- `tests/python/startup/test_startup_spinner_integration.py`
- `tests/python/ui/test_dashboard_orchestrator_restart_selector.py`
- `tests/python/ui/test_dashboard_rendering_parity.py`

Tests run and results:
- New regression subset: `8 passed`.
- Frontend-only metadata regression: failed before the capability gate, then passed.
- Targeted spec suite: `129 passed`.
- Full Python suite: `1949 passed, 12 skipped`.
- LSP diagnostics on modified source/tests: no diagnostics.
- Manual dashboard smoke: backend row rendered as `not running [Stopped]`; restart selector offered `Backend — Main (stopped)` and dispatched `services=['Main Backend']`, `restart_service_types=['backend']`.

Config/env/migrations: no migrations or external runtime configuration changes.

Risks/notes:
- Project-scoped configured services use mode/profile enablement plus per-project command/layout detection to avoid exposing unstartable backend/frontend rows for single-app project layouts.

kfiramar commented May 2, 2026

Code Review Completed! 🔥

The code review was successfully completed based on your current configurations.


@@ -0,0 +1,35 @@
---
kody code-review Kody Rules low

Noisy local artifact added to the PR: .sisyphus/ralph-loop.local.md contains session metadata and local agent workflow instructions unrelated to application behavior. Remove this file from the diff and keep local runtime/control files out of version control.

Kody Rule violation: Remove dead code and noisy diffs

Prompt for LLM

File .sisyphus/ralph-loop.local.md:

Line 1:

Remove `.sisyphus/ralph-loop.local.md` from this PR because the codebase rule requires diffs to avoid commented-out code, unused files, unrelated snapshots, and noisy local artifacts. The added file contains local Sisyphus session metadata such as `started_at`, `session_id`, iteration counters, and local agent workflow instructions, which are not application code or necessary project documentation.

