Replace shutdown channels with CancellationToken #9
Conversation
Switch from tokio broadcast and watch channels to tokio_util CancellationToken for shutdown signaling in both Devnet and RunningNode. This simplifies the shutdown logic and enables concurrent node shutdown in devnet. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
This PR updates devnet and node shutdown signaling to use tokio_util::CancellationToken, and adjusts health monitoring to match a now-synchronous is_running check.
Changes:
- Replace `broadcast`/`watch` shutdown channels with `CancellationToken` in `Devnet` and `RunningNode`
- Shut down devnet nodes concurrently via `futures::future::join_all`
- Update health monitor checks to call `is_running()` synchronously (no `.await`)
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| tests/e2e/testnet.rs | Updates health monitor to use non-async is_running() |
| src/node.rs | Replaces watch shutdown signaling with CancellationToken in RunningNode |
| src/devnet.rs | Replaces broadcast shutdown signaling with CancellationToken and concurrently shuts down nodes |
| Cargo.toml | Adds tokio-util dependency for CancellationToken |
```toml
# Async runtime
tokio = { version = "1.35", features = ["full", "signal"] }
tokio-util = { version = "0.7", features = ["rt"] }
```
CancellationToken lives under tokio_util::sync and typically requires enabling the tokio-util crate's sync feature. With only features = ["rt"], this may fail to compile. Consider switching to tokio-util = { version = "0.7", features = ["sync"] } (or include both "sync" and any other needed features).
```diff
-tokio-util = { version = "0.7", features = ["rt"] }
+tokio-util = { version = "0.7", features = ["rt", "sync"] }
```
Greptile Overview

Greptile Summary

Refactored shutdown signaling from broadcast/watch channels to `CancellationToken` in `Devnet` and `RunningNode`, with devnet nodes now shut down concurrently.

Note: The PR description mentions DHT-based routing and self-node filtering, but these features are not part of this PR (they were merged in PR #8). This PR only contains the `CancellationToken` shutdown refactor.

Confidence Score: 5/5
| Filename | Overview |
|---|---|
| Cargo.toml | Added tokio-util dependency with rt feature for CancellationToken support |
| src/devnet.rs | Replaced broadcast channels with CancellationToken for shutdown signaling; parallelized node shutdown with join_all; removed unused constant |
| src/node.rs | Replaced watch channels with CancellationToken for shutdown signaling; simplified shutdown logic by removing channel state checks |
| tests/e2e/testnet.rs | Updated is_running() call to synchronous method (removed .await), consistent with P2PNode API |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant Devnet
    participant ShutdownSignal
    participant HealthMonitor
    participant DevnetNode
    participant P2PNode

    Note over Devnet,ShutdownSignal: Shutdown Initiated
    Caller->>Devnet: shutdown()
    Devnet->>ShutdownSignal: cancel()
    par Health Monitor Shutdown
        ShutdownSignal-->>HealthMonitor: cancelled() fires
        HealthMonitor->>HealthMonitor: exit loop
    and Node Shutdown (parallel for all nodes)
        Devnet->>DevnetNode: take p2p_node
        Devnet->>P2PNode: shutdown().await
        P2PNode-->>Devnet: Result
        Devnet->>DevnetNode: set state to Stopped
    end
    Devnet-->>Caller: shutdown complete
```
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
```rust
let p2p_node = node.p2p_node.take();

shutdown_futures.push(async move {
    if let Some(p2p) = p2p_node {
        if let Err(e) = p2p.shutdown().await {
            warn!("Error shutting down node {}: {}", node_index, e);
        }
    }
    *node_state.write().await = NodeState::Stopped;
});
```
The async block pushed into shutdown_futures is not spawned as a task or wrapped in a future that will execute independently. Since join_all expects futures that are already running or self-contained, this code will execute the shutdown logic sequentially when join_all polls each future, defeating the purpose of concurrent shutdown. Spawn each shutdown as a tokio task (e.g., tokio::spawn(async move { ... })) and collect the join handles instead.
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
```rust
        *node_state.write().await = NodeState::Stopped;
    });
}

futures::future::join_all(shutdown_futures).await;
```
Missing import for futures crate. The code uses futures::future::join_all but there is no corresponding use futures statement at the top of the file. This will cause a compilation error. Add use futures or use futures::future to the imports.
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated no new comments.
dirvine left a comment
Clean refactor: broadcast/watch shutdown channels → CancellationToken. Concurrent shutdown via join_all is a solid improvement. All CI gates pass. LGTM.
Complete the Section 18 test matrix with the remaining scenarios:

- #3: Fresh replication stores chunk + updates PaidForList on remote nodes
- #9: Fetch retry rotates to alternate source
- #10: Fetch retry exhaustion with single source
- #11: Repeated ApplicationFailure events decrease peer trust score
- #12: Bootstrap node discovers keys stored on multiple peers
- #14: Hint construction covers all locally stored keys
- #15: Data and PaidForList survive node shutdown (partition)
- #17: Neighbor sync request returns valid response (admission test)
- #21: Paid-list majority confirmed from multiple peers via verification
- #24: PaidNotify propagates paid-list entries after fresh replication
- #25: Paid-list convergence verified via majority peer queries
- #44: PaidForList persists across restart (cold-start recovery)
- #45: PaidForList lost in fresh directory (unrecoverable scenario)

All 56 Section 18 scenarios now have test coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary

- Replace `tokio::sync::broadcast` and `tokio::sync::watch` shutdown channels with `tokio_util::CancellationToken` in `Devnet` and `RunningNode`
- Shut down devnet nodes concurrently via `join_all` instead of sequentially
- `CancellationToken::cancelled()` needs no extra borrow-check, removing nested `if` guards

Test plan

- `cargo build --release` compiles cleanly
- `cargo test` passes all existing tests

🤖 Generated with Claude Code