Refactor id and hashing #17
Pass NodeIdentity to saorsa-core's NodeConfig for message signing. Fix peer ID type conversions to use NodeId hex strings consistently.
- Set core_config.node_identity in NodeBuilder and Devnet
- Use NodeId::to_hex() for peer_id instead of node_id_to_peer_id
- Validate target peer as hex NodeId in chunk_protocol
- Fix pick_target_peer to clone the peer_id String correctly

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
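The hex-string conversion the commit describes can be sketched roughly like this. `NodeId` and `to_hex()` here are stand-ins modeled on the commit message, not saorsa-core's actual definitions:

```rust
// Hypothetical sketch of a NodeId wrapper with hex encoding; the real
// saorsa-core type and method may differ in detail.
#[derive(Clone, PartialEq, Eq, Debug)]
struct NodeId([u8; 32]);

impl NodeId {
    /// Hex-encode the raw ID bytes into a stable, human-readable peer string.
    fn to_hex(&self) -> String {
        self.0.iter().map(|b| format!("{b:02x}")).collect()
    }
}

fn main() {
    let id = NodeId([0xab; 32]);
    let hex = id.to_hex();
    // 32 bytes encode to 64 hex characters.
    assert_eq!(hex.len(), 64);
    assert!(hex.chars().all(|c| c.is_ascii_hexdigit()));
}
```

Encoding once at the boundary (rather than converting ad hoc at each call site) is what makes "use hex strings consistently" possible.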
Update for saorsa-core 817e193, which decoupled channel IDs from peer IDs. Rename transport_peer_id() → channel_id() at call sites, and clarify misleading names:
- DevnetNode.node_id → label (it's a human label, not a NodeId)
- node_id_to_peer_id() → node_id_to_hex() (it's just hex encoding)
- target_peer_id → target_channel_id where the value comes from channel_id()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Align with saorsa-core's PeerId struct rename: update imports from NodeId to PeerId, rename node_id_to_hex to peer_id_to_hex, and change .node_id() calls to .peer_id() throughout.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace SHA256 with BLAKE3 for chunk content-addressing across the codebase. Update all relevant functions, comments, and tests to reflect the change. Adjust error handling and PeerId type usage for consistency.
Refactor across the codebase to directly use PeerId instances instead of hex-encoded strings where possible. Remove `peer_id_to_hex` utility and update related data structures, imports, and tests accordingly.
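The direction of this refactor, keying data structures by `PeerId` values instead of hex strings, can be illustrated with a small sketch. `PeerId` here is a stand-in for saorsa-core's type, which derives the traits a map key needs:

```rust
use std::collections::HashMap;

// Stand-in for saorsa-core's PeerId; Eq + Hash let it serve as a map key
// directly, with no hex round-trip.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct PeerId([u8; 32]);

fn main() {
    let peer = PeerId([7; 32]);

    // Before the refactor: HashMap<String, _> keyed by peer_id_to_hex(&peer),
    // paying a String allocation per insert and lookup.
    // After: keyed by the PeerId itself.
    let mut chunks_held: HashMap<PeerId, Vec<&str>> = HashMap::new();
    chunks_held.entry(peer.clone()).or_default().push("chunk-1");

    assert_eq!(chunks_held.get(&peer).map(Vec::len), Some(1));
}
```

Besides avoiding allocations, this removes a class of bugs where a malformed hex string silently fails to match any peer.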
Optimize by directly dereferencing PeerId instances instead of cloning, streamlining identity resolution and configuration setup.
Update node.rs and testnet.rs to match saorsa-core's simplified bootstrap_peers field (removed bootstrap_peers_str, uses wait_for_peer_identity() for real PeerIds). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Simplify test code by directly dereferencing PeerId instances instead of cloning. Update affected functions in `testnet.rs` and `integration_tests.rs` for consistency.
Force-pushed from d9acbd5 to 220f224.
Directly compare `source` with dereferenced `target_peer` to remove redundant `as_ref` call.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
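The comparison fix amounts to dereferencing the `&PeerId` once and relying on `PartialEq`, rather than going through `as_ref`. A minimal sketch with an illustrative `PeerId` and helper:

```rust
// Sketch of the simplified comparison: with target_peer: &PeerId,
// `source == *target_peer` compares the values directly.
#[derive(Clone, PartialEq, Eq, Debug)]
struct PeerId(u64);

fn is_target(source: &PeerId, target_peer: &PeerId) -> bool {
    // One deref on each side; PeerId's PartialEq does the rest.
    *source == *target_peer
}

fn main() {
    assert!(is_target(&PeerId(1), &PeerId(1)));
    assert!(!is_target(&PeerId(1), &PeerId(2)));
}
```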
Greptile Summary

This PR performs two related refactors: (1) a full migration of the content-addressing hash algorithm from SHA256 to BLAKE3, and (2) a refactor of peer identity handling to use saorsa-core's PeerId type directly.
Important: The SHA256 → BLAKE3 switch is a breaking change for any persistent data. Chunk addresses stored under the old scheme are SHA256-derived and will not be found by the new BLAKE3-based lookups. There is no migration path included in this PR; all persisted storage will need to be wiped and re-ingested after deployment.

Confidence Score: 5/5
Last reviewed commit: c0e3c77
Pull request overview
Refactors peer identity handling and switches content-addressing/hashing from SHA-256 to BLAKE3 across the node, storage, client, and E2E test utilities.
Changes:
- Replace SHA-256 with BLAKE3 for `XorName`/content addressing and update related docs/tests/expected vectors.
- Simplify `PeerId` usage by passing references directly (remove redundant copies / fully-qualified types).
- Ensure devnet/testnet nodes have a `node_identity` configured for signing/identity-announce behavior.
Reviewed changes
Copilot reviewed 18 out of 19 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| tests/e2e/testnet.rs | Uses PeerId aliasing and adjusts remote chunk ops; adds node identity setup (currently duplicated/overwritten). |
| tests/e2e/live_testnet.rs | Updates content-addressing doc comment to BLAKE3. |
| tests/e2e/integration_tests.rs | Adjusts peer selection and updates address/hash expectation comments. |
| tests/e2e/data_types/scratchpad.rs | Switches scratchpad address derivation to BLAKE3. |
| tests/e2e/data_types/pointer.rs | Switches pointer address derivation to BLAKE3. |
| tests/e2e/data_types/graph_entry.rs | Switches graph entry address derivation to BLAKE3. |
| tests/e2e/data_types/chunk.rs | Updates chunk addressing docs and known-hash test vectors to BLAKE3. |
| src/upgrade/rollout.rs | Uses BLAKE3 for deterministic rollout hashing. |
| src/storage/lmdb.rs | Updates docs/tests to reflect BLAKE3-derived addresses. |
| src/storage/handler.rs | Updates address verification comment to BLAKE3. |
| src/node.rs | Minor comment punctuation tweak. |
| src/devnet.rs | Refactors devnet node labeling/peer id typing and loads identity for message signing. |
| src/client/quantum.rs | Updates docs to state BLAKE3 addressing. |
| src/client/mod.rs | Updates module docs to state BLAKE3 addressing. |
| src/client/data_types.rs | Implements compute_address using BLAKE3. |
| src/client/chunk_protocol.rs | Removes redundant PeerId copy; compares source against *target_peer. |
| src/ant_protocol/chunk.rs | Updates docs to state BLAKE3 addressing. |
| Cargo.toml | Replaces sha2 dependency with blake3. |
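The Cargo.toml row above corresponds to a dependency swap along these lines. The version numbers are illustrative, not taken from the PR:

```toml
[dependencies]
# Removed: sha2 = "0.10"
blake3 = "1"
```

Dropping `sha2` entirely (rather than keeping both) enforces at compile time that no code path still derives SHA256 addresses.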
Complete the Section 18 test matrix with the remaining scenarios:
- #3: Fresh replication stores chunk + updates PaidForList on remote nodes
- #9: Fetch retry rotates to alternate source
- #10: Fetch retry exhaustion with single source
- #11: Repeated ApplicationFailure events decrease peer trust score
- #12: Bootstrap node discovers keys stored on multiple peers
- #14: Hint construction covers all locally stored keys
- #15: Data and PaidForList survive node shutdown (partition)
- #17: Neighbor sync request returns valid response (admission test)
- #21: Paid-list majority confirmed from multiple peers via verification
- #24: PaidNotify propagates paid-list entries after fresh replication
- #25: Paid-list convergence verified via majority peer queries
- #44: PaidForList persists across restart (cold-start recovery)
- #45: PaidForList lost in fresh directory (unrecoverable scenario)

All 56 Section 18 scenarios now have test coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
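The retry-rotation behavior behind scenarios #9 and #10 can be sketched as a round-robin over the known sources. `pick_source` and the peer labels are illustrative, not the PR's actual API:

```rust
// Hedged sketch of fetch-source rotation: on each failed attempt, try the
// next known holder of the chunk instead of retrying the same peer.
fn pick_source<'a>(sources: &'a [&'a str], attempt: usize) -> Option<&'a str> {
    if sources.is_empty() {
        None
    } else {
        // Round-robin: attempt N maps to source N mod len.
        Some(sources[attempt % sources.len()])
    }
}

fn main() {
    let sources = ["peer-a", "peer-b"];
    assert_eq!(pick_source(&sources, 0), Some("peer-a"));
    // A failed first attempt rotates to the alternate source (scenario #9)...
    assert_eq!(pick_source(&sources, 1), Some("peer-b"));
    // ...while a single-source list can only exhaust against that one peer
    // (scenario #10).
    assert_eq!(pick_source(&["peer-a"], 5), Some("peer-a"));
}
```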
- #3: Add proper unit test in scheduling.rs exercising the full pipeline (PendingVerify → QueuedForFetch → Fetching → Stored); rename mislabeled e2e test to scenario_1_and_24
- #12: Rewrite e2e test to send verification requests to 4 holders and assert quorum-level presence + paid confirmations
- #13: Rename mislabeled bootstrap drain test in types.rs; add proper unit test in paid_list.rs covering range shrink, hysteresis retention, and new key acceptance
- #14: Rewrite e2e test to send NeighborSyncRequest and assert response hints cover all locally stored keys
- #15: Rewrite e2e test to store on 2 nodes, partition one, then verify paid-list authorization is confirmable via a verification request
- #17: Rewrite e2e test to store data on the receiver, send sync, and assert outbound replica hints are returned (proving bidirectional exchange)
- #55: Replace weak enum-distinctness check with the full audit-failure flow: compute digests, identify mismatches, filter by responsibility, verify an empty confirmed-failure set produces no evidence

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
No description provided.