feat: self-encryption, batched payments, and saorsa-core compatibility #26

mickvandijke merged 26 commits into main
Conversation
…modes Local devnets and testnets run all nodes on 127.0.0.1, which requires the transport layer to accept loopback connections. Set the env var in Devnet::start(), TestNetwork::start(), and NodeBuilder::build_core_config() for Development and Testnet network modes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace SAORSA_TRANSPORT_ALLOW_LOOPBACK env var with the new NodeConfig.allow_loopback config field from saorsa-core. Set it on the core config in devnet start_node(), testnet start_node(), and NodeBuilder::build_core_config() for Development/Testnet modes. Remove the now-unused TRANSPORT_ALLOW_LOOPBACK_ENV constant. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
saorsa-core changed bootstrap_peers from Vec<SocketAddr> to Vec<Multiaddr>. Use .iter().map(Into::into).collect() to convert at each assignment site. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Enables loopback connections for devnet/local testing when using saorsa-cli for uploads and downloads. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update all references to bootstrap addresses to use `MultiAddr` instead of `SocketAddr` following updates in `saorsa-core`. Adjust imports, configuration, and mappings accordingly.
Update configurations to use `listen_addrs` with `MultiAddr::quic()` exclusively, simplifying node setup and aligning with recent changes in `saorsa-core`.
Update configuration to use `ListenMode` for node setup, simplifying initialization and reducing redundancy. Adjust imports and mappings to reflect changes in `saorsa-core` API.
Replace plaintext file chunking with convergent encryption via the self_encryption crate (quantum-safe: ChaCha20-Poly1305 + BLAKE3).

- Add src/client/self_encrypt.rs with streaming encrypt/decrypt, DataMap serialization, and public/private data mode support
- Update CLI with --public upload flag and --datamap download option
- Set plaintext chunk size to 4MB-4KB via .cargo/config.toml to leave headroom for AEAD tag and compression overhead
- Point self_encryption dep at grumbach/post_quatum branch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Propagate I/O errors during file read instead of silently treating them as EOF
- Remove DATAMAP_HEX from stdout in private mode (security leak)
- Write decryption output to temp file, rename atomically on success
- Fix misleading low-memory-footprint docs
- Remove test-only functions from public API re-exports
…drain uploads on failure

- Move read_error check to after chunk iteration (not just after stream_encrypt) so mid-stream I/O failures are caught instead of silently truncating data
- Add randomness to temp file names to prevent collisions in concurrent calls
- Reject CurrentThread runtime explicitly instead of deadlocking on block_on
- Drain in-flight upload futures on failure, collecting tx_hashes from succeeded uploads and logging them before returning the error
- Collect tx_hashes on hash-mismatch path too (was previously discarded)
- Log temp file cleanup errors instead of silently swallowing them
- Remove dead .cargo/config.toml (MAX_CHUNK_SIZE env var had no effect)
- Extract open_encrypt_stream() to deduplicate file-read + encrypt setup
- Extract write_stream_to_file() to deduplicate atomic-write-and-rename
- Fix succeeded counter: track uploaded chunks, not tx hashes
- Capture tx_hashes before hash-mismatch early return
- Add READ_BUFFER_SIZE constant, reuse buffer across iterator calls
- Handle Windows rename-over-existing-file edge case
- Pin self_encryption dep to immutable commit rev
- Add backward-compat comment on legacy file_ops re-exports
Self-encryption fully replaces the legacy FileManifest/split_file API. Remove file_ops module, its exports from mod.rs and lib.rs, and update CLI help text to no longer reference manifest addresses.
…nConfig)

- Convert bootstrap_peers from Vec<SocketAddr> to Vec<MultiAddr> in node, devnet, and CLI
- Remove ProductionConfig import and field usage (removed from saorsa-core)
- Replace get_stats() with stats() and update field names (total_peers, average_quality)
- Fix pay_for_quotes destructuring to match evmlib 0.4.7 return type (BTreeMap, not tuple)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update configurations to align with saorsa-core simplifications. Replace `ListenMode` usage with `local` flag and optional `allow_loopback` field in devnet, testnet, and CLI. Adjust related imports and tests to reflect API changes.
Replace references to `NetworkCoordinator`, `EigenTrustEngine`, and `SecurityManager` with `P2PNode`, `TrustEngine`, and `IPDiversityConfig` to align with saorsa-core updates. Adjust documentation, configurations, and imports accordingly.
Replace per-chunk payment transactions with a single batch payment for all chunks, reducing on-chain transactions and eliminating nonce collision risks. Update `encrypt_and_upload_file` to include three distinct phases: encrypt, quote, pay + store. Add `PreparedChunk` struct and methods for quote collection and batch processing.
Replace single-phase chunk upload with a multi-wave pipeline using `PAYMENT_WAVE_SIZE` for optimized EVM payments and bounded memory. Introduce `PaidChunk` struct for decoupling payment and storage. Update `encrypt_and_upload_file` with wave-based streaming for improved performance and responsiveness.
Track and avoid duplicate payments for already processed chunks using `HashSet`. Update logging to reflect skipped duplicate chunks, ensuring more efficient and accurate upload flow.
…ergence Quote collection and chunk storage previously used independent DHT lookups, which could return different peer sets under churn. This meant the peer that received payment might never see the chunk, and the peer that stored the chunk might not have been paid — leading to rejected stores after gas was already spent. Now the closest peer is captured during quote collection and threaded through PreparedChunk → PaidChunk → put_chunk_with_proof, eliminating the second DHT round-trip and guaranteeing the storage target is always one of the paid peers. Also fixes a pre-existing build error (max_nodes_per_64 → max_nodes_per_ipv6_64) from a saorsa-core API rename. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
This PR upgrades saorsa-node to support streaming self-encryption for file uploads/downloads, introduces batched (wave-based) EVM payments with a pinned storage target, and updates the codebase/tests/CLI to be compatible with newer saorsa-core APIs (notably MultiAddr and NodeConfig::builder).
Changes:
- Add `client::self_encrypt` with streaming encrypt/upload and download/decrypt plus `DataMap` (de)serialization and public/private modes.
- Add batched payment primitives (`PreparedChunk`/`PaidChunk`, `batch_pay*`) and update storage flows/tests to pass an explicit `target_peer`.
- Update node/devnet/testnet/CLI configuration for `saorsa-core` API changes (builder config, `MultiAddr`, updated stats APIs), and remove legacy plaintext `file_ops`.
Reviewed changes
Copilot reviewed 17 out of 19 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| src/client/self_encrypt.rs | New self-encryption streaming upload/download + DataMap helpers and tests. |
| src/client/quantum.rs | Adds batch payment types/APIs and changes quoting to return a pinned target_peer; changes put_chunk_with_proof to require an explicit target peer. |
| src/client/mod.rs | Removes legacy file_ops module export; exposes self_encrypt. |
| src/client/file_ops.rs | Removes plaintext chunking/manifest logic (replaced by self-encryption). |
| src/bin/saorsa-cli/main.rs | Updates CLI upload/download flows to use DataMap + self-encryption; adds loopback option and input-size guarding. |
| src/bin/saorsa-cli/cli.rs | Adds --allow-loopback, --public, and --datamap flags; updates parsing tests. |
| src/node.rs | Migrates saorsa-core config creation to builder API, MultiAddr, and updated bootstrap stats. |
| src/devnet.rs | Updates devnet bootstrap addressing/config to MultiAddr + builder API. |
| src/storage/handler.rs | Uses shared compute_address import and simplifies error mapping. |
| src/payment/single_node.rs | Adapts to updated pay_for_quotes return type. |
| src/lib.rs | Re-exports self-encryption helpers and removes legacy manifest APIs from public exports. |
| tests/e2e/*.rs | Updates E2E tests for new quote return type and explicit target_peer storage calls; disables live testnet tests pending rewrite. |
| docs/DESIGN.md | Updates architecture references to match newer saorsa-core terminology. |
| Cargo.toml | Updates dependencies for saorsa-core compatibility and adds self_encryption git dependency. |
Comments suppressed due to low confidence (1)
src/client/quantum.rs:805
- The doc comment for `get_quotes_from_dht()` still describes returning only a vector of (`peer_id`, `quote`, `price`), but the function now returns (`target_peer`, `Vec<...>`). Please update the `# Returns` section to reflect the new tuple return value and clarify what `target_peer` represents.
```rust
/// # Returns
///
/// A vector of (`peer_id`, `PaymentQuote`, `Amount`) tuples containing the quoting peer's ID,
/// the quote, and its price.
///
/// # Errors
///
/// Returns an error if:
/// - DHT lookup fails
/// - Failed to collect enough quotes
/// - Quote deserialization fails
pub async fn get_quotes_from_dht(
    &self,
    content: &[u8],
) -> Result<(PeerId, Vec<(PeerId, PaymentQuote, Amount)>)> {
    let address = compute_address(content);
    let data_size = u64::try_from(content.len())
        .map_err(|e| Error::Network(format!("Content size too large: {e}")))?;
    self.get_quotes_from_dht_for_address(&address, data_size)
        .await
}
```
```rust
let handle = Handle::current();

let stream = self_encryption::streaming_decrypt(data_map, |batch: &[(usize, XorName)]| {
    let batch_owned: Vec<(usize, XorName)> = batch.to_vec();

    // block_in_place panics on current_thread runtime, and handle.block_on
    // deadlocks there. Reject unsupported runtime flavors explicitly.
    if handle.runtime_flavor() == tokio::runtime::RuntimeFlavor::CurrentThread {
        return Err(self_encryption::Error::Generic(
            "download_and_decrypt_file requires a multi_thread tokio runtime".into(),
        ));
    }
    tokio::task::block_in_place(|| {
```

```rust
if let Ok(guard) = read_error.lock() {
    if let Some(ref e) = *guard {
        return Err(Error::Io(std::io::Error::new(e.kind(), format!("{e}"))));
    }
}
```

```diff
 }

-        Ok(quotes_with_peers)
+        Ok((closest_peer, quotes_with_peers))
```

```diff
 [dependencies]
 # Core (provides EVERYTHING: networking, DHT, security, trust, storage)
-saorsa-core = "0.14.1"
+saorsa-core = { path = "../saorsa-core" }
```
Greptile Summary

This PR is a substantial and well-scoped feature upgrade that replaces plaintext file chunking (…).

Key highlights:

Confidence Score: 2/5
| Filename | Overview |
|---|---|
| Cargo.toml | Replaces versioned saorsa-core = "0.14.1" with a local path dependency { path = "../saorsa-core" } — this breaks any build that doesn't have the adjacent sibling directory, and cannot be published to crates.io. Also adds self_encryption from a git branch. |
| src/client/self_encrypt.rs | New file implementing streaming self-encryption with wave-based pipelined EVM payments. Well-structured with thorough unit tests; async/sync boundary in download_and_decrypt_file handled carefully with runtime flavor check. Minor: mutex poisoning in check_read_error is silently ignored. |
| src/client/quantum.rs | Adds PreparedChunk, PaidChunk, batch_pay, batch_pay_and_store, and prepare_chunk_payment. Breaking: put_chunk_with_proof now takes an explicit target_peer parameter; get_quotes_from_dht returns a tuple (PeerId, Vec<...>). The payment/storage peer-pinning fix is correct and well-documented. |
| src/node.rs | Updates to saorsa-core 0.16 builder API (NodeConfig::builder()), removes ProductionConfig, renames max_nodes_per_64 to max_nodes_per_ipv6_64, switches manager.get_stats() to manager.stats(). Clean migration; testnet mode now correctly sets allow_loopback. |
| src/bin/saorsa-cli/main.rs | Major rewrite of upload/download handlers to use self_encrypt API; adds --public upload mode and --datamap download option; guards wallet creation behind needs_wallet. Stdin/file size limits now enforced via MAX_CHUNK_SIZE. The default download path falls back to the plain name "downloaded_file" with no extension. |
| tests/e2e/live_testnet.rs | Three large test functions (load, verify, comprehensive) replaced with single-line unimplemented!() bodies, marked #[ignore]. These will panic rather than skip if explicitly invoked with cargo test -- --ignored. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI
    participant SelfEncrypt as self_encrypt.rs
    participant QuantumClient
    participant EVM as EVM (evmlib)
    participant Network as P2P Network
    CLI->>SelfEncrypt: encrypt_and_upload_file(path, client)
    SelfEncrypt->>SelfEncrypt: open_encrypt_stream(path, file_size)
    loop Wave loop (PAYMENT_WAVE_SIZE=64 chunks)
        SelfEncrypt->>SelfEncrypt: Pull wave of encrypted chunks from stream
        par Quote wave concurrently (pipelined with prior stores)
            SelfEncrypt->>QuantumClient: prepare_chunk_payment(content) ×N
            QuantumClient->>Network: DHT lookup → get quotes from peers
            Network-->>QuantumClient: quotes + target_peer (pinned)
            QuantumClient-->>SelfEncrypt: PreparedChunk { content, peer_quotes, target_peer }
        and Drain completed stores from prior wave
            SelfEncrypt->>SelfEncrypt: store_futs.next() — await completed PUTs
        end
        SelfEncrypt->>QuantumClient: batch_pay(prepared_chunks)
        QuantumClient->>EVM: pay_for_quotes(all_quotes in one tx)
        EVM-->>QuantumClient: BTreeMap<QuoteHash, TxHash>
        QuantumClient-->>SelfEncrypt: Vec<PaidChunk>
        loop For each PaidChunk (non-blocking launch)
            SelfEncrypt->>QuantumClient: put_chunk_with_proof(content, proof, target_peer)
            QuantumClient->>Network: ChunkPutRequest → target_peer (pinned during quoting)
        end
    end
    SelfEncrypt->>SelfEncrypt: Drain remaining store futures
    SelfEncrypt->>SelfEncrypt: check_read_error()
    SelfEncrypt->>SelfEncrypt: stream.into_datamap()
    SelfEncrypt-->>CLI: (DataMap, tx_hashes)
    alt Public mode
        CLI->>SelfEncrypt: store_data_map_public(data_map, client)
        SelfEncrypt->>QuantumClient: put_chunk_with_payment(serialized_data_map)
        QuantumClient-->>CLI: FILE_ADDRESS=<hex>
    else Private mode
        CLI->>CLI: write data_map to <filename>.datamap
        CLI-->>CLI: DATAMAP_FILE=<path>
    end
```
Comments Outside Diff (3)
- **Cargo.toml, line 9-10:** Local path dependency should not be merged to main

  `saorsa-core = { path = "../saorsa-core" }` is a development-only path dependency that will break any build that doesn't have the `saorsa-core` repo checked out at exactly `../saorsa-core` relative to this project. This will fail CI, contributor builds, and cannot be published to crates.io. The PR description references saorsa-core 0.15/0.16 APIs — this should be pinned to a released version before merging:

  ```suggestion
  saorsa-core = "0.16"
  ```

  Similarly, the `self_encryption` git-branch dependency (`branch = "post_quatum"`) introduces the same instability risk: the branch could be force-pushed or deleted. Consider locking to a specific commit SHA or waiting for the crate to be published:

  ```toml
  self_encryption = { git = "https://github.com/grumbach/self_encryption.git", rev = "<commit-sha>" }
  ```
- **tests/e2e/live_testnet.rs, line 2697-2812:** `#[ignore]` tests call `unimplemented!()` — will panic if explicitly invoked

  All three test bodies (`run_load_test`, `run_verify_chunks`, `run_comprehensive_data_tests`) now end with:

  ```rust
  let _node = create_testnet_client().await;
  unimplemented!("rewrite with QuantumClient");
  ```

  While `#[ignore]` prevents them from running in the default `cargo test` pass, running `cargo test -- --ignored` (common in CI pipelines that separately exercise ignored tests) will cause a panic instead of a clean skip. This is surprising behaviour for any future contributor. A cleaner approach that still communicates the TODO without panicking:

  ```rust
  #[tokio::test]
  #[ignore = "needs rewrite: dht_put/dht_get removed in saorsa-core 0.16"]
  async fn run_load_test() {
      // TODO: rewrite using QuantumClient once the batch-payment API is stable.
  }
  ```

  The same applies to `run_verify_chunks` and `run_comprehensive_data_tests`.
- **src/client/self_encrypt.rs, line 1356-1363:** Poisoned mutex is silently ignored — read errors could be swallowed

  If the iterator closure panics while holding the `read_error_writer` lock, `read_error.lock()` will return `Err(PoisonError)`. The current code treats that as "no error" and returns `Ok(())`, potentially hiding the fact that file reading was interrupted abnormally.

  ```rust
  fn check_read_error(read_error: &ReadErrorCapture) -> Result<()> {
      if let Ok(guard) = read_error.lock() { // ← PoisonError silently dropped
          if let Some(ref e) = *guard {
              return Err(Error::Io(std::io::Error::new(e.kind(), format!("{e}"))));
          }
      }
      Ok(())
  }
  ```

  Consider surfacing the poison case:

  ```rust
  fn check_read_error(read_error: &ReadErrorCapture) -> Result<()> {
      match read_error.lock() {
          Ok(guard) => {
              if let Some(ref e) = *guard {
                  return Err(Error::Io(std::io::Error::new(e.kind(), format!("{e}"))));
              }
          }
          Err(_) => {
              return Err(Error::Io(std::io::Error::new(
                  std::io::ErrorKind::Other,
                  "Read error capture mutex was poisoned",
              )));
          }
      }
      Ok(())
  }
  ```
Last reviewed commit: dc07561
…yments Resolved add/add conflict between development and main branches. Kept the wave-based pipelined payment implementation (development) over the simpler UPLOAD_CONCURRENCY approach (main), as it provides batched EVM payments, pipeline overlap, and duplicate chunk deduplication. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
This PR modernizes saorsa-node’s client upload path by integrating streaming self-encryption and introducing batched/wave-based EVM payments, while also updating the codebase to newer saorsa-core APIs (MultiAddr bootstrap peers, config builder changes, updated stats APIs). It also updates the CLI and E2E tests to reflect the new quoting/payment + pinned storage target workflow.
Changes:
- Add wave-based pipelined encrypt→quote→batch-pay→store flow for bounded-memory file uploads.
- Introduce `PreparedChunk`/`PaidChunk` and batch payment APIs in `QuantumClient`, and pin storage to a quoted peer.
- Update node/devnet/CLI/testnet code to saorsa-core's MultiAddr + `NodeConfig::builder()` configuration style.
Reviewed changes
Copilot reviewed 14 out of 16 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| src/client/self_encrypt.rs | Implements wave-based pipelined quoting/payment/storage during streaming self-encrypted file upload. |
| src/client/quantum.rs | Adds PreparedChunk/PaidChunk, batch payment APIs, returns pinned target_peer from quoting, and updates put_chunk_with_proof signature. |
| src/payment/single_node.rs | Adjusts handling of wallet.pay_for_quotes return type in the payment flow. |
| src/node.rs | Updates core config construction to saorsa-core builder + MultiAddr bootstrap peers; adapts bootstrap stats API changes. |
| src/devnet.rs | Migrates devnet bootstrapping/config to MultiAddr and builder-based core config. |
| src/lib.rs | Updates crate-level architecture docs to match saorsa-core naming/API changes. |
| src/bin/saorsa-cli/cli.rs | Adds --allow-loopback flag to control local/loopback connectivity behavior. |
| src/bin/saorsa-cli/main.rs | Updates bootstrap resolution to MultiAddr and client node creation to use saorsa-core config builder. |
| tests/e2e/testnet.rs | Migrates bootstrap addresses to MultiAddr and updates proof-based storage calls to include pinned target peer. |
| tests/e2e/security_attacks.rs | Updates quote helper and proof-based puts to include pinned target peer. |
| tests/e2e/payment_flow.rs | Updates quote collection callsite for new (target_peer, quotes) return type. |
| tests/e2e/integration_tests.rs | Updates proof-based put call to pass an explicit target peer. |
| tests/e2e/complete_payment_e2e.rs | Updates quote collection and proof-based puts to include pinned target peer. |
| tests/e2e/live_testnet.rs | Disables legacy live-testnet tests pending rewrite for QuantumClient (core API removed raw DHT ops). |
| docs/DESIGN.md | Refreshes design doc terminology to match newer saorsa-core components. |
| Cargo.toml | Bumps saorsa-core dependency version. |
```rust
// Pin the closest peer as the storage target. This peer is always
// among the quoted set, so the payment proof will include it.
let closest_peer = remote_peers[0];
```

```rust
for chunk_result in chunks_iter.by_ref() {
    let (hash, content) = chunk_result
        .map_err(|e| Error::Crypto(format!("Self-encryption failed: {e}")))?;
    if !paid_addresses.insert(hash) {
        duplicates_skipped += 1;
```

```rust
// Track chunk addresses already paid for to avoid duplicate payments
// across waves. Content-addressed chunks (BLAKE3) with identical content
// share the same address, so we only need to pay once.
let mut paid_addresses: HashSet<XorName> = HashSet::new();
```

```rust
//! - Networking via `P2PNode`
//! - DHT via `AdaptiveDHT`
//! - Trust via `TrustEngine`
//! - Security via `IPDiversityConfig`
```
evmlib 0.4.9 changed pay_for_quotes to return (BTreeMap, GasInfo) instead of just BTreeMap. Destructure the tuple at both call sites. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
This PR modernizes saorsa-node’s client upload path by integrating streaming self-encryption and introducing batched, wave-based EVM payments, while also updating the node/devnet/CLI code to be compatible with newer saorsa-core APIs (notably MultiAddr bootstraps and NodeConfig builder usage).
Changes:
- Add streaming self-encryption upload with wave-based quote→batch-pay→store pipelining.
- Introduce batched payment primitives (`PreparedChunk`/`PaidChunk`) and pin storage to the peer selected during quoting to ensure the paid peer is the one storing.
- Update node/devnet/CLI/test code to `saorsa-core` config + bootstrap (MultiAddr) API changes.
Reviewed changes
Copilot reviewed 14 out of 16 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| src/client/self_encrypt.rs | Implements wave-based pipelined encrypt+quote+batch-pay+store flow and updates upload logic. |
| src/client/quantum.rs | Adds PreparedChunk/PaidChunk, batch payment APIs, and quote API returning a pinned target_peer. |
| src/client/mod.rs | Re-exports newly introduced batch-payment types. |
| src/node.rs | Migrates core config building to builder API, converts bootstrap peers to MultiAddr, updates bootstrap stats callsites. |
| src/devnet.rs | Updates devnet manifest/bootstrap handling to MultiAddr and core builder config construction. |
| src/bin/saorsa-cli/cli.rs | Adds CLI flag for loopback allowance in local/dev scenarios. |
| src/bin/saorsa-cli/main.rs | Updates bootstrap resolution to MultiAddr and core config creation to builder API. |
| tests/e2e/testnet.rs | Updates testnet harness to use MultiAddr bootstraps and the new pinned-target store API. |
| tests/e2e/security_attacks.rs | Updates quote helper and proof-based PUT calls to pass the pinned target_peer. |
| tests/e2e/payment_flow.rs | Adjusts for updated get_quotes_from_dht return type. |
| tests/e2e/integration_tests.rs | Updates proof-based PUT signature to include a target peer. |
| tests/e2e/complete_payment_e2e.rs | Adapts quote collection / proof PUT to the pinned-target peer API. |
| tests/e2e/live_testnet.rs | Marks removed raw DHT ops tests as ignored pending rewrite for QuantumClient. |
| src/lib.rs | Updates crate-level architecture doc bullets to reflect new saorsa-core naming/components. |
| docs/DESIGN.md | Updates design doc terminology and saorsa-core integration references. |
| Cargo.toml | Bumps saorsa-core and evmlib versions to match updated APIs. |
/// 1. **Stream** encrypted chunks lazily from the file — at most one wave
///    of chunks lives in memory at a time.
/// 2. **Wave loop** — for each wave of `PAYMENT_WAVE_SIZE` chunks:
///    - **Quote** the wave concurrently, while draining completed stores
///      from the previous wave via `select!`.
///    - **Pay** the wave in a single EVM transaction.
///    - **Launch stores** for the wave (non-blocking, added to the shared
///      store pool).
/// 3. **Drain** — await any remaining in-flight stores.
/// 4. **`DataMap`** — extract the `DataMap` after the encryption stream is
///    exhausted.
///
/// Chunks are streamed lazily from the encryption iterator and uploaded with
/// bounded parallelism (`UPLOAD_CONCURRENCY` uploads in flight at once).
/// Peak memory is bounded by the concurrency limit, not the file size.
/// This gives us batched payments (no nonce collisions, fewer on-chain txs),
/// pipelining (stores from wave N overlap with quotes for wave N+1), and
/// bounded memory (only one wave of chunks buffered at a time).
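The wave loop described above can be sketched as a synchronous simplification. This is a minimal illustration under stated assumptions, not the real implementation: `batch_pay` and the `Chunk` alias below are hypothetical stand-ins, and the real code quotes, pays, and stores asynchronously with bounded concurrency.

```rust
use std::collections::HashSet;

const PAYMENT_WAVE_SIZE: usize = 4; // illustrative; the real constant may differ

// Stand-in for a (content hash, chunk bytes) pair from the encryption stream.
type Chunk = (u64, Vec<u8>);

// Pretend a single EVM transaction pays for the whole wave.
fn batch_pay(wave: &[Chunk]) -> usize {
    wave.len()
}

/// Returns (chunks stored, duplicates skipped).
fn upload_in_waves(chunks: impl Iterator<Item = Chunk>) -> (usize, usize) {
    let mut paid_addresses: HashSet<u64> = HashSet::new();
    let mut duplicates_skipped = 0;
    let mut stored = 0;
    let mut chunks = chunks.peekable();

    while chunks.peek().is_some() {
        // Pull the next wave, skipping addresses already paid in a prior wave.
        let mut wave: Vec<Chunk> = Vec::with_capacity(PAYMENT_WAVE_SIZE);
        for (hash, content) in chunks.by_ref() {
            if !paid_addresses.insert(hash) {
                duplicates_skipped += 1;
                continue;
            }
            wave.push((hash, content));
            if wave.len() >= PAYMENT_WAVE_SIZE {
                break;
            }
        }
        if wave.is_empty() {
            break;
        }
        // Pay the wave in one batch; real code then launches stores here.
        stored += batch_pay(&wave);
    }
    (stored, duplicates_skipped)
}

fn main() {
    // Six chunks, with hash 1 appearing three times.
    let chunks = vec![
        (1u64, vec![0u8]), (2, vec![0]), (1, vec![0]),
        (3, vec![0]), (1, vec![0]), (4, vec![0]),
    ];
    let (stored, dups) = upload_in_waves(chunks.into_iter());
    println!("{stored} {dups}"); // four unique chunks, two duplicates skipped
}
```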
// Pull the next wave of chunks from the encryption stream,
// skipping any chunk whose address was already paid in a prior wave.
let mut wave: Vec<Bytes> = Vec::with_capacity(PAYMENT_WAVE_SIZE);
for chunk_result in chunks_iter.by_ref() {
    let (hash, content) = chunk_result
        .map_err(|e| Error::Crypto(format!("Self-encryption failed: {e}")))?;
    if !paid_addresses.insert(hash) {
        duplicates_skipped += 1;
        continue;
    }
    wave.push(content);
    if wave.len() >= PAYMENT_WAVE_SIZE {
        break;
    }
}

if in_flight.is_empty() {
    if wave.is_empty() {
        break;
    }

// Await the next completed upload
let (num, hash, result) = in_flight
    .next()
    .await
    .ok_or_else(|| Error::Crypto("Upload stream unexpectedly empty".into()))?;
match result {
    Ok((address, tx_hashes)) => {
        // Always capture tx hashes, even on mismatch
        all_tx_hashes.extend(tx_hashes.iter().map(|tx| format!("{tx:?}")));
        uploaded_chunks += 1;
        if address != hash.0 {
            // Drain remaining in-flight futures before returning
            while let Some((_, _, res)) = in_flight.next().await {
                if let Ok((_, txs)) = res {
                    all_tx_hashes.extend(txs.iter().map(|tx| format!("{tx:?}")));
                    uploaded_chunks += 1;
                }
            }
            if uploaded_chunks > 0 {
                warn!(
                    "{uploaded_chunks} chunk(s) already uploaded before hash mismatch on chunk {num}; \
                     tx_hashes so far: {all_tx_hashes:?}"
                );
            }
            return Err(Error::Crypto(format!(
                "Hash mismatch for chunk {num}: self_encryption={} network={}",
                hex::encode(hash.0),
                hex::encode(address)
            )));
        }
    }
    Err(e) => {
        // Drain remaining in-flight futures so we don't lose paid chunks
        while let Some((_, _, res)) = in_flight.next().await {
            if let Ok((_, txs)) = res {
                all_tx_hashes.extend(txs.iter().map(|tx| format!("{tx:?}")));
                uploaded_chunks += 1;
            }
        }
        if uploaded_chunks > 0 {
            warn!(
                "{uploaded_chunks} chunk(s) already uploaded successfully before failure on chunk {num}; \
                 tx_hashes so far: {all_tx_hashes:?}"
            );
        }
        return Err(e);
    }
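The drain-on-error pattern above (keep consuming in-flight results so tx hashes of already-paid chunks are not lost, then surface the first error) has a simple synchronous analogue. This sketch replaces the futures stream with a thread and an mpsc channel; `StoreResult` and `collect_with_drain` are illustrative names, not the crate's API.

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Debug, PartialEq)]
enum StoreResult {
    Ok(&'static str),  // tx hash of a paid, stored chunk
    Err(&'static str), // store failure
}

/// Drains the channel completely even after an error, so tx hashes of
/// chunks that were already paid for are still recorded, then returns
/// the first error (if any).
fn collect_with_drain(rx: mpsc::Receiver<StoreResult>) -> Result<Vec<String>, String> {
    let mut tx_hashes = Vec::new();
    let mut first_err: Option<String> = None;
    for res in rx {
        match res {
            StoreResult::Ok(tx) => tx_hashes.push(tx.to_string()),
            StoreResult::Err(e) if first_err.is_none() => first_err = Some(e.to_string()),
            StoreResult::Err(_) => {} // already failing; just keep draining
        }
    }
    match first_err {
        Some(e) => {
            eprintln!("{} chunk(s) stored before failure; hashes: {tx_hashes:?}", tx_hashes.len());
            Err(e)
        }
        None => Ok(tx_hashes),
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        tx.send(StoreResult::Ok("0xaa")).unwrap();
        tx.send(StoreResult::Err("store failed")).unwrap();
        tx.send(StoreResult::Ok("0xbb")).unwrap(); // still drained and recorded
    });
    let res = collect_with_drain(rx);
    assert_eq!(res, Err("store failed".to_string()));
}
```

The design point mirrors the PR: an error mid-wave must not discard evidence of payments that already went through.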
let wave_size = wave.len();
chunk_count += wave_size;
info!(
    "Wave {wave_idx}: quoting {wave_size} chunks ({} stores in flight)",
    store_futs.len()
);

// Quote this wave concurrently, draining completed stores in parallel.
let prepared = quote_wave_pipelined(&wave, client, &mut store_futs).await?;

// Pay for this wave (single EVM transaction).
let paid = client.batch_pay(prepared).await?;

// Launch stores for this wave — content moves into the futures,
// freeing the wave buffer for the next iteration.
for paid_chunk in paid {
    all_tx_hashes.extend(paid_chunk.tx_hashes.iter().map(|tx| format!("{tx:?}")));
    store_futs.push(Box::pin(store_paid_chunk(client, paid_chunk)));
}
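The pipelining claim (stores from wave N overlap with quotes for wave N+1) can be illustrated with plain threads. A rough analogue under assumptions: `quote_wave` is a made-up stand-in for the real quoting call, and background threads stand in for the store futures.

```rust
use std::thread;
use std::time::Duration;

// Pretend to fetch a price quote for each chunk in the wave.
fn quote_wave(wave: &[u32]) -> Vec<u32> {
    wave.iter().map(|c| c * 10).collect()
}

fn main() {
    // Stores from the *previous* wave run in the background...
    let prev_wave_stores: Vec<_> = (0..3)
        .map(|i| {
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(10)); // simulated store latency
                i
            })
        })
        .collect();

    // ...while the *current* wave is being quoted on this thread.
    let quotes = quote_wave(&[1, 2, 3]);

    // Drain: wait for the previous wave's stores to finish.
    let stored: Vec<i32> = prev_wave_stores
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect();

    assert_eq!(quotes, vec![10, 20, 30]);
    assert_eq!(stored, vec![0, 1, 2]);
}
```

In the real code the overlap is cooperative (a `select!` over store futures and quote requests) rather than thread-based, but the latency-hiding effect is the same.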
@@ -396,7 +434,7 @@ impl QuantumClient {

        Self::send_put_and_await(
            node,
-           &target_peer,
+           target_peer,
            message_bytes,
            request_id,
            self.config.timeout_secs,
@@ -406,6 +444,193 @@ impl QuantumClient {
        .await
Add unit and e2e tests covering the remaining Section 18 scenarios:

Unit tests (32 new):
- Quorum: #4 fail→abandoned, #16 timeout→inconclusive, #27 single-round dual-evidence, #28 dynamic threshold undersized, #33 batched per-key, #34 partial response unresolved, #42 quorum-derived paid-list auth
- Admission: #5 unauthorized peer, #7 out-of-range rejected
- Config: #18 invalid config rejected, #26 dynamic paid threshold
- Scheduling: #8 dedup safety, #8 replica/paid collapse
- Neighbor sync: #35 round-robin cooldown skip, #36 cycle completion, #38 snapshot stability mid-join, #39 unreachable removal + slot fill, #40 cooldown peer removed, #41 cycle termination guarantee, consecutive rounds, cycle preserves sync times
- Pruning: #50 hysteresis prevents premature delete, #51 timestamp reset on heal, #52 paid/record timestamps independent, #23 entry removal
- Audit: #19/#53 partial failure mixed responsibility, #54 all pass, #55 empty failure discard, #56 repair opportunity filter, response count validation, digest uses full record bytes
- Types: #13 bootstrap drain, repair opportunity edge cases, terminal state variants
- Bootstrap claims: #46 first-seen recorded, #49 cleared on normal

E2e tests (4 new):
- #2 fresh offer with empty PoP rejected
- #5/#37 neighbor sync request returns response
- #11 audit challenge multi-key (present + absent)
- Fetch not-found for non-existent key

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary

- Self-encryption: replaces plaintext file chunking with convergent encryption via the self_encryption crate (ChaCha20-Poly1305 + BLAKE3). Adds streaming encrypt/decrypt, DataMap serialization, and public/private data mode support.
- saorsa-core compatibility: ListenMode → local/allow_loopback, renamed stats methods, removed ProductionConfig.
- Removes the file_ops chunking module, now fully replaced by self-encryption.

Key changes

- src/client/self_encrypt.rs (new), src/client/quantum.rs, src/client/mod.rs
- src/client/self_encrypt.rs, src/client/quantum.rs
- src/client/self_encrypt.rs, src/storage/handler.rs
- src/node.rs, src/devnet.rs, src/lib.rs, Cargo.toml
- src/bin/saorsa-cli/cli.rs, src/bin/saorsa-cli/main.rs
- tests/e2e/*.rs

Test plan

- cargo build --release succeeds
- cargo test passes all unit and integration tests
- cargo clippy --all-features clean

🤖 Generated with Claude Code
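For intuition on the convergent encryption mentioned above: each chunk's key is derived from the chunk's own content hash, so identical plaintext encrypts to identical ciphertext and duplicate chunks can be deduplicated before payment. A toy stdlib-only sketch of that structure follows; it is not secure cryptography (XOR with a DefaultHasher-derived key stands in for ChaCha20-Poly1305, and DefaultHasher for BLAKE3).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for BLAKE3: hash the chunk content to a 64-bit value.
fn content_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Stand-in for ChaCha20-Poly1305: XOR with a key-derived byte cycle.
fn keystream_xor(data: &[u8], key: u64) -> Vec<u8> {
    data.iter()
        .zip(key.to_le_bytes().iter().cycle())
        .map(|(b, k)| b ^ k)
        .collect()
}

/// Convergent structure: the key IS the content hash, so the same
/// plaintext always yields the same (key, ciphertext) pair.
fn encrypt_chunk(plain: &[u8]) -> (u64, Vec<u8>) {
    let key = content_hash(plain);
    (key, keystream_xor(plain, key))
}

fn decrypt_chunk(key: u64, cipher: &[u8]) -> Vec<u8> {
    keystream_xor(cipher, key) // XOR is its own inverse
}

fn main() {
    let chunk = b"hello world".to_vec();
    let (key, cipher) = encrypt_chunk(&chunk);
    // Same content produces the same ciphertext (convergence / dedup).
    assert_eq!(encrypt_chunk(&chunk).1, cipher);
    // Roundtrip recovers the plaintext.
    assert_eq!(decrypt_chunk(key, &cipher), chunk);
    println!("ok");
}
```

The keys (one per chunk) are what the DataMap collects; holding the DataMap is what lets a reader decrypt the chunks it references.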