
fix: configure max message size for 4 MiB chunk transport #12

Merged

mickvandijke merged 2 commits into main from fix/message-size on Feb 26, 2026

Conversation

@mickvandijke
Collaborator

Summary

  • Bump saorsa-core to 0.12.0, which exposes max_message_size on CoreConfig
  • Add a configurable max_message_size to NodeConfig (default: 5 MiB via MAX_WIRE_MESSAGE_SIZE); see the sketch after this list
  • Forward the setting to the QUIC transport layer so receive windows accommodate full-size (4 MiB) chunks plus serialization overhead
  • Update e2e test to store and retrieve a max-size 4 MiB chunk, exercising QUIC stream flow-control limits end-to-end
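
A rough sketch of the configuration shape described above. NodeConfig and MAX_WIRE_MESSAGE_SIZE are the names used in this PR; the exact field layout and the MAX_CHUNK_SIZE constant are illustrative assumptions, not the crate's actual code.

```rust
// Illustrative sketch only; the real definitions live in src/ant_protocol/mod.rs
// and src/config.rs and may differ in detail.

/// Maximum chunk payload size on the network (4 MiB); constant name assumed.
pub const MAX_CHUNK_SIZE: usize = 4 * 1024 * 1024;

/// Wire-message ceiling: a full 4 MiB chunk plus ~1 MiB of serialization overhead.
pub const MAX_WIRE_MESSAGE_SIZE: usize = 5 * 1024 * 1024;

pub struct NodeConfig {
    /// Upper bound for a single message on the QUIC transport.
    pub max_message_size: usize,
    // ... other node settings elided ...
}

impl Default for NodeConfig {
    fn default() -> Self {
        Self {
            max_message_size: MAX_WIRE_MESSAGE_SIZE,
        }
    }
}
```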

Test plan

  • cargo test passes (including the updated test_chunk_store_on_remote_node with 4 MiB payloads; a rough shape of that test is sketched after this list)
  • cargo clippy --all-features -- -D clippy::panic -D clippy::unwrap_used -D clippy::expect_used clean
  • Deploy to testnet and verify large chunk put/get works across nodes
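
For orientation, a max-size round-trip test of this kind might look roughly as follows. The test name matches the PR, but the harness helpers (spawn_testnet, put_chunk, get_chunk) and the tokio test attribute are assumptions about the test setup, not the actual code in tests/e2e/data_types/chunk.rs.

```rust
// Illustrative outline only; the real test uses the project's own testnet and
// client helpers, which are not shown on this page.
#[tokio::test]
async fn test_chunk_store_on_remote_node() {
    // Maximum-size payload: 4 MiB of deterministic bytes.
    let payload = vec![0x5Au8; 4 * 1024 * 1024];

    // `spawn_testnet`, `put_chunk`, and `get_chunk` are hypothetical
    // stand-ins for the project's actual test-harness API.
    let net = spawn_testnet(2).await;
    let address = net.client().put_chunk(&payload).await.unwrap();
    let fetched = net.client().get_chunk(&address).await.unwrap();

    // The full 4 MiB chunk must round-trip across the QUIC transport.
    assert_eq!(fetched, payload);
}
```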

🤖 Generated with Claude Code

Bump saorsa-core to 0.12.0, which exposes max_message_size on CoreConfig,
then wire it through NodeConfig so the QUIC receive window accommodates
full-size (4 MiB) chunks plus serialization overhead (5 MiB wire).
Update e2e tests to exercise max-size chunk round-trips.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
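
For a sense of the numbers: 5 MiB on the wire leaves roughly 1 MiB of headroom above a 4 MiB chunk for addressing, signatures, and serialization framing. A tiny self-contained check of that arithmetic (sizes from the PR description; the test name is illustrative):

```rust
#[test]
fn wire_limit_leaves_headroom_for_overhead() {
    let max_chunk = 4 * 1024 * 1024usize; // 4 MiB chunk payload
    let max_wire = 5 * 1024 * 1024usize;  // 5 MiB wire-message ceiling
    // 1 MiB (1_048_576 bytes) remains for addressing, signatures, and framing.
    assert_eq!(max_wire - max_chunk, 1024 * 1024);
    assert!(max_wire > max_chunk);
}
```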
Copilot AI review requested due to automatic review settings February 24, 2026 09:14

greptile-apps Bot commented Feb 24, 2026

Greptile Summary

Configures the QUIC transport layer to handle maximum-size (4 MiB) chunk transfers by setting max_message_size to 5 MiB (accommodating serialization overhead). The change bumps saorsa-core to 0.12.0, adds the configurable max_message_size field to NodeConfig, forwards it to the transport layer, and updates the e2e test to exercise full-size chunk transfers across nodes.

Key changes:

  • Dependency bump to saorsa-core 0.12.0 exposes max_message_size on CoreConfig
  • New max_message_size configuration field (default: 5 MiB via MAX_WIRE_MESSAGE_SIZE)
  • QUIC stream receive windows now accommodate 4 MiB chunks plus protocol overhead (see the forwarding sketch after this list)
  • E2E test validates max-size chunk storage/retrieval across nodes under QUIC flow-control limits
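
The forwarding step can be pictured like this; the CoreConfig stub below only mirrors the field named in this PR, since saorsa-core 0.12.0's actual struct layout is not shown on this page.

```rust
// Illustrative only: `CoreConfig` stands in for the saorsa-core 0.12.0 type,
// whose real shape may differ; the actual forwarding lives in src/node.rs.
#[derive(Default)]
struct CoreConfig {
    max_message_size: usize,
    // ... other core/QUIC settings elided ...
}

struct NodeConfig {
    max_message_size: usize,
}

/// Propagate the node-level message ceiling into the core config so QUIC
/// stream receive windows are sized for a full 4 MiB chunk plus overhead.
fn build_core_config(node_cfg: &NodeConfig) -> CoreConfig {
    CoreConfig {
        max_message_size: node_cfg.max_message_size,
        ..CoreConfig::default()
    }
}
```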

Confidence Score: 5/5

  • Safe to merge - clean implementation with comprehensive test coverage
  • The PR makes focused, well-documented changes to support max-size chunk transfers. Code follows project standards (no unwrap/expect/panic in production code), includes clear documentation, and adds an e2e test that exercises the full 4 MiB payload path. The implementation correctly propagates the configuration from NodeConfig through to the QUIC transport layer.
  • No files require special attention

Important Files Changed

Filename — Overview
Cargo.toml — Bumps saorsa-core from 0.11.1 to 0.12.0 to access the max_message_size configuration
src/ant_protocol/mod.rs — Exports the MAX_WIRE_MESSAGE_SIZE constant for use in node configuration
src/config.rs — Adds the max_message_size field to NodeConfig with a default of 5 MiB (a 4 MiB chunk plus overhead)
src/node.rs — Forwards max_message_size from NodeConfig to CoreConfig for QUIC transport tuning
tests/e2e/data_types/chunk.rs — Updates the cross-node test to use max-size 4 MiB chunks, exercising QUIC flow-control limits
tests/e2e/testnet.rs — Configures test-network nodes with MAX_WIRE_MESSAGE_SIZE to support max-size chunk tests

Last reviewed commit: 07e85df


Copilot AI left a comment


Pull request overview

This PR configures the QUIC transport layer to properly handle maximum-size (4 MiB) data chunks by bumping the saorsa-core dependency and adding a configurable max_message_size setting. The change ensures that QUIC stream receive windows accommodate full-size chunks plus serialization overhead, preventing potential message truncation issues during transmission.

Changes:

  • Bump saorsa-core from 0.11.1 to 0.12.0, which exposes the max_message_size configuration option
  • Add max_message_size field to NodeConfig with a default of 5 MiB (MAX_WIRE_MESSAGE_SIZE)
  • Forward the setting from NodeConfig to saorsa-core's CoreNodeConfig in both production code and tests
  • Update end-to-end test to use 4 MiB chunks instead of small chunks to exercise QUIC flow-control limits

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

Summary per file:

File — Description
Cargo.toml — Bump saorsa-core dependency from 0.11.1 to 0.12.0
src/ant_protocol/mod.rs — Export the MAX_WIRE_MESSAGE_SIZE constant for external use
src/config.rs — Add the max_message_size field to NodeConfig with a default implementation
src/node.rs — Forward max_message_size from NodeConfig to CoreNodeConfig
tests/e2e/testnet.rs — Configure max_message_size in the test setup to accommodate max-size chunks
tests/e2e/data_types/chunk.rs — Update the test to use 4 MiB chunks instead of small chunks to validate QUIC stream limits


mickvandijke merged commit 5ab7477 into main on Feb 26, 2026
12 of 22 checks passed
mickvandijke deleted the fix/message-size branch on February 26, 2026 09:29
mickvandijke added a commit that referenced this pull request Apr 1, 2026
Complete the Section 18 test matrix with the remaining scenarios:

- #3: Fresh replication stores chunk + updates PaidForList on remote nodes
- #9: Fetch retry rotates to alternate source
- #10: Fetch retry exhaustion with single source
- #11: Repeated ApplicationFailure events decrease peer trust score
- #12: Bootstrap node discovers keys stored on multiple peers
- #14: Hint construction covers all locally stored keys
- #15: Data and PaidForList survive node shutdown (partition)
- #17: Neighbor sync request returns valid response (admission test)
- #21: Paid-list majority confirmed from multiple peers via verification
- #24: PaidNotify propagates paid-list entries after fresh replication
- #25: Paid-list convergence verified via majority peer queries
- #44: PaidForList persists across restart (cold-start recovery)
- #45: PaidForList lost in fresh directory (unrecoverable scenario)

All 56 Section 18 scenarios now have test coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mickvandijke added a commit that referenced this pull request Apr 1, 2026
- #3: Add proper unit test in scheduling.rs exercising full pipeline
  (PendingVerify → QueuedForFetch → Fetching → Stored); rename
  mislabeled e2e test to scenario_1_and_24
- #12: Rewrite e2e test to send verification requests to 4 holders
  and assert quorum-level presence + paid confirmations
- #13: Rename mislabeled bootstrap drain test in types.rs; add proper
  unit test in paid_list.rs covering range shrink, hysteresis retention,
  and new key acceptance
- #14: Rewrite e2e test to send NeighborSyncRequest and assert response
  hints cover all locally stored keys
- #15: Rewrite e2e test to store on 2 nodes, partition one, then verify
  paid-list authorization confirmable via verification request
- #17: Rewrite e2e test to store data on receiver, send sync, and assert
  outbound replica hints returned (proving bidirectional exchange)
- #55: Replace weak enum-distinctness check with full audit failure flow:
  compute digests, identify mismatches, filter by responsibility, verify
  empty confirmed failure set produces no evidence

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>