[fix](move-memtable) check segment num when closing each tablet #36753

dataroaring merged 6 commits into apache:master from
Conversation

Thank you for your contribution to Apache Doris. Since 2024-03-18, the documentation has been moved to doris-website.

run buildall

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 40062 ms
TPC-DS: Total hot run time: 174365 ms
ClickBench: Total hot run time: 31.24 s

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 40390 ms
TPC-DS: Total hot run time: 174086 ms
ClickBench: Total hot run time: 30.49 s

run buildall

clang-tidy review says "All clean, LGTM! 👍"

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 39941 ms
TPC-DS: Total hot run time: 171429 ms
ClickBench: Total hot run time: 30.8 s

Please add an injection regression case.

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 40192 ms
TPC-DS: Total hot run time: 172934 ms

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 39965 ms
TPC-DS: Total hot run time: 174301 ms
ClickBench: Total hot run time: 30.63 s

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 40547 ms
TPC-DS: Total hot run time: 173391 ms
ClickBench: Total hot run time: 30.21 s

PR approved by at least one committer and no changes requested.

PR approved by anyone and no changes requested.
## Proposed changes fix load stream test after #36753
## Proposed changes

Previously, there was a chance that the sender failed to send some data while the receiver remained unaware of it. If some segments were silently skipped, data would be lost.

This PR fixes the problem by adding checks on both the sender and the receiver. When the sender fails to send an RPC, LoadStreamStub marks the involved tablets as failed. Each sender sends the segment count for each tablet in CLOSE_LOAD, and the receivers (LoadStream) sum these counts and check them against the number of segments actually received.
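The receiver-side check can be sketched roughly as follows. This is a minimal illustrative model, not Doris's actual `LoadStream` implementation: the class name, method names, and data layout are hypothetical. The idea is that the receiver counts segments as they arrive per tablet, accumulates the per-tablet segment counts each sender reports in CLOSE_LOAD, and, once all senders have closed, flags any tablet whose two totals disagree.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical sketch of the receiver-side bookkeeping (names are
// illustrative, not Doris's real API). Each arriving segment bumps the
// received count for its tablet; each sender's CLOSE_LOAD contributes
// that sender's claimed segment count. After all senders close, the
// claimed total must equal the received total, otherwise data was lost.
class TabletSegmentChecker {
public:
    // Called whenever a segment arrives for a tablet.
    void on_segment_received(int64_t tablet_id) { _received[tablet_id]++; }

    // Called once per sender during CLOSE_LOAD with that sender's
    // per-tablet segment count; counts from all senders are summed.
    void on_close_load(int64_t tablet_id, uint32_t sender_segment_num) {
        _expected[tablet_id] += sender_segment_num;
    }

    // After every sender has closed: true iff no segment was lost
    // for this tablet (claimed total == received total).
    bool check(int64_t tablet_id) const {
        auto e = _expected.find(tablet_id);
        auto r = _received.find(tablet_id);
        uint32_t expected = (e == _expected.end()) ? 0 : e->second;
        uint32_t received = (r == _received.end()) ? 0 : r->second;
        return expected == received;
    }

private:
    std::unordered_map<int64_t, uint32_t> _received;
    std::unordered_map<int64_t, uint32_t> _expected;
};
```

A tablet for which a sender's RPC failed would show fewer received segments than the summed CLOSE_LOAD counts, so the check fails and the load is not silently committed with missing data.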