feat(net): add P2P message deduplication and length validation #6712
```diff
@@ -3,6 +3,7 @@
 import com.google.common.cache.Cache;
 import com.google.common.cache.CacheBuilder;
 import com.google.common.collect.Lists;
+import java.util.HashSet;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
 import lombok.extern.slf4j.Slf4j;
@@ -153,6 +154,12 @@ public boolean isAdvInv(PeerConnection peer, FetchInvDataMessage msg) {
   private void check(PeerConnection peer, FetchInvDataMessage fetchInvDataMsg,
       boolean isAdv) throws P2pException {
     List<Sha256Hash> hashList = fetchInvDataMsg.getHashList();
+    if (hashList.size() != new HashSet<>(hashList).size()) {
+      throw new P2pException(TypeEnum.BAD_MESSAGE,
+          "FetchInvData contains duplicate hashes, size: " + hashList.size());
+    }
```
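The duplicate-hash guard above can be exercised standalone. A minimal sketch, with the node's `P2pException`/`TypeEnum.BAD_MESSAGE` replaced by a hypothetical stand-in exception and `Sha256Hash` by `String` for self-containment:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class DedupCheckSketch {

  // Hypothetical stand-in for the node's P2pException; the real class
  // carries a TypeEnum reason code such as BAD_MESSAGE.
  static class BadMessageException extends Exception {
    BadMessageException(String msg) {
      super(msg);
    }
  }

  // Mirrors the PR's check: reject a message whose hash list contains
  // any duplicate (a HashSet built from the list would shrink).
  static void checkNoDuplicates(List<String> hashList) throws BadMessageException {
    if (hashList.size() != new HashSet<>(hashList).size()) {
      throw new BadMessageException(
          "FetchInvData contains duplicate hashes, size: " + hashList.size());
    }
  }

  public static void main(String[] args) throws Exception {
    checkNoDuplicates(Arrays.asList("a", "b", "c"));   // distinct hashes: passes
    try {
      checkNoDuplicates(Arrays.asList("a", "b", "a")); // duplicate "a": rejected
      System.out.println("unexpected: no exception");
    } catch (BadMessageException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

The check is O(n) in the list length, so it is cheap relative to the work a malicious duplicated `FetchInvData` could otherwise trigger.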
**Collaborator:**

Good check. Validation has already been added for

**Author:**

Inventory messages have rate limits and are deduplicated during processing, and they do not consume CPU, so there is no need for a second check before processing. `ChainInventoryMessage` has checks stricter than deduplication, which already guarantee the list contains no duplicates.

**Collaborator:**

`ChainInventoryMessage` block numbers must be in increasing order, so we can ignore that one. But although inventory messages don't consume CPU, deduplication checks are still needed to keep consistency with the other message-handling logic and to defend against malicious peers.

**Author:**

`InventoryMessage` does not constitute an attack vector, and the lack of duplicate checks there has no impact on the system.
```diff
     MessageTypes type = fetchInvDataMsg.getInvMessageType();

     if (type == MessageTypes.TRX) {
```
**Collaborator:**

Nice to have a named constant here instead of a magic number! 👍

Quick question: how was `30` chosen? Observed from real traffic, or estimated from `getBlockChainSummary()`'s log-step algorithm?

**Author:**

Based on the chain digest algorithm, which uses a binary search from the solidified block to the head block, 30 blocks can cover 2^30 block digests, which is theoretically large enough, and 30 blocks will not put pressure on the system.
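The log-step growth the author describes can be illustrated with a toy sketch. This is not java-tron's actual `getBlockChainSummary()` implementation; it only shows why a halving (binary-search-style) walk from the solidified block to the head block produces roughly `log2(gap)` digest entries, so a cap of 30 covers gaps up to about 2^30 blocks:

```java
import java.util.ArrayList;
import java.util.List;

public class ChainSummarySketch {

  // Toy log-step digest: starting from the solidified block, repeatedly
  // halve the remaining distance to the head block. The resulting list
  // grows with log2(head - solidified), not linearly with the gap.
  static List<Long> summaryBlockNumbers(long solidified, long head) {
    List<Long> nums = new ArrayList<>();
    long low = solidified;
    long high = head;
    nums.add(low);                        // always include the solidified block
    while (high - low > 1) {
      long mid = low + (high - low) / 2;  // halve the remaining gap
      nums.add(mid);
      low = mid;
    }
    if (head > solidified) {
      nums.add(head);                     // always include the head block
    }
    return nums;
  }

  public static void main(String[] args) {
    // A gap of 2^20 (about a million blocks) needs only ~20 midpoints,
    // plus the two endpoints: 22 entries in total.
    List<Long> s = summaryBlockNumbers(0, 1L << 20);
    System.out.println("entries for 2^20 gap: " + s.size());
  }
}
```

With this shape of algorithm, 30 entries correspond to a gap on the order of 2^30 blocks, far beyond any realistic distance between the solidified and head blocks, which supports using 30 as the constant.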