Move blobs payload extraction out of ParseSequencerMessage #3969
Draft: bragaigor wants to merge 9 commits into master from braga/move-blobs-payload-out
Open questions:
- Should we keep the DA payload map the way it is, or make its key `seqMsg` + `batchHash`? We might need that for the batch poster.
- When are we deleting old entries from this map? We can't just defer deletion in the same iteration, since we might need the same payload later on.
Fixes NIT-4065
The second attempt at fixing the above issue is to move the blobs payload read to an earlier point; more specifically, the part of `ParseSequencerMessage` that is responsible for "recovering" the DA payload. Instead of just moving the specific `GetBlobs` call earlier, I opted to move the usage of `dapReader.RecoverPayload` earlier instead. That's because the call to `GetBlobs` depends on and assumes other logic, so to avoid code duplication it seemed more ergonomic to bring `dapReader.RecoverPayload` forward and cache its payload.

Also note that inside `AddSequencerBatches` we have a loop that handles messages from the sequencer and removes them from the queue. The interesting thing to notice is that every time `pop()` is called we first check whether `r.cachedSequencerMessage` is already set, and it is unset only on the very first `pop()` call on a multiplexer instance, since we cache `r.cachedSequencerMessage` right away. In other words, out of all the calls to `pop()` in that loop, we call `ParseSequencerMessage()` only once, and that is where we try to read the blob. Given that, it should be safe to extract blob reading from `ParseSequencerMessage` and do it outside.

So now, before we call `AddDelayedMessages` and `AddSequencerBatches` from inside `addMessages`, we call `CacheBlobs`. This function eventually creates a multiplexer as well and calls into `dapReader.RecoverPayload`. The caller of `HandleBlobs`, which is `CacheBlobs`, stores the recovered payloads in a `map[common.Hash]daprovider.PayloadResult`. With this map, instead of `ParseSequencerMessage` unconditionally reading the payload from the DA provider, it first checks the `cachedPayload` map; if payload information for that specific `batchBlockHash` is present, we don't need to refetch it. It is then up to us whether we fail when the payload is not in the map or fall back to reading it from the DA provider. With that, we can "just" add a call to `CacheBlobs` (we could also call it `CachePayload`) right before we call `AddDelayedMessages` in `addMessages`, as sketched below.
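Below is a minimal, self-contained sketch of the caching flow described above. It is not the actual nitro code: `PayloadResult`, the `dapReader` interface, the `RecoverPayload` signature, and `lookupPayload` are simplified stand-ins; only the overall shape (recover the payload once, store it by batch hash, consult the map before going back to the DA provider) follows the description.

```go
// Independent sketch, not nitro's real API. PayloadResult stands in for
// daprovider.PayloadResult; dapReader is a simplified reader interface.
package inbox

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
)

// PayloadResult is a stand-in for daprovider.PayloadResult.
type PayloadResult struct {
	Payload []byte
}

// dapReader is a simplified stand-in for the DA provider reader.
type dapReader interface {
	RecoverPayload(ctx context.Context, batchBlockHash common.Hash, seqMsg []byte) ([]byte, error)
}

type payloadCache struct {
	reader dapReader
	// cachedPayload maps a batch (block) hash to its recovered DA payload.
	cachedPayload map[common.Hash]PayloadResult
}

func newPayloadCache(r dapReader) *payloadCache {
	return &payloadCache{
		reader:        r,
		cachedPayload: make(map[common.Hash]PayloadResult),
	}
}

// CacheBlobs recovers the DA payload for a batch up front and stores it,
// so a later ParseSequencerMessage does not have to fetch it again.
func (c *payloadCache) CacheBlobs(ctx context.Context, batchBlockHash common.Hash, seqMsg []byte) error {
	if _, ok := c.cachedPayload[batchBlockHash]; ok {
		return nil // already cached
	}
	payload, err := c.reader.RecoverPayload(ctx, batchBlockHash, seqMsg)
	if err != nil {
		return err
	}
	c.cachedPayload[batchBlockHash] = PayloadResult{Payload: payload}
	return nil
}

// lookupPayload is what ParseSequencerMessage would consult first: use the
// cached payload if present, otherwise fall back to the DA provider.
func (c *payloadCache) lookupPayload(ctx context.Context, batchBlockHash common.Hash, seqMsg []byte) ([]byte, error) {
	if res, ok := c.cachedPayload[batchBlockHash]; ok {
		return res.Payload, nil
	}
	return c.reader.RecoverPayload(ctx, batchBlockHash, seqMsg)
}
```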
Update:

Instead of caching the DA payload (e.g. blobs) in the tracker, we're now caching it in the `backend`. Ideally we would only want to cache the DA payload as part of `SequencerInboxBatch`; however, there are other parts of the code that call into `multiplexer.pop(ctx)` and also need DA payload caching to happen. The one thing all these multiplexers have in common is a `backend`, so it felt like a natural spot for such a field. We're caching the DA payload in the form of a map `map[common.Hash]daprovider.PayloadResult`, where the key is the batch hash and the value is the actual payload. I tried using just a single value, but we ran into overloading problems, which is why I also introduced accessor functions on the `InboxBackend` interface.
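The exact accessor names added to `InboxBackend` are not reproduced here, so the sketch below uses invented placeholders (`SetDAPayload`/`GetDAPayload`) purely to illustrate the shape of the change: a per-batch-hash payload cache exposed through the backend interface.

```go
// Hypothetical sketch only: SetDAPayload/GetDAPayload are invented names,
// not the actual methods added in this PR.
package inbox

import "github.com/ethereum/go-ethereum/common"

// PayloadResult is a stand-in for daprovider.PayloadResult.
type PayloadResult struct {
	Payload []byte
}

// InboxBackend (simplified) with the two hypothetical cache accessors added.
type InboxBackend interface {
	// ...existing methods elided...

	// SetDAPayload caches the recovered DA payload for a batch hash.
	SetDAPayload(batchHash common.Hash, payload PayloadResult)
	// GetDAPayload returns the cached DA payload for a batch hash, if any.
	GetDAPayload(batchHash common.Hash) (PayloadResult, bool)
}
```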
For `InboxReader`, caching happens inside `InboxReader.run`, after we call `LookupBatchesInRange` and before we call `addMessages`. For `BatchPoster`, it happens in `BatchPoster.MaybePostSequencerBatch` right after we create the new multiplexer, which is similar to `replay.wasm`, where we cache it right after creating the `inboxMultiplexer`.
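For the `InboxReader` side, here is an independent sketch of the call ordering just described. Everything except the names `LookupBatchesInRange` and `addMessages` (and the general placement inside the reader's run loop) is a simplified stand-in, not nitro's real types or signatures.

```go
// Independent sketch of the ordering: lookup batches, cache their DA payload,
// then hand them to addMessages so ParseSequencerMessage can hit the cache.
package inbox

import (
	"context"
	"math/big"
)

// SequencerInboxBatch is a simplified stand-in for the real batch type.
type SequencerInboxBatch struct{ /* batch metadata elided */ }

type sequencerInbox interface {
	LookupBatchesInRange(ctx context.Context, from, to *big.Int) ([]*SequencerInboxBatch, error)
}

type InboxReader struct {
	seqInbox sequencerInbox
}

// cacheDAPayload and addMessages are placeholders for the real steps.
func (r *InboxReader) cacheDAPayload(ctx context.Context, b *SequencerInboxBatch) error { return nil }
func (r *InboxReader) addMessages(ctx context.Context, batches []*SequencerInboxBatch) error {
	return nil
}

// runIteration shows where the caching call sits relative to
// LookupBatchesInRange and addMessages.
func (r *InboxReader) runIteration(ctx context.Context, from, to *big.Int) error {
	batches, err := r.seqInbox.LookupBatchesInRange(ctx, from, to)
	if err != nil {
		return err
	}
	for _, batch := range batches {
		if err := r.cacheDAPayload(ctx, batch); err != nil {
			return err
		}
	}
	return r.addMessages(ctx, batches)
}
```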