
Conversation


@bragaigor bragaigor commented Nov 7, 2025

Fixes NIT-4065

The second attempt at solving the above issue was to bring the blob payload read to an earlier point. More specifically, this part:

			promise := dapReader.RecoverPayload(batchNum, batchBlockHash, data)
			result, err := promise.Await(ctx)

which is responsible for "recovering" the DA payload. Instead of just moving the specific GetBlobs call to an earlier point, I decided to move the usage of dapReader.RecoverPayload earlier instead. That's because the call to GetBlobs depends on and assumes other logic, so to avoid code duplication it felt more ergonomic to bring dapReader.RecoverPayload forward and cache its payload.

Also note that inside AddSequencerBatches we have a loop that handles messages from the sequencer and removes them from the queue. The interesting thing to notice here is that every time pop() is called we first check whether r.cachedSequencerMessage is already set, and it is empty only the very first time we call pop() on a multiplexer instance, since we cache r.cachedSequencerMessage right away. In other words, across all the calls to pop() from that loop we only call ParseSequencerMessage() once, and that's where we try to read the blob. With that in mind, I thought it would be safe to extract the blob read from ParseSequencerMessage and do it outside. So now, before addMessages calls AddDelayedMessages and AddSequencerBatches, we call CacheBlobs; this function eventually creates a multiplexer as well and calls into dapReader.RecoverPayload, something like:

func HandleBlobs(... batchBlockHash common.Hash, data []byte, dapReaders *daprovider.ReaderRegistry, ...) (daprovider.PayloadResult, error) {
	var payload daprovider.PayloadResult
	// The batch data's header byte selects the matching DA reader.
	if dapReader, found := dapReaders.GetByHeaderByte(data[0]); found {
		promise := dapReader.RecoverPayload(batchNum, batchBlockHash, data) // <<<<<<<<
		result, err := promise.Await(ctx)                                   // <<<<<<<<
		if err != nil {
			return payload, err
		}
		payload = result
	}
	return payload, nil
}

The caller of HandleBlobs, which is CacheBlobs, stores these payloads in a map map[common.Hash]daprovider.PayloadResult. With this map, instead of ParseSequencerMessage unconditionally reading the payload from the DA provider, it first checks the cached-payload map, and if payload information for that specific batchBlockHash is present we don't need to refetch it. It's then up to us whether we want to fail when the payload is not found in the map or fall back to reading it from the DA provider. With that we can "just" add a call to CacheBlobs (we could call it CachePayload as well) right before we call AddDelayedMessages in addMessages:

	err := r.tracker.CacheBlobs(ctx, r.client, sequencerBatches)
	err = r.tracker.AddDelayedMessages(delayedMessages)
	err = r.tracker.AddSequencerBatches(ctx, r.client, sequencerBatches)
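
For illustration only, the cache-first lookup described above could look roughly like the sketch below. The cache map argument, the variable names, and the DA-provider fallback branch are assumptions made for this example, not the actual change; the real code still has to choose between failing and falling back on a cache miss.

// lookupOrRecoverPayload is a sketch of a cache-first payload read.
// Assumes imports of context, fmt, go-ethereum's common package, and nitro's daprovider package.
func lookupOrRecoverPayload(
	ctx context.Context,
	cache map[common.Hash]daprovider.PayloadResult,
	dapReaders *daprovider.ReaderRegistry,
	batchNum uint64,
	batchBlockHash common.Hash,
	data []byte,
) (daprovider.PayloadResult, error) {
	// Cache hit: CacheBlobs already recovered this batch's payload earlier.
	if payload, ok := cache[batchBlockHash]; ok {
		return payload, nil
	}
	// Cache miss: either fail here or fall back to the DA provider as before.
	var payload daprovider.PayloadResult
	dapReader, found := dapReaders.GetByHeaderByte(data[0])
	if !found {
		return payload, fmt.Errorf("no DA reader for header byte %#x", data[0])
	}
	return dapReader.RecoverPayload(batchNum, batchBlockHash, data).Await(ctx)
}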

Update:

Instead of caching the DA payload (e.g. blobs) in the tracker, we're now caching it in the backend. Ideally we would only cache the DA payload as part of SequencerInboxBatch; however, there are other parts of the code that call into multiplexer.pop(ctx) and also need DA payload caching to happen. The one thing all of these multiplexers have in common is a backend, so it felt like a natural spot to place such a field. We're caching the DA payload in the form of a map map[common.Hash]daprovider.PayloadResult, where the key is the batch hash and the value is the actual payload. I tried using just a single value, but we ran into problems with it being overwritten, which is why I also introduced the following functions to the InboxBackend interface:

	GetDAPayload(common.Hash) (*daprovider.PayloadResult, error)
	SetDAPayload(common.Hash, *daprovider.PayloadResult)
	DeleteDAPayload(common.Hash)
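
As a minimal sketch of how a backend might back these three methods with the map described above; the struct name, its fields, and the mutex are illustrative assumptions, not the actual implementation:

// backendPayloadCache sketches the DA payload cache held by an InboxBackend implementation.
// Assumes imports of fmt, sync, go-ethereum's common package, and nitro's daprovider package.
type backendPayloadCache struct {
	mu         sync.Mutex
	daPayloads map[common.Hash]daprovider.PayloadResult // batch hash -> recovered payload
}

func (b *backendPayloadCache) GetDAPayload(batchHash common.Hash) (*daprovider.PayloadResult, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	payload, ok := b.daPayloads[batchHash]
	if !ok {
		// One possible choice on a miss: return an error instead of falling back.
		return nil, fmt.Errorf("no cached DA payload for batch %v", batchHash)
	}
	return &payload, nil
}

func (b *backendPayloadCache) SetDAPayload(batchHash common.Hash, payload *daprovider.PayloadResult) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.daPayloads == nil {
		b.daPayloads = make(map[common.Hash]daprovider.PayloadResult)
	}
	b.daPayloads[batchHash] = *payload
}

func (b *backendPayloadCache) DeleteDAPayload(batchHash common.Hash) {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.daPayloads, batchHash)
}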

For InboxReader, caching happens inside InboxReader.run, after we call LookupBatchesInRange and before we call addMessages. For BatchPoster, it happens in BatchPoster.MaybePostSequencerBatch after we create a new multiplexer, which is similar to replay.wasm, where we cache it right after creating the inboxMultiplexer.
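
As a rough sketch of that write-side pattern, a helper like the one below would be called right where each of those multiplexers is created, so any later pop() finds the payload already in the backend. The recoverAndCachePayload helper, the minimal interface, and the variable names are hypothetical stand-ins, not the actual diff:

// daPayloadSetter is the minimal slice of the InboxBackend interface this sketch needs.
type daPayloadSetter interface {
	SetDAPayload(common.Hash, *daprovider.PayloadResult)
}

// recoverAndCachePayload (hypothetical helper) recovers a batch's DA payload and
// caches it on the backend before the multiplexer's pop() ever needs it.
// Assumes imports of context, go-ethereum's common package, and nitro's daprovider package.
func recoverAndCachePayload(
	ctx context.Context,
	backend daPayloadSetter,
	dapReaders *daprovider.ReaderRegistry,
	batchNum uint64,
	batchBlockHash common.Hash,
	data []byte,
) error {
	dapReader, found := dapReaders.GetByHeaderByte(data[0])
	if !found {
		return nil // not a DA batch, nothing to cache
	}
	payload, err := dapReader.RecoverPayload(batchNum, batchBlockHash, data).Await(ctx)
	if err != nil {
		return err
	}
	backend.SetDAPayload(batchBlockHash, &payload)
	return nil
}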


github-actions bot commented Nov 7, 2025

❌ 2 Tests Failed:

Tests completed  Failed  Passed  Skipped
2164             2       2162    0
View the top 2 failed tests by shortest run time
TestVersion40
Stack Traces | 6.210s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
hostio-test: deployed to 0x18CD8466c59C5d64bF8a280141d23fFfC667e20F
Time to activate hostio-test: 234.244514ms
    precompile_inclusion_test.go:90: goroutine 455215 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.3/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x40c2b50, 0xc0afe62c40}, {0x4080540, 0xc077ca8d20}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x55
        github.com/offchainlabs/nitro/system_tests.Require(0xc0afe62c40, {0x4080540, 0xc077ca8d20}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/system_tests/common_test.go:1759 +0x5d
        github.com/offchainlabs/nitro/system_tests.testPrecompiles(0xc0afe62c40, 0x28, {0xc05ef65df8, 0x5, 0x39?})
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:90 +0x371
        github.com/offchainlabs/nitro/system_tests.TestVersion40(0xc0afe62c40?)
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:71 +0x64b
        testing.tRunner(0xc0afe62c40, 0x3d08e80)
        	/opt/hostedtoolcache/go/1.25.3/x64/src/testing/testing.go:1934 +0xea
        created by testing.(*T).Run in goroutine 1
        	/opt/hostedtoolcache/go/1.25.3/x64/src/testing/testing.go:1997 +0x465
        
    precompile_inclusion_test.go:90: [] execution aborted (timeout = 5s)
--- FAIL: TestVersion40 (6.21s)
TestSequencerPriceAdjustsFrom25Gwei
Stack Traces | 80.340s run time
=== RUN   TestSequencerPriceAdjustsFrom25Gwei
=== PAUSE TestSequencerPriceAdjustsFrom25Gwei
=== CONT  TestSequencerPriceAdjustsFrom25Gwei
    fees_test.go:223: [L1 gas price estimate should tend toward the basefee]
ERROR[11-10|23:39:23.000] Dangling trie nodes after full cleanup
--- FAIL: TestSequencerPriceAdjustsFrom25Gwei (80.34s)


@bragaigor force-pushed the braga/move-blobs-payload-out branch from 939d8c9 to 745a335 on November 7, 2025 15:23
@bragaigor force-pushed the braga/move-blobs-payload-out branch from 4ff3e5e to 3850000 on November 9, 2025 14:41
Open questions:
- Should we keep the DA payload map the way it is, or make its key seqMsg + batchHash? We might need that for the batch poster.
- When are we deleting old entries from this map? We can't just defer deletion in the same iteration, as we might need the same payload later on.
