1 change: 0 additions & 1 deletion yarn-project/archiver/src/archiver-sync.test.ts
@@ -126,7 +126,6 @@ describe('Archiver Sync', () => {
publicClient,
rollupContract,
inboxContract,
contractAddresses,
archiverStore,
config,
blobClient,
1 change: 0 additions & 1 deletion yarn-project/archiver/src/factory.ts
@@ -138,7 +138,6 @@ export async function createArchiver(
debugClient,
rollup,
inbox,
{ ...config.l1Contracts, slashingProposerAddress },
archiverStore,
archiverConfig,
deps.blobClient,
93 changes: 25 additions & 68 deletions yarn-project/archiver/src/l1/README.md
@@ -5,94 +5,51 @@ Modules and classes to handle data retrieval from L1 for the archiver.
## Calldata Retriever

The sequencer publisher bundles multiple operations into a single multicall3 transaction for gas
efficiency. A typical transaction includes:
efficiency. The archiver needs to extract the `propose` calldata from these bundled transactions
to reconstruct L2 blocks.

1. Attestation invalidations (if needed): `invalidateBadAttestation`, `invalidateInsufficientAttestations`
2. Block proposal: `propose` (exactly one per transaction to the rollup contract)
3. Governance and slashing (if needed): votes, payload creation/execution
The retriever uses hash matching against `attestationsHash` and `payloadDigest` from the
`CheckpointProposed` L1 event to verify it has found the correct propose calldata. These hashes
are always required.

The archiver needs to extract the `propose` calldata from these bundled transactions to reconstruct
L2 blocks. This class needs to handle scenarios where the transaction was submitted via multicall3,
as well as alternative ways for submitting the `propose` call that other clients might use.
### Multicall3 Decoding with Hash Matching

### Multicall3 Validation and Decoding

First attempt to decode the transaction as a multicall3 `aggregate3` call with validation:
First attempt to decode the transaction as a multicall3 `aggregate3` call:

- Check if transaction is to multicall3 address (`0xcA11bde05977b3631167028862bE2a173976CA11`)
- Decode as `aggregate3(Call3[] calldata calls)`
- Allow calls to known addresses and methods (rollup, governance, slashing contracts, etc.)
- Find the single `propose` call to the rollup contract
- Verify exactly one `propose` call exists
- Extract and return the propose calldata
- Find all calls matching the rollup contract address and the `propose` function selector
- Verify each candidate by computing `attestationsHash` (keccak256 of ABI-encoded attestations)
and `payloadDigest` (keccak256 of the consensus payload signing hash) and comparing against
expected values from the `CheckpointProposed` event
- Return the verified candidate (if multiple verify, return the first with a warning)

This step handles the common case efficiently without requiring expensive trace or debug RPC calls.
Any validation failure triggers fallback to the next step.
This approach works regardless of what other calls are in the multicall3 bundle, because hash
matching identifies the correct propose call without needing an allowlist.
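
The decode-and-verify step above can be sketched as follows. This is an illustrative sketch, not the archiver's actual API: the `Call3` shape mirrors multicall3's `aggregate3` argument, but `findVerifiedPropose`, the selector constant, and the injected `computeHashes` function are hypothetical stand-ins (in the real retriever the hashes are the keccak256 digests described above).

```typescript
type Hex = string;

// Mirrors the Call3 struct passed to multicall3's aggregate3.
interface Call3 {
  target: Hex;
  callData: Hex;
}

interface ExpectedHashes {
  attestationsHash: Hex;
  payloadDigest: Hex;
}

// Placeholder 4-byte selector; the real `propose` selector differs.
const PROPOSE_SELECTOR: Hex = '0xaabbccdd';

function findVerifiedPropose(
  calls: Call3[],
  rollupAddress: Hex,
  expected: ExpectedHashes,
  // Injected hash computation: in the real retriever this is keccak256 over
  // the ABI-encoded attestations and the consensus payload signing hash.
  computeHashes: (callData: Hex) => ExpectedHashes,
): Hex | undefined {
  // Step 1: narrow down candidates by target address and function selector.
  const candidates = calls.filter(
    c =>
      c.target.toLowerCase() === rollupAddress.toLowerCase() &&
      c.callData.slice(0, 10) === PROPOSE_SELECTOR,
  );

  // Step 2: keep only candidates whose computed hashes match the
  // CheckpointProposed event.
  const verified = candidates.filter(c => {
    const h = computeHashes(c.callData);
    return (
      h.attestationsHash === expected.attestationsHash &&
      h.payloadDigest === expected.payloadDigest
    );
  });

  if (verified.length > 1) {
    console.warn('Multiple verified propose calls found; returning the first');
  }
  return verified[0]?.callData;
}
```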

### Direct Propose Call

Second attempt to decode the transaction as a direct `propose` call to the rollup contract:

- Check if transaction is to the rollup address
- Decode as `propose` function call
- Verify the function is indeed `propose`
- Verify the decoded calldata against the expected hashes from the `CheckpointProposed` event
- Return the transaction input as the propose calldata

This handles scenarios where clients submit transactions directly to the rollup contract without
using multicall3 for bundling. Any validation failure triggers fallback to the next step.

### Spire Proposer Call

Given existing attempts to route the call via the Spire proposer, we also check if the tx is `to` the
proposer's known address, and if so, we try decoding it as either a multicall3 call or a direct call to
the rollup contract.

As with the multicall3 check, we verify that there are no other calls in the Spire proposer, so
we are absolutely sure that the only call is the successful one to the rollup. Any extraneous call would
imply an unexpected path to calling `propose` in the rollup contract, and since we cannot verify whether the
calldata arguments we extracted are the correct ones (see the section below), we cannot know for sure which
call succeeded, so we don't know which calldata to process.

Furthermore, since the Spire proposer is upgradeable, we check that the implementation has not changed
before attempting to decode. As usual, any validation failure triggers fallback to the next step.

### Verifying Multicall3 Arguments

**This is NOT implemented for simplicity's sake**

If the checks above don't hold, such as when there are multiple calls to `propose`, then we cannot
reliably extract the `propose` calldata from the multicall3 arguments alone. We can make a best-effort
attempt by trying every `propose` call we see and validating it against on-chain data. Note that we could
use these same strategies if we were to obtain the calldata from another source.

#### TempBlockLog Verification

Read the stored `TempBlockLog` for the L2 block number from L1 and verify it matches our decoded header hash,
since the `TempBlockLog` stores the hash of the proposed block header, the payload commitment, and the attestations.

However, `TempBlockLog` is only stored temporarily and deleted after proven, so this method only works for recent
blocks, not for historical data syncing.

#### Archive Verification

Verify that the archive root in the decoded propose is correct with regard to the block header. This requires
hashing the block header we have retrieved, inserting it into the archive tree, and checking the resulting root
against the one we got from L1.

However, this requires that the archive keeps a reference to world-state, which is not the case in the current
system.

#### Emit Commitments in Rollup Contract

Modify the rollup contract to emit commitments to the block header in the `L2BlockProposed` event, allowing us
to easily verify the calldata we obtained against the emitted event.
Given existing attempts to route the call via the Spire proposer, we also check if the tx is
`to` the proposer's known address. If so, we extract all wrapped calls and try each as either
a multicall3 or direct propose call, using hash matching to find and verify the correct one.

However, modifying the rollup contract is out of scope for this change, though we can implement this approach in `v2`.
Since the Spire proposer is upgradeable, we also check that its implementation has not changed
before attempting to decode. Any validation failure triggers fallback to the next step.
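
The fall-through behaviour shared by these steps can be sketched as a simple strategy chain; the names below are illustrative, not the archiver's actual API:

```typescript
// A strategy returns the propose calldata if it can decode and verify the
// transaction, returns undefined if it does not apply, or throws on failure.
type Strategy = (txHash: string) => Promise<string | undefined>;

async function retrieveProposeCalldata(
  txHash: string,
  strategies: Strategy[],
): Promise<string> {
  for (const strategy of strategies) {
    try {
      const calldata = await strategy(txHash);
      if (calldata !== undefined) {
        return calldata;
      }
    } catch {
      // Any decoding or validation failure falls through to the next strategy.
    }
  }
  throw new Error(`No strategy could extract propose calldata for ${txHash}`);
}
```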

### Debug and Trace Transaction Fallback

Last, we use L1 node's trace/debug RPC methods to definitively identify the one successful `propose` call within the tx.
We can then extract the exact calldata that hit the `propose` function in the rollup contract.
Last, we use L1 node's trace/debug RPC methods to definitively identify the one successful
`propose` call within the tx. We can then extract the exact calldata that hit the `propose`
function in the rollup contract.

This approach requires access to a debug-enabled L1 node, which may be more resource-intensive, so we only
use it as a fallback when the first step fails, which should be rare in practice.
This approach requires access to a debug-enabled L1 node, which may be more resource-intensive,
so we only use it as a fallback when earlier steps fail, which should be rare in practice.
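
The tree walk over the trace can be sketched as follows, assuming a callTracer-shaped result (the nested call-frame format returned by `debug_traceTransaction` with the `callTracer` tracer); the function name and selector argument are illustrative:

```typescript
// Minimal shape of a callTracer frame: a frame that reverted carries an
// `error` field, and nested calls appear under `calls`.
interface TraceCall {
  to?: string;
  input: string;
  error?: string;
  calls?: TraceCall[];
}

// Depth-first search for the first successful call into the rollup's
// propose function; returns its exact input calldata.
function findSuccessfulPropose(
  frame: TraceCall,
  rollupAddress: string,
  proposeSelector: string,
): string | undefined {
  const isMatch =
    frame.to?.toLowerCase() === rollupAddress.toLowerCase() &&
    frame.input.startsWith(proposeSelector) &&
    frame.error === undefined;
  if (isMatch) {
    return frame.input;
  }
  for (const child of frame.calls ?? []) {
    const found = findSuccessfulPropose(child, rollupAddress, proposeSelector);
    if (found !== undefined) {
      return found;
    }
  }
  return undefined;
}
```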
67 changes: 40 additions & 27 deletions yarn-project/archiver/src/l1/bin/retrieve-calldata.ts
@@ -5,7 +5,7 @@ import { EthAddress } from '@aztec/foundation/eth-address';
import { createLogger } from '@aztec/foundation/log';
import { RollupAbi } from '@aztec/l1-artifacts/RollupAbi';

import { type Hex, createPublicClient, getAbiItem, http, toEventSelector } from 'viem';
import { type Hex, createPublicClient, decodeEventLog, getAbiItem, http, toEventSelector } from 'viem';
import { mainnet } from 'viem/chains';

import { CalldataRetriever } from '../calldata_retriever.js';
@@ -89,61 +89,74 @@ async function main() {

logger.info(`Transaction found in block ${tx.blockNumber}`);

// For simplicity, use zero addresses for optional contract addresses
// In production, these would be fetched from the rollup contract or configuration
const slashingProposerAddress = EthAddress.ZERO;
const governanceProposerAddress = EthAddress.ZERO;
const slashFactoryAddress = undefined;

logger.info('Using zero addresses for governance/slashing (can be configured if needed)');

// Create CalldataRetriever
const retriever = new CalldataRetriever(
publicClient as unknown as ViemPublicClient,
publicClient as unknown as ViemPublicDebugClient,
targetCommitteeSize,
undefined,
logger,
{
rollupAddress,
governanceProposerAddress,
slashingProposerAddress,
slashFactoryAddress,
},
rollupAddress,
);

// Extract checkpoint number from transaction logs
logger.info('Decoding transaction to extract checkpoint number...');
// Extract checkpoint number and hashes from transaction logs
logger.info('Decoding transaction to extract checkpoint number and hashes...');
const receipt = await publicClient.getTransactionReceipt({ hash: txHash });

// Look for CheckpointProposed event (emitted when a checkpoint is proposed to the rollup)
// Event signature: CheckpointProposed(uint256 indexed checkpointNumber, bytes32 indexed archive, bytes32[], bytes32, bytes32)
// Hash: keccak256("CheckpointProposed(uint256,bytes32,bytes32[],bytes32,bytes32)")
const checkpointProposedEvent = receipt.logs.find(log => {
// Look for CheckpointProposed event
const checkpointProposedEventAbi = getAbiItem({ abi: RollupAbi, name: 'CheckpointProposed' });
const checkpointProposedLog = receipt.logs.find(log => {
try {
return (
log.address.toLowerCase() === rollupAddress.toString().toLowerCase() &&
log.topics[0] === toEventSelector(getAbiItem({ abi: RollupAbi, name: 'CheckpointProposed' }))
log.topics[0] === toEventSelector(checkpointProposedEventAbi)
);
} catch {
return false;
}
});

if (!checkpointProposedEvent || checkpointProposedEvent.topics[1] === undefined) {
if (!checkpointProposedLog || checkpointProposedLog.topics[1] === undefined) {
throw new Error(`Checkpoint proposed event not found`);
}

const checkpointNumber = CheckpointNumber.fromBigInt(BigInt(checkpointProposedEvent.topics[1]));
const checkpointNumber = CheckpointNumber.fromBigInt(BigInt(checkpointProposedLog.topics[1]));

// Decode the full event to extract attestationsHash and payloadDigest
const decodedEvent = decodeEventLog({
abi: RollupAbi,
data: checkpointProposedLog.data,
topics: checkpointProposedLog.topics,
});

const eventArgs = decodedEvent.args as {
checkpointNumber: bigint;
archive: Hex;
versionedBlobHashes: Hex[];
attestationsHash: Hex;
payloadDigest: Hex;
};

if (!eventArgs.attestationsHash || !eventArgs.payloadDigest) {
throw new Error(`CheckpointProposed event missing attestationsHash or payloadDigest`);
}

const expectedHashes = {
attestationsHash: eventArgs.attestationsHash,
payloadDigest: eventArgs.payloadDigest,
};

logger.info(`Checkpoint Number: ${checkpointNumber}`);
logger.info(`Attestations Hash: ${expectedHashes.attestationsHash}`);
logger.info(`Payload Digest: ${expectedHashes.payloadDigest}`);

logger.info('');
logger.info('Retrieving checkpoint from rollup transaction...');
logger.info('');

// For this script, we don't have blob hashes or expected hashes, so pass empty arrays/objects
const result = await retriever.getCheckpointFromRollupTx(txHash, [], checkpointNumber, {});
const result = await retriever.getCheckpointFromRollupTx(txHash, [], checkpointNumber, expectedHashes);

logger.info(' Successfully retrieved block header!');
logger.info(' Successfully retrieved block header!');
logger.info('');
logger.info('Block Header Details:');
logger.info('====================');