EVM Backend Demo for Ethscriptions #100
Conversation
Demonstrates a proof-of-concept integration between the Ruby indexer and an on-chain EVM contract for Ethscriptions storage. The Ruby side identifies protocol operations from L1 transactions and translates them into contract calls instead of PostgreSQL writes. All validation happens in the EVM contract. Note: requires Engine API integration for production use; currently it only logs the intended contract calls.
Pull Request Overview
This PR demonstrates a proof-of-concept integration between the Ruby indexer and an on-chain EVM contract for Ethscriptions storage. The Ruby indexer identifies protocol-relevant L1 transactions and translates user intent into contract calls instead of PostgreSQL writes, with all validation happening in the EVM contract using Facet-style deposit transactions.
Key changes:
- `EvmEthscriptionProcessor` module to replace direct DB writes with contract calls
- `EthscriptionsParamMapper` service to translate L1 transaction data to contract parameters
- `Ethscriptions.sol` contract for storage and validation using SSTORE2 for efficiency

A minimal sketch of this flow follows below.
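For context, here is a minimal sketch of how such a processor might log an intended contract call instead of writing to Postgres. The method and mapper API shown here are assumptions for illustration, not the PR's actual code:

```ruby
# Illustrative sketch only: maps an L1 transaction to contract-call parameters
# and logs the intended call instead of writing to PostgreSQL.
module EvmEthscriptionProcessor
  def process_with_evm_backend(eth_transaction)
    # to_contract_params is a hypothetical mapper method for this sketch
    params = EthscriptionsParamMapper.new(eth_transaction).to_contract_params
    return if params.nil? # not a protocol-relevant transaction

    # Production would submit this via the Engine API as a deposit transaction;
    # the demo only records what it would have sent.
    Rails.logger.info("Intended contract call: createEthscription(#{params.inspect})")
  end
end
```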
Reviewed Changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| contracts/src/Ethscriptions.sol | Core ERC-721 contract implementing on-chain Ethscriptions storage with SSTORE2 |
| app/models/concerns/evm_ethscription_processor.rb | Module replacing direct DB operations with contract calls |
| app/services/ethscriptions_param_mapper.rb | Service to map Ruby transaction objects to contract method parameters |
| contracts/test/EthscriptionsJson.t.sol | Comprehensive test suite including gas optimization tests |
| contracts/test/EthscriptionsTransferForPreviousOwner.t.sol | Tests for ESIP-2 transfer validation |
| app/models/eth_transaction.rb | Updated to use EVM processor instead of direct DB operations |
```ruby
{
  transactionHash: eth_transaction.transaction_hash,
  initialOwner: initial_owner.downcase,
  contentUri: content_uri.force_encoding('BINARY'), # Send as bytes
```
Copilot AI commented on Sep 9, 2025:
Using force_encoding('BINARY') could potentially corrupt UTF-8 content if not handled carefully. Consider adding validation to ensure the content_uri is valid UTF-8 before encoding conversion, or add comments explaining why this approach is safe in this context.
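One way to implement the suggested guard, as a sketch (the method name and placement are illustrative):

```ruby
# Sketch: verify the string is valid UTF-8 before reinterpreting its bytes,
# so corrupt input fails loudly instead of being silently passed through.
def content_uri_as_bytes(content_uri)
  unless content_uri.valid_encoding?
    raise ArgumentError, "content_uri is not valid UTF-8"
  end

  # force_encoding does not transcode; it only relabels the same bytes as
  # BINARY (ASCII-8BIT), which is safe once we know the source was valid UTF-8.
  content_uri.dup.force_encoding(Encoding::BINARY)
end
```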
Implements token support for Ethscriptions with ERC-20 contracts that shadow NFT ownership.

Key features:
- TokenManager contract deploys individual ERC-20 contracts for each token using CREATE2
- Deterministic addresses based on protocol+tick for predictability (see the sketch after this message)
- Token deploys are validated; duplicate deploys properly revert with an error
- Token mints must match the configured limit (amt must equal lim)
- Token balances automatically track NFT transfers via shadow transfers
- No direct ERC-20 transfers allowed; only via NFT ownership changes
- Uses OpenZeppelin upgradeable contracts for the ERC-20 template
- Efficient cloning pattern with minimal proxy contracts

Implementation details:
- TokenParams struct for clean parameter passing between contracts
- TokenItem struct combines deploy hash and amount in a single mapping
- Helper function _getTickKey() centralizes tickKey computation
- ERC20Capped enforces max supply constraints
- Standard 18 decimals; user amounts interpreted as ether units (1000 = 1000e18)
- Solady's LibString for efficient string operations

Ruby integration:
- Detects token operations (deploy/mint) from JSON content
- Passes structured TokenParams to the contract
- All validation happens on-chain in the EVM

Tests:
- 20 tests covering all token functionality
- Verifies deploy, mint, transfer, and supply enforcement
- Validates mint amount matching and duplicate deploy rejection
- Gas optimization tests included
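As a rough illustration of the deterministic-address idea, a CREATE2 address can be predicted off-chain. The salt scheme below is an assumption for the sketch; the real salt is whatever TokenManager defines:

```ruby
require 'eth'

# Sketch: predict a CREATE2 token address from protocol + tick.
# The "protocol:tick" salt construction is assumed, not the contract's actual scheme.
def predicted_token_address(deployer, protocol, tick, init_code_hash)
  salt = Eth::Util.keccak256("#{protocol}:#{tick}") # assumed salt derivation
  preimage = "\xff".b +
             Eth::Util.hex_to_bin(deployer) + # 20-byte deployer address
             salt +                           # 32-byte salt
             init_code_hash                   # 32-byte keccak256 of init code
  # CREATE2 address = last 20 bytes of keccak256(preimage)
  '0x' + Eth::Util.bin_to_hex(Eth::Util.keccak256(preimage))[-40..]
end
```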
```ruby
return nil unless content_uri.start_with?('data:,')

begin
  json_str = content_uri.sub('data:,', '')
```
Noticed it only supports `data:,`-based tokens? There may also be ones with an `application/json` content type, for sure.
It's more complicated to recognize multiple (unlimited?) ways of expressing the same idea. It's nice to have tick and protocol be unique. Tokens are just a "view" anyway; anyone can index whatever protocol they want (just as happens today).
- Introduce EthscriptionsProver contract that uses L2ToL1MessagePasser to prove ownership and token balances on L1
- Restructure to use pre-deployed system contracts at known addresses (similar to OP Stack pattern)
- Move all proof generation to automatic hooks in _update methods for both NFTs and tokens
- Remove redundant proofType field from proof structs (struct type itself indicates proof kind)
- Split proof events into specific types (TokenBalanceProofSent and EthscriptionDataProofSent)
- Ensure prover never reverts to prevent breaking token transfers
- Add comprehensive test coverage for proof generation
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- Implement FastLZ compression/decompression in Solidity contract
- Add isCompressed flag to Ethscription struct and creation params
- Store compressed content when beneficial (>10% reduction; see the sketch below)
- Automatically decompress in tokenURI() for retrieval
- Add compression infrastructure to Ruby indexer (pending implementation)
- Include comprehensive test suite for compression scenarios
- Achieve ~28% gas savings on typical base64 PNG ethscriptions
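A sketch of the ">10% reduction" storage decision; `fastlz_compress` stands in for whatever FastLZ binding the indexer would use and is hypothetical:

```ruby
# Sketch: only store the compressed form when it saves more than 10%.
# fastlz_compress is a placeholder for a real FastLZ binding.
def prepare_content_for_storage(raw)
  compressed = fastlz_compress(raw)
  if compressed.bytesize < (raw.bytesize * 0.9) # >10% smaller
    { content: compressed, is_compressed: true }
  else
    { content: raw, is_compressed: false }
  end
end
```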
- Set up L2Genesis script to generate genesis.json with proper predeploy structure
- Include L1Block, L2ToL1MessagePasser, and ProxyAdmin implementations
- Configure 2048 proxy slots with proper EIP-1967 storage layout
- Set all predeploy nonces to 1 to avoid OP Stack call issues
- Use Soldeer for dependency management instead of git submodules
- Output formatted genesis-allocs.json for L2 chain initialization
- Add Ethscriptions system contracts to genesis state (0x33 namespace)
- Create unified Predeploys library for all predeploy addresses
- Set up genesis deployment for Ethscriptions, TokenManager, EthscriptionsProver, and ERC20 template
- Remove constructors from Ethscriptions contracts since genesis etching doesn't call them
- Add helper to disable initializers on template contracts
- Replace mock L2ToL1MessagePasser with real implementation
- Remove redundant SystemAddresses in favor of Predeploys
- Override transferFrom to remove address(0) check, enabling burns through standard transfers
- Simplify L1Block access to use public state variables directly
- Add comprehensive test suite for burn functionality including authorization and balance checks
- Update test setup to include L1Block predeploy for realistic testing environment
contracts/src/Ethscriptions.sol (outdated):
```solidity
bool isCompressed;    // True if content is FastLZ compressed
// New fields for block tracking
uint256 createdAt;    // Timestamp when created
uint64 l1BlockNumber; // L1 block number when created
```
we should also have put transaction_index somewhere. ;)
```solidity
// Check balance after burn
assertEq(ethscriptions.balanceOf(alice), 0);
// Note: We can't check balanceOf(address(0)) as OpenZeppelin prevents that
```
would be cool to be able to count the burned ones tho.
also, there are already a bunch that are burned to dead addresses.
Yes!
```solidity
ethscriptions.ownerOf(uint256(simpleTxHash));

// The burn should have called TokenManager.handleTokenTransfer with to=address(0)
// This ensures TokenManager is notified of burns
```
How is it ensured that TokenManager is notified? There's nothing token-related in that test.
LGTM, but check out the recent comments above @RogerPodacter
Significant progress toward L2 implementation with EVM compatibility:

Core Infrastructure:
- Add EthBlockImporter for L1 block syncing and import pipeline
- Add GethDriver for geth node state management
- Implement BlockValidator for block validation framework
- Add EthscriptionsBlock model for L2 block structure
- Simplify EthBlock/EthTransaction models for L2 requirements

Ethscriptions Processing:
- Add EthscriptionDetector for identifying ethscriptions in txs
- Add EthscriptionTransaction/Builder for L2 transaction creation
- Implement L1AttributesTxCalldata for block attributes encoding
- Add block derivation configuration and logic

Integration Layer:
- Add RPC clients for L1/L2 communication
- Add EthscriptionsApiClient for external data fetching
- Implement StorageReader for contract state access
- Add EventDecoder for contract event parsing

Smart Contracts (WIP):
- Extend Ethscriptions.sol with multi-transfer and burn support
- Add protocol-level transfer events
- Update genesis deployment scripts
- Expand test coverage for new features

Supporting Infrastructure:
- Add type-safe value objects (ByteString, Hash32, Address20)
- Add configuration management modules
- Add transaction construction helpers
- Add genesis generation tooling
- Add validation and verification scripts

This establishes the foundational components for the L2 EVM backend. Additional work needed for production readiness.
Major changes:
- Switch from using transaction hash as token ID to using ethscription number
- Implement custom ERC721EthscriptionsUpgradeable with null ownership support
- Update eth.rb to v0.5.16 with new cryptographic dependencies
- Refactor event detection for better ordering and deduplication
- Add parallel validation with configurable thread pool
- Improve import scripts with better error handling and progress tracking
- Add comprehensive tests for null ownership scenarios

This change enables proper null ownership (address(0) as valid owner), which is required for Ethscriptions protocol compatibility, and improves validation performance through parallelization.
Major architectural changes to separate content storage from inscription logic:

Storage Optimization:
- Store raw decoded bytes instead of base64-encoded strings (33% storage savings)
- Implement two-level deduplication: contentUriHash for protocol uniqueness, contentSha for storage (see the sketch below)
- Remove compression complexity in favor of simpler raw byte storage

Stack Too Deep Fix:
- Add nested ContentInfo struct to avoid compilation errors without via_ir
- Reduce main Ethscription struct from 15+ to 9 fields
- Maintain all functionality while improving compilation efficiency

Contract Changes:
- Add contentUriHash parameter to createEthscription for protocol uniqueness
- Allow empty content (data:,) for compatibility with mainnet history
- Track wasBase64 flag to preserve original encoding on output
- Update tokenURI to include "Was Base64" attribute

Ruby/Node Integration:
- Update ethscription_transaction_builder.rb to compute contentUriHash and decode base64
- Pre-process genesis JSON with decoded content

Test Infrastructure:
- Add createTestParams helper in TestSetup.sol for simplified test creation
- Fix base64 decoding in test helper to match production behavior
- Use SHA-256 for contentUriHash to match production (not keccak256)
- Fix createTokenParams to use SHA-256 for consistency
- Use startPrank/stopPrank to ensure correct creator in tests
- Pre-compute params before vm.expectRevert to avoid call depth issues
- Update all tests to use helper function and expect raw content

Breaking changes:
- CreateEthscriptionParams now requires contentUriHash field
- Content field now expects raw decoded bytes, not base64 strings

All 52 tests passing.
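A sketch of the two-level deduplication computation on the Ruby side; the helper name is illustrative, while the SHA-256 choices match the commit message:

```ruby
require 'digest'
require 'base64'

# Sketch: contentUriHash covers the full data URI (protocol uniqueness),
# contentSha covers only the decoded payload (storage deduplication).
def dedup_hashes(content_uri)
  content_uri_hash = Digest::SHA256.digest(content_uri)

  # e.g. "data:image/png;base64,iVBOR..." -> raw PNG bytes
  payload = content_uri.split(',', 2).last
  decoded = content_uri.include?(';base64,') ? Base64.decode64(payload) : payload

  { content_uri_hash: content_uri_hash, content_sha: Digest::SHA256.digest(decoded) }
end
```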
…improve tokenURI rendering

- Change predeploy address from 0x3300...0001 to 0xEeeee...EEeE for better memorability
- Remove wasBase64 field from ContentInfo struct (always use base64 for safety)
- Make contentBySha and contentUriExists mappings public for transparency
- Enhance tokenURI to use SVG wrapper for images and HTML viewer for text/JSON
- Add token parameter extraction for erc-20 protocol operations
- Update test suite to match new tokenURI format with animation_url field
Refactor ERC20FixedDenomination and Manager for Hybrid NFT Support
…nerCappedUpgradeable contracts

- Updated the `_transferERC721` and `tokenURI` functions to utilize a new `_normalizeTokenId` method for improved token ID validation and handling.
- Introduced `_encodeMintId` and `_decodeTokenId` methods to streamline the encoding and decoding of token IDs, enhancing clarity and maintainability.
- Replaced direct ID manipulations with the new methods to ensure consistent token ID processing across the contract.
```ruby
  # Return nil to indicate we can't determine the owner
  raise ValidationError, "Cannot determine initial owner without transaction context"
end
```
Bug: Import fails without transaction context for collections
The build_metadata_object method raises a ValidationError when eth_transaction is nil or lacks from_address, but build_import_encoded_params calls this method with an optional eth_transaction parameter that can be nil. For collections without should_renounce or explicit initial_owner in the metadata, the import will fail with "Cannot determine initial owner without transaction context" even though the comment claims "For import, we always have the transaction". This breaks the import fallback path for historical collection data when transaction context isn't provided.
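One possible shape of a fix, as a sketch; the names mirror the report, and the fallback policy is an assumption:

```ruby
# Sketch: resolve the initial owner without raising when no transaction
# context is available, so the historical-import fallback path still works.
def resolve_initial_owner(metadata, eth_transaction)
  return metadata['initial_owner'] if metadata['initial_owner'].present?

  if eth_transaction&.from_address.present?
    eth_transaction.from_address
  else
    # Return nil and let the caller decide how to handle ownerless imports,
    # rather than raising "Cannot determine initial owner" unconditionally.
    nil
  end
end
```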
…edDenomination and ERC404NullOwnerCappedUpgradeable contracts

- Simplified token ID management by removing the encoding and decoding methods, directly using the mint ID in relevant functions.
- Enhanced the `tokenURI` function to validate token IDs and streamline metadata generation.
- Updated tests to reflect changes in token ID handling and ownership assertions, ensuring consistency with the new implementation.
Refactor token ID handling in ERC20FixedDenomination and ERC404NullOw…
- Deleted the `WordDomainsParser` class and its references from the `ProtocolParser`, streamlining protocol handling.
- Removed the `name_registry` contract and its related tests, eliminating legacy word-domain registration functionality.
- Updated the `L2Genesis` contract to exclude the `NameRegistry` from proxied contract checks and registration.
- Adjusted tests to reflect the removal of word-domains, ensuring consistency across the codebase.
Remove legacy word-domains protocol support and associated parser
…okenURI function in ERC20FixedDenomination contract. The tokenURI function no longer validates token IDs, streamlining its implementation.
Remove unused ERC20NullOwnerCappedUpgradeable contract and simplify t…
…flect new timestamp
```ruby
  )

  @transactions << transaction
end
```
Bug: Unconditional nil appends to transaction array
The try_calldata_creation and process_create_event methods unconditionally append the result of EthscriptionTransaction.build_create_ethscription to @transactions, but the factory method can return nil when DataUri.valid?(content_uri) is false (line 48 of ethscription_transaction.rb). While .compact removes these nils later, the nil values should be prevented at the source by checking before appending.
Additional Locations (1)
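A sketch of preventing the nils at the source, per the report; the argument passed to the factory is assumed:

```ruby
# Sketch: only append when the factory actually returned a transaction,
# instead of relying on a later .compact to remove nils.
def try_calldata_creation
  transaction = EthscriptionTransaction.build_create_ethscription(self)
  @transactions << transaction if transaction # nil when the data URI is invalid
end
```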
…ment and add integration tests

- Updated `ERC721EthscriptionsCollectionManager` to enforce merkle proof validation based on sender address.
- Introduced a new internal function `_shouldEnforceMerkleProof` to determine when to enforce merkle proof requirements.
- Added comprehensive integration tests for the header-based collections protocol, covering successful item additions and various failure scenarios, including proof validation and collection state checks.
- Created a new spec file for integration tests to ensure robust coverage of the new functionality.
```ruby
  encoded = Eth::Abi.encode(['address', 'bytes32[]'], [to_bin, ids_bin])

  (function_sig + encoded).b
end
```
Bug: Nil values passed to Eth::Abi.encode in calldata builders
The address_to_bin and hex_to_bin helper methods return nil when given nil input, but these nil values are passed directly to Eth::Abi.encode which expects binary data. This occurs when normalize_address in eth_transaction.rb returns nil (e.g., when to_address is nil from a transfer event), which then gets stored in transfer_to_address and later passed to the encoding functions. The Eth::Abi.encode call will fail with a type mismatch error instead of gracefully handling the invalid input.
Additional Locations (1)
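A sketch of validating inputs before encoding; the method name and structure are illustrative:

```ruby
# Sketch: fail with a clear message on missing inputs instead of letting
# Eth::Abi.encode raise a type mismatch deep in the encoder.
def encode_transfer(to_address_hex, ids_hex)
  to_bin = address_to_bin(to_address_hex)
  raise ArgumentError, 'to_address is required for transfer calldata' if to_bin.nil?

  ids_bin = ids_hex.map { |h| hex_to_bin(h) or raise ArgumentError, "bad id: #{h}" }
  Eth::Abi.encode(['address', 'bytes32[]'], [to_bin, ids_bin])
end
```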
… interfaces and dependencies

- Deleted the `IERC404` interface and the `DoubleEndedQueue` library, simplifying the contract structure.
- Updated function signatures to remove references to the deleted interface, ensuring compliance with ERC20 and ERC721 standards.
- Streamlined the contract by eliminating unnecessary state variables and constants, enhancing readability and maintainability.
- Adjusted event emissions and error handling to align with the new implementation.
```ruby
content_uri_sha = [content_uri_sha_hex].pack('H*')

# Convert hex strings to binary for ABI encoding
tx_hash_bin = hex_to_bin(eth_transaction.transaction_hash)
```
Bug: Nil reference when accessing transaction hash in calldata builder
build_create_calldata calls hex_to_bin(eth_transaction.transaction_hash) without null safety, but eth_transaction can be nil since it's an optional property. When nil, this causes NoMethodError. The same issue appears in build_transfer_calldata and build_transfer_with_previous_owner_calldata where transfer_ids array is accessed. These methods should either validate that required properties are non-nil before use or use safe navigation (&.) operators like the code at line 186 does.
Additional Locations (2)
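A minimal guard in that spirit (illustrative):

```ruby
# Sketch: validate the optional property up front rather than crashing
# with NoMethodError partway through encoding.
def build_create_calldata
  raise ArgumentError, 'eth_transaction required to build create calldata' if eth_transaction.nil?

  tx_hash_bin = hex_to_bin(eth_transaction.transaction_hash)
  # ... remaining encoding unchanged
end
```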
Refactor ERC404NullOwnerCappedUpgradeable contract by removing unused…
```ruby
return unless to_address.present?

try_calldata_creation
try_calldata_transfer
```
Bug: Both calldata creation and transfer processing executed simultaneously
The process_calldata method calls both try_calldata_creation and try_calldata_transfer unconditionally. Since both methods may succeed on the same transaction (e.g., a valid data URI that also happens to be 32 bytes long), this creates duplicate ethscription transactions from a single L1 transaction calldata. The methods should be mutually exclusive - if calldata is recognized as a creation, it should not be processed as a transfer.
Additional Locations (1)
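A sketch of making the two paths mutually exclusive, assuming `try_calldata_creation` returns a truthy value when it recognizes a creation:

```ruby
# Sketch: treat calldata as a creation first, and only fall back to transfer
# handling when no creation was recognized, so one L1 transaction can never
# produce both a creation and a transfer.
def process_calldata
  return unless to_address.present?

  created = try_calldata_creation
  try_calldata_transfer unless created
end
```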
… trait labels in EthscriptionsRendererLib

- Improved JSON metadata construction by escaping special characters in token name and symbol.
- Added new trait for "Content URI SHA" in EthscriptionsRendererLib to provide additional metadata.
- Updated trait labels for clarity, changing "Protocol" to "Protocol Name" and "Operation" to "Protocol Operation".
```ruby
  source_tag_hash +                      # 32 bytes (hashed source tag)
  function_selector +                    # 4 bytes (function selector)
  Eth::Util.zpad_int(source_index, 32)   # 32 bytes (source_index)
)
```
Bug: Nil pointer access on optional eth_transaction field
The eth_transaction property is declared as T.nilable(Object) on line 12 but is accessed without nil checks in both source_hash (line 119) and build_create_calldata (line 194). When to_deposit_payload is called on an EthscriptionTransaction instance with a nil eth_transaction, it will crash with NoMethodError when trying to access eth_transaction.block_hash or eth_transaction.transaction_hash. The source_hash method validates source_type and source_index but not eth_transaction itself.
Additional Locations (2)
```ruby
    ORDER BY n
  SQL

  connection.execute(sql, [start_block, end_block]).map { |row| row['missing_block'] }
```
Bug: SQL bind parameters not substituted in validation_gaps query
The validation_gaps method uses connection.execute(sql, [start_block, end_block]) to execute a SQL query with ? placeholders. However, ActiveRecord::ConnectionAdapters::AbstractAdapter#execute has the signature execute(sql, name = nil) where the second argument is a name/label for logging purposes, not bind parameters. The [start_block, end_block] array will be coerced to a string and used as the query name, while the ? placeholders in the SQL remain unsubstituted. This will cause the query to fail with a SQL syntax error or return incorrect results. The fix requires using sanitize_sql_array to interpolate the parameters before execution.
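A sketch of the suggested fix. The query body here is a plausible reconstruction (a Postgres `generate_series` gap scan against the `validation_results` table, whose `block_number` column is assumed); the binding mechanism is the point:

```ruby
# Sketch: execute's second argument is a log name, not binds, so the
# ? placeholders must be interpolated safely before execution.
sql = ActiveRecord::Base.sanitize_sql_array([<<~SQL, start_block, end_block])
  SELECT n AS missing_block
  FROM generate_series(?, ?) AS n
  WHERE NOT EXISTS (SELECT 1 FROM validation_results vr WHERE vr.block_number = n)
  ORDER BY n
SQL

connection.execute(sql).map { |row| row['missing_block'] }
```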
```ruby
from_address: Address20.from_hex(tx['from']),
to_address: tx['to'] ? Address20.from_hex(tx['to']) : nil,
status: current_receipt['status'].to_i(16),
logs: current_receipt['logs'],
```
Bug: Nil receipt causes NoMethodError when accessing status
In from_rpc_result, current_receipt is obtained via hash lookup and may be nil if no matching receipt exists for a transaction hash. Lines 53-54 then access current_receipt['status'] and current_receipt['logs'] without a nil check, which will raise a NoMethodError when the receipt is missing. This could occur if the RPC results have mismatched transaction/receipt data.
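A sketch of the guard; the surrounding structure is assumed:

```ruby
# Sketch: fail fast (or skip, per policy) when a transaction has no matching
# receipt, instead of raising NoMethodError on nil['status'] below.
current_receipt = receipts_by_hash[tx['hash']]
raise "Missing receipt for transaction #{tx['hash']}" if current_receipt.nil?

status = current_receipt['status'].to_i(16)
logs = current_receipt['logs']
```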
```ruby
private

def validation_enabled?
  ENV.fetch('VALIDATION_ENABLED').casecmp?('true')
```
Bug: ENV.fetch without default crashes on missing env var
Multiple locations use ENV.fetch('VALIDATION_ENABLED') without a default value. If the environment variable is not set, this raises a KeyError and crashes the application. Unlike other ENV.fetch calls in the codebase that provide defaults (like VALIDATION_RETRY_WAIT_SECONDS), these calls assume the variable is always configured, which may not be true in all deployment scenarios.
Additional Locations (2)
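The minimal fix, supplying a default so an unset variable reads as disabled rather than raising KeyError:

```ruby
# Sketch: default to 'false' when VALIDATION_ENABLED is not set.
def validation_enabled?
  ENV.fetch('VALIDATION_ENABLED', 'false').casecmp?('true')
end
```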
- Added a Table of Contents for easier navigation.
- Reorganized sections for better flow, including a new Overview section.
- Clarified the pipeline workings and user experience consistency.
- Expanded the Run with Docker Compose section with prerequisites and quick start instructions.
- Included detailed environment variable references and performance tuning options.
- Enhanced explanations for Ethscriptions creation and transfer methods.
Proposal: Ethscriptions App Chain
Summary
It's time to put Ethscriptions on the right technical foundation for long-term growth. We propose evolving Ethscriptions into a Stage 2 App Chain (an "app chain" here means a rollup purpose-built for Ethscriptions that doesn't support arbitrary smart contracts) in a way that preserves full backward compatibility for users and indexers, but gives us provability, client diversity, and the ability to better attract support from partners like OpenSea.
What This Actually Means
Think of this as replacing the current official indexer's Postgres database backend with a blockchain backend. From the outside, everything works exactly the same: same inscription creation, same transfer mechanics, same API endpoints. But internally, instead of storing state in Postgres, we're running an actual EVM chain.

- **For users:** Zero changes. Create and transfer inscriptions exactly as before.
- **For alternate indexers:** They can continue operating normally, or optionally upgrade to verify state roots.
- **For the ecosystem:** We become a "real" blockchain with provability, better tooling, and Stage 2 status.
Current Architecture Limitations
The current Postgres-based indexer architecture has served us well but has inherent limitations:
Technical Limitations
No cryptographic state verification. Because Ethscriptions lacks state roots, it's costly and slow for indexers to confirm they're applying the same rules. In practice, unless everyone runs the exact same indexer code, there's no way to detect divergence. This undermines client diversity and increases the risk of hidden bugs.
Not provable to L1. Ethscriptions doesn't have a proof system. That means no smart contract on Ethereum can make decisions based on Ethscriptions state. This makes use cases like wrapping an ethscription into an L1 ERC-721 or bridging an ethscription to another L2 impossible without a trusted operator. As we learned from Ordex, this is a major risk.
Limited tooling compatibility. Marketplaces and developers want to run EVM nodes, use EVM RPCs, interact with EVM contracts, use EVM indexers, etc. Custom indexers make it harder for marketplaces like OpenSea and others to support Ethscriptions.
Complex feature development. Modeling a blockchain (with reorgs and rollbacks) in Postgres is awkward. Simple things like ERC-20s have to be re-implemented outside the EVM, leading to bugs, incompatibilities, and "shadow" versions of what the EVM already solves.
Perception challenges. The above creates a branding problem: without cryptographic proofs and standard tooling, Ethscriptions can appear less robust than L2s.
Proposed Solution
Deploy an EVM-based "reference indexer" that internally uses a blockchain instead of Postgres
This alternate implementation processes the exact same Ethscriptions data but stores it in an EVM blockchain instead of a traditional database. This gives us:
- **Same behavior, better foundation:** All existing Ethscriptions functionality works identically, but now with state roots and cryptographic proofs
- **L1 provability:** The rollup can prove its state to Ethereum, enabling trustless bridges and wrapping
- **Indexer consensus:** Different indexers can verify they have the same state by comparing state roots (no more silent divergence)
- **EVM compatibility:** Standard EVM tools (nodes, RPCs, indexers) can read Ethscriptions as ERC-721s
- **Stage 2 recognition:** L2Beat can list us as a Stage 2 App Chain
Implementation details:
What stays the same:
What's new (all optional):
FAQ
Q: Do I need to change my indexer?
A: No. This is backward compatible. Existing indexers continue to work exactly as they do now.
Q: Is this related to Facet?
A: Under the hood it would use a fork of Facet's stack which has already been used to create a provable Stage 2 rollup. However, Ethscriptions would continue to have no protocol-level relationship to the Facet Chain.
Q: Is this a migration or a replacement?
A: It's an alternate implementation that can run alongside existing infrastructure. Think of it as offering a blockchain-backed reference indexer as an option, not a requirement.
Q: What about future smart contract functionality?
A: The initial implementation focuses solely on maintaining current functionality with better infrastructure. Later, if there's demand from marketplaces like OpenSea, limited smart contract functionality (DEX, non-custodial marketplaces) could be added. This would be a breaking change for current indexers but would only happen if there's clear value.
Q: How does this affect users?
A: It doesn't. Users continue creating and transferring inscriptions exactly as they do today. The benefits (provability, bridges, etc.) become available without any user-facing changes.
This PR
EVM Backend Demo for Ethscriptions
This PR demonstrates a proof-of-concept integration between the Ruby indexer and an on-chain EVM contract for Ethscriptions storage.
How it works:
Key changes:
- `EvmEthscriptionProcessor` module replaces direct DB writes with contract calls
- `EthscriptionsParamMapper` translates L1 transaction data to contract parameters
- `Ethscriptions.sol` contract handles all storage and validation using SSTORE2 for efficiency

Note:
This won't work in production yet - needs actual Engine API integration to send transactions to the EVM. Currently just logs the intended contract calls.
Note
Introduce an EVM-backed Ethscriptions stack with on-chain readers/validators, comprehensive tests, and Dockerized geth/node runtime.
- `EthRpcClient`; chain/network management (`ChainIdManager`, `SysConfig`).
- `StorageReader`, `CollectionsReader`, `Erc20FixedDenominationReader`, and event decoder.
- `BlockValidator`, `L1RpcPrefetcher`, `ImportProfiler`.
- `GenesisGenerator` and task hooks.
- `TestSetup` with predeploys and local geth orchestration (`GethTestHelper`).
- `validation_results` table and queue schema.

Written by Cursor Bugbot for commit 6e160cc.