
Vitalik's latest article: The Verge – Future Development of the Ethereum Protocol

It will take us years to obtain validity proofs for Ethereum's consensus.
Author: Vitalik Buterin
Translation: Mensh, ChainCatcher
Special thanks to Justin Drake, Hsiao-wei Wang, Guillaume Ballet, Ignacio, Josh Rudolf, Lev Soukhanov, Ryan Sean Adams, and Uma Roy for their feedback and review.
One of the most powerful features of blockchains is that anyone can run a node on their own computer and independently verify the correctness of the chain. Even if 95% of the nodes running the chain's consensus (PoW, PoS) immediately agreed to change the rules and started producing blocks under the new rules, every user running a fully validating node would reject that chain. Non-conspiring block producers would automatically converge on a chain continuing under the old rules, and fully verifying users would follow that chain.
This is a key distinction between blockchains and centralized systems. However, for this property to hold, running a fully validating node must be practically feasible for a sufficiently large number of people. This applies both to block producers (since if producers do not validate, they contribute nothing to enforcing protocol rules) and to ordinary users. Today, it's possible to run a node on consumer laptops (including the one used to write this article), but it remains difficult. The Verge aims to change this by making full validation computationally cheap enough that every mobile wallet, browser wallet, and even smartwatch could perform validation by default.

The Verge 2023 Roadmap
Originally, "Verge" referred specifically to migrating Ethereum's state storage to Verkle trees—a tree structure enabling more compact proofs, allowing stateless verification of Ethereum blocks. Nodes could validate an Ethereum block without storing any Ethereum state (account balances, contract code, storage...) on their hard drives, at the cost of downloading a few hundred KB of proof data and spending a few hundred milliseconds verifying a proof. Today, Verge represents a broader vision focused on maximizing resource efficiency in Ethereum chain validation, encompassing not only stateless verification techniques but also using SNARKs to verify all Ethereum execution.
Besides the long-term focus on full-chain SNARK verification, another emerging question concerns whether Verkle trees are the optimal technology. Verkle trees are vulnerable to quantum computers, so if we replace the current KECCAK Merkle Patricia Tree with Verkle trees now, we'd likely need to replace them again later. An alternative approach is to skip Merkle branches entirely and instead use STARKs directly over binary hash trees. Historically, this was considered infeasible due to overhead and technical complexity. However, recently Starkware demonstrated proving 1.7 million Poseidon hashes per second on a laptop using cycle-efficient STARKs, and with advances like GKR, proof times for more "traditional" hash functions are rapidly improving. As a result, over the past year, "Verge" has become more open-ended, with several possibilities.
The Verge: Key Goals
- Stateless clients: The storage required by fully validating clients and light nodes should not exceed a few GB.
- (Long-term) Fully verify the chain (consensus and execution) on a smartwatch: download some data, verify a SNARK, done.

In this chapter
- Stateless verification: Verkle or STARKs?
- Validity proofs for EVM execution
- Validity proofs for consensus
Stateless Verification: Verkle or STARKs?
What problem are we solving?
Today, Ethereum clients require hundreds of gigabytes of state data to validate blocks, and this amount increases annually. Raw state data grows by about 30GB per year, and individual clients must store additional data on top to efficiently update the trie.

This reduces the number of users capable of running fully validating Ethereum nodes: while hard drives large enough to store all Ethereum state and even years of history are widely available, the computers people typically buy come with only hundreds of gigabytes of storage. State size also creates significant friction when setting up a node for the first time: the node needs to download the entire state, which can take hours or days. This has various ripple effects. For example, it makes it much harder for node operators to upgrade their setups. In principle, upgrades can be done without downtime—start a new client, wait for it to sync, then shut down the old client and transfer keys—but in practice, this is operationally complex.
How does it work?
Stateless verification is a technique allowing nodes to validate blocks without holding the full state. Instead, each block comes with a witness containing: (i) values, code, balances, and storage from specific locations in the state accessed by that block; (ii) cryptographic proofs that these values are correct.
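The witness idea can be sketched as a toy data structure. All field and function names below are hypothetical, purely for illustration; the actual Ethereum witness format differs.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names are hypothetical, not the real spec.
@dataclass
class StateAccess:
    address: str              # account the block touches
    balance: int              # its pre-state balance
    code_chunk: bytes = b""   # code actually executed
    storage: dict = field(default_factory=dict)  # slots read or written

@dataclass
class Witness:
    accesses: list         # (i) the values the block reads from state
    proofs: list           # (ii) cryptographic proofs for those values
    pre_state_root: bytes  # root the proofs are checked against

def validate_stateless(block, witness: Witness, trusted_root: bytes) -> bool:
    """A stateless node needs only a trusted state root, not the full state."""
    if witness.pre_state_root != trusted_root:
        return False
    # 1. check that each proof ties its access to the root (elided here)
    # 2. re-execute the block using only witness.accesses (elided here)
    return True
```

The point of the sketch is the shape of the check: the node's only long-lived piece of state is the trusted root, and everything else arrives with the block.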
In practice, implementing stateless verification requires changing Ethereum's state tree structure. This is because the current Merkle Patricia Tree is extremely unfriendly to implementing any cryptographic proof scheme, especially in worst-case scenarios—whether using "raw" Merkle branches or wrapping them into STARKs. The main difficulties stem from weaknesses in the MPT:
1. It is a 16-ary tree (each node has 16 children). This means that in a tree of size N, a proof requires on average 32*(16-1)*log₁₆(N) = 120*log₂(N) bytes, or approximately 3840 bytes in a tree with 2³² entries. A binary tree would require only 32*(2-1)*log₂(N) = 32*log₂(N) bytes, or about 1024 bytes.
2. Code is not Merkleized. This means that to prove access to account code, the entire code must be provided, up to 24000 bytes.

We can calculate the worst case as follows:
30,000,000 gas / 2400 (cold account read cost) * (5 * 480 + 24,000) = 330,000,000 bytes
The branch cost is slightly reduced (using 5*480 instead of 8*480) because overlapping parts appear at the top when there are many branches. Still, the data volume to download within a single slot is completely unrealistic. If we try to wrap this with a STARK, we face two issues: (i) KECCAK is relatively unfriendly to STARKs; (ii) 330MB of data means we must prove 5 million calls to the KECCAK round function, which may be unprovable for all but the most powerful consumer hardware, even if we make STARK proofs of KECCAK more efficient.
If we instead replace the hexary tree with a binary tree and additionally Merkleize code, the worst case becomes roughly 30,000,000 / 2400 * 32 * (32 - 14 + 8) = 10,400,000 bytes (14 subtracts redundancy bits for 2¹⁴ branches, and 8 is the proof length into leaf code chunks). Note this requires changing gas costs to charge for accessing each individual code chunk; EIP-4762 does exactly this. 10.4 MB is much better, but still too much data to download within one slot for many nodes. Therefore, we need stronger techniques. Here, two leading solutions emerge: Verkle trees and STARKed binary hash trees.
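The two worst-case figures above can be reproduced with a few lines, using exactly the constants from the text:

```python
GAS_LIMIT = 30_000_000
COLD_ACCOUNT_READ = 2_400
MAX_ACCESSES = GAS_LIMIT // COLD_ACCOUNT_READ  # 12,500 cold reads per block

# Hexary MPT: ~480 bytes per level (15 siblings * 32 bytes), 5 effective
# levels after deduplicating shared tops, plus up to 24 kB of un-Merkleized code.
mpt_worst = MAX_ACCESSES * (5 * 480 + 24_000)
assert mpt_worst == 330_000_000    # ~330 MB

# Binary tree with Merkleized code: 32 bytes per level,
# (32 - 14) deduplicated levels plus 8 levels down into code chunks.
binary_worst = MAX_ACCESSES * 32 * (32 - 14 + 8)
assert binary_worst == 10_400_000  # ~10.4 MB
```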
Verkle Trees
Verkle trees use elliptic curve-based vector commitments to achieve shorter proofs. The key insight is that each parent-child relationship in the proof contributes only 32 bytes, regardless of the tree’s width. The only limitation on tree width is that wider trees reduce computational efficiency of proofs. The proposed implementation for Ethereum uses a width of 256.

Thus, the size per branch in the proof becomes 32 * log₂₅₆(N) = 4 * log₂(N) bytes. So the theoretical maximum proof size is roughly 30,000,000 / 2400 * 32 * (32 - 14 + 8) / 8 = 1,300,000 bytes (actual calculations differ slightly due to uneven distribution of state chunks, but this is a good first approximation).
Also note that in all the above examples, this "worst case" isn't truly the worst: a worse scenario involves an attacker deliberately "mining" two addresses with long common prefixes in the tree and reading from one, potentially doubling the worst-case branch length. Even accounting for this, Verkle trees have a worst-case proof length of 2.6MB, roughly matching current worst-case witness sizes.
We also leverage this consideration in another way: we make accessing "adjacent" storage very cheap—either many code chunks from the same contract or adjacent storage slots. EIP-4762 defines adjacency and charges only 200 gas for adjacent accesses. In that case, the worst-case proof size becomes 30,000,000 / 200 * 32 = 4,800,000 bytes, which remains roughly within tolerance. If we want to reduce this further for safety, we can slightly increase the cost of adjacent accesses.
STARKed Binary Hash Trees
The principle of this technique is straightforward: simply build a binary tree, take the up-to-10.4MB witness proving values in the block, and replace the witness with a STARK proof of it. The resulting proof contains only the proven data plus a fixed overhead of 100–300kB from the actual STARK.
The main challenge here is verification time. We can perform similar calculations as above, except measuring hash operations instead of bytes. A 10.4MB block implies 330,000 hashes. Adding the possibility of attackers "mining" address trees with long common prefixes pushes the worst-case hash count to about 660,000. Thus, if we can prove 200,000 hashes per second, we’re fine.
These numbers are already achievable on consumer laptops using the Poseidon hash function, which is specifically designed for STARK-friendliness. However, Poseidon is relatively immature, so many remain skeptical of its security. Hence, two realistic paths forward exist:
- Rapidly conduct extensive security analysis of Poseidon and gain enough confidence to deploy it on L1
- Use more "conservative" hash functions like SHA256 or BLAKE
For conservative hash functions, Starkware's STARK stack currently proves only 10–30k hashes per second on consumer laptops. However, STARK technology is rapidly improving. Even today, GKR-based techniques show potential to boost this speed into the 100–200k range.
Other Use Cases for Witnesses Beyond Block Validation
Beyond block validation, three other key use cases require more efficient stateless verification:
- Mempool: When transactions are broadcast, nodes in the P2P network must validate transaction validity before rebroadcasting. Today, validation includes signature checks and verifying sufficient balance and correct nonce. In the future (e.g., with native account abstraction like EIP-7701), this might involve running some EVM code that performs state accesses. If nodes are stateless, transactions need to include proofs of the accessed state objects.
- Inclusion lists: A proposed feature allowing (possibly small and simple) proof-of-stake validators to force the next block to include transactions, regardless of the preferences of (possibly large and complex) block builders. This would weaken powerful actors' ability to manipulate the blockchain by delaying transactions. However, it requires validators to be able to verify the validity of transactions in an inclusion list.
- Light clients: If we want users to access the chain via wallets (e.g., Metamask, Rainbow, Rabby), they'll need to run light clients (e.g., Helios). The core module of Helios provides users with a verified state root. For a fully trustless experience, users need proofs for each RPC call (e.g., for eth_call requests, proving all state accessed during the call).
All these use cases share a common trait: they require quite a few proofs, but each proof is small. Thus, STARK proofs aren’t practical for them; instead, the most realistic approach is to directly use Merkle branches. Another advantage of Merkle branches is updatability: given a proof of a state object X at block B, if you receive a subsequent block B2 and its witness, you can update the proof to root at B2. Verkle proofs are also natively updatable.
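Since these use cases lean on plain Merkle branches, here is a minimal illustrative binary Merkle tree with branch generation and verification (SHA256 as a stand-in hash; the real Ethereum trees differ). Updatability follows the same shape: when a later block's witness reveals new sibling values, the entries of `branch` can be refreshed without ever holding the full state.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree (leaf count must be a power of two)."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_branch(leaves, index):
    """Sibling hashes proving leaves[index] against the root."""
    layer = [h(leaf) for leaf in leaves]
    branch = []
    while len(layer) > 1:
        branch.append(layer[index ^ 1])  # sibling at this level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return branch

def verify_branch(leaf, index, branch, root):
    """Recompute the path from leaf to root using the sibling hashes."""
    node = h(leaf)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
branch = merkle_branch(leaves, 2)
assert verify_branch(b"c", 2, branch, root)
```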
Connections to Existing Research:
- Verkle trees
- John Kuszmaul's original paper on Verkle trees
- Starkware
- Poseidon2 paper
- Ajtai (an alternative fast hash function based on lattice hardness)
- Verkle.info
What else needs to be done?
The remaining major work includes:
1. Further analysis of the consequences of EIP-4762 (changes to stateless gas costs)
2. More work completing and testing transition procedures—the main source of complexity in any stateless implementation
3. More security analysis of Poseidon, Ajtai, and other "STARK-friendly" hash functions
4. Further development of ultra-efficient STARK protocols for "conservative" (or "traditional") hash functions, e.g., based on Binius or GKR.
Additionally, we will soon decide among three options: (i) Verkle trees, (ii) STARK-friendly hash functions, and (iii) conservative hash functions. Their characteristics can be roughly summarized in the table below:

Besides these headline figures, several other important considerations exist:
- Verkle tree code is already quite mature today. Using anything besides Verkle would delay deployment, likely postponing a hard fork. This may not matter, especially if we need extra time for hash function analysis or validator implementations, or if we have other important features we want to integrate into Ethereum earlier.
- Updating state roots using hashes is faster than with Verkle trees. This means hash-based approaches can reduce full node sync times.
- Verkle trees have interesting witness update properties—Verkle witnesses are updatable. This property is useful for mempool, inclusion lists, and other use cases, and may also improve implementation efficiency: if a state object updates, you can update the penultimate layer of the witness without reading the final layer.
- Verkle trees are harder to SNARK-prove. If we want to reduce proof size down to a few kilobytes, Verkle proofs introduce challenges. This is because Verkle proof verification involves many 256-bit operations, requiring either significant overhead in the proof system or a custom internal structure containing 256-bit Verkle proof components. This isn't a problem for statelessness itself, but it adds difficulty.
If we want Verkle witness updatability in a quantum-safe and reasonably efficient way, another possible path is lattice-based Merkle trees.
If the proof system's efficiency in worst cases proves insufficient, we can also use multidimensional gas—an unexpected tool—to compensate: set separate gas limits for (i) calldata; (ii) computation; (iii) state access; and possibly other distinct resources. Multidimensional gas increases complexity but in return tightly constrains the ratio between average and worst-case usage. With multidimensional gas, the theoretically maximum number of branches to prove might drop from 12,500 to, say, 3,000. This would make BLAKE3 barely usable even today.

Multidimensional gas allows block resource limits to more closely match underlying hardware resource limits
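Multidimensional gas metering can be sketched as follows. The resource names and limits here are hypothetical, purely for illustration; the point is that each resource has its own budget, so no single resource can be pushed to its worst case using the whole block's gas.

```python
# Hypothetical per-resource limits: illustrative values only.
LIMITS = {"calldata": 2_000_000, "compute": 30_000_000, "state_access": 5_000_000}

def block_within_limits(txs) -> bool:
    """txs: list of per-transaction resource-usage dicts."""
    used = dict.fromkeys(LIMITS, 0)
    for tx in txs:
        for resource, amount in tx.items():
            used[resource] += amount
            if used[resource] > LIMITS[resource]:
                return False  # that resource's own budget is exhausted
    return True

assert block_within_limits([{"calldata": 1_000, "state_access": 4_800}])
assert not block_within_limits([{"calldata": 3_000_000}])
```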
Another unexpected tool is delaying state root computation to after the block's slot. This gives us a full 12 seconds to compute the state root, meaning proof generation requiring only 60,000 hashes per second—even in extreme cases—would suffice, again suggesting BLAKE3 could barely meet requirements.
The downside is increased light client latency by one slot, though cleverer techniques can reduce this latency to just proof generation delay. For example, proofs could be broadcast across the network immediately after any node generates them, rather than waiting for the next block.
How does it interact with other parts of the roadmap?
Solving statelessness greatly increases the feasibility of solo staking. Technologies that lower the minimum hardware threshold for solo staking, such as Orbit SSF or application-layer strategies like squad staking, would make this even more viable.
If EOF is introduced simultaneously, multidimensional gas analysis becomes easier. This is because the main source of execution complexity in multidimensional gas comes from handling subcalls that don't pass all gas from the parent call, whereas EOF can make such subcalls illegal, trivializing the issue (and native account abstraction will provide a protocol-level alternative for the primary current use of partial gas).
There's also an important synergy between stateless verification and historical expiry. Today, clients must store nearly 1TB of historical data—many times more than state data. Even if clients are stateless, unless we relieve them of storing historical data, the dream of near-zero-storage clients cannot be realized. The first step here is EIP-4444, which also implies storing historical data in torrents or the Portal Network.
Validity Proofs for EVM Execution
What problem are we solving?
The long-term goal for Ethereum block validation is clear—it should be possible to validate an Ethereum block by: (i) downloading the block, or even just a small portion of its data availability sampling; (ii) verifying a small proof of the block’s validity. This would be an extremely low-resource operation, feasible on mobile clients, browser wallets, or even on another chain (excluding data availability).
To reach this point, we need SNARKs or STARKs to prove both (i) the consensus layer (i.e., proof-of-stake) and (ii) the execution layer (i.e., EVM). The former is challenging in itself and should be addressed through ongoing improvements to the consensus layer (e.g., single-slot finality). The latter requires proofs of EVM execution.
What is it, and how does it work?
Formally, in the Ethereum specification, the EVM is defined as a state transition function: you have a pre-state S, a block B, and you compute a post-state S' = STF(S, B). If a user runs a light client, they don't fully possess S and S', or even B; instead, they have a pre-state root R, a post-state root R', and a block hash H.
- Public inputs: pre-state root R, post-state root R', block hash H
- Private inputs: block body B, state objects Q accessed by block B, the same objects Q' after executing block B, state proofs (e.g., Merkle branches) P
- Claim 1: P is a valid proof showing that certain parts of the state represented by R are contained in Q
- Claim 2: If STF is run on Q, (i) execution accesses only objects within Q, (ii) the block is valid, (iii) the result is Q'
- Claim 3: Recomputing the new state root using Q' and P yields R'
If such a proof exists, a light client can fully verify Ethereum EVM execution. This makes client resource usage quite low. To achieve a truly fully verifying Ethereum client, we need to do the same for consensus.
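Claims 1–3 can be made concrete in a toy model where the "witness" Q is simply the full accessed state and the "root" is a plain hash of it. This is a sketch of the statement the SNARK asserts, not of the proof system itself:

```python
import hashlib, json

def state_root(state: dict) -> bytes:
    """Toy stand-in for a state root: a hash of the sorted state."""
    return hashlib.sha256(json.dumps(sorted(state.items())).encode()).digest()

def stf(state: dict, block) -> dict:
    """Toy state transition function: apply (account, delta) transfers."""
    post = dict(state)
    for account, delta in block:
        post[account] = post.get(account, 0) + delta
    return post

def check_validity_statement(R, R_prime, block, Q, Q_prime):
    """The three claims the proof would assert, on the toy model."""
    assert state_root(Q) == R              # claim 1: Q matches pre-state root
    assert stf(Q, block) == Q_prime        # claim 2: STF on Q yields Q'
    assert state_root(Q_prime) == R_prime  # claim 3: Q' matches post-state root
    return True

Q = {"alice": 10, "bob": 5}
block = [("alice", -3), ("bob", 3)]
Q_prime = stf(Q, block)
assert check_validity_statement(state_root(Q), state_root(Q_prime), block, Q, Q_prime)
```

In the real setting Q is only the slice of state the block touches, so claim 1 needs genuine Merkle/Verkle proofs rather than re-hashing everything.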
Implementations of validity proofs for EVM computation already exist and are heavily used by L2s. Making EVM validity proofs viable on L1, however, requires much more work.
Connections to Existing Research:
- EF PSE ZK-EVM (now deprecated in favor of better alternatives)
- Zeth, which works by compiling the EVM into the RISC-0 ZK-VM
- ZK-EVM formal verification project
What else needs to be done?
Today, validity proofs for EVM fall short in two areas: security and verification time.
A secure validity proof must guarantee that the SNARK indeed verifies EVM computation with no vulnerabilities. Two main techniques enhance security: multi-prover and formal verification. Multi-prover means having multiple independently implemented validity proof systems, analogous to multiple clients; if a block is proven by a sufficiently large subset of these implementations, clients accept it. Formal verification involves using tools typically employed to prove mathematical theorems, like Lean4, to prove that the validity proof only accepts executions that correctly follow the underlying EVM specification (e.g., EVM K semantics or the Python-implemented Ethereum execution layer spec (EELS)).
Sufficiently fast verification time means any Ethereum block can be verified in under 4 seconds. Today, we're far from this goal, though much closer than imagined two years ago. Achieving this requires progress in three directions:
- Parallelization—Currently, the fastest EVM prover takes about 15 seconds on average to prove an Ethereum block. It achieves this by parallelizing work across hundreds of GPUs and aggregating the results at the end. Theoretically, we know how to build an EVM prover that can prove computations in O(log(N)) time: let one GPU handle each step, then build an "aggregation tree":

There are challenges in implementation. Even in worst cases—where a single massive transaction fills the entire block—computation must be split not by steps but by opcodes (of EVM or lower-level VMs like RISC-V). Ensuring virtual machine "memory" remains consistent across different proof segments is a key challenge. However, if we achieve such recursive proving, we know that—at minimum—the prover latency issue is solved, even without other improvements.
- Proof system optimization—New proof systems like Orion, Binius, GKR, and others will likely significantly reduce verification time for general computation.
- Other changes to EVM gas costs—Many aspects of the EVM can be optimized to favor provers, especially in worst cases. If an attacker can create a block that stalls provers for ten minutes, being able to prove a normal Ethereum block in 4 seconds is not enough. The required EVM changes fall roughly into two categories:
  - Gas cost changes—if an operation takes a long time to prove, it should have a high gas cost even if it is relatively fast to compute. EIP-7667 was proposed to address the worst offenders: it greatly increases the gas costs of (conventional) hash functions, whose opcodes and precompiles are currently relatively cheap. To offset these increases, we can reduce the gas costs of EVM opcodes that are cheap to prove, keeping average throughput unchanged.
  - Data structure replacements—besides replacing the state tree with a STARK-friendlier structure, we also need to replace transaction lists, receipt trees, and other proof-expensive structures. Etan Kissling's EIP moving transactions and receipts to SSZ is a step in this direction.
Beyond this, the two tools mentioned in the previous section (multidimensional gas and delayed state roots) can help here too. However, unlike stateless verification, using these tools means we already have sufficient technology for our current needs, while full ZK-EVM verification still requires more work—just less of it.
One point not mentioned above is prover hardware: using GPUs, FPGAs, and ASICs to generate proofs faster. Fabric Cryptography, Cysic, and Accseal are three companies making progress here. This is highly valuable for L2s but unlikely to be decisive for L1, as there's strong desire for L1 to remain highly decentralized, meaning proof generation must stay within reasonable reach of Ethereum users, not bottlenecked by single-company hardware. L2s can make more aggressive trade-offs.
More work remains in these areas:
- Parallelized proving requires different parts of the proof system to "share memory" (e.g., lookup tables). We know the techniques, but need to implement them.
- We need more analysis to identify the ideal set of gas cost changes to minimize worst-case verification time.
- We need more work on proof systems.
Possible trade-offs:
- Security vs. prover time: Choosing more aggressive hash functions, more complex proof systems, or bolder security assumptions could shorten prover time.
- Decentralization vs. prover time: The community needs to agree on the target "spec" for prover hardware. Is it acceptable for provers to be large entities? Should high-end consumer laptops prove an Ethereum block within 4 seconds? Somewhere in between?
- Degree of breaking backward compatibility: Shortfalls in other areas can be compensated by more aggressive gas cost changes, but this disproportionately increases costs for certain applications, forcing developers to rewrite and redeploy code to remain economically viable. Likewise, the two tools (multidimensional gas and delayed state roots) have their own complexities and drawbacks.
How does it interact with other parts of the roadmap?
The core technologies needed to achieve L1 EVM validity proofs largely overlap with two other areas:
- L2 validity proofs (i.e., "ZK rollups")
- The stateless "STARK binary hash proof" approach
Successfully implementing validity proofs on L1 enables easy solo staking: even the weakest computers (including phones or smartwatches) could stake. This further increases the value of solving other solo staking limitations (like the 32ETH minimum requirement).
Additionally, L1 EVM validity proofs could greatly increase L1's gas limit.
Consensus Validity Proofs
What problem are we solving?
If we want to fully SNARK-verify an Ethereum block, EVM execution isn't the only part we need to prove. We also need to prove consensus—the part of the system handling deposits, withdrawals, signatures, validator balance updates, and other elements of Ethereum's proof-of-stake.
Consensus is much simpler than EVM, but faces the challenge that we don't have L2-style EVM rollups, so most of the work must be done from scratch. Thus, any implementation proving Ethereum consensus needs to start "from the ground up," though the proof system itself can build on shared foundational work.
What is it, and how does it work?
The beacon chain, like the EVM, is defined as a state transition function. The state transition function mainly consists of three parts:
- ECADDs (for verifying BLS signatures)
- Pairings (for verifying BLS signatures)
- SHA256 hashes (for reading and updating state)
In each block, we need to prove 1–16 BLS12-381 ECADDs per validator (possibly more than one, as signatures may be included in multiple sets). This can be mitigated with subset precomputation techniques, so we can say each validator needs only one BLS12-381 ECADD. Currently, there are 30,000 validator signatures per slot. In the future, with single-slot finality, this could change in two directions: if we take the "brute force" route, the number of validators per slot could grow to 1 million. Meanwhile, with Orbit SSF, the number could stay at 32,768 or even reduce to 8,192.

How BLS aggregation works: Verifying the aggregate signature requires only one ECADD per participant, not an ECMUL. But 30,000 ECADDs still represent a large proof burden.
Regarding pairings, currently up to 128 attestations per slot mean 128 pairings to verify. With EIP-7549 and further modifications, this could reduce to 16 per slot. The number of pairings is small but extremely costly: each pairing takes thousands of times longer to run (or prove) than an ECADD.
A major challenge in proving BLS12-381 operations is that there's no convenient curve with field size equal to the curve order of BLS12-381, adding significant overhead to any proof system. On the other hand, the Verkle trees proposed for Ethereum use the Bandersnatch curve, making BLS12-381 itself the native curve in SNARK systems for proving Verkle branches. A basic implementation can prove 100 G1 additions per second; achieving sufficient speed will almost certainly require clever techniques like GKR.
For SHA256 hashes, the current worst case is epoch boundary blocks, where the entire validator short-balance tree and large portions of validator balances are updated. Each validator’s short-balance tree is one byte, so 1MB of data gets re-hashed. This equals 32,768 SHA256 calls. If balances of a thousand validators cross a threshold, requiring updates to effective balances in validator records, that’s a thousand Merkle branches, possibly requiring ten thousand hashes. The shuffling mechanism requires 90 bits per validator (thus 11MB of data), but this can be computed at any time during an epoch. Under single-slot finality, these numbers may vary depending on specifics. Shuffling becomes unnecessary, though Orbit might partially restore this need.
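These counts are easy to reproduce; all figures are the approximate ones from the text (1 MB taken as 2²⁰ bytes, SHA256 over 32-byte chunks):

```python
VALIDATORS = 1_000_000

# Epoch boundary: ~1 byte of "short balance" per validator is re-hashed.
short_balance_bytes = 2**20                  # ~1 MiB
rehash_calls = short_balance_bytes // 32     # SHA256 calls over 32-byte chunks
assert rehash_calls == 32_768

# Reading every public key: 48 bytes each, before counting Merkle branches.
pubkey_bytes = VALIDATORS * 48
assert pubkey_bytes == 48_000_000

# Shuffling randomness: 90 bits per validator.
shuffle_bytes = VALIDATORS * 90 // 8         # ~11 MB
```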
Another challenge is needing to re-fetch all validator states, including public keys, to verify a block. For a million validators, just reading public keys requires 48 million bytes, plus Merkle branches. This demands millions of hashes per epoch. If we must prove PoS validity, a realistic approach is some form of incremental verifiable computation: storing a separate data structure inside the proof system, optimized for efficient lookups, and proving updates to this structure.
In summary, the challenges are numerous. Most effectively addressing them will likely require deep redesign of the beacon chain, possibly coinciding with the shift to single-slot finality. Such a redesign might feature:
- Hash function changes: Currently, the "full" SHA256 hash is used, so each call corresponds to two underlying compression function calls due to padding. Switching to the SHA256 compression function alone would give at least a 2x improvement. Switching to Poseidon could yield 100x gains, potentially solving all our problems (at least regarding hashing): at 1.7 million hashes per second (54 MB/s), even a million validator records could be "read" into a proof within seconds.
- With Orbit, directly store shuffled validator records: If a fixed number of validators (e.g., 8,192 or 32,768) are selected as the committee for a given slot, place them directly adjacent in state, minimizing the hashes needed to load all the committee's public keys into the proof. This also enables efficient balance updates.
- Signature aggregation: Any high-performance signature aggregation scheme would involve some form of recursive proof, where different network nodes generate intermediate proofs for subsets of signatures. This naturally distributes proof work across many nodes, greatly reducing the workload of the "final prover."
- Other signature schemes: For Lamport+Merkle signatures, we need 256 + 32 hashes to verify a signature; multiplied by 32,768 signers, that's 9,437,184 hashes. Optimizing the signature scheme could improve this by a small constant factor. With Poseidon, this would be provable within a single slot. In practice, though, recursive aggregation schemes would be faster.
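A quick check of the Lamport+Merkle arithmetic, using the Poseidon throughput quoted earlier in this chapter:

```python
HASHES_PER_SIG = 256 + 32     # Lamport chain openings + Merkle branch
SIGNERS = 32_768
total_hashes = HASHES_PER_SIG * SIGNERS
assert total_hashes == 9_437_184

POSEIDON_RATE = 1_700_000     # hashes/second on a laptop, per the text
seconds = total_hashes / POSEIDON_RATE   # ~5.6 s, inside a 12-second slot
```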
Connections to Existing Research:
- Succinct Ethereum consensus proofs (sync committee only)
- Helios within SP1
- Succinct BLS12-381 precompile
- BLS aggregate signature verification based on Halo2
What work remains, and what trade-offs?
In practice, we'll need years to achieve validity proofs for Ethereum consensus. This roughly matches the timeline for achieving single-slot finality, Orbit, modifying signature algorithms, and conducting security analyses sufficient to confidently deploy "aggressive" hash functions like Poseidon. Thus, the wisest course is to solve these other problems while designing them with STARK-friendliness in mind.
The main trade-off will likely be in sequencing—between a more gradual reform of Ethereum's consensus layer versus a more radical "change many things at once" approach. For EVM, a gradual approach makes sense as it minimizes disruption to backward compatibility. For the consensus layer, backward compatibility matters less, and a more comprehensive rethinking of beacon chain design details offers benefits in optimally tuning for SNARK-friendliness.
How does it interact with other parts of the roadmap?
When long-term redesigning Ethereum PoS, STARK-friendliness must be a primary consideration—especially regarding single-slot finality, Orbit, signature scheme changes, and signature aggregation.