
Envisioning the World After Parallel EVM: Reshaping the Landscape of dApps and User Experience
TechFlow Selected

Parallelization is a means, not an end.
Author: Reforge Research
Translation: TechFlow
Benjamin Franklin once famously said, "In this world nothing can be said to be certain, except death and taxes."
The original title of this article is “Death, Taxes, and Parallel EVM.”
As parallel EVM becomes an unavoidable trend in the crypto world, what would a crypto ecosystem leveraging parallel EVM look like?
Reforge Research explores this vision from technical and application perspectives. Below is the full translation.
Introduction
In modern computing systems, making things faster and more efficient often means performing tasks in parallel rather than sequentially. This phenomenon, known as parallelization, has been catalyzed by the emergence of multi-core processor architectures. Tasks traditionally executed step-by-step are now approached through concurrency, maximizing processor utilization. Similarly, in blockchain networks, the principle of executing multiple operations simultaneously applies at the transaction level—not by leveraging multiple processors, but by harnessing the collective validation power of numerous validators across the network. Early implementations include:
- In 2015, Nano (XNO) implemented a block-lattice structure where each account has its own blockchain, enabling parallel processing and eliminating the need for network-wide transaction confirmations.
- In 2018, Polkadot approached parallelization via a multi-chain architecture, while EOS launched its multithreaded processing engine. (Block-STM, a software-transactional-memory-based parallel execution engine for blockchains, was later described in a 2022 research paper.)
- In 2020, Avalanche introduced parallel processing into its consensus (as opposed to the serialized EVM C-Chain), and Solana introduced a similar innovation called Sealevel.
For the EVM, transactions and smart contract execution have always been sequential since its inception. This single-threaded design limits overall system throughput and scalability—especially evident during peak network demand. As validators face increasing workloads, the network inevitably slows down, and users encounter higher costs, bidding competitively in congested environments to prioritize their transactions.
The Ethereum community has long explored parallel processing as a solution, initially starting with Vitalik’s EIP in 2017. The original goal was to achieve parallelism through traditional sharding or shard chains. However, the rapid development and adoption of Layer 2 rollups—which offer simpler and more immediate scalability benefits—shifted Ethereum’s focus away from sharding toward what is now known as danksharding. With danksharding, shards primarily serve as data availability layers rather than for parallel transaction execution. Since full implementation of danksharding remains pending, attention has turned to several key alternative parallelized L1 networks that emphasize EVM compatibility—particularly Monad, Neon EVM, and Sei.
Given the traditional evolution of software engineering and the demonstrated scalability successes of other networks, parallel execution for the EVM is inevitable. While we are confident in this transition, the future beyond it remains uncertain yet highly promising. The impact of today's largest smart contract developer ecosystem—with over $80 billion in total value locked—is significant. What happens when gas prices plummet to fractions of a cent due to optimized state access? How expansive will the design space become for application-layer developers? Below is our perspective on what a post-parallel EVM world might look like.
Parallelization Is a Means, Not an End
Scaling blockchains is a multidimensional challenge. Parallel execution paves the way for developing critical infrastructure such as blockchain state storage.
For projects pursuing parallel EVMs, the primary challenge isn't just enabling simultaneous computation—it's ensuring optimized state access and modification within a parallelized environment. At the core lie two major issues:
- Ethereum clients and Ethereum itself use different data structures for storage (B-trees/LSM-trees vs. the Merkle Patricia Trie). Embedding one data structure inside another leads to suboptimal performance.
- With parallel execution, the ability to perform asynchronous input/output (async I/O) for transaction reads and updates is crucial; otherwise, processes may deadlock waiting for each other, wasting any speed gains.
Compared to the cost of retrieving or setting storage values, additional computational tasks like extra SHA-3 hashing are secondary. To reduce transaction processing time and gas prices, the database infrastructure itself must improve. This goes beyond simply replacing raw key-value stores with traditional database architectures (e.g., SQL databases). Implementing EVM state using relational models introduces unnecessary complexity and overhead, resulting in higher 'sload' and 'sstore' operation costs compared to basic key-value storage. EVM state does not require features like sorting, range scans, or transaction semantics, as it only performs point reads and writes, with writes occurring separately at the end of each block. Therefore, improvements should focus on key considerations such as scalability, low-latency read/write operations, efficient concurrency control, state pruning and archiving, and seamless integration with the EVM.
For example, Monad is building a custom state database from scratch called MonadDB. It will leverage the latest kernel support for asynchronous operations while natively implementing the Merkle Patricia Trie data structure both in memory and on disk.
We expect further reengineering of underlying key-value databases and significant enhancements to third-party infrastructure supporting most blockchain storage capabilities.
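To make the storage discussion concrete, here is a minimal Python sketch (our own illustration, not MonadDB's actual design) of the access pattern described above: point reads and point writes against a plain key-value store, with all writes landing together at the end of a block.

```python
# Minimal sketch of an EVM-style state store: point reads during execution,
# writes buffered and flushed once at the end of each block. Hypothetical
# structure for illustration only.
class BlockStateStore:
    def __init__(self):
        self._committed = {}   # durable key-value state
        self._pending = {}     # writes buffered during the current block

    def sload(self, key):
        # Point read: pending writes shadow committed state.
        if key in self._pending:
            return self._pending[key]
        return self._committed.get(key, 0)

    def sstore(self, key, value):
        # Point write: buffered, not yet durable.
        self._pending[key] = value

    def commit_block(self):
        # All writes land together at the end of the block.
        self._committed.update(self._pending)
        self._pending.clear()

store = BlockStateStore()
store.sstore("balance:alice", 100)
assert store.sload("balance:alice") == 100  # read-your-writes within a block
store.commit_block()
assert store.sload("balance:alice") == 100
```

A real implementation must also maintain Merkle Patricia Trie commitments and perform async disk I/O; the sketch only captures the read/write pattern that makes plain key-value storage sufficient.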
Make Programmable Central Limit Order Books (pCLOBs) Great Again
As DeFi evolves toward higher-fidelity states, CLOBs will become the dominant design paradigm.
Since their debut in 2017, Automated Market Makers (AMMs) have become foundational to DeFi, offering simplicity and unique liquidity bootstrapping capabilities. By leveraging liquidity pools and pricing algorithms, AMMs revolutionized DeFi as the optimal alternative to traditional trading systems like order books. Although Central Limit Order Books (CLOBs) are fundamental building blocks in traditional finance, they encountered blockchain scalability limitations when introduced to Ethereum. They require numerous transactions, as every order submission, execution, cancellation, or modification demands a new on-chain transaction. Given immature scalability efforts on Ethereum, associated costs rendered CLOBs impractical in early DeFi, leading to failures of early attempts like EtherDelta. Yet even as AMMs gained popularity, their inherent limitations became increasingly apparent—especially as DeFi attracted more sophisticated traders and institutions over the years.
Recognizing the superiority of CLOBs, efforts to integrate CLOB-based exchanges into DeFi began growing on alternative, more scalable blockchain networks. Protocols such as Kujira, Serum (RIP), Demex, dYdX, Dexalot, and more recently Aori and Hyperliquid, aim to deliver better on-chain trading experiences compared to their AMM counterparts. However, beyond niche-focused projects (like dYdX and Hyperliquid for perpetual contracts), CLOBs on these alternative networks face challenges beyond scalability:
- Fragmented liquidity: The network effects achieved by highly composable and seamlessly integrated DeFi protocols on Ethereum make it difficult for CLOBs on other chains to attract sufficient liquidity and trading volume, hindering adoption and usability.
- Meme coins: Bootstrapping liquidity for new and obscure assets like meme coins requires limit orders—an even harder chicken-and-egg problem for chain-based CLOBs.
CLOBs with Blobs

But what about Layer 2s? Existing Ethereum L2 stacks show significant improvements in transaction throughput and gas costs relative to mainnet—especially after the recent Dencun hard fork. By replacing gas-intensive calldata with lightweight binary large objects (blobs), fees have dropped dramatically. According to growthepie, as of April 1, Arbitrum and OP fees were $0.028 and $0.064 respectively, with Mantle the cheapest at $0.015. This marks a stark contrast to pre-Dencun levels, where calldata accounted for 70%-90% of costs. Unfortunately, this still isn’t cheap enough: for order-book trading, even a $0.01 fee per submission, modification, or cancellation is expensive. Institutional traders and market makers typically have high order-to-trade ratios—they place many more orders than are actually executed as trades. Even under current L2 pricing, paying to submit orders and then modify or cancel them across multiple order books significantly impacts the profitability and strategic decisions of institutional participants. Consider the following example:
Company A: Standard hourly benchmark is 10,000 order submissions, 1,000 executions, and 9,000 cancellations or modifications. If the company operates across 100 order books in a day, total activity could easily result in over $150,000 in fees—even if individual transaction costs are below $0.01.
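A back-of-the-envelope calculation shows how these numbers compound. The 24-hour trading day and the $0.004 per-transaction fee are our own assumptions, chosen only to stay below the $0.01 threshold mentioned above.

```python
# Back-of-the-envelope check of the Company A example. The 24-hour duration
# and the $0.004 per-transaction fee are illustrative assumptions; the
# article only states that individual costs stay below $0.01.
submissions_per_hour = 10_000
cancels_or_mods_per_hour = 9_000
fee_paying_txs_per_hour = submissions_per_hour + cancels_or_mods_per_hour

hours = 24
order_books = 100
fee_per_tx = 0.004  # dollars, below the $0.01 threshold

total_txs = fee_paying_txs_per_hour * hours * order_books
total_fees = total_txs * fee_per_tx
print(f"{total_txs:,} transactions -> ${total_fees:,.0f} in fees")
# Even sub-cent fees compound well past the article's $150,000 figure.
```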
pCLOB

With the advent of parallel EVMs, we anticipate a surge in DeFi activity—primarily driven by the feasibility of on-chain CLOBs. But not just any CLOB—programmable Central Limit Order Books (pCLOBs). Given DeFi’s inherent composability and infinite protocol interactions, vast combinations of trading logic become possible. Leveraging this, pCLOBs can enable custom logic during order submission—executable before or after submission. For example, a pCLOB smart contract could include custom logic to:
- Validate order parameters (e.g., price and quantity) based on predefined rules or market conditions
- Perform real-time risk checks (e.g., ensure sufficient margin or collateral for leveraged trades)
- Apply dynamic fee calculations based on any parameter (e.g., order type, volume, market volatility)
- Execute conditional orders based on specified trigger conditions
This represents a significant leap forward in cost efficiency compared to existing trading designs.
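As an illustration of such hooks, the sketch below (hypothetical names and rules, not any real protocol's interface) wires parameter validation, a margin check, and a dynamic fee into a single order-submission path.

```python
# Hypothetical sketch of pCLOB-style submission hooks. All names, rules,
# and parameters here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Order:
    price: float
    quantity: float
    leverage: float = 1.0

def validate(order, min_price=0.0, max_qty=1_000_000):
    # Predefined rules: reject nonsensical parameters up front.
    return order.price > min_price and 0 < order.quantity <= max_qty

def risk_check(order, margin_balance):
    # Require margin to cover the notional value divided by leverage.
    required = order.price * order.quantity / order.leverage
    return margin_balance >= required

def dynamic_fee(order, base_fee=0.001, volatility=0.0):
    # Fee scales with order size and current market volatility.
    return order.price * order.quantity * base_fee * (1 + volatility)

def submit(order, margin_balance, volatility=0.0):
    # Custom logic runs before the order reaches the book.
    if not validate(order):
        raise ValueError("invalid order parameters")
    if not risk_check(order, margin_balance):
        raise ValueError("insufficient margin")
    fee = dynamic_fee(order, volatility=volatility)
    return {"accepted": True, "fee": fee}

result = submit(Order(price=100.0, quantity=10.0, leverage=5.0),
                margin_balance=250.0, volatility=0.2)
```

In an actual pCLOB these checks would live in smart contract code invoked around order submission; the sketch only shows how the four bullet points above compose into one path.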
The concept of Just-In-Time (JIT) liquidity illustrates this well. Liquidity no longer sits idle on any single exchange—it earns yield elsewhere until the moment an order matches and extracts liquidity from the base platform. Who wouldn’t want to squeeze out every last bit of yield from MakerDAO before sourcing liquidity for a trade? Mangrove Exchange’s innovative “offer-is-code” approach hints at the potential. When a bid in an order is matched, embedded code executes solely to locate the requested liquidity. That said, L2 scalability and cost challenges remain.
Parallel EVM also greatly enhances the pCLOB matching engine. pCLOBs can now implement a parallel matching engine, utilizing multiple “channels” to process incoming orders and execute matching computations simultaneously. Each channel handles a subset of the order book, removing price-time priority constraints and only executing upon finding a match. Reduced latency between order submission, execution, and modification enables optimally efficient order book updates.
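The channel idea can be sketched as follows. Partitioning by symbol is just one illustrative way to shard a book, and the structure below is our own, not any production engine's.

```python
# Illustrative sketch of a sharded matching engine: each "channel" owns a
# disjoint slice of the book (here, one symbol per channel), so matching
# can proceed in parallel without shared state.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def match_channel(symbol, orders):
    # Match buys against sells within this channel only; no cross-channel
    # state is touched, so channels never contend with each other.
    buys = sorted((o for o in orders if o["side"] == "buy"),
                  key=lambda o: -o["price"])
    sells = sorted((o for o in orders if o["side"] == "sell"),
                   key=lambda o: o["price"])
    trades = []
    while buys and sells and buys[0]["price"] >= sells[0]["price"]:
        trades.append((symbol, buys.pop(0)["price"], sells.pop(0)["price"]))
    return trades

incoming = defaultdict(list)
for order in [
    {"symbol": "ETH", "side": "buy", "price": 3000},
    {"symbol": "ETH", "side": "sell", "price": 2990},
    {"symbol": "BTC", "side": "buy", "price": 60000},
    {"symbol": "BTC", "side": "sell", "price": 60010},  # no cross: stays open
]:
    incoming[order["symbol"]].append(order)

# Each channel is matched concurrently.
with ThreadPoolExecutor() as pool:
    results = pool.map(lambda kv: match_channel(*kv), incoming.items())
all_trades = [t for channel in results for t in channel]
```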
Keone Hon, Co-founder and CEO of Monad, stated: “Given AMMs’ ability to continuously provide market-making even under illiquid conditions, AMMs will likely continue to dominate for long-tail assets. However, for ‘blue-chip’ assets, pCLOBs will dominate.”
In the same discussion, Keone suggested we can expect multiple pCLOBs to gain traction across different high-throughput ecosystems, and emphasized that, thanks to lower fees, these pCLOBs will significantly impact the broader DeFi ecosystem.
Even with just a few of these improvements, we expect pCLOBs to substantially increase capital efficiency and unlock new categories within DeFi.
We Need More Applications, But First...
Existing and new applications must be architecturally designed to fully leverage underlying parallelism.
Beyond pCLOBs, current decentralized applications are not parallel—they interact with blockchains sequentially. Yet history shows that technologies and applications naturally evolve to exploit new advancements, even if those weren’t originally anticipated.
Steven Landers, Blockchain Architect at Sei, noted: “When the first iPhone came out, apps built for it looked like bad desktop apps. It’s similar here—we’re adding multicore capabilities to blockchains, which will lead to better applications.”
The evolution from displaying magazine catalogs online to e-commerce platforms with robust two-sided markets is a classic example. With parallel EVMs, we’ll witness a similar transformation in decentralized applications. This underscores a critical limitation: applications not designed with parallelism in mind won’t benefit from the efficiency gains of parallel EVMs. Thus, having parallelism only at the infrastructure layer—without redesigning the application layer—is insufficient. They must align architecturally.
State Contention
Even without changing the applications themselves, we still expect modest performance improvements of 2–4x—but why stop there when much greater gains are possible? This shift introduces a key challenge: applications need fundamental redesigns to accommodate the nuances of parallel processing.
Steven Landers, Blockchain Architect at Sei, said: “If you want to leverage throughput, you need to limit contention between transactions.”
More specifically, conflicts arise when multiple transactions from decentralized applications simultaneously attempt to modify the same state. Resolving conflicts requires serializing conflicting transactions, which negates the benefits of parallelization.
There are various conflict resolution methods—we won’t discuss them now—but the number of potential conflicts during execution largely depends on application developers. Across decentralized applications, even the most popular protocols like Uniswap haven’t considered or implemented such limitations. Aori co-founder 0xTaker discussed extensively with us the primary state contentions expected in a parallel world. For an AMM, due to its peer-to-pool model, many participants may target a single pool simultaneously—anywhere from several to over 100 transactions competing for state. Thus, AMM designers must carefully consider how liquidity is distributed and managed within state to maximize pooling benefits.
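A toy model makes the contention point concrete. Under stated assumptions (transactions declare their read/write sets up front, which real engines such as Block-STM instead discover dynamically), it flags which transactions in a block can run in parallel and which must fall back to serial execution:

```python
# Toy model of conflict detection for optimistic parallel execution.
# Real engines re-execute conflicting transactions; here we only classify.
def find_conflicts(txs):
    # txs: list of dicts with "reads"/"writes" key sets, in block order.
    touched = set()            # keys written by earlier transactions
    parallel, serial = [], []
    for tx in txs:
        keys = tx["reads"] | tx["writes"]
        if keys & touched:
            serial.append(tx["id"])    # contends with an earlier tx
        else:
            parallel.append(tx["id"])  # independent, can run concurrently
        touched |= tx["writes"]
    return parallel, serial

# Two swaps against the same AMM pool contend; a transfer elsewhere does not.
txs = [
    {"id": "swap1", "reads": {"pool:ETH-USDC"}, "writes": {"pool:ETH-USDC"}},
    {"id": "swap2", "reads": {"pool:ETH-USDC"}, "writes": {"pool:ETH-USDC"}},
    {"id": "transfer", "reads": {"bal:bob"}, "writes": {"bal:bob"}},
]
parallel, serial = find_conflicts(txs)
```

The example mirrors the peer-to-pool problem described above: every swap against one pool shares state with every other, so only transactions touching disjoint state keep the parallel speedup.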
Steven, a core developer at Sei, also emphasized the importance of considering contention in multithreaded development, noting that Sei is actively researching the implications of parallelization and how to fully capture resource utilization.
Performance Predictability
Yilong, Co-founder and CEO of MegaETH, also highlighted the importance of performance predictability for decentralized applications. Performance predictability refers to the ability of dApps to consistently execute transactions over time, unaffected by network congestion or other factors. One way to achieve this is through app-specific chains. While app-specific chains offer predictable performance, they sacrifice composability.
Aori co-founder 0xTaker said: “Parallelization offers ways to experiment with localized fee markets to minimize state contention.”
Advanced parallelism and multidimensional fee mechanisms can enable a single blockchain to provide more deterministic performance for each application while maintaining overall composability.
Solana has a solid fee market system that is localized—so if multiple users access the same state, they pay higher fees (peak pricing), rather than bidding against each other in a global fee market. This approach particularly benefits loosely coupled protocols requiring both performance predictability and composability. To illustrate, imagine a highway system with multiple lanes and dynamic tolling. During peak hours, dedicated express lanes can be allocated to vehicles willing to pay higher tolls. These express lanes ensure predictable and faster travel times for those prioritizing speed and willing to pay a premium. Meanwhile, regular lanes remain open to all, preserving the overall connectivity of the highway system.
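The highway analogy can be rendered as a tiny pricing function. The linear surcharge curve and its parameters below are purely illustrative assumptions, not Solana's actual fee formula.

```python
# Sketch of a localized fee market: the surcharge depends only on demand
# for the specific state a transaction touches, not on global congestion.
def local_fee(base_fee, accesses_this_slot, surge_per_access=0.5):
    # Each concurrent access to the same state "lane" raises its toll;
    # untouched lanes stay at the base fee.
    return base_fee * (1 + surge_per_access * accesses_this_slot)

hot_pool_accesses = 10    # many txs targeting one AMM pool
quiet_market_accesses = 0

hot_fee = local_fee(base_fee=0.01, accesses_this_slot=hot_pool_accesses)
quiet_fee = local_fee(base_fee=0.01, accesses_this_slot=quiet_market_accesses)
# Users of the contested pool pay more; everyone else is unaffected.
```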
Imagine All Possibilities
Although redesigning protocols to align with underlying parallelization may seem challenging, the design space expands significantly across DeFi and other verticals. We can expect a new generation of applications—more complex, efficient, and focused on use cases previously impractical due to performance constraints.

Keone Hon, Co-founder and CEO of Monad, said: “Go back to 1995, when internet plans charged $0.10 per MB downloaded—you’d choose websites carefully. Imagine going from that to unlimited data, and notice how behavior and possibilities changed.”
We might return to a scenario reminiscent of the early centralized-exchange era—a war for user acquisition—in which DeFi applications, especially decentralized exchanges, wield referral programs (points, airdrops) and superior user experiences as weapons. We envision a world where meaningful interaction volumes in on-chain gaming finally become feasible. Hybrid order book-AMMs already exist, but instead of running CLOB sequencers as independent nodes decentralized via governance, moving them on-chain enables improved decentralization, lower latency, and enhanced composability. Fully on-chain social interactions become viable too. Frankly, anything involving large numbers of people or agents acting simultaneously is now within scope.
Beyond humans, intelligent agents will likely dominate on-chain transaction flows far more than today. AI involvement in trading—through arbitrage bots and autonomous execution—has existed for some time, but participation will multiply exponentially. Our thesis is that every form of on-chain engagement will be augmented by AI to some degree. Compared to today’s expectations, latency requirements for agent-driven trading will become even more critical.
Ultimately, technological progress is just an enabling foundation. The ultimate winners will be those who best attract users and bootstrap trading volume/liquidity. The difference is that now developers have more tools at their disposal.
Crypto UX Sucks—Now, It Won’t Be So Bad
User Experience Unification (UXU) is not only feasible but necessary—the industry will undoubtedly move toward achieving it.
Today’s blockchain user experience is fragmented and cumbersome—users navigate multiple blockchains, wallets, and protocols, wait for transactions to confirm, and risk security breaches or hacks. The ideal future allows users to securely and seamlessly interact with their assets without worrying about underlying blockchain infrastructure. We call this transition from today’s fragmented experience to a unified, simplified one User Experience Unification (UXU).
At its core, improving blockchain performance—especially reducing latency and fees—can significantly alleviate UX pain points. Historically, performance improvements positively impact digital user experiences. Faster internet speeds didn’t just enable seamless online interaction—they fueled demand for richer, more immersive digital content. The rise of broadband and fiber optics enabled low-latency HD video streaming and real-time online gaming, raising user expectations of digital platforms. This relentless pursuit of depth and quality drives continuous innovation—from advanced interactive web content to complex cloud-based services and VR/AR experiences. Faster internet didn’t just improve online experiences—it expanded the scope of user demands.
Similarly, blockchain performance gains will enhance UX both directly, by reducing latency, and indirectly, by enabling protocols that unify and elevate the overall experience; performance is a prerequisite for such protocols to exist. In particular, the higher performance and lower gas fees of these networks, especially parallel EVMs, mean that onboarding and offboarding will be far smoother for end users, which in turn attracts more developers. In conversation with Sergey, co-founder of interoperability network Axelar, he envisioned a world that is not just truly interoperable but more symbiotic.
Sergey said: “If you have complex logic on a high-throughput chain (e.g., parallel EVM), and due to its high performance, the chain itself can ‘absorb’ the complexity and throughput requirements of that logic, you can use interoperability solutions to effectively export that functionality to other chains.”
Felix Madutsa, Co-founder of Orb Labs, said: “As scalability issues are resolved and interoperability between ecosystems increases, we’ll see protocols emerge that bring Web3 user experience closer to Web2. Examples include second-generation intent-based protocols, advanced RPC infrastructure, chain abstraction capabilities, and AI-enhanced open computing infrastructures.”
Other Considerations
As performance demands grow, oracle markets will heat up.
Parallel EVM means increased performance demands on oracles—a sector that has lagged in recent years. Growing application-level demand will shake up a market currently rife with subpar performance and security, ultimately improving DeFi composability. For instance, market depth and trading volume are strong indicators for many DeFi primitives, such as money markets. We expect established players like Chainlink and Pyth to adapt relatively quickly as new entrants challenge their market share in this new era. After speaking with senior members at Chainlink, our understanding aligns with theirs: “Chainlink recognizes that if parallel EVM becomes dominant, we may want to refactor our contracts to capture value from it (e.g., reduce inter-contract dependencies so transactions/calls aren’t unnecessarily dependent and thus vulnerable to MEV). But since parallel EVM aims to improve the transparency and throughput of applications already running on the EVM, it shouldn’t affect network stability.”
This indicates Chainlink understands the impact of parallel execution on its products, and—as previously emphasized—must refactor its contracts to leverage parallelization.
It’s not just an L1 party—parallel EVM L2s want in too.
From a technical standpoint, creating high-performance parallel EVM L2 solutions is easier than developing L1s. This is because in L2s, the sequencer setup is simpler than the consensus-based mechanisms used in traditional L1 systems (e.g., Tendermint and variants). This simplicity arises because in a parallel EVM L2 setup, the sequencer only needs to maintain transaction order, whereas in consensus-based L1 systems, many nodes must agree on order.
More specifically, we expect optimistic parallel EVM L2s to dominate over their zero-knowledge peers in the short term. Ultimately, we anticipate a transition from OP-rollups to zk-rollups via general-purpose ZK frameworks (e.g., RISC0), rather than traditional approaches used in other zk-rollups—it’s only a matter of time.
Currently, does Rust take the lead?
Programming language choice will play a pivotal role in the development of these systems. We favor Rust—specifically Ethereum’s Rust implementation, Reth—over other alternatives. This preference isn’t arbitrary; Rust offers advantages including memory safety without garbage collection, zero-cost abstractions, and a rich type system.
In our view, the competition between Rust and C++ is becoming a defining battle in next-generation blockchain development languages. Though often overlooked, it cannot be ignored. Language choice matters—it affects the efficiency, security, and versatility of systems developers build.

Developers make these systems real—their preferences and expertise critically shape industry direction. We firmly believe Rust will ultimately prevail. However, migrating implementations is far from trivial—it demands substantial resources, time, and expertise, further underscoring the importance of choosing the right language from the start.
In the context of parallel execution, it would be remiss to omit Move. While Rust and C++ dominate the discussion, Move has several features that make it equally suitable:
- Resources: Move introduces “resources”—types that can only be created, moved, or destroyed, never copied. This ensures exclusive ownership, preventing common issues in parallel execution like race conditions and data races.
- Formal verification and static typing: Move is a statically typed language emphasizing security. Features like type inference, ownership tracking, and overflow checks help prevent common programming errors and vulnerabilities. These are especially important in parallel execution, where bugs are harder to detect and reproduce. The language’s semantics and type system, based on linear logic (similar to Rust and Haskell), make reasoning about correctness easier, enabling formal verification to ensure concurrent operations are safe and correct.
- Modular design: Move promotes modular design—smart contracts composed of smaller, reusable modules. This modularity makes individual component behaviors easier to understand and facilitates parallel execution by allowing different modules to run concurrently.
Future Considerations: EVM Is Unsafe—Needs Improvement
While we paint an optimistic picture of the post-parallel-EVM on-chain universe, none of it matters without addressing EVM and smart contract security.

Setting aside network economics and consensus security, hackers exploited smart contract vulnerabilities in Ethereum DeFi protocols to steal over $1.3 billion in 2023 alone. As a result, users prefer walled-off CEXs or hybrid “decentralized” protocols with centralized validator sets, sacrificing decentralization for a perceived safer (and better-performing) on-chain experience.

The root cause lies in inherent security shortcomings in EVM design.
Drawing an analogy to the aerospace industry, where strict safety standards made air travel extremely safe, we see a stark contrast in blockchain’s security practices. Just as people value life above all, the security of financial assets is paramount. Key practices like comprehensive testing, redundancy, fault tolerance, and rigorous development standards underpin aviation safety. These critical characteristics are largely absent in EVM—and in most cases, in other VMs as well.
A potential solution is adopting a dual-VM setup, where a separate VM like CosmWasm monitors real-time execution of EVM smart contracts—akin to antivirus software operating within an OS. Such a structure enables advanced checks like call stack analysis, specifically designed to reduce hacking incidents. However, this requires significant upgrades to existing blockchain systems. We expect newer, better-positioned solutions like Arbitrum Stylus and Artela to successfully implement this architecture from day one.
Existing security primitives in the market often reactively scan mempools or audit smart contract code in response to imminent or attempted threats. While helpful, these fail to address underlying vulnerabilities in VM design. A more productive, proactive approach is needed to reshape and strengthen blockchain networks and their application-layer security.
We advocate a complete overhaul of blockchain VM architecture to embed real-time protection and other critical security features—potentially via dual-VM setups—to align with industries that have successfully adopted such practices (e.g., aerospace). Looking ahead, we hope to support infrastructure enhancements that emphasize preventive approaches, ensuring security advances keep pace with industry performance progress (i.e., parallel EVM).
Conclusion
The emergence of parallel EVM marks a pivotal turning point in blockchain technology’s evolution. By enabling simultaneous transaction execution and optimizing state access, parallel EVM opens a new era of possibilities for decentralized applications. From the resurgence of programmable CLOBs to the emergence of more complex and high-performance applications, parallel EVM lays the foundation for a more unified and user-friendly blockchain ecosystem. As the industry embraces this paradigm shift, we can expect waves of innovation pushing the boundaries of decentralized technology. Ultimately, success will depend on whether developers, infrastructure providers, and the broader community can adapt and align with the principles of parallel execution—ushering in a future where technology seamlessly integrates into daily life.
The rise of parallel EVM has the potential to reshape the landscape of decentralized applications and user experiences. By addressing the long-standing scalability and performance bottlenecks that have hindered growth in key verticals like DeFi, it enables complex, high-throughput applications to thrive without compromising on the blockchain trilemma, opening new doors.
Realizing this vision requires more than just infrastructure advances. Developers must fundamentally rethink their application architectures to align with parallel processing principles—minimizing state contention and maximizing performance predictability. Even as we look toward a bright future, we must emphasize that beyond scalability, security must remain a top priority.