
The race for scaling: OP, ZKRU, and new DA—who will be the king?
In certain cases, L1 and L2 need arbitrary message passing, which is something OP strictly cannot achieve.
Author: IOSG Ventures
Preface
Over the past three years, the Ethereum network has made significant progress in scalability, aiming to enhance its capacity and performance to meet growing demands for transactions and applications. These efforts include Layer-2 solutions such as zkRollups and Optimistic Rollups, along with continuous improvements to network protocols. However, achieving broader adoption requires ongoing innovation and network enhancements...
As Devconnect—one of Ethereum’s annual flagship events—approaches, we observe leading projects within the Ethereum ecosystem becoming more active than ever, preparing technical conferences lasting from half a day to two days, sharing technological advancements and ecosystem developments, or engaging in industry discussions.
But before OFR Istanbul begins, this article revisits the related discussion held during IOSG's Old Friends Reunion (OFR) Singapore event in September. We were honored to invite Celer Network & Brevis co-founder Mo Dong; Matter Labs and zkSync co-founder Alex Gluchowski; Arbitrum co-founder and chief scientist Ed Felten; Scroll co-founder Ye Zhang; Polygon co-founder Mihailo Bjelic; and Celestia COO Nick White for an in-depth roundtable centered on "Scaling Tomorrow: Ethereum's Layer-2 Vanguard". Let's look back at these exciting exchanges of insights!
Introduction
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Hello everyone, I'm Mo Dong, co-founder of Celer. Our company focuses on blockchain interoperability protocols; some of you may have used our platform. Recently, we've also started venturing into the ZK space with Brevis, a ZK coprocessor. Today, on behalf of IOSG Ventures, I'm hosting this exciting panel with experts in the scaling space. I'd like each guest to briefly introduce themselves and their projects.
Alex Gluchowski (Matter Labs & zkSync)
I'm Alex, co-founder of Matter Labs. Our mission is to scale Ethereum through zero-knowledge proofs, extending blockchain access so that anyone in the world can fully participate.
Ed Felten (Arbitrum)
I'm Ed Felten, co-founder and chief scientist at Arbitrum. Arbitrum is a Layer-2 scaling solution based on Optimistic Rollup technology.
Ye Zhang (Scroll)
Hi everyone, I'm Ye Zhang, co-founder of Scroll. Scroll is a general-purpose Ethereum scaling solution built on bytecode-level compatible zkEVM. For developers and users, it feels just like Ethereum—but cheaper, faster, and with higher throughput. Using ZK, we are building the future of crypto and aspire to become Ethereum’s trust layer of the future.
Nick White (Celestia)
I'm Nick, COO of Celestia. We're building a Layer-1 specifically designed for scalability, enabling all teams to build systems with abundant blockspace while ensuring data availability and consensus.
Mihailo Bjelic (Polygon Infrastructure)
Hello everyone. I'm Mihailo, co-founder of Polygon. Polygon is a multi-Layer-2 ecosystem and framework powered by ZK. We're proud to have helped drive Ethereum adoption: over the past two to three years, Polygon has become the go-to platform for nearly every major Web3-native project, as well as Web2 companies, enterprises, and more. At the same time, we've set out to be true technological leaders, pushing the frontier of innovation. I believe we've achieved this, especially with our latest technology, Polygon zkEVM, which launched on Ethereum mainnet in March.
Discussion
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
First question: What do you see as the biggest challenge in scaling? I believe we’ve made tremendous progress in Ethereum scaling over the past three to four years. Looking back, what achievements stand out? How much have we increased scalability compared to Ethereum Layer 1? Have we reached the scalability goals achievable with current blockchain stack designs?
Alex Gluchowski (Matter Labs & zkSync)
I think we're still far from our ultimate goal. If you check dashboards like L2Beat, you'll see that all Layer-2s combined currently offer only about four to five times Ethereum's capacity; together with Ethereum, daily average throughput is around 50 transactions per second. Part of the reason is that transaction costs remain relatively high: while Ethereum's gas fees have somewhat stabilized, rollup transactions still cost anywhere from half a dollar to several dollars, which is too expensive for many use cases. Moreover, if we had to onboard a million new users into a single use case, the system would collapse. All crypto networks are competing for Ethereum's limited blockspace, and EIP-4844 alone isn't a real solution: although it may provide some uplift for Layer-2s, it won't even double Ethereum's total capacity. Therefore, I believe only effectively unlimited data availability solutions can bring the breakthrough. Celestia is moving in this direction, and as an alternative Layer-1 it's very interesting. But Ethereum itself could also evolve into a modular blockchain by integrating different rollups, validiums, and volitions with external data availability solutions. That's where I see the future heading.
Ed Felten (Arbitrum)
I think we’re still far from our goal, but we will eventually get there. At the same time, we should celebrate the fact that Ethereum rollups have already scaled fivefold and significantly reduced costs from earlier levels. If we examine the barriers—the fundamental limitations to scalability—we can still do better. Of course, data availability cost is one such barrier. I’d argue it’s an accidental or even false bottleneck because Ethereum wasn’t originally designed for affordable data availability, though that’s changing. Other data availability layers are emerging, so we’ll reduce data availability costs down to the actual underlying cost of reliably storing data. Once that happens, I believe it will open up new possibilities. But bottlenecks will always exist. We must keep lowering costs, improving efficiency, and finding the simplest, most economical ways to build system components without compromising scalability. So we still have a lot of work ahead.
Ye Zhang (Scroll)
We’re building a zkEVM solution, and I think that’s worth celebrating. Two or three years ago, people couldn’t imagine EVM being verified via ZK—it was purely theoretical. Now it’s real, and people are actively advancing ZK technology, systems, and performance, truly reducing costs. Also, we want to strictly adhere to Ethereum security standards. We must publish our data on-chain. What we can do is push to the upper limits set by Ethereum.
Once we reach that limit, we can free up more resources to help Ethereum build its own data-related solutions. We predict that if we really want to scale to 10,000 TPS, we'll need to tackle state growth and many other fundamental issues. Once those limits are raised, researching and resolving these deeper problems will be our top priorities.
Nick White (Celestia)
I think there are two bottlenecks to scalability. One is execution—this is exactly what rollups solve. I believe we’ve all made huge progress in scaling execution. A few years ago, rollups were just research concepts; today, all these teams have implemented them. It’s incredible. The other bottleneck is data availability. I don’t think this one has been fully solved yet.
Celestia’s launch will mark the first Layer-1 to implement data availability sampling. This is a core technology that enables scalable data availability within Layer-1 blockspace. Hopefully, we’ll no longer be constrained by high gas fees. We can eliminate the problem Alex mentioned—adding a million users causing tenfold or higher fees in certain applications.
Even if many people feel current demand for blockspace isn’t that high, I believe no one wants to build on a system knowing that widespread adoption would cause it to collapse. So I’m excited to try solving this, and we hope to contribute to resolving the data availability bottleneck.
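To make the data availability sampling idea Nick describes concrete, here is a minimal, purely illustrative Monte Carlo sketch in Python. It is not Celestia's actual protocol (which uses 2D Reed-Solomon extension and namespaced Merkle trees); it only shows why sampling a handful of random chunks of an erasure-coded block lets a light client catch data withholding with high probability.

```python
import random

def withholding_detection_probability(total_chunks: int,
                                      withheld_chunks: int,
                                      samples: int,
                                      trials: int = 100_000) -> float:
    """Estimate the chance that a light client sampling `samples` random chunks
    of an erasure-coded block hits at least one withheld chunk (and so detects
    the withholding attack)."""
    chunk_ids = list(range(total_chunks))
    withheld = set(range(withheld_chunks))  # pretend the first chunks are withheld
    detected = 0
    for _ in range(trials):
        picks = random.sample(chunk_ids, samples)
        if any(p in withheld for p in picks):
            detected += 1
    return detected / trials

if __name__ == "__main__":
    # With 2D erasure coding, withholding even a modest fraction of the extended
    # block makes it unrecoverable, yet a few random samples expose the attack
    # with overwhelming probability, which is what lets light nodes check
    # availability without downloading the whole block.
    p = withholding_detection_probability(total_chunks=256, withheld_chunks=64, samples=16)
    print(f"Detection probability with 16 samples: {p:.4f}")
```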
Mihailo Bjelic (Polygon Infrastructure)
Regarding our current state, there are two aspects. First is adoption. No matter how much we improve technically, adoption follows its own cycle. When next-generation technologies emerge, new use cases and waves of applications follow—like broadband internet enabling entirely new applications and expanding user bases. Then, when mobile internet combined with GPS and smartphones emerged, new use cases followed. So I think we still have a long way to go in adoption: you’ll see multiple iterations of infrastructure and technology improvements, followed by adoption and new use cases.
More interestingly, on the technical side, I see three key elements. I largely agree with what others here have said, but perhaps I can add a few thoughts.
The first is execution. This has always been a major challenge. I believe with zkEVM, we’ve truly expanded Ethereum’s blockspace, exponentially increasing its capacity without introducing almost any additional trust assumptions—as long as the cryptography remains sound. Thus, we gain cryptographic security, ensuring the expanded Ethereum blockspace is valid. In my view, launching Polygon zkEVM is something to celebrate—we believe this part of the problem is essentially solved.
Second is data availability (DA), as Ed mentioned. I personally believe Ethereum’s DA capability cannot support massive or global adoption—this might be controversial. I don’t think any single DA solution can achieve that. Nevertheless, we now have solutions like Celestia, other DA approaches, and native DA committees. Essentially, every Layer-2 has its own preferences and requirements regarding DA security. So at least in my view, the DA piece is solved because we have excellent teams working on external DA solutions, and Ethereum is continuously improving its DA throughput. I believe we’re on the right path.
Personally, the third component is interoperability. Clearly, if Ethereum sees mass adoption in the future, we’ll have numerous Layer-2s built atop it. We want these Layer-2s to interoperate seamlessly. We envision Layer-2s forming an interconnected network. There’s still much more to do. Our Polygon 2.0 interoperability framework is an exciting attempt, enabling ZK-powered interoperability between Layer-2s.
Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Excellent insights! Everyone has touched on data availability. While the first wave of Ethereum Layer-2 solutions were mostly full-stack designs, recently there’s been much discussion in the community about modular blockchains. Nick’s team pioneered the concept of data availability as a generalized solution applicable to many different rollups and potentially other blockchains. But there are also interesting debates around shared sequencers, ZK rollups, and more.
Nick, let me ask you first: when creating Celestia, how did you envision the early future of modularity? You currently provide a DA layer for many blockchains. As a Layer-1, have you considered expanding into other modular components of the blockchain stack?
Nick White (Celestia)
Celestia will remain focused solely on the DA layer. In my view, the value of modularity lies in building blockchain infrastructure that empowers powerful applications. Instead of trying to solve everything within a single protocol or framework, we can break it down into subproblems. Teams can design protocols targeting individual problems, allowing parallel development, then recombining solutions. This offers more experimentation paths and gives end developers greater choice. Interestingly, initially we envisioned maybe three or four layers: data availability, consensus, execution, and settlement.
Beyond those, you could use a protocol to provide decentralized services: shared sequencing, proof markets, or decentralized proving networks. As new needs emerge for building blockchain infrastructure in a decentralized, scalable way, we'll learn more, and new modular layers may appear to meet those needs. So I believe the evolution of modular stacks into new verticals isn't over yet. I'm very excited about the coming years.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Exactly. Recently I’ve heard some criticisms of modular blockchains because, in certain cases, they introduce subtle trade-offs—for example, decentralized sequencers. How do you manage decentralized sequencers in terms of scalability, security, and added value? I’ve seen many people share thoughts on proving infrastructure and ZK rollups lately—anyone care to comment?
Ye Zhang (Scroll)
I strongly agree that to achieve internet-like scale, we'll definitely need alternative DA solutions. But I'd say modularity is a highly idealized model; you still need to solve many real-world challenges. For instance, RPC providers and oracles all need to be in place and operating stably. You must consider third parties and how to integrate with an external DA layer. When you're not running a chain yourself, you might think: "Oh, this is execution, this is proving, this is verification." But in reality, many other tasks are needed, such as monitoring systems, to truly ensure a rollup's security. We operate a chain, so we know how difficult integrating with other services can be. This is a potential issue. We need time to test all these DA solutions and understand how they fit into ecosystems and what their impact is.
Also, I’m concerned that immature standards might be aggressively pushed to market. Because from our perspective, if you lack ZK proofs and launch something directly, certain design choices may make it harder later to add new proving systems or components to your chain, making a truly modular system impractical. But if you actually want Arbitrum, Scroll, or other protocols to align with such a standard, you’d need major changes. Perhaps the design inherently requires a proof—which is very hard—and you’d need to upgrade the entire stack. Especially with a large network, think how hard Ethereum upgrades are due to backward compatibility.
So I see this as a risk point in the modular blockchain direction. Overall, I think the time may come when the community agrees on certain standards and specific domains adopt them. But until then, everyone should first build a complete end-to-end system and validate it in the real world before discussing next steps—shared sequencers, shared provers, etc. I believe these can solve some problems, but many issues remain unresolved.
That's why we're still in the research phase. People are discussing using shared sequencers for ordering, perhaps ten different ordering methods, with interoperability among chains ordered by different sequencers. But the problem is: suppose 100 rollups are ordered by the same shared sequencer; that sequencer set would have to order all 100 chains, which demands heavy resources and pushes toward centralization. High operational requirements for sequencers may increase centralization in your system, and it restricts design flexibility because you must delegate power to a shared sequencer group. Even with a shared sequencer, atomic composability across chains isn't guaranteed.
Many problems remain unsolved, and different paths carry different economic incentives. Having a set of validators makes design more complex. I believe it’s valuable for long-tail, small rollups lacking the capacity to decentralize their own sequencers—they might join a well-intentioned shared sequencer. But I feel it’s premature to build a full system—let’s at least wait until Layer-2s meet certain criteria.
Because I don't want a future where there are ten thousand different Layer-2 permutations: this execution layer, that DA layer, a multisig, instant upgrades. Growing Layer-2 adoption is good, but it has a downside too, because everyone claims to be a Layer-2 without meeting the standards the label should imply.
These are some of my concerns, though I remain optimistic—I just think we need to invest more effort in this area.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
So you're basically cautiously optimistic: you'd prefer to see a complete end-to-end working system first before modularizing further. Ed, anything to add?
Ed Felten (Arbitrum)
Let me share some experience here—over a year ago, we modified Arbitrum to support multiple data availability services. Over the past year, we’ve launched two Arbitrum mainnet chains—Arbitrum One and Arbitrum Nova—with different DA mechanisms. We’ve gained practical experience. One insight is that data availability can be pluggable, and we now have a solid approach. But I also think the implementation details are subtler than expected—you need to integrate proofs of data availability (possibly from other chains like a DA chain) into the main proving mechanism and ensure no gaps exist between required proving mechanisms.
This means other DA chains must satisfy certain invariants and possess specific properties needed to generate end-to-end proofs. These details can be subtle. I believe this is feasible in cases like DA, but I don’t think every component in a secure Layer-2 design can be pluggable. DA can be pluggable, sequencing can be pluggable—but that’s even harder than DA. In this field, we should proceed cautiously—focus on areas like data availability where value is greatest and security trade-offs are clearest.
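As a rough illustration of what "integrating a DA attestation into the proving mechanism" can look like, here is a hypothetical quorum-certificate check, loosely inspired by committee-based DA designs such as Arbitrum Nova's AnyTrust but not its actual code or interfaces; the names and the stand-in "signature" scheme are invented for this sketch.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class DACert:
    """Hypothetical data availability certificate: committee members attest
    that they hold the batch data behind `data_hash`."""
    data_hash: bytes
    signatures: dict  # member id -> attestation bytes over data_hash

def attest(member_secret: bytes, data_hash: bytes) -> bytes:
    # Stand-in for a real signature scheme (BLS/ECDSA); illustration only.
    return sha256(member_secret + data_hash).digest()

def verify_cert(cert: DACert, committee: dict, quorum: int) -> bool:
    """Accept a batch only if a quorum of known committee members attested
    to exactly the data hash the rollup commits to on its settlement layer."""
    valid = sum(
        1 for member, sig in cert.signatures.items()
        if member in committee and sig == attest(committee[member], cert.data_hash)
    )
    return valid >= quorum

# Usage sketch: the fraud/validity proving machinery would treat
# verify_cert(...) == True as the invariant "this data was available".
committee = {"alice": b"a-secret", "bob": b"b-secret", "carol": b"c-secret"}
data_hash = sha256(b"batch bytes").digest()
cert = DACert(data_hash, {m: attest(s, data_hash) for m, s in committee.items()})
assert verify_cert(cert, committee, quorum=2)
```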
Alex Gluchowski (Matter Labs & zkSync)
I’d like to add a different perspective. On one hand, we should be cautious—some decisions are irreversible and hard to maintain. On the other hand, we should dare to experiment and push boundaries instead of staying static. Let’s follow Ethereum’s lead and reflect on our role as Layer-2 innovators. Then, let’s do things differently—we are doing so. Arbitrum leads in EVM and plugin languages, while we explore alternative DA solutions—we’re at a critical stage. We’re researching sequencer decentralization, a core part of our work. The essence of our mission is decentralization to scale blockchains.
I differ on the definition of Layer-2: it must inherit the most critical features of Layer-1—security, decentralization, and others. But you must ensure the most crucial parts, like permanence of all transactions—not just desired user transactions, but every single one—are preserved. You must ensure Ethereum enforces transaction validity. Even if you claim the same security as Ethereum, data availability must be enforced by Ethereum or your chosen Layer-1. From this view, we’re exploring decentralized consensus and other scaling schemes—validium, volition, using Rust as a pluggable execution layer, execution modules, etc.
Nick White (Celestia)
I’d quickly add that I appreciate Alex’s courage—Celestia believes in, and hopes to drive, experimentation within the modular ecosystem. The more experiments, the faster innovation and learning. Standards are similar—creating a standard carries risks: it might get adopted, then fail, requiring fixes. But look at Ethereum standards—they emerged organically, like ERC-20 or ERC-721. The community found what worked. I think we in the modular community have similar work ahead. It may take time, but I’m optimistic about its long-term impact.
Ed Felten (Arbitrum)
I acknowledge bold experimentation is important in the right context, but those of us running chains know: when using investors’ money, courage has limits.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
In this discussion, everyone has made significant contributions—some launching highly successful projects, others rising stars. This dialogue deserves celebration. It’s also a great opportunity for the audience to learn from past experiences. Next, I’d like to ask each guest a few individual questions. Starting with you, Ed.
Arbitrum is currently the strongest and most promising rollup in terms of TVL and adoption, pushing Ethereum scalability to high technical standards. From early on, you launched full fraud proofs, which was vital for your business growth.
So I wonder, did your team ever discuss launching earlier without fraud proofs, or with partial fraud proofs? Why did you choose your current path? Based on your experience, has this high technical standard helped your ecosystem’s development?
Ed Felten (Arbitrum)
We never considered launching without fraud proofs. From the start, we understood an obvious truth: if you ultimately want security, it’s easiest to build it from day one. The longer you operate without security, the harder it becomes to rebuild it. Hence, fraud proofs and interactive fraud proofs were our first idea. They kicked off the entire project and formed the foundation of our innovation.
As you can see: three years ago, we provided viable fraud proofs on testnet; two years ago, on mainnet. Everything we've done builds on fraud proofs, including products like Stylus, which lets developers write programs in popular languages that are fully interoperable and composable with EVM contracts.
Without battle-tested fraud proofs as a base, we could never have built such systems, because the longer you operate without these features, the harder it is to reintroduce them. Other parts of the system also need subtle properties in order to remain fraud-provable.
Thus, we had to consider these properties early and ensure we didn’t violate invariants—I believe this was key to successfully building fraud proofs. When we realized other system components needed certain attributes to be fraud-provable, we addressed them early. Internally, we have a checklist of invariants we must not violate if we want the system to be fraud-provable. Some of these properties are subtle.
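For readers unfamiliar with interactive fraud proofs, the following toy sketch shows the bisection idea Ed refers to. It is not Arbitrum's actual dispute protocol, which bisects over low-level instruction traces; it only illustrates how two disputing parties narrow their disagreement down to a single step that the Layer-1 can cheaply re-execute.

```python
def bisection_dispute(asserter_trace, challenger_trace, check_step):
    """Toy interactive fraud-proof dispute.

    Both parties agree on the starting state (index 0) but disagree on the
    final state. Bisect over the disputed range until a single step remains,
    then re-execute only that step (`check_step`) to decide who was honest.
    """
    lo, hi = 0, len(asserter_trace) - 1          # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if asserter_trace[mid] == challenger_trace[mid]:
            lo = mid                             # disagreement lies in the right half
        else:
            hi = mid                             # disagreement lies in the left half
    # The dispute is now about a single step lo -> hi; an L1 contract would
    # only need to execute this one step on-chain.
    honest_next = check_step(asserter_trace[lo])
    return "asserter" if honest_next == asserter_trace[hi] else "challenger"

# Usage with a toy state machine where each step just increments the state.
step = lambda s: s + 1
honest_trace = [0, 1, 2, 3, 4]
cheating_trace = [0, 1, 2, 7, 8]                 # diverges at step 3
assert bisection_dispute(cheating_trace, honest_trace, step) == "challenger"
```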
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Yes. Now Mihailo, Polygon has achieved tremendous success. People often say the Ethereum-alignment strategy helped Polygon grow, but recently some have questioned whether Polygon has drifted from the Ethereum community. Can you walk us through Polygon's explosive growth phase and share some stories about that strategy?
Mihailo Bjelic (Polygon Infrastructure)
People now recognize through our actions and achievements that we’re proving our Ethereum alignment. We’ve invested heavily in ZK and building zkEVM—arguably the most advanced technology and scaling solution in the Ethereum ecosystem today. I believe we’re demonstrating our Ethereum alignment more clearly than ever. Recently, we proposed upgrading our PoS chain—a sidechain architecture—to a Validium powered by zkEVM. I think this is another strong signal and proof of our Ethereum alignment. Personally, people now understand and believe more than ever that we are genuinely Ethereum-aligned.
Ethereum alignment has served us well. Its inspiration comes from two sources—two components. One emotional, or irrational, because I’ve been deeply involved in Ethereum since 2017. It was Ethereum that brought me into this industry. Ethereum’s vision—the potential to create all these interesting use cases and truly build a global value network—impressed me deeply.
As I said, just as we created the internet, which profoundly changed the world, Ethereum proposes building a global value network. To me, this is at least the missing piece of the internet. And this global value network could change the world—even more profoundly than the internet. That idea is impressive, and the Ethereum community’s sincerity about it continues to inspire me and our team. So there’s an emotional component, plus a practical one.
If you think practically, leveraging everything the Ethereum community has achieved, including the EVM as the standard execution environment, the tooling, the users, and the capital, is extremely sensible. So that part is pragmatic. The Ethereum community welcomed us, though in the early days you largely had to take our alignment on faith because of our sidechain-like architecture, and people could reasonably doubt it. But as I said, I think we've since proven it to some extent, or at least demonstrated it so far.
Looking back, it wasn’t always easy. Now, the Ethereum ecosystem is very lucky—Layer-2 is currently the hottest topic in the industry. VCs want to invest in Layer-2 startups; everyone is excited about Layer-2.
For example, in 2020 it wasn't like this. Layer-1s and the so-called "Ethereum killers" were hot, right? That's where capital flowed. Those projects got market premiums, while we stayed tied to Ethereum and its market cap. Many VCs approached us repeatedly asking: why don't you go your own way? Why stay Ethereum-aligned? We heard comments like that many times, that we were sacrificing our project, our market cap, and so on.
So there were such challenges and comments, and skepticism from the Ethereum community too. On one hand, people asked: why aren’t you a Layer-1? Why not separate from Ethereum? On the other hand, some in the Ethereum community doubted our sincerity and commitment to Ethereum. Despite these difficulties at times, we persevered. This has undoubtedly been very beneficial for us.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Indeed, I think given your contributions to the Ethereum community and ecosystem, this should be unquestionable.
Alex, zkSync was one of the first projects to launch ZK Rollups on mainnet. While maintaining extreme developer experience compatibility with Ethereum, you’ve added exciting features like native account abstraction. How is that progressing? What’s the purpose behind adding extra features atop the existing Ethereum stack? Can you share?
Alex Gluchowski (Matter Labs & zkSync)
All design decisions we make at the core team for zkSync are driven by the desired end state. We have a clear vision, articulated in our zkCredo manifesto. Long-term, we aim for that vision, then reverse-engineer what technical architecture decisions or choices are needed.
One key vision is a user experience smoother and better than Web2. This means native account abstraction remains one of the true unlockers for mainstream adoption. You can’t expect most users to understand seed phrases, gas, or needing special tokens to pay gas. This is where we must innovate boldly. We collaborated with the Ethereum Foundation on EIP-4337. But Ethereum must be conservative—it can’t change its core protocol just for seamless UX.
So we decided to build it in natively from the start. Being the only chain with certain unique features has also brought challenges: some apps struggle with multichain support, while others thrive precisely because they run only on zkSync. We have many fascinating game apps leveraging native account abstraction. This has been a great learning opportunity, helping us think about future features, some of which I've mentioned here and others that are still in design. All of these features serve the ultimate Ethereum vision: ultra-scalability, unlimited chains, unlimited transactions and data availability, and extreme ease of use, like carrying a Swiss bank in your pocket, while preserving privacy and enabling computation in various languages.
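To illustrate why native account abstraction matters for UX, here is a heavily simplified, hypothetical sketch of an account-plus-paymaster flow. All names are invented and no real cryptography is used; it mirrors the general shape of account-abstraction designs (native AA, EIP-4337-style flows), not zkSync's actual interfaces. The account decides what counts as valid authorization (for example, social-recovery guardians), and a paymaster can cover gas so users don't need ETH or seed phrases.

```python
from dataclasses import dataclass

@dataclass
class UserOp:
    """Hypothetical, simplified user operation."""
    sender: str
    calldata: bytes
    fee_token: str          # users may pay fees in a token other than ETH
    max_fee: int
    signature: bytes = b""

class SocialRecoveryAccount:
    """Toy account contract: validation logic is programmable, so an account
    can accept a guardian quorum instead of a single seed-phrase key."""
    def __init__(self, owner_key: bytes, guardians: set):
        self.owner_key, self.guardians = owner_key, guardians

    def validate(self, op: UserOp) -> bool:
        # Real chains verify a cryptographic signature; we just compare bytes.
        return op.signature == self.owner_key or op.signature in self.guardians

class SponsoringPaymaster:
    """Toy paymaster: pays gas in ETH and charges the user in `fee_token`."""
    def __init__(self, accepted_tokens: set):
        self.accepted_tokens = accepted_tokens

    def sponsor(self, op: UserOp) -> bool:
        return op.fee_token in self.accepted_tokens and op.max_fee > 0

def execute(op: UserOp, account: SocialRecoveryAccount, paymaster: SponsoringPaymaster):
    if not account.validate(op):
        raise PermissionError("account rejected the operation")
    if not paymaster.sponsor(op):
        raise PermissionError("no one is willing to pay gas for this op")
    return f"executed {len(op.calldata)} bytes of calldata from {op.sender}"

op = UserOp(sender="0xabc", calldata=b"\x01\x02", fee_token="USDC", max_fee=10, signature=b"guardian-1")
acct = SocialRecoveryAccount(owner_key=b"owner", guardians={b"guardian-1", b"guardian-2"})
print(execute(op, acct, SponsoringPaymaster(accepted_tokens={"USDC"})))
```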
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
So Ye Zhang, before launching Scroll, several ZK rollups already existed. Why did you decide to launch Scroll? What gap did you see in the space that motivated your contribution?
Ye Zhang (Scroll)
Yes, this goes back two or three years. In that respect, our background is probably closest to Arbitrum's. For the five years before that, I did academic research on zero-knowledge proofs, even before getting into blockchain. I liked ZK more than blockchain itself because I enjoy polynomials and all these cryptographic algorithms; they're fun: how to compress a complex circuit into a tiny polynomial, evaluate it, and use probabilistically checkable proofs.
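As a toy illustration of that "compress a computation into a polynomial and spot-check it at a random point" intuition (real zkEVM proof systems involve commitments, interactive oracle proofs, and much more), consider the following sketch:

```python
import random

# Instead of comparing two huge computations term by term, commit to them as
# polynomials and evaluate both at one random point. By the Schwartz-Zippel
# lemma, two distinct degree-d polynomials over a large field agree at a random
# point with probability at most d / field_size, so one evaluation is an
# overwhelming check.
P = 2**61 - 1  # a large prime field (illustrative choice)

def evaluate(coeffs, x):
    """Horner evaluation of a polynomial with the given coefficients, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

claimed = [3, 0, 5, 7]           # polynomial the prover claims to have computed
actual  = [3, 0, 5, 7]           # polynomial derived from the real computation
r = random.randrange(P)          # verifier's random challenge
assert evaluate(claimed, r) == evaluate(actual, r)   # accept with one evaluation
```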
At that time, the biggest bottleneck in practical ZK usage was lengthy proving times. I researched how to accelerate this process with hardware, along with theoretical improvements, and we eventually found solutions. Based on this, we published academic papers on using custom hardware and GPUs to boost prover speed by roughly 100x, alongside other cryptographic improvements. Combined with new proving systems, overall proving performance improved by roughly 1000x; that was around two years ago. Then we realized that whenever your technology improves by three orders of magnitude, new applications emerge.
That's exactly why people have started talking more about ZK recently: the technology has made huge progress. We realized we could build something truly ambitious, even though we were starting late and the tech wasn't fully ready. That's also why different pioneers made different design trade-offs; some chose ZK-friendly designs for performance gains. But in fact, even with a fully bytecode-compatible product, you can still achieve solid performance, with proof generation in minutes. I think we were fortunate to stand at this point of technological improvement: we saw it coming, and we had worked toward it for years. That's why we started building.
Additionally, we truly adhere to Ethereum's values and principles, which has accelerated our growth. Our openness and community-driven ethos help us move fast. It's hard to say which zkEVM performs best since everyone develops their own stack, but we're confident we develop the most openly, being the only zkEVM built fully open-source from scratch. You can see every GitHub commit from the earliest stages, and you can follow how we co-developed with the Ethereum Foundation's zkEVM team and other community members in the most community-driven way. Because when people talk about community, it's not just app developers; it's the codebase too, like how many external contributors contribute to our repos.
This enables faster development as we build a community-driven project. We do face issues—sometimes projects iterate faster. But it boosts efficiency because different teams focus on this codebase. We host many workshops and outreach events. People build circuits we can reuse. It’s truly a joint effort. Even though we started later, we’ve grown fast and earned recognition and respect from other projects.
I believe sticking to these principles and values, staying community-driven and credibly neutral, is truly important. We might be the Layer-2 with the least marketing noise: whenever I ask people about Scroll, the feedback seems positive. People notice we do little marketing, even when we may be only days from launching something. We'll be launching soon, but with minimal hype, just focusing on substance and staying research-driven. I think that's who we are. Ethereum began with nerds and geeks who just wanted to build something cool, much like Bitcoin before it. We follow that path, staying tech-focused even in this market. I believe this earns more respect from genuine builders, and it will carry us forward smoothly in the future.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
I think we're running a bit over time. Let's spend five more minutes hearing about your upcoming plans: a reflection on the past and a look toward the future. Nick, you've articulated the entire concept and narrative of modular blockchains, from the DA layer onward, expanding in multiple directions. So let's start with the past: when you first tried to create this space and push it into existence, did you encounter difficulties? And what new plans do you have around launching the Celestia Layer-1 and incorporating different rollup solutions in the future?
Nick White (Celestia)
Many people hold this vision and work toward it. Even before the term “modular” was widely used, many rollup teams had already begun. “Modular” is more of an emergent term—Celestia started using it to describe this new way of building blockchains. But clearly, from the beginning, it was a community effort. I believe the power of “modular” lies in it being a meme—a word expressing this profound new paradigm and mindset shift. And memes are powerful because they give people a way to grasp something. I think the problem before using “modular” was that topics like blockchain architecture, data availability, and rollups were complex and poorly understood by most. But once people grasp the term, it becomes their entry point to learning and understanding what’s being built.
So I believe calling this new model “modular blockchains” acts as an unlocker—it spreads the idea and attracts more talent into the field. The coolest thing is many teams here evolved within the modular space. Before, it wasn’t even called modular. Now it is—now nearly 50 different teams are building various pieces, contributing to the overall vision. I believe blockchains are fundamentally social, so having a shared vision is crucial—it draws more people in and accelerates actual tech construction. Without shared vision and goals, we’d likely fragment. But I think the idea of modular blockchains unites us.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Ed, regarding future outlook—I recall your 2021 article “Optimistic Rollup As the Present and Future of Ethereum Scaling.” But over two years have passed. ZK Rollups have certainly advanced, as has Arbitrum, right? I noticed you recently launched Stylus with fraud proofs, so you’re extending smart contract programmability into a new era. Could you share your thoughts on this?
Ed Felten (Arbitrum)
I think two things are correct. First, we're one of the few teams in this space whose name doesn't contain its proving technology, which fits our pragmatic philosophy. We adopt whatever technology best solves user and developer problems. Right now, optimistic proving is cheaper. Yes, ZK teams have done tremendous work narrowing the cost gap, but a gap remains. Optimistic is still the simplest and most flexible approach; for example, we could build Stylus in a fully composable, interoperable way because our system's core uses popular programming languages.
This means we can use popular testing tools and guarantees, and access more security auditors. Building the system core with standard tools rather than custom ones offers huge advantages in many ways. So I still believe Optimistic makes sense and will for the foreseeable future. Still, if our core technical leaders decide ZK makes more sense, we’ll switch. We’re not using Optimistic for ideological reasons—we use it for pragmatic ones. We simply believe it better solves user and developer problems. If we switched to ZK tomorrow, I don’t think users would notice—just fees might be slightly higher.
Alex Gluchowski (Matter Labs & zkSync)
I'd like to comment on that. I'd paraphrase what you said earlier about fraud proofs and apply it to ZK: the longer you wait to switch to validity proofs, the harder it gets, so you'd better switch early because of the subtle differences. If you don't build them in from the start, you might later replace the system, and things might be fine, just slightly more expensive, but you won't gain all the superpowers that come from integrating ZK early.
Ed Felten (Arbitrum)
The issue is, I actually don't believe any of the claimed superpowers show up in user experience. The main thing ZK advocates talk about is lower cost than before; yes, that's true, but it's still more expensive than Optimistic. Others talk about finality and withdrawal time, but that doesn't affect user experience: users rely on fast bridge services, so Arbitrum users don't feel the seven-day delay; it's not part of their experience. They use exchange on- and off-ramps and fast bridges. So frankly, weighed against the cost, simplicity, and agility benefits of continuing with Optimistic, we don't believe ZK offers any advantage for our users or developers today. That's our view. I know others here may disagree, and that's normal. Let the market decide which approach works best.
Ye Zhang (Scroll)
I can give a direct example of something ZK can do that OP cannot. As you mentioned, when I want to withdraw funds, if there's a liquidity provider or a third-party bridge, I can get my money quickly; that's why ten minutes versus seven days doesn't matter for simple transfers. But in some cases, Layer-1 and Layer-2 need arbitrary message passing, which OP strictly cannot do, because you can't find liquidity providers offering customized arbitrary message passing.
This aspect is crucial and impossible for OP. Another thing OP struggles with: imagine a wallet or critical account that relies on social recovery and stores key shares on certain chains. Due to synchronization constraints, it can't simply replicate them across every chain; it needs its data settled quickly and confirmed within minutes. With a seven-day delay, that data can't be reliably read, so to read it reliably you need a ZK rollup with fast finality.
Finally, something OP finds hard: imagine two ZK rollups that each root their state to the same settlement layer. Each can trustlessly read the other's state root, and then read the other's state from that root, enabling interoperability. This is possible with ZK; there are many things ZK can do that OP cannot.
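To make that cross-rollup example concrete, here is a minimal sketch using a toy binary Merkle tree (not any rollup's actual state trie or bridge contract): once rollup B's state root has been settled on the shared layer under a validity proof, rollup A can verify a Merkle proof of any of B's accounts against that root.

```python
from hashlib import sha256

def h(*parts: bytes) -> bytes:
    return sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree (odd levels duplicate the last node)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path proving that leaves[index] is included under the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], index % 2))   # (sibling hash, am-I-right-child?)
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling, node) if is_right else h(node, sibling)
    return node == root

# Rollup B's state root is already settled (with a validity proof) on the
# shared layer, so rollup A can trust any account it can Merkle-prove under it.
accounts_on_b = [b"alice:100", b"bob:42", b"carol:7", b"dave:0"]
root_on_l1 = merkle_root(accounts_on_b)            # what B posted and proved
proof = merkle_proof(accounts_on_b, 1)             # prove bob's balance
assert verify(root_on_l1, b"bob:42", proof)
```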
Ed Felten (Arbitrum)
I’d say two things. First, on settlement time—please watch our research forum. I’ll reiterate: if we believe ZK offers net benefits, we’ll switch. But we’re not there yet. That day may come, but probably not soon. We can respectfully disagree on this point.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Yes, real adoption will tell us which solution combinations and trade-offs are correct. Perhaps for different use cases, there isn’t just one right solution.
Nick White (Celestia)
I'd say I'm not part of this debate. But I think hybrid systems might combine the strengths of both: for example, a chain that is optimistic by default but, when challenged, must produce a ZK proof; or one where, if you need fast finality, you can ZK-prove and verify the block, and otherwise accept a longer settlement time. So you don't need to choose one extreme; you can have a middle-ground hybrid model.
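A toy sketch of that hybrid idea (hypothetical decision logic, not any shipping design) might look like this: prove with ZK only when a batch is challenged or fast finality is requested, and otherwise stay optimistic.

```python
def settle_batch(batch, challenged: bool, fast_finality_requested: bool,
                 verify_zk_proof, challenge_period_days: int = 7) -> str:
    """Decision logic for a hypothetical hybrid rollup: optimistic by default,
    falling back to (or opting into) a ZK validity proof when needed."""
    if fast_finality_requested or challenged:
        # Either a user paid for instant finality or a challenger disputed the
        # batch: settle it now by verifying a validity proof on L1.
        return "finalized now" if verify_zk_proof(batch) else "batch rejected"
    # Happy path: no proof generated, just wait out the challenge window.
    return f"finalized after {challenge_period_days} days"

assert settle_batch("batch-1", challenged=False, fast_finality_requested=False,
                    verify_zk_proof=lambda b: True) == "finalized after 7 days"
assert settle_batch("batch-2", challenged=True, fast_finality_requested=False,
                    verify_zk_proof=lambda b: True) == "finalized now"
```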
Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Before we wrap up, let’s take one more minute. Polygon has many initiatives underway—what are your priorities?
Mihailo Bjelic (Polygon Infrastructure)
In our recently announced Polygon 2.0, I’d say priorities are consolidated. When we started Polygon, we decided to be true technology leaders. We’ve achieved much in tech and adoption. There’s a famous saying: “Let a thousand flowers bloom.” That’s exactly what we’ve done at Polygon. We formed multiple ZK teams, pursued diverse ZK approaches, conducted extensive customer development, even experimented with data availability.
On one hand, we truly wanted to explore every meaningful approach to deliver secure blockchain infrastructure aimed at mass adoption. We wanted to explore everything meaningful. The cost was community confusion—people asked: why so many ZK methods? What are you doing? How will it integrate? Now we’re at the convergence phase. It all converges into Polygon 2.0.
So, Polygon 2.0 is a turning point—consolidating all our past efforts into a single technical stack. We’re highly confident and excited about this ZK-driven multi-layer toolkit framework. We believe this is the right approach—the best way to build globally scalable, secure blockchain infrastructure. It all comes down to Polygon 2.0. This is the top priority, the guiding North Star.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Yes. I think everyone is also eager for Scroll’s upcoming launch. So Ye Zhang, can you share something?
Ye Zhang (Scroll)
Yes, our mainnet is about to launch. From the Goerli pre-alpha to beta testing, we've spent a year doing everything possible to ensure real security. We've spent millions on audits with top firms like Trail of Bits and OpenZeppelin. Throughout this summer we've taken no shortcuts on audits or anything else, and we'll do everything we can before launch. We've replayed nearly all transactions from other networks, such as Arbitrum and BNB Chain, to verify everything, and we have a security team dedicated to preventing attacks from malicious transactions. I think we've done enough; the codebase has been frozen ahead of the mainnet launch. But real battle-testing takes longer, so we're being very cautious. Please stay tuned.
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
Due to time constraints, let’s quickly hear from Alex and Nick—what exciting things are coming up?
Alex Gluchowski (Matter Labs & zkSync)
We’ll soon launch several features showcasing ZK’s superpowers and convincing users to switch to ZK.
Nick White (Celestia)
We’ll launch mainnet in a few months—stay tuned!
Moderator - Mo Dong (IOSG Ventures Partner & Co-founder @ Celer)
This has been an amazing discussion. I wish we had more time—thank you all for such a delightful conversation!














