
Discussion between Scroll and Cysic: Exploring Prover Networks and zk Hardware Acceleration Together
TechFlow Selection
Introduction
This is the third episode of the Decentralized Rollup Interview Series. This episode explores rollup decentralization from the perspective of "decentralized proving networks and hardware acceleration." We are joined by Ye Zhang, Co-Founder of Scroll, and Leo Fan, Co-Founder of Cysic, to discuss community-focused topics such as ZK circuits, what exactly hardware acceleration speeds up, what an open decentralized prover network looks like, and how the zk proof-generation mining market differs from Bitcoin’s PoW mechanism. Both Ye and Leo also share insights into their teams’ research and future plans regarding prover networks and zk hardware acceleration.
Guest Introductions
Ye
I'm Ye Zhang, Co-Founder of Scroll. My focus is ZK research, specifically ZK hardware acceleration: exploring how hardware can speed up the proving process. I also work on the cryptographic and mathematical algorithms underlying these systems—the magical mathematics behind the scenes. Recently I've been focused on zkEVM, building an EVM-compatible zk-Rollup on top of a zkEVM, so my work has become more application-oriented. I'm also involved in related protocol research.
Leo
I'm Leo Fan, Co-Founder of Cysic. I earned my PhD at Cornell in cryptography. Like Ye, my background is in algorithm research. My earlier work focused on post-quantum cryptography, but over the past few years I've shifted toward ZK-related algorithm development. At Cysic, we aim to accelerate the zero-knowledge proof generation process using hardware, removing this bottleneck for broader adoption.
Interview Transcript
Understanding Zero-Knowledge Proofs
Explaining Zero-Knowledge Proofs
Ye
Let me start with some basic concepts and explain what zero-knowledge proofs are. A zero-knowledge proof involves two parties: a Prover and a Verifier. The Prover can convince the Verifier that a certain statement is true without revealing any secret information they know. Let me give concrete examples to clarify what this statement might be and what kind of information could remain hidden.
Imagine a classroom where the teacher assigns a math problem—for example, solving an equation. Student A solves it, but student B does not. A wants to boast to B that they solved it. One direct way would be to tell B the answer, but then B would learn the solution too. This defeats the purpose of proving knowledge without revealing the actual content. This scenario exemplifies where zero-knowledge proofs come in: A can prove to B that they know the solution without disclosing what it is.
A more blockchain-relevant example involves hash functions. Suppose the output of a hash function is 0. Party A can prove to party B that they know an input (a pre-image) which hashes to 0, without revealing the input itself. Previously, finding such a pre-image required massive computational power under Proof of Work. With zero-knowledge proofs, A can demonstrate possession of this pre-image while keeping its value secret.
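To make the hash example concrete, here is a minimal sketch in plain Python (illustrative only, and using an arbitrary target value rather than literally zero): the public statement names only the hash output, while the pre-image stays on the prover's side as the private witness.

```python
import hashlib

# Public statement: "I know an input x whose SHA-256 hash equals TARGET."
TARGET = hashlib.sha256(b"my secret pre-image").hexdigest()

def relation_holds(x: bytes, target: str) -> bool:
    # This is the relation a ZK proof would attest to without revealing x.
    return hashlib.sha256(x).hexdigest() == target

# The prover's private witness. Revealing it would prove knowledge, but a
# zero-knowledge proof instead convinces the verifier that such an x exists
# while the verifier only ever sees TARGET and the proof.
witness = b"my secret pre-image"
assert relation_holds(witness, TARGET)
```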
Even more relevant to blockchains, zero-knowledge proofs enhance privacy. While blockchain transparency offers benefits, it raises serious privacy concerns. Every transaction must be broadcast publicly, including details like sender, receiver, and amount—all visible to anyone. This lack of anonymity means once you send someone money, their entire transaction history becomes traceable. Zero-knowledge proofs allow attaching a proof to a transaction showing it's valid, without revealing its contents.
Beyond privacy, applications like zk-Rollups leverage zero-knowledge proofs to improve scalability. They batch thousands of transactions and generate a single succinct proof of their validity.
That’s the core idea behind zero-knowledge proofs and some of their key applications.
What is the zk circuit used in zk-Rollups?
Ye
What exactly do we mean by a "zero-knowledge proof circuit"? It relates to how we actually use zero-knowledge proofs. As introduced earlier, ZKPs let us prove something is true without exposing secrets. But how do we actually generate such a proof for a given program? That leads into the technical computation process of ZK.
In the previous examples—solving equations or computing hashes—we're essentially proving that a specific input to a function produces a particular output. Effectively, we’re verifying that a whole program ran correctly, but instead of rerunning it, we produce a compact cryptographic proof.
Normally, we write programs in high-level languages like C++. Similarly, for ZK, if we want to generate a proof about a program, we first need to encode it in a special ZK language—not C++, but one tailored for zero-knowledge systems.
This language is highly mathematical, somewhat akin to assembly, allowing only basic operations like addition, multiplication, and simple gate logic. Your original program must be expressed in this format. Once your program is written in this specialized ZK circuit language, you apply cryptographic algorithms to generate the proof.
In short, generating a proof for a program requires rewriting it in ZK-friendly form—this encoding method is known as a ZK circuit.
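As a toy illustration of that encoding step (our own example, in the spirit of how ZK circuit languages work; real systems express these gates as constraints over a finite field), a small statement such as "I know x with x^3 + x + 5 = 35" gets flattened into a chain of addition and multiplication gates over intermediate wires:

```python
# Illustrative flattening of a tiny program into addition/multiplication gates.
# Each line below is one gate over named wires, which is the level of
# granularity a ZK circuit language works at.

def circuit_satisfied(x: int, out: int) -> bool:
    t1 = x * x        # multiplication gate: t1 = x * x
    t2 = t1 * x       # multiplication gate: t2 = t1 * x  (= x**3)
    t3 = t2 + x       # addition gate:       t3 = t2 + x
    t4 = t3 + 5       # addition gate:       t4 = t3 + 5
    return t4 == out  # final constraint: t4 must equal the public output

assert circuit_satisfied(3, 35)  # witness x = 3 satisfies the statement
```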
Is there a connection between zk circuits and hardware acceleration?
Ye
That’s a great question because people often associate “circuit” with physical chips or electronic circuits. However, ZK circuits are algebraic circuits—they represent programs algebraically. You can think of them as sets of mathematical statements like A × B = C. They are fundamentally different from physical circuit boards.
There are similarities though. When designing a ZK circuit, you build it up from fundamental logical gates—limited to additions, multiplications, and basic constructs—with their own layout structure. So conceptually, there are parallels, but the actual computational processes differ significantly.
One powerful aspect of ZK circuits is their ability to verify mathematical relationships like A × B = C. In contrast, physical circuits take inputs, run them through silicon, and produce outputs.
For instance, to compute A ÷ B = C in a physical circuit, you’d need to design a dedicated divider module: input A and B, perform division, and get C. But in a ZK circuit, you assume the values A, B, and C (called witnesses) are provided, and simply prove that B × C = A. This illustrates a key difference between ZK circuits and physical circuits.
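A minimal sketch of that contrast, with hypothetical names: the ZK circuit never performs the division itself; the prover supplies C in the witness, and the circuit merely checks the multiplication.

```python
# Hypothetical sketch of the difference described above.

# Physical-circuit style: actually carry out the division.
def divide(a: float, b: float) -> float:
    return a / b

# ZK-circuit style: the prover supplies c as part of the witness, and the
# circuit only checks the multiplicative relation b * c == a (one gate).
def division_constraint_holds(a: int, b: int, c: int) -> bool:
    return b * c == a

assert divide(15, 3) == 5.0
assert division_constraint_holds(a=15, b=3, c=5)
```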
What does hardware acceleration actually speed up?
Ye
When people talk about hardware acceleration, they usually aren't referring to accelerating the creation of the ZK circuit itself. The ZK circuit is just another representation of your original program—for example, writing a hash function in ZK language. It remains a piece of code, merely encoded differently.
What we actually execute is a cryptographic algorithm—running the ZK algorithm using the circuit as input. This algorithm takes a long time—sometimes hours or even days—performing intensive operations involving elliptic curves and polynomial computations. It transforms the circuit into a final proof. This stage is computationally heavy and time-consuming, hence requiring accelerators to speed up proof generation—not circuit writing.
Writing the circuit is more like pre-processing—it prepares the program into circuit form. Only after obtaining the ZK circuit can we begin generating the proof. The actual acceleration applies to the proof-generation phase: assuming you already have the ZK circuit, you now need to create the proof. This step may require ASICs or GPUs to efficiently compute the proof.
In summary, acceleration targets the proof-generation process after the ZK circuit has been created.
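For intuition about what those elliptic-curve and polynomial computations are: in many curve-based proof systems, the bulk of proving time goes into multi-scalar multiplications (MSM) and number-theoretic transforms (NTT/FFT), and those are the kernels accelerators attack. The toy sketch below (plain integers, purely illustrative) shows the shape of each operation.

```python
# Toy stand-ins (ordinary integers, not real elliptic-curve or finite-field
# arithmetic) for the two kernels that dominate proving time in many
# curve-based proof systems and that hardware acceleration targets.

def toy_msm(scalars, points):
    """Multi-scalar multiplication: sum of s_i * P_i. In a real prover the
    points are elliptic-curve points and there can be millions of terms,
    which is why this step is so expensive."""
    return sum(s * p for s, p in zip(scalars, points))

def toy_ntt(coeffs, xs):
    """Stand-in for an NTT/FFT: evaluate a polynomial at many points.
    Real provers do this with FFT-style algorithms over finite fields."""
    return [sum(c * x**i for i, c in enumerate(coeffs)) for x in xs]

print(toy_msm([2, 3, 5], [10, 20, 30]))  # 2*10 + 3*20 + 5*30 = 230
print(toy_ntt([5, 0, 1], [0, 1, 2]))     # 5 + x**2 at x = 0, 1, 2 -> [5, 6, 9]
```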
Leo
I think Ye summarized it very well. The ZK circuit is essentially a different way of expressing a mathematical abstraction that the ZK system can understand. This abstract model is then passed to a backend system to generate the corresponding proof. Hardware acceleration comes into play during this backend proof-generation phase. Thus, ZK circuits and hardware acceleration are distinct concepts—though hardware acceleration improves overall efficiency of ZK proof generation.
Ye
I forgot to add one point earlier: Can the same hardware accelerator handle different ZK circuits? Yes, absolutely. Different ZK circuits—whether representing different hash functions or other programs—are simply different encodings. The circuits vary, but the proof-generation algorithm remains the same.
As Leo mentioned, the acceleration happens after the circuit is formed. The proof-generation algorithm is deterministic—a fixed cryptographic procedure applied regardless of the circuit. Therefore, when you accelerate this process with an ASIC, you're speeding up the same algorithm across all circuits. Different circuits are merely different inputs to the same accelerated process. Hence, a single accelerator can serve multiple types of ZK circuits.
Centralization vs. Decentralization in Prover Networks
What role does the Prover play in zk-Rollups?
Ye
Let me briefly explain the problem zk-Rollups solve: Ethereum’s scalability challenge. Ethereum is a peer-to-peer network designed to be highly decentralized. Each transaction must propagate to every node, and each node executes it independently. Greater decentralization reduces efficiency—thousands of nodes performing identical computations make the network expensive and limit throughput.
The core idea of zk-Rollups is moving batches of transactions off-chain. Instead of processing each transaction individually on Layer 1, we aggregate thousands of transactions and generate a single ZK proof attesting to their correctness. Rather than submitting all transactions to Ethereum, we submit only the small proof and minimal data. Ethereum nodes only need to verify this compact proof to confirm the validity of thousands of transactions.
Effectively, Ethereum no longer computes thousands of transactions—only verifies a tiny proof. This dramatically increases efficiency. Imagine Ethereum handling only ten transactions per second, each being a proof verification, where each proof represents 10,000 actual transactions. Scalability improves roughly 1,000-fold. Of course, reality involves nuances, but this captures the essence: bundle transactions, generate a proof, publish it, and enable lightweight verification.
So, what role does the Prover play? **First, zk-Rollups require a block proposer to accept and order those 10,000 transactions into a block; second, a Prover node must generate the cryptographic proof attesting to the block's validity.** The Prover ensures the integrity of each zk-Rollup block by creating its proof.
This Prover can be centralized or decentralized. Currently, most implementations are relatively centralized, relying heavily on GPU clusters. Generating proofs is a deterministic process: given a block, run the algorithm and compute the proof. It’s a predictable workflow achievable via distributed systems or computing clusters.
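Putting the two roles side by side, here is a minimal sketch of one block's life cycle (hypothetical names and interfaces, not any specific protocol's API; the proposer and prover may or may not be the same party depending on the design).

```python
# Hypothetical end-to-end sketch of one zk-Rollup block's life cycle.

def propose_block(mempool, max_txs=10_000):
    """Block proposer: select and order pending transactions into an L2 block."""
    return mempool[:max_txs]

def generate_proof(block, prover_backend):
    """Prover: the deterministic, compute-heavy step that hardware accelerates.
    Produces one succinct proof attesting that every transaction is valid."""
    return prover_backend.prove(block)

def settle_on_l1(proof, new_state_root, l1_verifier):
    """Layer 1 only verifies the small proof instead of re-executing the block."""
    return l1_verifier.verify(proof, new_state_root)
```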
Introducing Scroll’s Approach
Ye
Scroll aims to build a zk-Rollup based on zkEVM. What is zkEVM? Earlier, I mentioned generating proofs for batches of transactions. How do we create such proofs? We must express the logic to be proven using ZK language. zkEVM means rewriting the entire Ethereum Virtual Machine (EVM) in ZK language. Historically, early zk-Rollups could only generate proofs for specific use cases—like DEX swaps or token transfers—so developers only needed to encode those limited logics.
Our goal is a general-purpose network where Solidity developers don’t need to learn ZK. We’ve built a ZK version of the EVM. For developers, the experience is identical to Ethereum—they interact with the EVM—but behind the scenes, we translate EVM execution into ZK circuits and prove every EVM transaction is valid.
Simply put, Scroll is a network with higher throughput, faster speeds, and lower costs, while maintaining Ethereum-level security. Its magic lies in accepting transactions and generating proofs—instead of broadcasting and achieving consensus via traditional node communication. The Prover plays a critical role in validating each block’s correctness.
Why pursue a decentralized prover network? Benefits include reliability, incentives, and improved proof-generation efficiency
Ye
An excellent question: Why go through the trouble of decentralizing the prover market? After all, we could generate proofs ourselves—and currently, everyone does. So why build such a network? There are several compelling reasons:
First, it enhances network reliability. The core principle of zk-Rollups is ensuring users retain control—even if Scroll or the Layer 2 shuts down, stops running nodes, or halts transaction processing. Users should still be able to generate proofs independently and withdraw funds from Layer 1. If provers are centralized and fail due to downtime or errors, the network’s robustness collapses. With many decentralized backup provers, proof generation continues uninterrupted.
Second, unlike Proof of Work, ZK proof generation is deterministic—there’s no randomness. Speed is everything. But we aim to create a positive feedback loop: faster proofs lead to shorter withdrawal times on Layer 1 and quicker finality. We want to continuously reduce proof generation time.
Only by opening a public prover market can we attract innovators. Companies like Cysic or ASIC builders will develop better ZK provers. As more players invest in superior hardware, our platform improves collectively. Without an open market—relying solely on internal development—we risk stagnation. We’d constantly need to hire top talent to optimize designs, reduce costs, and shorten finality delays.
By opening the prover market, we incentivize continuous innovation. This creates a virtuous cycle: faster proofs today enable even greater performance tomorrow. Future advancements—like ASICs—could unlock radical new possibilities. We envision exponential improvements: tenfold gains leading to another tenfold leap, opening unforeseen opportunities.
Leo
I agree with Ye’s assessment. ZK is currently one of the hottest topics in blockchain, with diverse applications driving strong demand for ZK provers. From day one, Cysic has embraced open design—periodically sharing data to engage the community in co-developing better ZK hardware. Our goal is to eliminate performance bottlenecks for ZK projects.
Though the market is still nascent, we’re highly optimistic about its potential. This belief is precisely why we committed significant resources early on—starting with FPGAs and progressing toward ASICs.
Scroll’s Current Plans and Progress on Decentralized Provers
Ye
Our current priority is launching our mainnet version—building out the zkEVM and zk-Rollup infrastructure and ensuring a stable, operational mainnet.
Internally, our team is developing high-performance GPU-based solutions, which we plan to open-source so anyone can run them. We’ve already published two academic papers—one on ASIC acceleration for ZK—and collaborate with institutions like Huazhong University of Science and Technology, Tsinghua University, and others. Open-source GPU architecture papers are available for public review. Our optimization focus is maximizing GPU acceleration for zkEVM. To promote decentralization, we specifically target making our prover algorithms run efficiently on affordable GPUs. Early adopters used GTX 1080s; later came 2080s and newer models. Our goal is enabling older, cheaper GPUs like the 1080 to participate effectively—lowering entry barriers and broadening participation.
Regarding specific prover network plans, here are some high-level design principles: We don’t want the fastest prover to always win. Because proof generation is deterministic, if one entity uses ASICs or exotic hardware to dominate, they’ll monopolize all rewards. This creates fragility—if that top prover exits, the ecosystem suffers severe disruption. We prefer redundancy. Our design allows reasonable proof submission within a time window—anyone meeting the deadline qualifies. Using ASICs or advanced hardware reduces energy and operational costs, offering economic advantages. Over time, we’ll gradually shorten the window as total network capacity grows. This reflects our philosophy.
Concrete timelines are still being finalized. First, ensure network stability, then incrementally integrate decentralized entities before fully opening the system. However, our GPU implementation will eventually be accessible to all, delivering strong performance backed by comprehensive open documentation.
One additional note: We uphold three core values—neutrality, openness, and community-driven development. I believe decentralized provers embody both openness and community orientation.
Ethereum’s strength lies in its vast community. Likewise, we aim to cultivate our own vibrant ecosystem. Entities like Cysic contribute hardware that becomes part of our ecosystem. We actively host workshops and publish tutorials to help others understand our ZK tooling stack. This fosters more projects—even 100 next-gen ZK ventures—using shared tools. They benefit from our prover network and hardware advances. It’s not just about building for ourselves; we’re constructing a broad framework and network for the entire community. Multiple zk applications using the same toolset could eventually leverage identical ASICs or GPUs. This is our long-term vision, though exact milestones are still under discussion.
Understanding zk Proof Hardware Acceleration
Why accelerate zk proof generation?
Leo
Our motivation for founding Cysic stemmed from my prior work at Algorand, where I developed Algorand State Proofs—essentially zk proofs used in cross-chain bridges. After completing the proof-of-concept around March–April last year, I realized proof generation took far too long—several minutes. Despite extensive software and algorithmic optimizations, I couldn’t overcome this bottleneck. That’s when I turned to hardware acceleration. Algorand aimed for proof generation within ~30 seconds; when conventional methods failed, hardware became the only viable path to boost efficiency. This mirrors historical precedents in cryptography—like RSA encryption, which was initially slow on general-purpose computers. The inventors of RSA created custom hardware to accelerate it. The same principle applies to ZKP: when generic hardware proves insufficiently fast, dedicated hardware drives progress forward.
Main hardware approaches and Cysic’s roadmap
Leo
Cysic drew significant inspiration from a paper authored by Ye. Main hardware options include CPU, GPU, FPGA, and ASIC. CPUs are general-purpose but typically slow unless using many cores—e.g., 192-core systems—which are inaccessible to most individuals.
GPUs and FPGAs offer faster market entry. Two key metrics govern hardware evaluation: performance per dollar and performance per watt.
Performance per dollar measures throughput relative to the initial capital expenditure for acquiring the hardware. Here, GPUs outperform FPGAs at equivalent cost. Due to inherent hardware limitations, FPGAs cannot match GPU throughput. Thus, GPUs are an excellent choice for early market adoption, aligning with Ye's in-house design strategy.
However, Cysic’s ultimate goal is ASIC development—creating a universal ZKP accelerator. **Before mass-producing ASICs, extensive testing and prototyping on FPGAs are essential. This is our current focus.** While a single FPGA may underperform a GPU, connecting multiple FPGAs in parallel can surpass GPU capabilities. FPGA platforms also allow iterative experimentation to inform future ASIC designs.
Performance per watt measures throughput relative to the electricity consumed during operation. On this metric, FPGAs and GPUs are comparable. However, ASICs surpass both, especially at scale, once manufacturing volumes justify NRE (non-recurring engineering) costs. This outlines the current hardware technology landscape.
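Both metrics are simple ratios; the sketch below just makes them explicit (all numbers are placeholders for illustration).

```python
# Placeholder numbers purely to illustrate the two metrics; real figures
# depend on the proof system, the workload, and the specific devices.

def perf_per_dollar(throughput, purchase_price_usd):
    """Performance bought per unit of upfront hardware cost (capex)."""
    return throughput / purchase_price_usd

def perf_per_watt(throughput, power_draw_watts):
    """Performance per unit of operating power, i.e. electricity cost (opex)."""
    return throughput / power_draw_watts

# Example: a device doing 100 proofs/hour, bought for $1,500, drawing 300 W.
print(perf_per_dollar(100, 1_500), perf_per_watt(100, 300))
```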
How will Scroll approach hardware acceleration?
Ye
As Leo noted, we’ve conducted extensive research on ASICs and GPUs. Ultimately, however, our in-house and open-sourced solution will be GPU-based.
Designing ASICs or FPGAs demands specialized expertise: selecting chip models, managing fabrication, navigating suppliers and supply chains—tasks typical of hardware firms. Software companies struggle with these complexities. They require R&D investment and significant upfront costs.
GPUs are different. Our software engineers write code, and anyone with a GPU can run it. This aligns with our philosophy: reuse existing infrastructure, potentially leveraging retired Ethereum mining GPUs to collectively generate proofs. Considering accessibility, device availability, and timeline (FPGA/ASIC development takes much longer), we chose GPUs. Their performance is already sufficient, making GPU our short-term choice.
That said, we warmly welcome ASIC and FPGA integrations. Several groups are exploring the GPU path, like Supranational; ASIC efforts include Cysic and Accseal; FPGA players include Ulvetanna and Ingonyama. All are part of our ecosystem. We provide support, answer benchmarking questions, and maintain openness. Ultimately, the field evolves through community and industry collaboration.
Our outlook aligns closely with Leo’s: FPGAs suit transitional phases. Single FPGAs rarely beat GPUs, but interconnected FPGA arrays can surpass them. Yet FPGAs are costly. ASICs promise similar performance to multi-FPGA setups but require tens of millions in NRE and lengthy timelines—likely mid-next-year delivery. Hence, pragmatically, we start with GPUs, expecting the community and various companies to progressively deliver faster solutions, eventually converging on ASIC-accelerated networks.
Future Outlook: Prover Networks and Hardware Acceleration Markets
Supply and demand in the ZKP hardware acceleration market
Leo
Cysic aims to establish a ZK Prover DAO, integrating diverse hardware into a unified network. This DAO will partner with numerous ZK projects, forming the demand side. On the supply side, provers will contribute computational power. Cysic has already engaged over 20 former mining farms possessing infrastructure suitable for proof generation—but lacking R&D expertise. Initially, Cysic will provide these farms with our hardware, enabling them to join the DAO and serve the broader ZK community.
This jump-starts the ZK Prover DAO: instead of building decentralized provers from scratch, Cysic leverages existing facilities. Naturally, the DAO won’t rely solely on ASICs—many GPUs will participate too. This aligns with community ideals of decentralization.
Currently, Cysic collaborates primarily with scaling solutions, privacy-focused Layer 1 chains, and cross-chain bridge protocols. We also work with ZK indexers like Axiom and HyperOracle, which use ZKPs to improve indexing efficiency. Additionally, we engage with emerging ZKML (ZK + machine learning) projects, though this area remains early-stage. These represent Cysic’s primary blockchain partners.
How does this zk proof-generation "mining" market differ from Bitcoin’s PoW?
Leo
It depends on our partners. Take Scroll: they embrace a decentralized design philosophy—not rewarding only the fastest prover. Instead, they allow proof submissions within a time window, giving multiple participants a chance to earn rewards. This is a brilliant approach to decentralization.
This market won’t mirror Bitcoin or Ethereum mining, dominated by a single player—e.g., Bitmain controlling over half the hash rate. Instead, it will be decentralized. Participants benefit collectively from the growth of ZK projects. Provers earn solid profits without centralization. No single entity will monopolize the ecosystem.
Ye
To me, the biggest difference between PoW and ZK proof generation returns to what we discussed: ZK algorithms are deterministic. We can enforce time windows—proofs are valid upon completion. Ideally, we’d have 10,000 provers simultaneously working on useful tasks—generating proofs for 10,000 blocks in parallel, distributing costs and increasing throughput. Everyone contributes meaningfully. In contrast, PoW resembles 10,000 miners racing—one wins, the rest waste effort. This is a stark contrast: our energy and costs may be less than 0.01% of Ethereum’s.
Hardware requirements also differ fundamentally. PoW computes meaningless hashes, leading to mining rigs optimized for brute-force GPU hashing—minimal CPUs, maximum GPUs. ZK involves real computation. This creates a major divergence in hardware needs. We require powerful CPUs with large memory capacities—different machine specifications altogether. PoW rigs simply connect GPUs to a network and hash in parallel. ZK provers need capable CPUs coordinating with GPUs—forming an integrated processing unit.
Thus, machine selection differs significantly: you need a strong CPU to manage these computations, while traditional mining rigs rely almost entirely on GPUs. In summary: determinism vs. randomness; minimal waste vs. massive waste; useful computation vs. useless work. As mentioned, zk places higher demands on CPUs, though some networks like Filecoin already have notable CPU requirements.
Leo
Let me add some quantitative context. As Ye said, PoW chips are tiny: Bitcoin mining rigs contain hundreds of small ASIC chips, and a rig's total power consumption is on the order of 3–4 kW.
In contrast, ZK chips are much larger, but fewer fit per machine. The computation involves complex interactions between chip logic and memory. We estimate total machine power draw at around 400–500 W, comparable to a single high-end GPU.
How does hardware acceleration promote prover network decentralization?
Ye
Better hardware suppliers producing advanced ASICs will lower entry costs into the network. Combined with open-sourcing our algorithms, individuals with home GPUs or similar equipment can participate. Hardware acceleration reduces both cost and power consumption, lowering the barrier to entry for becoming a prover. More participants enhance network throughput, stability, reliability, and resilience.
Leo
I’d add that rapid proof generation enabled by hardware acceleration opens doors to novel applications. For example, proving correctness of multi-layer neural networks might take hours with standard methods—rendering it impractical. With hardware acceleration, this could shrink to minutes or seconds, unlocking previously unfeasible use cases.