
Introducing Bonsol: ZK on Solana, What New Use Cases Will Verifiable Computing Enable?
Verifiable computation (VC) runs specific workloads in a way that generates a proof of their execution, which can be publicly verified without re-running the computation.
Written by: Austbot
Translated by: TechFlow
At Anagram Build, most of our time is spent researching novel cryptographic use cases and applying them to specific products. One of our recent research initiatives has led us into the field of verifiable computation (VC). From this research, our team developed a new open-source system called Bonsol. We chose this research area because verifiable computation enables many powerful applications, and various L1 blockchains are actively working to improve the cost-efficiency and scalability of VC.
In this article, we have two goals:
- First, we want to ensure you gain a better understanding of VC as a concept and the types of products it can enable within the Solana ecosystem.
- Second, we'd like to introduce you to our latest project: Bonsol.
What is Verifiable Computation?
The term "verifiable compute" might not appear in investment memos for bull-market startups, but "zero-knowledge" certainly does. So what do these terms actually mean?
Verifiable computation (VC) refers to running specific workloads in a way that generates a proof of their execution, which can then be publicly verified without re-running the computation. Zero-knowledge (ZK) refers to the ability to prove statements about data or computations without revealing all inputs or internal details. In practice, these terms are often conflated, and “zero-knowledge” is somewhat of a misnomer—it’s more about selectively disclosing only the information necessary to verify a claim. VC is a more accurate overarching term and aligns with the broader goals of many distributed system architectures.
How Can VC Help Us Build Better Crypto Products?
So why would we want to add VC or ZK systems on platforms like Solana or Ethereum? The answer lies largely in developer security. Developers act as intermediaries between users’ trust in a black box and the technical mechanisms that make that trust objectively valid. By leveraging ZK/VC technologies, developers can reduce the attack surface of the products they build. VC shifts the focus of trust onto the proof system and the workload being proven—similar to the trust inversion seen when moving from traditional web2 client-server models to web3 blockchain-based systems. Trust moves from reliance on corporate promises to trust in open-source code and cryptographic networks. From the user’s perspective, there’s no true zero-trust system; everything still appears as a black box.
For example, using a ZK login system reduces developers’ responsibilities around securing databases and infrastructure, since the system only needs to verify certain cryptographic properties have been satisfied. VC technology is being applied in areas where consensus is required, and the only condition for achieving that consensus should be mathematical validity.
While there are already successful real-world implementations of VC and ZK, many depend on ongoing progress across multiple layers of the crypto software stack to become fast and efficient enough for production use.
As part of our work at Anagram, we’ve had the opportunity to speak with numerous crypto founders and developers about how the current state of the crypto software stack impacts product innovation. Historically, these conversations revealed an interesting trend: a growing number of projects are actively moving on-chain product logic off-chain because it’s become too expensive or because they need to implement more complex business logic. As a result, developers find themselves searching for systems and tools to balance the on-chain and off-chain components of increasingly sophisticated products. This is precisely where VC becomes crucial—by enabling untrusted yet verifiable connections between on-chain and off-chain worlds.
How Do Current VC/ZK Systems Work?
Currently, VC and ZK functions are primarily executed on alternative compute layers—such as rollups, sidechains, relays, oracles, or coprocessors—and accessed via callbacks from smart contract runtimes. To support this workflow, many L1 blockchains are introducing shortcuts outside the smart contract runtime (e.g., system calls or precompiles) to perform operations that would otherwise be too costly on-chain.
There are several common patterns in existing VC systems. I'll outline the four I'm aware of. In all but the last, ZK proofs are generated off-chain; the timing and location of verification give each pattern its unique advantages.
Fully On-Chain Verification
For VC and ZK proof systems capable of generating small proofs—like Groth16 or certain Plonk variants—the proof is submitted on-chain and verified using previously deployed code. This approach is now quite common. A good way to experiment with it is using Circom along with a Groth16 verifier on Solana or EVM chains. However, these proof systems tend to be slow. They also usually require learning a new language. For instance, verifying a 256-bit hash in Circom requires manually handling each bit. While libraries exist to abstract away some complexity, under the hood, you're still reimplementing these functions in Circom code. These systems shine when the ZK/VC component of your use case is relatively small, or when you need to prove validity before taking other deterministic actions. Bonsol currently falls into this first category.
Off-Chain Verification
Proofs are submitted on-chain so all parties can observe them, but verification happens later via off-chain computation. This model supports any proof system, but since verification isn’t on-chain, you don’t get the same level of finality for actions dependent on the proof. It works well for systems with a challenge period, allowing participants to “dispute” and attempt to show the proof is invalid.
Verification Network
Proofs are submitted to a verification network, which acts as an oracle calling smart contracts. You gain determinism, but must also trust the verification network.
Synchronous On-Chain Verification
The fourth and final model is quite different: both proof generation and verification occur simultaneously on-chain. Here, an L1 or its smart contract can run a ZK scheme directly over user inputs, enabling execution proofs over private data. There aren't many widespread examples of this yet, and the types of operations supported are generally limited to basic mathematical functions.
Summary
All four models are currently being tested across various blockchain ecosystems. It remains to be seen whether new models will emerge and which will dominate. On Solana, for example, no single model has emerged as a clear winner, and the VC/ZK landscape is still early. Across many chains—including Solana—the most popular approach is the first one: full on-chain verification. This is considered the gold standard, though it comes with trade-offs such as latency and circuit limitations. As we dive deeper into Bonsol, you'll see it follows this first model—but with key differences.
Introducing Bonsol
Bonsol is a new Solana-native VC system built and open-sourced by Anagram. Bonsol allows developers to create verifiable executables involving both private and public data, and integrate the results into Solana smart contracts. Note that this project relies on the popular RISC0 toolchain.
This project was inspired by a recurring question we hear weekly from various teams: “How can I prove something on-chain using private data?” While the specifics vary, the underlying desire is consistent: reduce centralized dependencies.
Before diving into technical details, let’s illustrate Bonsol’s power through two distinct use cases.
Scenario One
A dApp allows users to buy lottery tickets in various token pools. Each day, these pools are “tilted” from a global pool, meaning the amounts (in each token) inside remain hidden. Users can purchase access to increasingly specific ranges of tokens within the pool. But here's the catch: once a user buys a range, it becomes visible to everyone. Then users must decide whether to buy a lottery ticket—they may conclude it’s not worth it, or choose to secure a share in the pool.
Bonsol comes into play when pools are created and when users pay for range access. When a pool is created or tilted, a ZK program receives private inputs indicating the quantity of each token. Token types and the pool address are public inputs. The proof verifies that tokens were randomly selected from the global pool into the current one. The proof also includes commitments to balances. On-chain contracts receive and validate this proof, storing commitments so that when the pool eventually closes and funds are sent to lottery winners, they can verify whether token quantities changed since initial selection.
When a user purchases access to a hidden token balance range (“opening” it), the ZK program takes actual token balances as private input and generates a series of values committed alongside the proof. Public inputs include the prior pool creation proof and its output. This ensures end-to-end system integrity: the previous proof must be validated within the range proof, and token balances must hash to the same value committed in the first proof. The range proof is submitted on-chain, making the range visible to all participants as described earlier.
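The balance commitments in this scenario can be illustrated with a toy salted-hash scheme. This is a simplified sketch, not Bonsol's actual construction (where commitments are produced inside the ZK program); the names and serialization are illustrative assumptions.

```python
import hashlib
import json
import secrets

def commit_balances(balances: dict, salt: bytes) -> str:
    """Hash a salted, canonically-serialized balance map into a commitment."""
    payload = salt + json.dumps(balances, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_commitment(balances: dict, salt: bytes, commitment: str) -> bool:
    """Re-derive the commitment; True only if the balances are unchanged."""
    return commit_balances(balances, salt) == commitment

# Pool creation: balances stay private; only the commitment is stored on-chain.
salt = secrets.token_bytes(32)
pool = {"USDC": 5_000, "BONK": 1_200_000}
onchain_commitment = commit_balances(pool, salt)

# Pool close: the contract can check that quantities never changed.
assert verify_commitment(pool, salt, onchain_commitment)
assert not verify_commitment({"USDC": 4_999, "BONK": 1_200_000}, salt, onchain_commitment)
```

The salt prevents anyone from brute-forcing small balance maps against the public commitment; in the real system, the later range proof additionally proves facts about the committed balances without opening them.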
While there are many ways to implement such a lottery-like system, Bonsol’s properties minimize trust required in the organizer. It also highlights interoperability between Solana and VC systems. Solana programs (smart contracts) play a critical role in establishing trust by verifying proofs and enabling subsequent actions.
Scenario Two
Bonsol allows developers to create toolkits usable by other systems. Bonsol includes the concept of deployment, where developers can create ZK programs and deploy them to Bonsol operators. Currently, Bonsol network operators have basic ways to assess whether executing a ZK program request is economically viable—such as computational requirements, input size, and tips offered by requesters. Developers can deploy toolkits they believe other dApps will find useful.
In configuring a ZK program, developers specify the order and type of required inputs. They can publish an InputSet with some or all inputs pre-filled, enabling users to verify computations over very large datasets more easily.
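A minimal sketch of how an InputSet might merge deployer-pinned inputs with per-request ones. All names here (`ownership_search_v1`, the field names, the placeholder bytes) are hypothetical, not Bonsol's actual schema.

```python
# Hypothetical input spec: the deployer declares the order and type of inputs.
input_spec = [
    {"name": "historical_transfers", "type": "bytes"},  # pre-filled by the deployer
    {"name": "target_wallets", "type": "pubkey_list"},  # supplied per request
]

# Hypothetical InputSet: some inputs are pinned once, for reuse across requests.
input_set = {
    "zk_program": "ownership_search_v1",                 # hypothetical program id
    "prefilled": {"historical_transfers": b"\x00\x01"},  # placeholder dataset bytes
}

def build_execution_inputs(input_set: dict, user_inputs: dict) -> dict:
    """Merge pre-filled and per-request inputs, rejecting missing or extra fields."""
    expected = {f["name"] for f in input_spec}
    merged = {**input_set["prefilled"], **user_inputs}
    if set(merged) != expected:
        raise ValueError(f"inputs must be exactly {sorted(expected)}")
    return merged

inputs = build_execution_inputs(input_set, {"target_wallets": ["Wallet111", "Wallet222"]})
assert set(inputs) == {"historical_transfers", "target_wallets"}
```

The point of the pre-filled half is that a large dataset only has to be published and validated once, while each user supplies just their small, request-specific inputs.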
For example, suppose a developer creates a system that proves an NFT's on-chain ownership transfers included a specific set of wallets. They could provide a preconfigured InputSet containing extensive historical transaction data, and the ZK program would search this dataset for matching owners. This is a contrived example and could be implemented in many ways.
Consider another example: a developer writes a ZK program that verifies a signature came from a key pair or hierarchical key pair without revealing the public keys of the authorized keys. If it proves useful, many dApps will adopt it, and the protocol pays the ZK program author a small usage fee. Performance is critical, so developers are incentivized to optimize for speed so operators will run their programs. Because ZK programs are content-validated, copycats trying to steal another developer's work must modify the program significantly before redeploying it, and any added operation affects performance. While not foolproof, this helps ensure innovators are rewarded.
Bonsol Architecture
These use cases help illustrate Bonsol’s capabilities, but let’s now examine its current architecture, incentive model, and execution flow.

The above image depicts the flow when a user needs to execute verifiable computation, typically initiated by a dApp requiring user action. This takes the form of an execution request, including details about the ZK program to run, inputs or input sets, time constraints, and a tip (used to compensate relayers). Relayers pick up the request and compete to claim ownership and begin proving. Based on their individual capabilities, they may opt out if the tip is insufficient or if the ZK program or inputs are too large. If they proceed, they must submit a claim. The first to successfully claim gains exclusive rights to submit the proof until a deadline. If they fail to produce the proof in time, other nodes can claim it. To claim, relayers must stake collateral—currently hardcoded to half the tip—which is slashed if they fail to generate a correct proof.
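The claim-and-stake lifecycle above can be sketched as a small state machine. This is a simplified model, not Bonsol's on-chain logic; the only detail taken from the source is that collateral is currently hardcoded to half the tip.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionRequest:
    """Toy model of the execution request / claim / proof lifecycle."""
    tip: int
    claim_deadline_slot: int
    claimant: Optional[str] = None
    stake: int = 0

    def claim(self, relayer: str, current_slot: int) -> None:
        # The first successful claimant gets exclusive proving rights until the deadline.
        if self.claimant is not None and current_slot <= self.claim_deadline_slot:
            raise RuntimeError("already claimed")
        self.claimant = relayer
        self.stake = self.tip // 2  # collateral, currently hardcoded to half the tip

    def submit_proof(self, relayer: str, proof_ok: bool) -> int:
        """Return the payout: tip plus stake on success, slashed stake on failure."""
        if relayer != self.claimant:
            raise RuntimeError("only the claimant may submit")
        if proof_ok:
            return self.tip + self.stake
        self.claimant = None   # reopen the request to other relayers
        return -self.stake     # the stake is slashed

req = ExecutionRequest(tip=100, claim_deadline_slot=50)
req.claim("relayer_a", current_slot=10)
assert req.submit_proof("relayer_a", proof_ok=True) == 150
```

A failed proof slashes the stake and reopens the request, so relayers are economically pushed to claim only work they can actually prove within the deadline.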
Bonsol is built on the premise that more computation will shift to layers where it is verified and checked on-chain, and that Solana will soon become the preferred chain for VC and ZK. Solana’s fast transactions, low-cost computation, and growing user base make it an ideal environment for testing these ideas.
Is This Easy to Build? Definitely Not!
Building Bonsol wasn't without challenges. To bring RISC0 proofs to Solana and verify them on-chain, we needed to shrink their size without sacrificing proof security. So we used Circom to wrap the RISC0 STARK (around 200 KB) into a Groth16 proof, which is always just 256 bytes. Fortunately, RISC0 provided preliminary tooling for this, but it introduced significant overhead and dependencies.
As we began building Bonsol and wrapping STARK proofs with SNARKs using existing tools, we sought ways to reduce dependencies and increase speed. Circom allows compiling circuits into C++ or WASM. We first tried compiling Circom circuits into WASM files via LLVM, a fast and portable path toward a lightweight, high-performance Groth16 toolkit. We chose WASM for portability: the C++ output depends on the x86 CPU architecture, making it incompatible with newer MacBooks or ARM-based servers.
However, this path became a dead end on our timeline. Most of our product research experiments are time-boxed, typically 2-4 weeks, to test viability. The LLVM WASM compiler failed to handle the generated WASM code. Despite trying various optimization flags and attempting to run the LLVM compiler as a Wasmer plugin to precompile to native code, we didn't succeed: the Circom circuit spans roughly 1.5 million lines, and the resulting WASM output was enormous.
We then explored creating a bridge between the C++ code and our Rust-based relay codebase. This too failed quickly, as the C++ code included x86-specific assembly we weren't willing to modify. To ship publicly, we ultimately launched a version using the C++ code while removing some dependencies.
Looking ahead, we aim to pursue another optimization path: compiling the C++ code into an execution graph. The compiled C++ components from Circom mainly perform modular arithmetic over finite fields with very large primes. This approach showed promise for smaller components, but scaling it to the full RISC0 system remains challenging: the generated C++ code is around 7 million lines, and the graph generator hits stack limits, while increasing them causes other failures we lacked time to debug. While some approaches didn't yield the expected results, we contributed back to the open-source projects involved and hope these contributions will eventually be merged upstream.
Next came design-related challenges. A key feature of the system is support for private inputs. These inputs must originate somewhere, and due to time constraints we couldn't implement a sophisticated MPC encryption system to securely encapsulate private inputs within the system. To meet demand and unblock developers, we introduced the concept of a private input server, which verifies requesters via payload signatures and serves inputs accordingly. Going forward, we plan to implement an MPC threshold decryption system, allowing relay nodes to collaboratively decrypt private inputs for claimants.
These discussions around private inputs lead to a planned evolution in the Bonsol repository: Bonsolace. This simpler system lets developers prove ZK programs on their own infrastructure: you generate proofs yourself and verify them on the same contract used by the proving network. This suits high-value private data use cases where access must be minimized.
One final aspect of Bonsol we haven't seen elsewhere with RISC0 is our requirement to commit to (hash) input data before it enters the ZK program. We explicitly check on-chain that the prover committed to inputs matching what the user intended and sent to the system. This adds some cost, but prevents provers from cheating by running the ZK program on unintended inputs.
The rest of Bonsol's development followed standard Solana practices, though we intentionally experimented with novel ideas. In our smart contracts, we use FlatBuffers as the sole serialization system, a somewhat unconventional choice we hope to evolve into a cross-platform SDK framework. One last note: Bonsol currently requires precompilation for optimal performance, a feature expected in Solana 1.18. Until then, we're exploring whether the core team is interested in this research direction and looking beyond Bonsol at other technologies.
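The input-commitment check described above can be sketched as follows. This is a toy illustration, not Bonsol's actual hashing scheme; the length-prefixed digest is an assumed construction.

```python
import hashlib

def input_digest(inputs: list) -> str:
    """Length-prefix each input before hashing, so inputs can't be re-split to collide."""
    h = hashlib.sha256()
    for item in inputs:
        h.update(len(item).to_bytes(8, "little"))
        h.update(item)
    return h.hexdigest()

# The user records a digest of the exact inputs they requested.
requested = [b"public-input", b"private-input"]
expected_digest = input_digest(requested)

# On-chain check: the prover's committed digest must match the user's request,
# so the prover cannot run the ZK program on different inputs unnoticed.
def onchain_input_check(prover_digest: str) -> bool:
    return prover_digest == expected_digest

assert onchain_input_check(input_digest([b"public-input", b"private-input"]))
assert not onchain_input_check(input_digest([b"public-inputp", b"rivate-input"]))
```

The length prefix matters: without it, concatenating the inputs would let a dishonest prover shift bytes between adjacent inputs while producing the same digest.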
Conclusion
Beyond Bonsol, the Anagram Build team has explored many corners of the VC space. Projects like Jolt, zkllvm, spartan2, and Binius are on our radar, along with companies working in fully homomorphic encryption (FHE).
Check out the Bonsol repository, and feel free to ask questions about examples you need or ways you’d like to extend it. This is a very early-stage project—you have a chance to make a big impact.
If you’re working on an interesting VC project, apply to the Anagram EIR program.
Join the official TechFlow community to stay up to date:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News