
Interview with Ben Fielding, Co-founder of Gensyn: How the a16z-Backed Decentralized Compute Protocol Democratizes AI
Gensyn represents a dual-natured solution: it is both an open-source protocol for software connectivity and a financial mechanism for resource compensation.
Interview: Sunny and Min, TechFlow
Guest: Ben Fielding, Gensyn Co-Founder
Our goal is not to monopolize the entire machine learning ecosystem, but to establish Gensyn as a protocol that optimizes computing resource utilization—sitting just above electricity—to dramatically enhance humanity's ability to effectively use computational resources.
-- Ben Fielding, Gensyn Co-Founder
In January 2024, OpenAI CEO Sam Altman stated that the two most important “currencies” in the future will be compute and energy.
However, as a currency of power in the AI era, compute is often monopolized by large corporations—especially in the field of AGI models. Where there is monopoly, there is also resistance. Decentralized Artificial Intelligence (Decentralized AI) has thus emerged.
“Blockchain’s permissionless nature can create a market for buyers and sellers of compute (or any other type of digital resource such as data or algorithms), enabling global transactions without intermediaries,” noted renowned venture firm a16z in an article outlining a blockchain path to AI compute. The project they were describing was Gensyn.
Gensyn is a decentralized deep learning compute protocol that aims to become the foundational layer for machine learning computation. It allocates tasks and distributes rewards via smart contracts, accelerating AI model training while reducing its cost.
Gensyn connects developers (anyone capable of training machine learning models) with solvers (anyone willing to put their own hardware to work training those models). By leveraging idle, long-tail computing devices around the world with machine learning capabilities—such as small data centers or personal gaming PCs—it aims to increase available machine learning compute capacity by 10–100 times.
In summary, Gensyn’s core mission is to democratize AI through blockchain-enabled coordination.
In June 2023, Gensyn announced it had raised $43 million in Series A funding led by a16z, with participation from CoinFund, Canonical Crypto, Protocol Labs, and Eden Block.
Gensyn was founded in 2020 by Ben Fielding and Harry Grieve, seasoned professionals in computer science and machine learning research. Harry Grieve studied at Brown University and the University of Aberdeen and is a data scientist and entrepreneur. Ben Fielding graduated from Northumbria University, previously co-founded the SaaS platform Fair Custodian, and served as Director of Research Analytics.
TechFlow interviewed Gensyn co-founder Ben Fielding to explore his journey into crypto-AI and uncover Gensyn’s AI arsenal.

From the Founder’s Perspective: Gensyn’s Value Proposition
TechFlow: What inspired you to found Gensyn?
Ben:
My original background was in academia, where I worked as a machine learning researcher focusing on Neural Architecture Search (NAS). This field involves optimizing the structure of deep neural networks, particularly for computer vision applications.
My work involved developing algorithms to evolve neural network architectures in a population-based manner. This process included simultaneously training numerous candidate model architectures and gradually evolving them into a single meta-model optimized for specific tasks.
During this time, I faced significant challenges related to computational resources. As a PhD student, I had access to a few high-performance GPUs housed in large workstations under my desk—hardware I managed to acquire myself.
Meanwhile, companies like Google were conducting similar research using thousands of GPUs and TPUs in data centers, running continuously for weeks. This disparity made me realize that despite having all necessary resources except sufficient compute power, many others worldwide face similar limitations, which hinders the pace of research and societal progress. I felt frustrated by this situation—and that frustration ultimately led us to create Gensyn.

Before fully committing to Gensyn, I spent two years co-founding a data privacy startup focused on managing consumer data flows and consent-based user data access, aiming to improve how individuals and businesses interact with data.
That experience taught me valuable lessons about common entrepreneurial pitfalls and reinforced my cautious approach toward personal data flows and consent-based access.
Four years ago, I shut down my startup and joined Entrepreneur First, an accelerator in London, where I met my co-founder Harry Grieve. It was there that we launched Gensyn to tackle the global challenge of computational resource access. Our initial strategy involved distributing compute tasks across private data silos within a single organization (federated learning), which was fascinating. We quickly realized the broader potential of scaling this approach globally. To realize this expanded vision, we had to solve fundamental trust issues related to the compute sources themselves.
Since then, Gensyn has been working on ensuring the accuracy of machine learning tasks processed on devices through a combination of proofs, game-theoretic incentives, and probabilistic checking. While the details are quite technical, Gensyn is committed to building a system that allows anyone, anywhere, to train machine learning models using any computing device.
TechFlow: Sam Altman says he needs $7 trillion to build AI chip factories to address the global chip shortage. Is his plan realistic in terms of scaling chip supply? Meanwhile, what AI problems is Gensyn solving differently from Altman’s approach?
Ben:
Gensyn is addressing challenges in the AI space that are similar to those Sam Altman faces. Essentially, there are two ways to solve the compute access problem. Machine learning is becoming increasingly ubiquitous and may eventually be integrated into every piece of technology we use, transitioning from imperative code to probabilistic models. These models require massive amounts of compute power. When you compare compute demand against the world’s semiconductor manufacturing capacity, you see a significant gap—demand is skyrocketing, while chip production grows only incrementally.
The solution lies in either (1) manufacturing more chips to meet demand or (2) improving the efficiency of existing chip usage.
Ultimately, both strategies are necessary to meet the growing demand for computational resources.
I believe Altman is directly confronting this issue. The problem lies in the chip supply chain—an extremely complex system. Certain parts of this chain are especially challenging, with only a few companies capable of managing these complexities. Currently, many governments are treating this as a geopolitical issue, investing in reshoring semiconductor manufacturing and addressing bottlenecks in the supply chain.
In my view, Altman’s $7 trillion figure serves as a market test to gauge how seriously global financial markets take this issue. The fact that this staggering number hasn’t been outright rejected is telling. It forces people to reconsider: “This sounds absurd—but could it actually be true?”
This reaction indicates genuine concern and a willingness to allocate substantial capital to solve the problem. By setting such a high benchmark, Altman has effectively created a reference point for any future chip production effort. This strategic move sets a precedent for large-scale investment in the field—even if actual costs fall short of $7 trillion—demonstrating a strong commitment to overcoming semiconductor manufacturing challenges.
Gensyn’s approach differs—we aim to optimize the use of existing chips worldwide. Many of these chips—from gaming GPUs to MacBooks equipped with M1, M2, and M3 chips—are underutilized.
These devices are fully capable of supporting AI processing without requiring new, specialized AI processors. However, harnessing these existing resources requires a protocol to integrate them into a unified network—similar to how TCP/IP enables internet communication.
Such a protocol would allow these devices to be used on-demand for computational tasks.
The key difference between our protocol and traditional open protocols like TCP/IP is financial. While the latter are purely technical solutions, using hardware resources involves inherent costs—electricity and the physical cost of the hardware itself.
To address this, our protocol incorporates cryptocurrency and decentralization principles to build a value-coordination network that incentivizes hardware contributors.
Thus, Gensyn represents a dual-natured solution: it is both an open-source software protocol enabling connectivity and a financial mechanism for compensating resource providers.
Moreover, the machine learning market faces challenges beyond just compute resources.
Other factors such as data access and knowledge sharing also play critical roles. Through decentralized technologies, we can enable fair attribution of value across these components, fostering a more integrated and efficient ecosystem. Therefore, Gensyn does not operate in isolation; we address part of a broader challenge, while other solutions must tackle the remaining pieces. This collaborative effort is essential for advancing the field of machine learning.
Defining Gensyn’s Dual-Natured Solution
TechFlow: Can you explain Gensyn’s dual solution in the simplest possible terms?
Ben:
Simply put, Gensyn is a peer-to-peer network built atop open-source software. To participate, your device just needs to run this software and be capable of performing machine learning training tasks. The network consists of many such nodes—each a device running the same software—that communicate directly, sharing information about available hardware and pending tasks. The benefit is that no central server is required; your device can interact directly with others, eliminating reliance on centralized infrastructure.
A key feature of Gensyn is its decentralized communication—there is no central authority. For example, if you're using a MacBook, it directly connects and communicates with other MacBooks, exchanging information about hardware capabilities and available tasks.
One of Gensyn’s main challenges is verifying off-chain non-deterministic computations that are too large for blockchains.
Our solution introduces a verification mechanism allowing devices to generate verifiable proofs of computation. These proofs can be checked by other devices to ensure work integrity, without revealing which parts of the task might be inspected—thus preventing devices from completing only the portions likely to be verified.
Our system incentivizes devices to act as both solvers and verifiers, confirming validity through cryptographic proofs or selective re-execution of tasks. Essentially, Gensyn aims to enable interoperability among nodes, mutual verification of work, and consensus on completed tasks. Payments for tasks are executed within this framework, leveraging blockchain’s trust mechanisms. This technical ecosystem mirrors Ethereum’s functionality, focusing on node-to-node validation to ensure task integrity.
Our primary goal is to achieve consensus on task completion with minimal computational overhead, ensuring system integrity while accommodating large-scale machine learning workloads.
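To make the verification idea concrete, here is a minimal, hypothetical Python sketch of commit-then-spot-check verification in general—an illustration of the mechanism, not Gensyn's actual proof system. A solver commits to a hash of every intermediate result; a verifier re-executes a randomly chosen subset of steps, kept secret until after the commitment, and compares hashes:

```python
import hashlib
import random

def run_step(state: int, step: int) -> int:
    """Stand-in for one deterministic training operation."""
    return (state * 31 + step) % (2**61 - 1)

def solve(num_steps: int) -> tuple[int, list[str]]:
    """Solver: run every step, committing to a hash of each intermediate state."""
    state, commitments = 1, []
    for step in range(num_steps):
        state = run_step(state, step)
        commitments.append(hashlib.sha256(f"{step}:{state}".encode()).hexdigest())
    return state, commitments

def verify(commitments: list[str], num_steps: int, samples: int = 3) -> bool:
    """Verifier: re-check a secret random subset of steps against the commitments.
    (A real protocol would use checkpoints so the verifier redoes far less work;
    here we simply replay the whole computation for clarity.)"""
    checked = set(random.sample(range(num_steps), samples))
    state = 1
    for step in range(num_steps):
        state = run_step(state, step)
        if step in checked:
            expected = hashlib.sha256(f"{step}:{state}".encode()).hexdigest()
            if commitments[step] != expected:
                return False  # mismatch -> raise a dispute on-chain
    return True

final_state, commitments = solve(num_steps=10)
assert verify(commitments, num_steps=10)
```

Because the solver cannot predict which steps will be inspected, completing only the "likely checked" portions is not a viable strategy.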
In summary, Gensyn can be divided into two main components:
- The first is the blockchain aspect, including the state machine I mentioned earlier. This is where shared computation occurs among participants.
- The second half of Gensyn involves the communication infrastructure, focusing on how nodes interact and handle machine learning tasks.
This setup allows any node to execute any computation, provided it can later be verified on-chain.
We are building a communication infrastructure spanning all nodes to facilitate information sharing, optional model partitioning, and broad data processing. This supports various model training methods—data parallelism, model parallelism, and pipeline partitioning—without requiring immediate trust coordination.
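As a concrete picture of one of these strategies, the sketch below shows generic data parallelism in PyTorch: two hypothetical "nodes" compute gradients on their own data shards, and the averaged gradients produce a single shared update. This illustrates the training method itself, not Gensyn's scheduler:

```python
import torch

torch.manual_seed(0)
# Both nodes hold identical replicas before the update, so we reuse one copy.
model = torch.nn.Linear(8, 1)
shards = [(torch.randn(16, 8), torch.randn(16, 1)) for _ in range(2)]

# Each node computes gradients on its own shard...
grads = []
for x, y in shards:
    model.zero_grad()
    torch.nn.functional.mse_loss(model(x), y).backward()
    grads.append([p.grad.clone() for p in model.parameters()])

# ...and the averaged gradients yield one synchronized parameter update.
with torch.no_grad():
    for p, *gs in zip(model.parameters(), *grads):
        p -= 0.01 * sum(gs) / len(gs)
```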
Dual-Natured Solution = State Machine + ML Task Communication
Gensyn State Machine
TechFlow: How does the Gensyn chain function within this specific peer-to-peer machine learning network?
Ben:
Initially, we assume all participants fulfill their tasks according to their roles and generate corresponding proofs. Then we turn to the blockchain side, where we maintain a shared state similar to other blockchains—including hashes of transactions and operations, plus the hash of the previous block—forming a complete chain.
Consensus among participants means that if the computations in a block match and produce the same hash, the work is considered correctly done, allowing us to proceed to the next block.
Gensyn operates using a Proof-of-Stake (PoS) mechanism, rewarding contributors who validate block generation.
Creating a block involves hashing (1) the operations required for machine learning verification work and (2) the transactions recorded within that block.
While our approach resembles systems like Ethereum, our key innovation lies in the communication layer, specifically how nodes manage and collaborate on machine learning tasks.
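A rough illustration of that block structure (field names here are invented for the example, not Gensyn's actual schema): the block hash commits to the previous block's hash, the transactions, and the hashes of the ML verification operations, so any two validators that agree on the inputs derive the same hash:

```python
import hashlib
import json

def hash_block(prev_hash: str, transactions: list, op_hashes: list) -> str:
    """Hash a block over: previous block hash + transactions + hashes of the
    ML verification operations. Canonical JSON (sorted keys) keeps the digest
    deterministic across nodes."""
    payload = json.dumps(
        {"prev": prev_hash, "txs": transactions, "ops": op_hashes},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = hash_block("0" * 64, [], [])
block_1 = hash_block(genesis, [{"pay": "solver-a", "amount": 10}], ["ab12cd"])
# Consensus condition from above: validators that agree on the inputs
# compute the same block_1 hash and proceed to the next block.
```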
TechFlow: How does the Gensyn chain differ from Ethereum? If the core infrastructure isn't novel, how is the PoS chain designed for machine learning use cases?
Ben:
Our blockchain’s core architecture is not novel, apart from a new data availability layer. The significant difference lies in our ability to handle much larger computational tasks, making our operations more efficient than is typically possible on Ethereum.
This is particularly relevant for convolution operations, which are fundamental building blocks of many machine learning models.
Efficiently executing these operations in the Ethereum Virtual Machine (EVM) using Solidity is challenging.
The Gensyn chain offers greater flexibility, allowing us to process these computations more efficiently without being constrained by the EVM’s operational scope.
The real challenge lies in achieving generalizability: the model must accurately predict outcomes for entirely new samples it has never seen before, because it has learned a sufficiently broad representation of the sample space.
This training process demands massive computational resources, as it requires repeatedly passing data through the model.
Gensyn’s machine learning runtime is responsible for obtaining a graph representation of the model and placing it in a framework that generates a proof of completion for each operation during execution.
Here arises a critical issue: determinism and reproducibility.
Ideally, in a mathematical world, repeating an operation should yield the same result. But in the physical world of computing hardware, unpredictable variables can cause slight variations in computational outputs.
So far, a degree of randomness in machine learning has been acceptable—or even beneficial—as it helps prevent overfitting and promotes better generalization.
However, for Gensyn, both generalizability and reproducibility are crucial.
Variations in computational results can lead to completely different hashes, potentially causing our verification system to falsely flag work as incomplete, risking financial loss. To counteract this, our runtime ensures operations are deterministic and reproducible across devices—a complex but necessary solution.
This approach is somewhat analogous to using machine learning frameworks like PyTorch, TensorFlow, or JAX. Users define models within these frameworks and initiate training. We are adapting these frameworks and underlying libraries—such as CUDA—to ensure model execution is accurate and repeatable on any device.
This ensures that the hash of an operation’s result on one device matches the hash on another, highlighting the importance of reproducible machine learning execution in our system.
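The reproducibility check can be pictured with standard PyTorch—the snippet below is a simplified illustration, not the Gensyn runtime. It pins all randomness, forbids nondeterministic kernels, runs one training step, and hashes the resulting parameters so two devices can compare results:

```python
import hashlib
import torch

def train_and_hash(seed: int = 0) -> str:
    """Run one deterministic training step and hash the resulting weights."""
    torch.manual_seed(seed)                   # fix all RNG state
    torch.use_deterministic_algorithms(True)  # forbid nondeterministic kernels
    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    # Serialize the parameters to bytes and hash them.
    blob = b"".join(p.detach().numpy().tobytes() for p in model.parameters())
    return hashlib.sha256(blob).hexdigest()

# On an identical software stack the hashes match; across different GPUs or
# math libraries they may not—exactly the gap the runtime has to close.
assert train_and_hash() == train_and_hash()
```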
Gensyn Decentralizes Cloud Services via Open-Source Blockchain Communication to Enable Decentralized Machine Learning
TechFlow: So how does the blockchain communication infrastructure on top of the Gensyn chain work for this specific machine learning network?
Ben:
The purpose of the communication infrastructure is to facilitate direct communication between devices. Its primary function is to allow one device to verify the work and proofs generated by another.
At its core, inter-device communication is used for mutual work verification, a process that must go through the blockchain, as the blockchain acts as the final arbiter in case of disputes. The blockchain is the sole source of truth in our system. Without it, there’s no reliable way to verify participant identities—anyone could falsely claim they’ve validated work.
Blockchain and its cryptography secure identity verification and work attestation. Devices can prove their identity and securely submit information that others can recognize and authenticate as legitimate.
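As a sketch of the general mechanism—using Ed25519 signatures from the Python cryptography library as a stand-in, not Gensyn's actual key scheme—a device signs the hash of its completed work, and any peer holding its public key can confirm both who produced it and that it was not tampered with:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The device's long-lived keypair; the public half serves as its identity.
device_key = Ed25519PrivateKey.generate()
device_id = device_key.public_key()

# Attest to a completed task by signing the hash of its result.
result_hash = hashlib.sha256(b"model-weights-or-proof-bytes").digest()
attestation = device_key.sign(result_hash)

# Any peer can check origin and integrity; verify() raises
# InvalidSignature if either the signer or the data is wrong.
device_id.verify(attestation, result_hash)
```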
The ultimate goal of this system is to compensate hardware owners. If you have hardware capable of running machine learning tasks, you can rent it out.
However, in traditional systems, this process is complex and costly. For example, buying many Nvidia GPUs and renting them out—converting capital expenditure into operational expenditure, much like cloud providers—poses numerous challenges. You need to find AI companies interested in your hardware, build a sales channel, develop infrastructure for model transfer and access, and manage legal and operational agreements including service-level agreements (SLAs). SLAs require on-site engineers to guarantee uptime per contract—any downtime leads to contractual liability and potential financial risk. This complexity is a major barrier for individuals or small businesses, which is partly why centralized cloud services dominate.
Gensyn offers a more efficient alternative, eliminating the human and business costs typically involved in such transactions. You simply run some software, without relying on legal contracts or engineers to build infrastructure. Legal agreements are replaced by smart contracts, and work verification is automated, checking whether tasks were correctly completed. There’s no need for manual dispute resolution or legal recourse—all handled instantly through technology, a significant advantage. This means suppliers can start earning from their GPUs immediately, with zero additional hassle.
Go-to-Market Strategy
We encourage suppliers to join the Gensyn network by showing them they can instantly enter the machine learning compute demand market by simply running open-source software. This unprecedented opportunity significantly expands the market, enabling new entrants to challenge the dominance of traditional services like AWS. AWS and others must manage complex operations, while we’re converting those operations into code, creating new pathways for value flow.
Traditionally, if you had a machine learning model to train and were willing to pay for compute, your money would flow to dominant cloud providers who manage operations efficiently. Despite increasing competition from Google Cloud, Azure, and others, these vendors still enjoy high profit margins.
Purpose of Decentralized Cloud Services: Decentralized Training vs. Decentralized Inference
TechFlow: Machine learning is broadly divided into training and inference. Where does Gensyn’s P2P compute resource play a role?
Ben:
Our focus is on training, which is where the value is extracted.
Training encompasses everything from initial learning to fine-tuning, while inference merely involves querying a model with data without changing it—essentially seeing what prediction the model makes based on input.
- Training requires massive compute resources and is typically asynchronous, with no need for immediate results.
- In contrast, inference must execute rapidly to ensure user satisfaction in real-time applications.
Decentralized technologies are currently insufficient to address the latency-critical nature of inference. For effective inference, models need to be deployed as close to users as possible, minimizing latency through geographic proximity.
However, launching such a network is challenging because its value and effectiveness grow with scale—following Metcalfe’s Law, similar to dynamics seen in projects like the Helium network.
Therefore, it’s unrealistic for Gensyn to directly tackle inference challenges; this task is better suited for independent entities focused on optimizing latency and network coverage.
We support protocols that specialize in single-function optimization rather than trying to excel in multiple areas simultaneously, which would dilute effectiveness. Such specialization drives competition and innovation, leading to interoperable protocols each mastering a specific aspect of the ecosystem.
Ideally, beyond running Gensyn nodes for computation, users could also operate other functional nodes—such as inference, data management, or data labeling. Interconnected networks would help build a robust ecosystem where machine learning tasks seamlessly transition across platforms. This decentralized future envisions a new network layer, each level collectively enhancing machine learning capability.
Decentralized AI Ecosystem: How to Achieve Win-Win Collaboration with Decentralized Data Protocols?
TechFlow: Given that compute and data are key inputs for machine learning, how does Gensyn’s compute protocol collaborate with data protocols?
Ben:
Compute is just one aspect; data is another critical domain where value-flow models can also be applied, though with different verification and incentive mechanisms.
We envision a rich ecosystem with multiple nodes running on devices like your MacBook. Your device might host a Gensyn compute node, a data node, and even a data labeling node, contributing to labeling through gamified incentives or direct payments—usually without users realizing the complex processes behind these models.
This ecosystem paves the way for what we ambitiously call the machine intelligence revolution—a new stage or evolution of the internet. Today’s internet is a vast repository of human knowledge in text form.
The future internet we envision presents knowledge not through text, but through machine learning models. This means machine learning model fragments will be distributed globally—on MacBooks, iPhones, cloud servers—enabling queries and reasoning across this decentralized network. Compared to centralized models controlled by a few cloud providers, this promises a more open ecosystem, enabled by blockchain technology.
Blockchain not only facilitates resource sharing but also ensures instant verification of tasks, confirming that remote devices execute tasks correctly and without tampering.
Gensyn is committed to building the computational foundation within this framework and encourages others to explore incentive designs for data networks. Ideally, Gensyn will seamlessly integrate with these networks, boosting the efficiency of machine learning training and deployment. Our goal is not to monopolize the entire machine learning ecosystem, but to establish Gensyn as a protocol that optimizes compute resource utilization—sitting just above electricity—to dramatically enhance humanity’s ability to effectively use computational resources.
Gensyn specifically addresses the challenge of transforming value and data into model parameters. Essentially, if you have a data sample—be it image, book, text, audio, or video—and wish to convert it into model parameters, Gensyn facilitates this process. This enables models to make predictions or inferences on future similar data, evolving as parameters update. The entire process of distilling data into model parameters is Gensyn’s specialty, while other aspects of the machine learning stack are managed by other systems.
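In conventional framework terms, "distilling data into model parameters" is the standard training loop below—a generic PyTorch sketch of the kind of workload Gensyn schedules, not a Gensyn API. Each pass moves information from the data into the parameters, and the updated parameters are the artifact the network produces:

```python
import torch

# A batch of data samples (random tensors standing in for images/text/audio).
x, y = torch.randn(64, 16), torch.randn(64, 1)

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Repeatedly passing the data through the model updates its parameters.
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```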
Bonus Topic: Are AI and Crypto Startups Geographically Constrained?
TechFlow: Given your extensive experience, how does your current journey differ from your early days as a builder and researcher facing technical and computational frustrations? Could you share how this transition and London’s tech culture influenced your growth and achievements?
Ben:
The tech environment in London and the UK overall differs significantly from Silicon Valley. While the UK tech community includes exceptional talent and groundbreaking work, it tends to be more inward-looking. This creates barriers for newcomers trying to enter these circles.
I think this stems from contrasting attitudes between the US and UK. Americans generally exhibit a more open demeanor, while Brits tend to be more skeptical and conservative. These cultural nuances mean integrating into the UK tech ecosystem requires effort and time. Yet once you do, you discover a vibrant, rich community working on compelling projects. The difference lies in visibility and outreach: unlike Silicon Valley, where achievements are loudly celebrated, London innovators often work more quietly.
Recently, the UK—particularly in the shift toward decentralization and AI—seems to be carving out a niche, partly due to regulatory developments in the US and Europe. For instance, recent US regulations, such as President Biden’s executive order, impose certain restrictions on AI development, including mandatory government reporting for projects exceeding specific thresholds. These rules may dampen enthusiasm among new developers. In contrast, the UK appears to adopt a more open stance, favoring open-source over strict regulation, thus cultivating a more innovation-friendly environment.
San Francisco, known for its strong open-source movement, now faces new legislative challenges in California that echo federal executive orders. While intended to protect society, these regulations inadvertently concentrate AI development within established entities—those capable of meeting compliance requirements—placing smaller players with potentially revolutionary ideas at a disadvantage. The UK recognizes the value of open-source as a means of societal oversight in AI development, avoiding the need for restrictive government surveillance. Open-source practices naturally foster scrutiny and collaboration, ensuring AI advances remain accountable without stifling innovation.
The EU’s initial AI regulations are stricter than what we see in the UK, where a balance has been struck to encourage open-source development. This strategy not only achieves similar regulatory goals but also ensures the market remains dynamic and competitive. The UK is well-positioned to nurture a vibrant, open ecosystem for AI and crypto innovation. It’s an exciting time for London’s tech industry.
Further Reading:
- https://docs.gensyn.ai/litepaper
- https://epicenter.tv/episodes/471
- https://www.techflowpost.com/article/detail_14995.html
- https://hyperobjects.substack.com/p/understanding-maximum-computing-supply