
Conversation with Ben Fielding, Gensyn Co-Founder: How Does the a16z-Backed Decentralized Compute Protocol Aim to Democratize AI?
TechFlow Selection

Gensyn represents a dual-natured solution: it is both an open-source protocol providing the software connection layer and a financial mechanism enabling resource compensation.
Interview: Sunny and Min, TechFlow
Guest: Ben Fielding, Gensyn Co-Founder
Our goal is not to monopolize the entire machine learning ecosystem, but to establish Gensyn as a protocol for optimizing computing resource utilization—sitting just above electricity—to dramatically enhance humanity's ability to effectively use computational power.
-- Ben Fielding, Gensyn Co-Founder
In January 2024, OpenAI CEO Sam Altman stated that the two most important “currencies” of the future will be computing power and energy.
However, computing power, the currency of power in the AI era, is often monopolized by large corporations, especially in the field of AGI models. And where there is monopoly, anti-monopoly forces arise: hence the emergence of decentralized artificial intelligence (Decentralized AI).
"Blockchain’s permissionless component can create a marketplace for buyers and sellers of computing power (or any other digital resources such as data or algorithms), enabling global transactions without intermediaries," noted renowned investment firm a16z in an article describing this blockchain approach to AI compute—the project referenced was Gensyn.
Gensyn is a decentralized deep learning computation protocol, aimed at becoming the foundational layer for machine learning computing, facilitating task allocation and rewards via smart contracts to rapidly advance AI model learning capabilities while reducing the cost of deep learning training.
Gensyn connects developers (anyone capable of training machine learning models) with solvers (anyone who wants to use their machines to train ML models). By leveraging idle, long-tail computing devices around the world with machine learning capabilities (e.g., small data centers, personal gaming PCs), it increases available computing capacity for machine learning by 10–100x.
In summary, Gensyn’s core mission is to democratize AI through blockchain.
In June 2023, Gensyn announced a $43 million Series A funding round led by a16z, with participation from CoinFund, Canonical Crypto, Protocol Labs, Eden Block, and others.
Gensyn was founded in 2020 by Ben Fielding and Harry Grieve, seasoned professionals in computer science and machine learning research. Harry Grieve studied at Brown University and the University of Aberdeen and is a data scientist and entrepreneur; Ben Fielding graduated from Northumbria University and previously co-founded the SaaS platform Fair Custodian and served as director at Research Analytics.
TechFlow interviewed Gensyn co-founder Ben Fielding to explore his journey into crypto-AI and Gensyn’s AI toolkit.

Gensyn’s Value Proposition from the Founder’s Perspective
TechFlow: What inspired you to found Gensyn?
Ben:
My original background was in academia, where I worked as a machine learning researcher focused on Neural Architecture Search (NAS). This field involves optimizing the structure of deep neural networks, particularly for computer vision applications.
My work involved developing algorithms to evolve neural network architectures in a population-based manner. This process included simultaneously training numerous candidate model architectures and gradually evolving them into a single meta-model optimized for specific tasks.
During this time, I encountered significant challenges related to computational resources. As a PhD student, I had access to a few high-performance GPUs housed in large workstations under my desk—machines I managed to acquire myself.
Meanwhile, companies like Google were conducting similar research using thousands of GPUs and TPUs across data centers, running continuously for weeks. This disparity made me realize that despite having all necessary resources except sufficient computing power, many others worldwide face similar constraints, which hinders the pace of research and societal progress. I was frustrated by this situation, and ultimately, it led us to create Gensyn.

Before fully committing to Gensyn, I spent two years co-founding a data privacy startup focused on managing consumer data flows and consent-based user data access, aiming to improve how individuals and businesses interact around data.
This experience taught me valuable lessons, including common entrepreneurial pitfalls, and shaped my thinking on personal data flows and consent-based access.
Four years ago, I shut down my startup and joined Entrepreneur First, a London-based accelerator, where I met my co-founder Harry Grieve. It was there that we launched Gensyn to tackle the global challenge of computing resource accessibility. Our initial strategy involved distributing computational tasks across private data silos within a single organization (federated learning), which was very interesting. We quickly realized the broader potential of scaling this approach globally. To achieve this vision, we had to solve fundamental trust issues related to the computing sources themselves.
Since then, Gensyn has been working on ensuring the accuracy of machine learning tasks processed on devices through a combination of proofs, game-theoretic incentives, and probabilistic verification. While the details may be quite technical, Gensyn is committed to building a system that allows anyone, anywhere, to train machine learning models using any computing device.
TechFlow: Sam Altman needs $7 trillion to run an AI chip factory to address the global chip shortage. Is his plan realistic in terms of scaling chip supply? Meanwhile, what AI problems is Gensyn solving differently from Altman’s solution?
Ben:
Gensyn is addressing challenges in the AI space that are similar to those Altman faces. Fundamentally, there are two ways to solve the problem of compute access. Machine learning is becoming increasingly pervasive, potentially integrating into every technology we use, transitioning from imperative code to probabilistic models. These models require massive amounts of computing power. When comparing compute demand against global chip manufacturing capacity, a significant gap becomes apparent: demand is skyrocketing while chip production grows only incrementally.
Solutions lie in either (1) producing more chips to meet demand or (2) improving the efficiency of existing chip usage.
Ultimately, both strategies are essential to meeting growing computational demands.
I believe Altman is directly confronting this issue. The problem lies in the chip supply chain—an extremely complex system. Certain parts of this chain are particularly challenging, with only a few companies capable of managing these complexities. Currently, many governments are treating this as a geopolitical issue, investing in reshoring semiconductor manufacturing and resolving bottlenecks in the supply chain.
In my view, Altman’s $7 trillion figure serves as a market test—a way to gauge how seriously global financial markets take this issue. The fact that this staggering number wasn’t outright rejected is striking. It prompts people to reconsider: “This sounds absurd—but could it actually be true?”
This reaction indicates genuine concern and a willingness to allocate substantial funds to resolve the issue. By setting such a high benchmark, Altman effectively creates a reference point for any future chip production efforts. This strategic move sets a precedent for large-scale investment in the sector—even if actual costs fall short of $7 trillion—demonstrating a strong commitment to overcoming chip manufacturing challenges.
Gensyn takes a different approach—we aim to optimize the use of existing chips globally. Many of these chips—from gaming GPUs to MacBooks equipped with M1, M2, and M3 chips—are underutilized.
These devices are fully capable of supporting AI processing without requiring new specialized AI processors. However, harnessing these existing resources requires a protocol that integrates them into a unified network, much like TCP/IP enables internet communication.
Such a protocol would allow these devices to be used on-demand for computational tasks.
The key difference between our protocol and traditional open protocols like TCP/IP is financial. While the latter are purely technical solutions, using hardware resources inherently involves costs such as electricity and physical wear on equipment.
To address this, our protocol incorporates cryptocurrency and decentralization principles to build a value-coordination network that incentivizes hardware contributors.
Thus, Gensyn represents a dual-natured solution: it is both an open-source software protocol for connectivity and a financial mechanism for resource compensation.
Moreover, the machine learning market faces challenges beyond just compute resources.
Other factors such as data access and knowledge sharing also play critical roles. Through decentralized technologies, we can enable fair attribution of value across these components, fostering a more integrated and efficient ecosystem. Therefore, Gensyn does not operate in isolation—we address part of a broader set of challenges, while other solutions must tackle the remaining pieces. This collaborative effort is crucial for advancing the field of machine learning.
Defining Gensyn’s Dual-Natured Solution
TechFlow: Can you explain Gensyn’s dual solution in the simplest possible terms?
Ben:
Simply put, Gensyn is a peer-to-peer network built atop open-source software. To participate, your device runs this software and must be capable of performing machine learning training tasks. The network consists of multiple nodes—each running the software—communicating directly to share information about available hardware and pending tasks. This eliminates the need for a central server, allowing your device to interact directly with others.
A key feature of Gensyn is its decentralized communication—there is no central authority. For example, if you're using a MacBook, it directly connects and communicates with other MacBooks, exchanging information about hardware capabilities and available tasks.
One of Gensyn’s main challenges is verifying off-chain non-deterministic computations that are too large for blockchains.
Our solution introduces a verification mechanism allowing devices to generate verifiable computation proofs. These proofs can be checked by other devices to ensure work integrity, without revealing which parts of the task might be inspected—thus preventing devices from completing only those portions likely to be verified.
Our system incentivizes devices to act as solvers and verifiers in cryptographic proof processes or selective re-execution to validate completed tasks. Essentially, Gensyn aims to enable node interoperability, mutual work verification, and consensus on completed tasks. Payments occur within this framework, leveraging blockchain’s trust mechanisms. This technical ecosystem mirrors Ethereum’s functionality, focusing on mutual node verification to ensure task integrity.
Our primary goal is to achieve consensus on task completion with minimal computational overhead, ensuring system integrity while accommodating large-scale machine learning workloads.
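The spot-checking idea described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Gensyn’s actual protocol: all function names are hypothetical, and the “operation” is a stand-in for a real ML computation. A solver commits a hash for every intermediate result; a verifier then re-executes a random subset, so the solver cannot predict which portions will be inspected and must do all the work.

```python
import hashlib
import random

def run_layer(layer_id: int, seed: int) -> bytes:
    """Stand-in for one deterministic ML operation; returns its result bytes."""
    return hashlib.sha256(f"layer-{layer_id}-seed-{seed}".encode()).digest()

def solver_commit(num_layers: int, seed: int) -> list[str]:
    """Solver executes every operation and commits a hash of each result."""
    return [hashlib.sha256(run_layer(i, seed)).hexdigest() for i in range(num_layers)]

def verifier_spot_check(commits: list[str], seed: int, samples: int = 3) -> bool:
    """Verifier re-executes a few randomly chosen operations and compares hashes.
    The solver does not know in advance which indices will be checked."""
    for i in random.sample(range(len(commits)), samples):
        if hashlib.sha256(run_layer(i, seed)).hexdigest() != commits[i]:
            return False  # mismatch: the dispute would escalate to the chain
    return True

commits = solver_commit(num_layers=10, seed=42)
assert verifier_spot_check(commits, seed=42)
```

In a real system the challenge indices would be derived from an unpredictable source (for example, on-chain randomness) so that neither party can game the selection.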
In summary, Gensyn comprises two main components:
- The first is the blockchain aspect, including the state machine I mentioned earlier. This is where shared computation occurs among participants.
- The second is the communication infrastructure, focusing on how nodes interact and handle machine learning tasks.
This setup allows any node to perform any computation, provided the work can later be verified on-chain.
We are building a communication infrastructure spanning all nodes to facilitate information sharing, model partitioning when needed, and broad data processing. This supports various training methods—including data parallelism, model parallelism, and pipeline partitioning—without requiring immediate trust coordination.
Dual-Natured Solution = State Machine + ML Task Communication
Gensyn State Machine
TechFlow: How does the Gensyn chain function within this specific peer-to-peer machine learning network?
Ben:
Initially, we assume all participants fulfill their tasks according to their roles and generate corresponding proofs. Then we turn to the blockchain side, where we maintain a shared state similar to other blockchains—including hashed transactions and operations and the hash of the previous block—forming a complete chain.
Consensus among participants is that if the computations in a block match and produce the same hash, the work is considered correctly completed, allowing us to proceed to the next block.
Gensyn operates using proof-of-stake (PoS), rewarding contributors who verify block generation.
Creating a block involves hashing (1) operations required for machine learning verification and (2) recording transactions occurring within that block.
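As a generic illustration of this block-creation step (an assumption-laden sketch, not Gensyn’s actual data structures), each block hashes its transactions and ML-proof hashes together with the previous block’s hash, chaining the blocks:

```python
import hashlib
import json

def make_block(prev_hash: str, transactions: list, ml_proof_hashes: list) -> dict:
    """Bundle this block's transactions and ML-verification hashes, then
    chain it to its predecessor by hashing everything together."""
    body = {
        "prev_hash": prev_hash,
        "transactions": transactions,
        "ml_proof_hashes": ml_proof_hashes,
    }
    payload = json.dumps(body, sort_keys=True).encode()  # canonical encoding
    body["hash"] = hashlib.sha256(payload).hexdigest()
    return body

genesis = make_block("0" * 64, [], [])
block1 = make_block(genesis["hash"], [{"pay": "solver-1", "amount": 10}], ["abc123"])
```

Because the hash is computed over a canonical encoding, any two honest nodes that apply the same operations produce the same block hash, which is exactly the consensus condition described above.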
While our approach resembles systems like Ethereum, our key innovation lies in the communication layer, particularly how nodes manage and collaborate on machine learning tasks.
TechFlow: How does the Gensyn chain differ from Ethereum? If the core infrastructure isn’t novel, how is the PoS chain designed to serve machine learning-specific use cases?
Ben:
Our blockchain’s core architecture isn’t especially novel, apart from a new data availability layer. The key difference is our ability to handle larger computational tasks, making our operations more efficient than is typically possible on Ethereum.
This is especially relevant for convolution operations, a fundamental component of many machine learning models.
Efficiently executing these operations in Solidity on the Ethereum Virtual Machine (EVM) is challenging.
The Gensyn chain offers greater flexibility, allowing us to handle these computations more efficiently without being constrained by EVM operation limits.
On the machine learning side, the real challenge lies in achieving generalizability: the model must accurately predict outcomes for entirely new samples it has never seen before, because it has learned a sufficiently broad representation of the input space.
This training process demands massive computational resources, requiring repeated data passes through the model.
Gensyn’s machine learning runtime is responsible for taking a graph representation of the model and placing it in a framework that generates a proof of completion for each operation during computation.
A critical issue here is determinism and reproducibility.
Ideally, in the mathematical world, repeating an operation should yield identical results. But in the physical world of computing hardware, unpredictable variables can cause slight variations in computational outputs.
So far, some degree of randomness in machine learning has been acceptable—or even beneficial—as it helps prevent overfitting and promotes better generalization.
However, for Gensyn, both generalizability and reproducibility are crucial.
Variations in computation results could lead to completely different hashes, causing our verification system to incorrectly flag work as incomplete, risking financial loss. To address this, our runtime ensures operations are deterministic and reproducible across devices—a complex but necessary solution.
This approach is somewhat analogous to using machine learning frameworks like PyTorch, TensorFlow, or JAX. Users define models and initiate training within these frameworks. We are adapting these frameworks and underlying libraries—such as CUDA—to ensure model execution is accurate and reproducible across any device.
This ensures that the hash of an operation result on one device matches the hash on another, highlighting the importance of reproducible ML execution in our system.
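The reproducibility requirement can be illustrated with a small Python example (using the standard `random` and `hashlib` modules as a toy stand-in for a real ML runtime). With a fixed seed and a fixed operation order, every device computes an identical hash; yet a mere change in floating-point summation order is enough to break bit-exact agreement, which is precisely the hazard the runtime must eliminate:

```python
import hashlib
import random
import struct

def train_step_hash(seed: int) -> str:
    """Toy 'operation': generate pseudo-weights and hash their exact byte
    representation. Fixed seed + fixed order -> identical hash on any device."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(1000)]
    blob = b"".join(struct.pack("<d", w) for w in weights)
    return hashlib.sha256(blob).hexdigest()

assert train_step_hash(7) == train_step_hash(7)   # reproducible across runs
assert train_step_hash(7) != train_step_hash(8)   # the seed changes the result

# The hazard: floating-point addition is not associative, so summing the same
# values in a different order can yield a different result (and a different hash).
vals = [0.1, 1e16, -1e16]
assert sum(vals) != sum(reversed(vals))
```

On real accelerators the same effect arises from non-deterministic reduction orders in parallel kernels, which is why the runtime must pin down operation ordering and numeric behavior.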
Gensyn Decentralizes Cloud Services via an Open-Source Blockchain Communication Protocol to Support Decentralized Machine Learning
TechFlow: So how does this blockchain communication infrastructure, tailored for machine learning networks, function atop the Gensyn chain?
Ben:
The purpose of the communication infrastructure is to facilitate direct communication between devices. Its primary function is enabling one device to verify the work and proofs generated by another.
Essentially, inter-device communication is used for mutual work verification, a process that must go through the blockchain because the blockchain acts as the central arbiter in any dispute. The blockchain is the sole source of truth in our system. Without it, participant identity cannot be reliably verified—anyone could falsely claim they’ve validated work.
Blockchain and its cryptography secure identity verification and work confirmation. Devices can prove their identity and securely submit information under this mechanism, allowing others to recognize and verify its authenticity.
The ultimate goal of this system is to compensate device owners. If you own hardware capable of performing machine learning tasks, you can rent it out.
However, in traditional systems, this process is complex and costly. For instance, buying many Nvidia GPUs and renting them out—converting capital expenditure into operational expenditure, similar to cloud providers—involves numerous challenges. You need to find AI companies interested in your hardware, build a sales channel, develop infrastructure for model transfer and access, and manage legal and operational agreements including service-level agreements (SLAs). SLAs require on-site engineers to guarantee uptime as promised to clients—any downtime leads to contractual liability and potential financial risk. This complexity is a major barrier for individuals or small businesses, which is partly why centralized cloud services dominate.
Gensyn offers a more efficient approach, eliminating the human and business costs typically associated with these transactions. You simply run some software, without relying on legal contracts or engineers to build infrastructure. Legal agreements are replaced by smart contracts, and work verification is automated, checking whether tasks are correctly completed. There’s no need to manually handle breach claims or seek legal resolution—everything can be resolved instantly through technology, a significant advantage. This means suppliers can start earning from their GPUs immediately by just running software, without any extra hassle.
Go-to-Market Strategy
Our way of encouraging suppliers to join the Gensyn network is by telling them they can immediately access the machine learning compute demand market just by running open-source software. This is an unprecedented opportunity that significantly expands the market, allowing newcomers to challenge the dominance of traditional services like AWS. AWS and others must manage complex operations, while we’re converting those operations into code, creating new pathways for value flow.
Traditionally, if you have a machine learning model to train and are willing to pay for compute, your money goes to dominant cloud providers who manage the supply. They dominate because they can effectively manage operations. Despite increasing competition from Google Cloud, Azure, and others, these vendors still enjoy high profit margins.
Purpose of Decentralized Cloud Services: Decentralized Training vs. Decentralized Inference
TechFlow: Machine learning is broadly divided into training and inference. Where does Gensyn’s P2P computing resource come into play?
Ben:
Our focus is on training, which is where the bulk of the value is created.
Training includes everything from initial learning to fine-tuning, while inference only involves querying a model with data without changing it—essentially seeing what the model predicts based on input.
- Training requires massive compute, is usually asynchronous, and doesn’t need immediate results.
- In contrast, inference requires fast execution to ensure user satisfaction in real-time applications, a clear contrast to the compute-intensive nature of training.
Decentralized technologies are currently insufficient to solve the latency-critical challenges of inference. To perform inference effectively, models need to be deployed as close to users as possible, minimizing latency through geographic proximity.
However, launching such a network is challenging because its value and effectiveness grow with scale—following Metcalfe’s Law, similar to dynamics we see in projects like the Helium network.
Therefore, it’s unrealistic for Gensyn to directly tackle inference challenges; this task is better suited for independent entities focused on optimizing latency and network coverage.
We support protocols that specialize in single-function optimization rather than trying to excel in multiple areas simultaneously, which dilutes effectiveness. Such specialization drives competition and innovation, leading to a suite of interoperable protocols, each mastering a specific aspect of the ecosystem.
Ideally, beyond running a Gensyn node for computation, users could also operate other functional nodes—for inference, data management, or data labeling. Interconnection among these networks would help build a robust ecosystem where machine learning tasks seamlessly transfer across platforms. This decentralized future envisions a new network layer, with each level collectively enhancing machine learning capabilities.
Decentralized AI Ecosystem: How Can Gensyn Collaborate with Decentralized Data Protocols for Mutual Success?
TechFlow: Given that both compute and data are critical inputs for machine learning, how can Gensyn’s compute protocol collaborate with data protocols?
Ben:
Compute is just one piece; data is another critical domain where value-flow models can also apply, though with different verification and incentive mechanisms.
We envision a rich ecosystem where multiple nodes run on devices like your MacBook. Your device might host a Gensyn compute node, a data node, or even a data labeling node contributing to annotation through gamified incentives or direct payments—users typically unaware of the complex processes behind these models.
This ecosystem paves the way for what we ambitiously call the machine intelligence revolution—a new stage or evolution of the internet. Today’s internet is a vast repository of human knowledge in textual form.
The future internet we foresee presents information through machine learning models rather than text. This means machine learning model fragments will be distributed globally across devices—from MacBooks to iPhones to cloud servers—enabling us to query and reason through this decentralized network. Compared to centralized models controlled by a few cloud providers, this promises a more open ecosystem, enabled by blockchain technology.
Blockchain not only facilitates resource sharing but also ensures instant verification of tasks, confirming that remote devices execute tasks correctly and without tampering.
Gensyn is dedicated to developing the compute foundation within this framework and encourages others to explore incentive models for data networks. Ideally, Gensyn will integrate seamlessly with these networks, enhancing the efficiency of machine learning training and deployment. Our goal is not to monopolize the entire machine learning ecosystem, but to establish Gensyn as a protocol that optimizes computing resource utilization—sitting just above electricity—to dramatically enhance humanity’s ability to effectively use computational power.
Gensyn specifically addresses the challenge of transforming value and data into model parameters. Essentially, if you have a data sample—be it an image, book, text, audio, or video—and wish to convert it into model parameters, Gensyn facilitates this process. This enables models to make predictions or inferences on similar future data, evolving as parameters update. The entire process of distilling data into model parameters is Gensyn’s specialty, while other aspects of the machine learning stack are managed by other systems.
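As a toy illustration of “distilling data into model parameters” (this is ordinary stochastic gradient descent on a one-weight linear model, not Gensyn-specific code), a single training step nudges the parameters toward a data sample, and repeated steps absorb the sample into the model:

```python
def sgd_step(w: float, b: float, x: float, y: float, lr: float = 0.1):
    """One step of distilling the sample (x, y) into the parameters of
    a linear model y_hat = w*x + b, via gradient descent on squared error."""
    err = (w * x + b) - y
    # Gradients of 0.5 * err**2 with respect to w and b
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0
for _ in range(200):
    w, b = sgd_step(w, b, x=2.0, y=5.0)  # the model absorbs the sample

# After training, the model's prediction for x=2.0 is close to 5.0
assert abs(w * 2.0 + b - 5.0) < 1e-6
```

Real training repeats this over millions of samples and billions of parameters, which is exactly the compute-heavy process Gensyn distributes and verifies.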
Bonus Topic: Are AI and Crypto Startups Geographically Constrained?
TechFlow: Given your extensive experience, how does your current experience differ from your early days as a builder and researcher dealing with computing and technological frustrations and challenges? Could you share how this transition, and London’s tech culture, influenced your growth and achievements?
Ben:
The tech environment in London and the UK overall differs significantly from Silicon Valley. Although the UK tech community is full of exceptional talent and groundbreaking work, it tends to be more inward-looking. This creates barriers for newcomers trying to enter these circles.
I think this difference stems from contrasting attitudes between the UK and the US. Americans tend to be more open, while Brits are often more skeptical and conservative. These cultural nuances mean integrating into and adapting to the UK tech ecosystem requires effort and time. However, once you do, you discover a vibrant and rich community working on fascinating projects. The difference lies in visibility and outreach: unlike Silicon Valley, where achievements are loudly celebrated, London innovators tend to work more quietly.
Recently, the UK appears to be carving out a niche for itself, especially in the shift toward decentralization and AI, partly due to regulatory developments in the US and Europe. For example, recent US regulations, such as those outlined in President Biden’s executive order, impose certain restrictions on AI development, including mandatory government reporting for projects exceeding specific compute thresholds. These regulations may dampen enthusiasm among new developers. In contrast, the UK seems to adopt a more open stance, favoring open-source over strict regulation, thereby cultivating a more innovation-friendly environment.
San Francisco, known for its strong open-source movement, now faces new challenges as California legislation echoes federal executive orders. While well-intentioned to protect society, these regulations inadvertently concentrate AI development within established entities—those with the resources to comply—placing smaller players with potentially revolutionary ideas at a disadvantage. The UK recognizes the value of open-source as a means of societal oversight in AI development, avoiding the need for restrictive government surveillance. Open-source practices naturally promote scrutiny and collaboration, ensuring AI advances remain accountable without stifling innovation.
The EU’s initial AI regulations are stricter than what we see in the UK, where a balance has been struck to encourage open-source development. This strategy not only achieves similar regulatory goals but also ensures the market remains dynamic and competitive. The UK is well-positioned to foster a vibrant and open ecosystem for AI and crypto innovation. It’s an exciting time for London’s tech industry.
Further Reading:
- https://docs.gensyn.ai/litepaper
- https://epicenter.tv/episodes/471
- https://www.techflowpost.com/article/detail_14995.html
- https://hyperobjects.substack.com/p/understanding-maximum-computing-supply
Welcome to the official TechFlow community
Telegram group: https://t.me/TechFlowDaily
Official Twitter account: https://x.com/TechFlowPost
English Twitter account: https://x.com/BlockFlow_News












