
Variant Investment Partner: The Dilemma and Breakthrough of Open-Source AI, and Why Cryptoeconomics Is the Final Piece of the Puzzle
Combining open-source AI with cryptographic techniques can support the development of larger-scale models and drive further innovation, leading to more advanced AI systems.
Author: Daniel Barabander
Translation: TechFlow
Summary
- Today’s foundational AI development is dominated by a handful of large tech companies, resulting in a closed and uncompetitive landscape.
- While open-source software development offers a potential alternative, foundational AI cannot operate like traditional open-source projects (e.g., Linux) due to a “resource problem”: contributors must not only donate time but also bear computing and data costs far beyond individual capacity.
- Cryptotechnology can solve this resource problem by incentivizing individuals and organizations to contribute resources to open-source foundational AI projects.
- Combining open-source AI with crypto incentives enables larger-scale model development and fosters greater innovation, ultimately leading to more advanced AI systems.
Introduction
According to a 2024 survey by the Pew Research Center, 64% of Americans believe social media has done more harm than good to the country; 78% say social media companies have too much power and influence in politics; and 83% think these platforms are likely to intentionally censor political views they disagree with. Dissatisfaction with social media has become one of the few areas of consensus in American society.
Looking back at the past two decades of social media evolution, this outcome seems inevitable. The story is straightforward: a small number of large technology firms captured user attention—and more importantly, user data. Despite early optimism about open data, these companies quickly shifted strategies, leveraging their data to build insurmountable network effects and restricting external access. Today, fewer than ten major tech firms dominate the social media landscape, creating an "oligopoly." Since the current state benefits them immensely, there's little incentive for these companies to change. This model is closed and lacks competition.
Now, AI development appears to be repeating this pattern—but with even higher stakes. A few tech giants, controlling access to GPUs and data, are building foundational AI models while keeping them closed off. For new entrants without billions in capital, developing a competitive model is nearly impossible. Training a single foundational model can cost hundreds of millions or even billions of dollars in computation alone. Meanwhile, the same social media companies that profited from the last technological wave are using their proprietary user data to train models that competitors simply cannot match. We’re repeating the mistakes of social media, heading toward a closed and uncompetitive AI world. If this trend continues, a handful of tech companies will gain unchecked control over access to information and opportunity.
Open-Source AI and the “Resource Problem”
If we want to avoid a closed AI future, what are our options? The obvious answer is to develop foundational models as open-source software projects. Historically, countless successful open-source projects have built the core software we rely on daily. The success of Linux proves that even something as fundamental as an operating system can be developed openly. Why can't large language models (LLMs) follow the same path?
However, foundational AI models face unique constraints that make them fundamentally different from traditional software, severely limiting their viability as conventional open-source endeavors. Specifically, foundational AI requires massive computational and data resources—far beyond what any individual can provide. Unlike traditional open-source projects that depend only on donated time, open-source AI also demands contributions of computing power and data. This is the so-called “resource problem.”
Meta’s LLaMa model illustrates this issue well. Unlike competitors such as OpenAI and Google, Meta did not hide its model behind a paid API. Instead, it released LLaMa’s weights publicly for anyone to use freely (with some restrictions). These weights contain all the knowledge the model acquired during training at Meta and are essential for running the model. With these weights, users can fine-tune the model or use its outputs as inputs for new models.
While releasing LLaMa’s weights is commendable, it does not constitute a true open-source software project. Behind the scenes, Meta retains full control over the training process—using its own compute infrastructure, data, and decision-making authority—to unilaterally decide when to release updated versions. It does not invite independent researchers or developers into a collaborative community effort because the resources required to train or retrain the model—tens of thousands of high-end GPUs, data centers to house them, complex cooling systems, and trillions of tokens of training data—are far beyond the reach of ordinary individuals. As the Stanford AI Index Report 2024 notes, “The steep rise in training costs has effectively excluded universities—traditional hubs of AI research—from developing cutting-edge foundational models.” For example, Sam Altman reportedly said training GPT-4 cost $100 million, not including capital expenditures on hardware. Moreover, Meta increased its capital spending by $2.1 billion in Q2 2024 compared to the same period in 2023, primarily on servers, data centers, and networking infrastructure tied to AI model training. Therefore, even if members of the LLaMa community possess the technical skills to improve the model architecture, they lack the resources to implement those improvements.
In short, unlike traditional open-source software, open-source AI projects require contributors not just to give time, but also to shoulder substantial computing and data costs. Relying solely on goodwill and volunteerism to attract enough resource providers is unrealistic; additional incentives are needed. Take the open-source large language model BLOOM as an example: a 176-billion-parameter model developed by over 1,000 volunteer researchers from 250 institutions across 70+ countries. While BLOOM’s achievement is admirable (and I fully support it), it took a year to coordinate a single training run and relied on €3 million in funding from French research agencies (not including the capital cost of the supercomputer used for training). Depending on repeated rounds of grant funding makes coordination and iteration too slow to compete with large tech labs. It has been over two years since BLOOM’s release, and there has been no public update about a successor model.
To make open-source AI viable, we need a way to incentivize resource providers to contribute their computing power and data, rather than expecting open-source contributors to bear these costs themselves.
Why Cryptotechnology Can Solve the “Resource Problem” in Open-Source Foundational AI
The key breakthrough of cryptotechnology lies in enabling high-resource-cost open-source software projects through mechanisms of “ownership.” It solves the resource problem in open-source AI by incentivizing potential resource providers to participate in the network, rather than requiring open-source contributors to bear those costs themselves.
Bitcoin is a prime example. As the first crypto project, Bitcoin is entirely open-source—the code has always been public. But the code itself isn’t what gives Bitcoin value. Simply downloading and running a Bitcoin node to create a local blockchain has little practical effect. The real value emerges only when block mining involves computational work so vast that no single participant can overpower the network—thus maintaining a decentralized, trustless ledger. Like open-source foundational AI, Bitcoin is an open-source project that depends on resources beyond any individual’s capacity. The reasons differ—Bitcoin uses computation to ensure immutability, while foundational AI uses it to optimize and iterate models—but both share the need for collective, large-scale resource pooling.
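To make the “work” concrete, here is a minimal proof-of-work sketch in Python. It is a simplified illustration of the hash-search loop at the heart of mining, not Bitcoin’s actual consensus code; the header bytes and difficulty value are invented for the example.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double SHA-256 hash falls below the target.

    The proof is expensive to produce but takes one hash to verify, which is
    why no single participant can cheaply rewrite the ledger.
    """
    target = 2 ** (256 - difficulty_bits)  # smaller target means more work
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found; anyone can re-check it with one hash
        nonce += 1

# Toy difficulty (16 leading zero bits) that a laptop solves in milliseconds.
print(mine(b"toy-block-header", difficulty_bits=16))
```

Real Bitcoin difficulty is so high that only the pooled hardware of many independent miners can find blocks at all, which is exactly the collective resource pooling the analogy points to.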
The “secret” that allows Bitcoin and other crypto networks to motivate participants to contribute resources to open-source software is token-based ownership. As Jesse outlined in Variant’s founding thesis in 2020 (The Ownership Economy), ownership creates powerful incentives for resource providers to contribute in exchange for potential upside within the network. This mechanism mirrors how startups solve early-stage capital shortages through “sweat equity”—by compensating early employees (like founders) primarily with company equity, startups attract talent they otherwise couldn’t afford. Cryptotechnology extends the concept of sweat equity beyond time contributors to include resource providers. That’s why Variant focuses on investing in projects that use ownership to build network effects, such as Uniswap, Morpho, and World.
If we want open-source AI to succeed, then cryptographic ownership is the critical solution to the resource problem. It allows researchers to freely contribute their model design ideas to open-source projects, knowing that the compute and data needed to realize those ideas will be provided by others—who are compensated not through direct payment, but through partial ownership of the project. In open-source AI, ownership can take many forms, but one of the most promising is ownership of the model itself—the approach proposed by Pluralis.
Pluralis calls this approach Protocol Models. Under this model, compute providers contribute processing power to train specific open-source models and, in return, receive partial ownership of the model’s future inference revenue. Because this ownership is tied directly to a specific model and derives value from its usage income, contributors are incentivized to train high-quality models and avoid submitting useless or fake training data (which would reduce expected future revenue). However, a key concern arises: if training requires sending model weights to compute providers, how does Pluralis ensure ownership security? The answer lies in using “model parallelism” to shard the model across different workers. A key property of neural networks is that even with access to only a tiny fraction of the model weights, a worker can still perform useful training tasks—while making it impossible to reconstruct the full weight set. Furthermore, since many different models are trained simultaneously on the Pluralis platform, each worker sees only fragments of many distinct models, making reconstruction of any complete model extremely difficult.
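To illustrate the intuition behind that sharding claim, here is a minimal sketch of pipeline-style model parallelism in Python with NumPy. Each “worker” holds only its own layer’s weights and exchanges only activations, so no single worker ever sees the full weight set. This is a generic illustration of the technique, not Pluralis’s actual protocol; the class name and dimensions are invented for the example.

```python
import numpy as np

class ShardWorker:
    """Holds one shard (a single layer) of the model and never sees the rest."""

    def __init__(self, in_dim: int, out_dim: int, seed: int):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.02, (in_dim, out_dim))  # this worker's private weights

    def forward(self, x: np.ndarray, activate: bool = True) -> np.ndarray:
        h = x @ self.W  # only activations cross worker boundaries, never weights
        return np.maximum(h, 0.0) if activate else h

# Three workers form a pipeline; none of them holds the complete weight set.
workers = [
    ShardWorker(64, 128, seed=0),
    ShardWorker(128, 128, seed=1),
    ShardWorker(128, 10, seed=2),
]

x = np.random.default_rng(42).normal(size=(8, 64))  # a toy input batch
for i, w in enumerate(workers):
    x = w.forward(x, activate=(i < len(workers) - 1))  # hand activations downstream

print(x.shape)  # (8, 10): a full forward pass without any single party owning the model
```

The same handoff structure applies during training: gradients flow back through the pipeline, but each worker only ever updates, and can only ever leak, its own shard.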
The core idea behind Protocol Models is that these models can be trained and used, but cannot be fully extracted from the protocol—unless someone expends more computational effort than it would take to train the model from scratch. This mechanism addresses a common criticism of open-source AI: that closed AI competitors could simply steal the fruits of open-source labor.
Why Crypto + Open Source = Better AI
At the beginning of this article, I highlighted the ethical and normative problems of closed AI by analyzing how big tech controls AI development. But in an era when many feel powerless before these companies, I worry such arguments may fail to resonate with most readers. So let me instead offer two practical reasons why crypto-enabled open-source AI can lead to genuinely better AI.
First, combining crypto with open-source AI mobilizes vastly more resources, enabling the next generation of foundational models. Research shows that increasing either compute or data improves model performance—which explains the relentless growth in model scale. Bitcoin demonstrates the immense computational potential of open-source software combined with crypto incentives. It has become the largest and most powerful computing network in the world, surpassing the cloud computing capacity of even the largest tech companies. What makes crypto unique is its ability to transform isolated competition into cooperative competition. By incentivizing resource providers to collaborate on shared challenges instead of duplicating efforts, crypto networks achieve unprecedented efficiency in resource utilization. Crypto-powered open-source AI can tap into global computing and data resources to build models far larger than any closed counterpart. For instance, Hyperbolic has already demonstrated the potential of this model—an open marketplace where anyone can rent GPU capacity at low cost, efficiently leveraging distributed compute.
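As a rough illustration of the claim that more compute and more data improve models, here is the parametric scaling law from Hoffmann et al. (2022) (the “Chinchilla” paper) as a small Python function. The constants are that paper’s approximate empirical fits for one model family; they are estimates, not universal values.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Approximate pretraining loss as a function of model size and data.

    L(N, D) = E + A / N**alpha + B / D**beta, with the approximate constants
    fitted by Hoffmann et al. (2022); treat them as illustrative estimates.
    """
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# More parameters or more tokens each push the loss toward its floor E:
print(chinchilla_loss(7e9, 1.4e12))    # ~7B params, ~1.4T tokens
print(chinchilla_loss(70e9, 1.4e12))   # 10x the parameters
print(chinchilla_loss(70e9, 14e12))    # 10x the parameters and 10x the data
```

The loss falls only as a power law in both terms, which is why each step up in model quality demands the kind of resource pooling described above.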
Second, combining crypto with open-source AI accelerates innovation. Once the resource problem is solved, machine learning research can return to its highly iterative, open, and innovative roots. Before the rise of large foundational LLMs, ML researchers routinely published their models and reproducible design blueprints. These models typically used open datasets and had modest computational requirements, allowing rapid experimentation and improvement. This open, iterative process led to breakthroughs in sequence modeling, including RNNs, LSTMs, and attention mechanisms, that ultimately enabled the Transformer architecture. But this changed with the release of GPT-3. OpenAI demonstrated that throwing massive compute and data at the problem could yield language-capable LLMs. This shifted the paradigm: resource barriers skyrocketed, pushing academia out of the frontier, while big tech firms stopped sharing architectures to preserve competitive advantage. The result? Innovation slowed at the very edge of AI progress.
Crypto-enabled open-source AI can reverse this trend. It empowers researchers to iterate again on state-of-the-art models—potentially discovering the “next Transformer.” This combination doesn’t just solve the resource problem—it reignites the engine of innovation in machine learning, opening up broader and more diverse pathways for the future of AI.