
Vitalik's latest article: What is the most productive intersection between cryptocurrency and AI?
This article will classify the different ways in which Crypto and AI may intersect, and explore the prospects and challenges of each category.
Author: Vitalik Buterin
Translation: Karen, Foresight News
Special thanks to the Worldcoin and Modulus Labs teams, Xinyuan Sun, Martin Koeppelmann, and Illia Polosukhin for feedback and discussions.
For many years, people have asked me a recurring question: "Where is the most productive intersection between cryptocurrency and AI?" This is a reasonable question: crypto and AI are the two major deep (software) technological trends of the past decade, so there must be some connection between them.
At first glance, it’s easy to identify synergies: decentralization in crypto could counterbalance centralization in AI, AI is opaque while crypto brings transparency, and AI needs data—blockchains excel at storing and tracking data. But over the years, when pressed for concrete applications, my answer has generally been disappointing: "Yes, there are some interesting use cases, but not many."
Over the past three years, however, with the rise of more powerful AI technologies such as modern LLMs (large language models), and stronger crypto technologies—not just blockchain scaling solutions, but also zero-knowledge proofs, fully homomorphic encryption, and (two-party and multi-party) secure computation—I’ve started to see a shift. There are indeed promising AI applications within the blockchain ecosystem, or by combining AI with cryptography, though we must proceed cautiously when applying AI. A particular challenge arises because in cryptography, open-sourcing is the only way to truly ensure security, whereas in AI, open models (or even their training data) significantly increase vulnerability to adversarial machine learning attacks. This article will categorize different ways that Crypto+AI might intersect and explore the potential and challenges of each category.

Summary of Crypto+AI intersections from the uETH blog article. But how can these synergies actually be realized in practical applications?
Four Key Intersections of Crypto+AI
AI is an extremely broad concept: you can think of AI as a set of algorithms that are not explicitly programmed, but rather created by stirring a large computational soup and applying some form of optimization pressure to guide the soup into generating algorithms with desired properties.
This description should not be dismissed lightly—it includes the very process that created us humans! But it also means AI algorithms share certain characteristics: they are incredibly capable, yet we face inherent limitations in understanding or interpreting their internal workings.
There are many ways to classify AI. For the purposes of this article discussing interactions between AI and blockchain (inspired by Virgil Griffith's essay "Ethereum is Game-Changing Technology, Literally"), I will categorize them as follows:
- AI as a player in the game (highest feasibility): mechanisms where AIs participate, but the ultimate source of incentives comes from a protocol with human inputs.
- AI as an interface to the game (high potential, but risky): AI helps users understand the crypto world around them and ensures their actions (e.g., signed messages and transactions) match their intentions, protecting them from being deceived.
- AI as the rules of the game (requires extreme caution): blockchains, DAOs, and similar mechanisms directly invoke AI, for example an "AI judge".
- AI as the objective of the game (long-term and intriguing): designing blockchains, DAOs, and similar mechanisms whose goal is to build and maintain an AI usable for other purposes, using crypto either to better incentivize training or to prevent the AI from leaking private data or being misused.
Let’s go through each one.
AI as Player in the Game
In fact, this category has existed for nearly a decade, at least since decentralized exchanges (DEXs) became widely used on-chain. Whenever exchanges exist, arbitrage opportunities arise—and bots can perform arbitrage far better than humans.
This use case has existed for a long time—even with much simpler AI than today—but it remains a genuine intersection of AI and crypto. Recently, we often see MEV (Maximal Extractable Value) arbitrage bots exploiting each other. Any blockchain application involving auctions or trading will attract arbitrage bots.
However, AI arbitrage bots are just the first example of a broader category—one I expect to soon encompass many other applications. Consider AIOmen, a demo of prediction markets with AI as participants:

Prediction markets have long been considered a holy grail of epistemic technology. Back in 2014, I was excited about using prediction markets as an input to governance ("futarchy"), and they were experimented with extensively in the recent elections. Yet in practice, prediction markets have not made much progress, for a few recurring reasons: the top participants are often irrational; people with correct beliefs are unwilling to spend time or money betting unless large sums are involved; markets are usually not active enough; and so on.
One response is to point to UX improvements in newer prediction markets like Polymarket and hope they succeed where earlier attempts failed. People bet hundreds of billions on sports—why won’t they invest enough in US election or LK99 predictions to attract serious players? But this argument must confront the fact that previous versions never reached such scale (at least compared to supporters’ dreams), suggesting a new ingredient is needed. Hence, another response highlights a specific feature now possible in the 2020s that wasn’t available in the 2010s: widespread participation by AI.
AI agents are willing—or able—to work for less than $1 per hour and possess encyclopedic knowledge. If that’s not enough, they can integrate real-time web search. If you create a market with a $50 liquidity subsidy, humans may ignore it, but thousands of AIs could swarm in and make their best guesses.
The incentive to perform well on any single question may be small, but the incentive to build AIs capable of good predictions could be worth millions. Note that you may not even need humans to adjudicate most questions—you could use a multi-round dispute system like Augur or Kleros, with AIs participating in early rounds. Humans would only need to respond in rare cases where disputes escalate and both sides have invested significant funds.
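To make the shape of such a mechanism concrete, here is a minimal, hypothetical sketch in Python; the class name, thresholds, and stake-weighted voting rule are illustrative assumptions, not any specific protocol. The point is the layering: many cheap AI verdicts first, a bonded dispute step second, and human jurors only in the rare escalated case.

```python
from dataclasses import dataclass, field

@dataclass
class MicroMarket:
    """Hypothetical micro prediction market: AI agents answer first, humans only on escalation."""
    question: str
    escalation_bond: float = 50.0               # stake required to pull in human jurors
    ai_verdicts: list = field(default_factory=list)
    disputes: list = field(default_factory=list)

    def submit_ai_verdict(self, agent_id: str, answer: bool, stake: float) -> None:
        # Round 1: many cheap AI agents stake small amounts on their answers.
        self.ai_verdicts.append((agent_id, answer, stake))

    def provisional_outcome(self) -> bool:
        # Stake-weighted majority of the AI verdicts.
        yes = sum(stake for _, ans, stake in self.ai_verdicts if ans)
        no = sum(stake for _, ans, stake in self.ai_verdicts if not ans)
        return yes >= no

    def dispute(self, challenger_id: str, bond: float) -> str:
        # Round 2: disputes reach human jurors only if the challenger posts a
        # bond large enough to pay for their attention.
        if bond < self.escalation_bond:
            return "rejected: bond too small, AI verdict stands"
        self.disputes.append((challenger_id, bond))
        return "escalated to human jury"

market = MicroMarket("Did the LK99 result replicate?")
market.submit_ai_verdict("agent-1", False, 0.8)
market.submit_ai_verdict("agent-2", False, 0.5)
market.submit_ai_verdict("agent-3", True, 0.3)
print(market.provisional_outcome())              # False: provisional AI consensus
print(market.dispute("human-42", bond=10.0))     # rejected: cheap disputes never reach humans
```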
This is a powerful primitive: once you can make "prediction markets" work at such micro scales, you can reuse this primitive for many other types of problems, such as:
- Is this social media post acceptable under [Terms of Service]?
- What will happen to stock X’s price? (e.g., see Numerai)
- Is this account messaging me really Elon Musk?
- Is this work submission on a task marketplace acceptable?
- Is the DApp at https://examplefinance.network a scam?
- Is 0x1b54....98c3 the address of the 'Casinu In' ERC20 token?
You might notice many of these ideas align with the concept of "info defense" I mentioned previously. Broadly speaking, the problem is: how do we help users distinguish true from false information and detect fraud without giving centralized authorities the power to decide right and wrong—and risk abuse of that power? At a micro level, the answer could be "AI".
But at a macro level, the question remains: who builds the AI? AI reflects its creation process and thus inevitably carries bias. We need a higher-level game to evaluate different AIs, allowing AIs themselves to participate as players.
Using AI in this way—where AIs participate in a mechanism ultimately rewarded or penalized (probabilistically) by humans via on-chain mechanisms—is, I believe, highly promising. Now is the right time to explore such use cases further, as blockchain scalability has finally advanced enough to make previously impractical "micro" operations feasible on-chain.
A related class of applications moves toward highly autonomous agents using blockchains to cooperate more effectively, whether through payments or using smart contracts to make credible commitments.
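As a toy illustration of the "credible commitment" half of that idea, here is a hypothetical escrow sketch; the class and field names are invented for this example and do not correspond to any real contract. The point is that an agent can verify funds are locked before doing work, and payment release is mechanical rather than a matter of trust.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Escrow:
    """Toy escrow between two autonomous agents (illustrative, not a real contract)."""
    buyer: str
    seller: str
    amount: float
    deliverable_hash: str        # hash of the agreed deliverable, fixed up front
    locked: bool = False
    released: bool = False

    def lock(self) -> None:
        # On a real chain this step would transfer tokens into the contract.
        self.locked = True

    def claim(self, deliverable: bytes) -> bool:
        # The seller is paid if and only if the submitted work matches the commitment.
        matches = hashlib.sha256(deliverable).hexdigest() == self.deliverable_hash
        if self.locked and not self.released and matches:
            self.released = True
            return True
        return False

work = b"translated dataset v1"
deal = Escrow("agent-A", "agent-B", 5.0, hashlib.sha256(work).hexdigest())
deal.lock()
print(deal.claim(work))   # True: payment releases automatically on delivery
```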
AI as Game Interface
One idea I proposed in "My techno-optimism" is that there's a market opportunity for user-facing software that protects users by explaining and identifying dangers in the online world they're navigating. MetaMask’s scam detection feature is already an existing example.

Another example is Rabby Wallet’s simulation feature, which shows users the expected outcome of the transaction they’re about to sign.

These tools could be enhanced by AI. AI could provide richer, more human-understandable explanations about what kind of DApp you're interacting with, the consequences of more complex operations you're signing, whether a specific token is legitimate (e.g., BITCOIN isn’t just a string of characters—it’s the name of a real cryptocurrency, not an ERC20 token, and its price is far above $0.045), and more. Some projects are already moving aggressively in this direction (e.g., LangChain wallets using AI as the primary interface). My personal view is that purely AI-based interfaces currently carry too much risk, as they introduce new kinds of errors, but combining AI with more traditional interfaces is highly viable.
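A minimal sketch of what such a hybrid interface could look like, assuming a hypothetical transaction simulator and language-model call (both stubbed out below; neither is a real wallet API): the deterministic simulation result is shown as-is, and the AI only adds a plain-language summary on top of it.

```python
def simulate_transaction(tx: dict) -> dict:
    # Placeholder: a real wallet would fork chain state and dry-run the call.
    return {"send": "500 USDC", "receive": "nothing",
            "new_approvals": [tx.get("spender", "unknown")]}

def ask_language_model(prompt: str) -> str:
    # Placeholder: a real wallet would call a language model here.
    return "You are sending 500 USDC and granting a token approval; double-check the recipient."

def explain_transaction(tx: dict) -> str:
    effects = simulate_transaction(tx)
    prompt = f"Explain to a non-expert what this transaction does and flag anything suspicious:\n{effects}"
    # The simulation output stays visible alongside the AI text, so the AI
    # augments the traditional interface rather than replacing it.
    return f"{effects}\n\nAI summary: {ask_language_model(prompt)}"

print(explain_transaction({"to": "0xabc...", "spender": "0xdef..."}))
```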
One specific risk deserves mention. I’ll elaborate on this below in the “AI as game rule” section, but the general issue is adversarial machine learning: if users have access to an AI assistant inside an open-source wallet, attackers also gain access to that same AI assistant, giving them unlimited opportunities to optimize their scams to bypass the wallet’s defenses. All modern AIs have vulnerabilities, and even limited access to the model during training makes finding these flaws relatively easy.
This is where the “AI participating in on-chain micro-markets” approach works better: each individual AI faces the same risks, but you intentionally create an open ecosystem with dozens of competing, continuously iterating and improving participants.
Moreover, each individual AI can remain closed—the system’s security comes from the openness of the game rules, not from exposing every participant’s internal workings.
Summary: AI can help users understand what’s happening in simple language, act as a real-time tutor, and protect users from mistakes—but caution is needed when facing malicious manipulators and scammers.
AI as Game Rule
Now we come to the application that excites many people, but which I believe is the riskiest and requires extreme caution: using AI as part of the game rules. This parallels mainstream political elites’ enthusiasm for "AI judges" (e.g., see articles on the "World Government Summit" website), and similar aspirations exist in blockchain applications. If a blockchain-based smart contract or DAO needs to make subjective decisions, can AI simply become part of the contract or DAO to help enforce these rules?
This is where adversarial machine learning becomes an extremely difficult challenge. Here’s a simple argument:
- If a crucial AI model in a mechanism is closed-source, you cannot verify its inner workings, so it is no better than a centralized application.
- If the AI model is open-source, attackers can download and simulate it locally, design highly optimized attacks to fool the model, and then replay them on the live network.

Example of adversarial machine learning. Source: researchgate.net
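To see why the open-source case is so fragile, here is a minimal sketch with a toy logistic-regression "model" (the weights, dimensions, and step size are arbitrary illustrations): with the weights in hand, an attacker simply steps the input along the gradient until the classifier's verdict flips, the same idea behind FGSM-style attacks on real neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1               # public model weights (the "open-source" case)

def score(x: np.ndarray) -> float:
    # Probability that the input is "legitimate", per the toy model.
    return float(1 / (1 + np.exp(-(x @ w + b))))

x = rng.normal(size=8)                       # the attacker's original scam payload
print(f"original score:    {score(x):.3f}")

# The gradient of the score with respect to the input points along w, so
# stepping along sign(w) pushes the score up. An attacker tunes their payload
# the same way until the classifier waves it through.
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)
print(f"adversarial score: {score(x_adv):.3f}")
```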
Now, readers familiar with this blog (or natives of the crypto world) may already be jumping ahead and thinking: but wait! We have fancy zero-knowledge proofs and other very cool cryptographic tools. Surely we can perform some cryptographic magic: hide the model’s internals so attackers cannot optimize attacks against it, while still proving that the model executes correctly and was built through a reasonable training process on a reasonable dataset.
Usually, this is exactly the kind of thinking I advocate in this blog and elsewhere. But when it comes to AI computations, there are two major objections:
- Cryptographic overhead: performing a task inside a SNARK (or MPC, etc.) is vastly less efficient than running it in plaintext. Given that AI itself is already computationally intensive, is executing AI computations inside cryptographic black boxes even computationally feasible?
- Black-box adversarial machine learning attacks: there are ways to optimize attacks against AI models even without knowing the model’s internals. And if the model is hidden too thoroughly, it may become too easy for whoever chooses the training data to corrupt the model with poisoning attacks.
Both are complex rabbit holes requiring deeper exploration.
Cryptographic Overhead
Cryptographic tools, especially general-purpose ones like ZK-SNARKs and MPC, are expensive. Verifying an Ethereum block client-side takes hundreds of milliseconds, but generating a ZK-SNARK to prove its correctness can take hours. Other cryptographic tools (like MPC) may incur even greater overhead.
AI computation itself is already very costly: the most powerful language models generate text only slightly faster than humans read, and training them typically costs millions. The quality difference between top-tier models and cheaper alternatives (in training cost or parameter count) is substantial. At first glance, this seems like strong reason to doubt the entire project of wrapping AI in cryptography for added guarantees.
Fortunately, AI is a very special type of computation that allows various optimizations unavailable to more "unstructured" computations like ZK-EVMs. Let’s look at the basic structure of AI models:

Typically, AI models consist primarily of a series of matrix multiplications interspersed with element-wise nonlinear operations like ReLU (y = max(x, 0)). Asymptotically, matrix multiplication dominates the workload. This is convenient for cryptography because many cryptographic schemes can perform linear operations almost "for free" (especially when encrypting the model rather than the input during matrix multiplication).
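A rough sketch of why this matters, with an arbitrarily chosen layer size: in a layer computing y = ReLU(Wx), the matrix multiplication costs about 2mn operations while the element-wise nonlinearity costs only m, so the "cheap to encrypt" linear part accounts for nearly all of the work.

```python
import numpy as np

m, n = 1024, 1024                       # illustrative layer dimensions
W, x = np.random.randn(m, n), np.random.randn(n)

matmul_ops = 2 * m * n                  # one multiply and one add per weight
relu_ops = m                            # one comparison per output element
y = np.maximum(W @ x, 0.0)              # the layer itself: matmul, then element-wise ReLU

print(matmul_ops / relu_ops)            # 2048.0: the nonlinearity is a tiny sliver of the work
```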
If you're a cryptographer, you may already know about a similar phenomenon in homomorphic encryption: performing addition on encrypted ciphertexts is easy, but multiplication is hard—until 2009, we lacked a method for unbounded-depth multiplication.
For ZK-SNARKs, protocols like the 2013 version achieve less than 4x overhead for proving matrix multiplication. Unfortunately, the overhead for nonlinear layers remains high—with best practical implementations showing around 200x overhead.
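As a back-of-the-envelope illustration (the 98%/2% split between linear and nonlinear work is an assumption for the sake of the example, not a measurement): even when only a small fraction of the work sits in nonlinear layers, their roughly 200x cost can dominate the blended proving overhead.

```python
matmul_fraction, nonlinear_fraction = 0.98, 0.02    # assumed work split
matmul_overhead, nonlinear_overhead = 4, 200        # rough per-operation proving overheads

blended = matmul_fraction * matmul_overhead + nonlinear_fraction * nonlinear_overhead
print(blended)   # 7.92x overall, with the nonlinear layers contributing more than half
```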
However, further research offers hope for drastically reducing this overhead. See Ryan Cao’s presentation introducing a recent GKR-based approach, and my own simplified explanation of GKR’s core components.
But for many applications, we don’t just want to prove that AI outputs were computed correctly; we also want to hide the model. There are naive approaches: split up the model so that a different set of servers stores each layer, and hope that a server leaking some layers does not reveal too much. But there are also surprisingly effective specialized multi-party computation techniques.
In both cases, the lesson is the same: the bulk of AI computation consists of matrix multiplication, and highly efficient ZK-SNARKs, MPCs (and even FHE) can be designed specifically for this, making the total overhead of putting AI into cryptographic frameworks surprisingly low. Usually, nonlinear layers remain the biggest bottleneck despite their smaller size. Perhaps new techniques like lookup arguments can help.
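Here is a small sketch of the secret-sharing idea behind why hiding the linear part is cheap (a two-server toy, ignoring how the input and the nonlinear layers would be protected in a real protocol): each server holds a random-looking share of the weight matrix, computes its share of Wx locally, and the shares sum to the true result.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))        # the private model weights
x = rng.normal(size=3)             # the input

W1 = rng.normal(size=W.shape)      # random mask held by server 1
W2 = W - W1                        # complementary share held by server 2; each share alone looks random

partial1 = W1 @ x                  # computed independently on server 1
partial2 = W2 @ x                  # computed independently on server 2

print(np.allclose(partial1 + partial2, W @ x))   # True: linearity lets the shares reconstruct exactly
```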
Black-Box Adversarial Machine Learning
Now let’s discuss the other major concern: attack types still possible even when the model content is private and accessible only via "API". Quoting a 2016 paper:
Many machine learning models are vulnerable to adversarial examples: specially crafted inputs that cause the model to produce incorrect outputs. Adversarial examples that affect one model often transfer to another, even if the models have different architectures or were trained on different datasets—as long as they perform the same task. Thus, attackers can train substitute models, craft adversarial examples against them, and transfer these to victim models with little knowledge of the victims.
Potentially, even with very limited or no access to the target model, attacks can be created merely from training data. As of 2023, such attacks remain a significant problem.
To effectively counter such black-box attacks, we need two things:
- Strictly limit who or what can query the model and how many queries are allowed. A black box with unlimited API access is insecure; one with highly restricted access may be secure.
- Hide the training data, while ensuring that the process used to create the training data is not corrupted.
On the first front, perhaps the project doing the most is Worldcoin. I’ve analyzed its early versions (and other protocols) in detail. Worldcoin extensively uses AI models at the protocol level to (i) convert iris scans into short "iris codes" that are easy to compare for similarity, and (ii) verify that scanned objects are genuinely human.
Worldcoin’s main defense is that no one can freely call the AI model: instead, it uses trusted hardware to ensure the model only accepts inputs digitally signed by orb cameras.
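A toy sketch of that gating pattern, using a symmetric MAC as a stand-in for the orb's hardware signature (real deployments would use asymmetric signatures from secure hardware, and the names and budget numbers here are invented): queries only reach the hidden model if they carry a valid device tag and the device still has query budget left.

```python
import hashlib
import hmac

DEVICE_KEYS = {"orb-001": b"key-provisioned-into-trusted-hardware"}   # illustrative
QUERY_BUDGET = {"orb-001": 1000}

def run_hidden_model(image_bytes: bytes) -> str:
    return "iris-code-placeholder"          # stand-in for the private model behind the gate

def authenticate_and_query(device_id: str, image_bytes: bytes, tag: bytes) -> str:
    key = DEVICE_KEYS.get(device_id)
    if key is None or QUERY_BUDGET.get(device_id, 0) <= 0:
        raise PermissionError("unknown device or query budget exhausted")
    expected = hmac.new(key, image_bytes, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise PermissionError("input was not produced by trusted capture hardware")
    QUERY_BUDGET[device_id] -= 1            # rate-limit how often the model can be probed
    return run_hidden_model(image_bytes)

scan = b"raw-iris-scan-bytes"
tag = hmac.new(DEVICE_KEYS["orb-001"], scan, hashlib.sha256).digest()
print(authenticate_and_query("orb-001", scan, tag))
```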
This approach isn’t guaranteed to work: physical patches or facial jewelry can adversarially attack biometric AI systems.

Wearing extra items on the forehead can evade detection or even impersonate others. Source: https://arxiv.org/pdf/2109.09320.pdf
But our hope is that combining all defenses—including hiding the AI model itself, strictly limiting query counts, and requiring authenticated queries—will make adversarial attacks extremely difficult, enhancing overall system security.
This leads to the second issue: how do we hide training data? This is where "DAO-managed AI" might actually make sense: we could create an on-chain DAO to manage who can submit training data (and required attestations about the data), who can query, and how many queries are allowed, using cryptographic techniques like MPC to encrypt the entire AI creation and operation pipeline—from individual user training inputs to final query outputs. Such a DAO could also fulfill the popular goal of compensating data contributors.
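As a sketch of the kind of policy such a DAO could govern (field names and numbers are invented for illustration; actual enforcement would live in smart contracts and the MPC pipeline, not in Python): who may contribute data and under which attestations, who may query, how often, and how contributors are compensated.

```python
from dataclasses import dataclass, field

@dataclass
class ModelGovernancePolicy:
    """Illustrative parameters a DAO might vote on for a jointly managed model."""
    approved_contributors: set = field(default_factory=set)
    required_attestations: set = field(default_factory=lambda: {"proof-of-personhood"})
    approved_queriers: set = field(default_factory=set)
    daily_query_limit: int = 100
    reward_per_training_sample: float = 0.01     # compensation for data contributors

    def may_contribute(self, who: str, attestations: set) -> bool:
        # Data is accepted only from approved contributors carrying the required attestations.
        return who in self.approved_contributors and self.required_attestations <= attestations

    def may_query(self, who: str, queries_today: int) -> bool:
        # Queries are both permissioned and rate-limited.
        return who in self.approved_queriers and queries_today < self.daily_query_limit
```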

That said, this plan is extremely ambitious and many aspects could prove impractical:
-
Cryptographic overhead for such a fully black-box architecture may still be too high to compete with traditional closed "trust me" approaches.
-
There may be no effective way to decentralize the training data submission process and prevent poisoning attacks.
-
MPC systems could compromise security or privacy due to collusion among participants—after all, this has repeatedly happened with cross-chain bridges.
One reason I didn’t open this section with a warning like "don’t build AI judges, that’s dystopian" is that our society already heavily relies on unaccountable centralized AI judges: algorithms deciding which social media posts and political views get promoted, demoted, or censored.
I do think expanding this trend further at this stage is quite unwise, but I don’t believe the blockchain community experimenting more with AI would be a major contributor to worsening the situation.
In fact, crypto offers some very basic, low-risk ways to improve even existing centralized systems—I’m quite confident about this. One simple technique is delayed-release verifiable AI: when a social media platform uses AI-based post ranking, it could publish a ZK-SNARK proving the hash of the model used to generate rankings. The platform commits to publicly releasing its AI model after a delay (e.g., one year).
Once released, users can verify the hash matches the claimed model, and the community can test the model for fairness. The delay ensures the model is outdated by the time it’s published.
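The commitment half of this scheme is simple enough to sketch directly (the SNARK proving that each ranking actually came from the committed model is omitted, and the serialized weights below are a placeholder): publish the hash today, release the weights after the delay, and anyone can check that they match.

```python
import hashlib

def commit_to_model(model_bytes: bytes) -> str:
    # Published immediately, alongside proofs that rankings used this model.
    return hashlib.sha256(model_bytes).hexdigest()

def verify_release(model_bytes: bytes, published_hash: str) -> bool:
    # Run by anyone after the delay, once the model weights are made public.
    return hashlib.sha256(model_bytes).hexdigest() == published_hash

model_v1 = b"...serialized ranking model weights..."
commitment = commit_to_model(model_v1)        # year 0: only the hash is public
print(verify_release(model_v1, commitment))   # year 1: True, the released model matches the commitment
```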
Thus, compared to the centralized world, the question isn’t whether we can do better, but how much better we can do. However, for decentralized systems, caution is essential: if someone builds a prediction market or stablecoin using an AI oracle, and someone discovers the oracle is exploitable, a huge amount of money could vanish instantly.
AI as Game Objective
If the above techniques for creating scalable, decentralized, private AI (black boxes whose contents are unknown to anyone) actually work, they could also be used to build AIs with utility beyond blockchain. The NEAR Protocol team is making this a central goal of their ongoing work.
There are two reasons for this:
- If the training and inference processes run on a combination of blockchains and multi-party computation, producing "trusted black-box AIs", many applications where users worry about bias or manipulation could benefit. Many people have expressed a desire for democratic governance of the AI systems we depend on; cryptographic and blockchain-based techniques could offer a path toward that.
- From an AI safety perspective, this would be a way to create decentralized AI that has a natural emergency stop mechanism and that can restrict queries seeking to use the AI for malicious purposes.
Notably, "using crypto incentives to encourage better AI development" can be pursued without fully diving into the cryptographic rabbit hole—for example, approaches like BitTensor fall into this category.
Conclusion
As blockchain and AI continue evolving, use cases at their intersection are growing, with some being more meaningful and robust than others.
Overall, applications where the underlying mechanism remains largely unchanged but individual participants are replaced by AIs—effectively operating mechanisms at micro levels—are the most immediately promising and easiest to implement.
Most challenging are applications attempting to create "singletons"—a single decentralized, trusted AI relied upon by various applications for some purpose—using blockchain and cryptographic techniques.
These applications hold potential both functionally and for improving AI safety, while avoiding centralization risks.
But the underlying assumptions could fail in many ways. Therefore, deploying such applications—especially in high-value, high-risk environments—requires great caution.
I look forward to seeing more constructive experiments with AI applications across all these domains, so we can learn which ones truly scale in practice.