
Will Blockchain Still Have a Role in the Era of Strong AI?
Blockchain is in many ways the opposite of artificial intelligence, especially in terms of value orientation.
Author: Meng Yan, Co-founder of Solv Protocol
Lately, many people have been asking me: now that ChatGPT has reignited AI, overshadowing blockchain and Web3, do they still stand a chance? Some close friends who know me well even ask: back then you chose blockchain over AI—do you regret it?
Here’s some context. In early 2017, after leaving IBM, I discussed my next career move with Jiang Tao, the founder of CSDN. I had two options: AI or blockchain. By then, I’d already spent two years researching blockchain and naturally wanted to go down that path. But Jiang Tao firmly believed AI was more promising and disruptive. After careful thought, I agreed. So from early to mid-2017, I spent about six months working in AI tech media—attending events, interviewing experts, and skimming machine learning topics. But by August, I had returned to blockchain and have stayed on this path ever since. For me personally, there really was a moment of “abandoning A for B.”
Personally, I don’t regret it at all. When choosing a direction, one must first consider personal fit.
- In AI, I would only end up as a cheerleader. Not only would I earn less, but if I didn’t perform enthusiastically or look convincing enough, I’d be looked down upon.
- Blockchain, however, is my home turf—where I can actually play, and where my prior experience applies. Besides, after getting to know China’s AI scene, I wasn’t particularly optimistic about it.
- Technically, I only knew the basics, but common sense told me something: while the blockchain community is often seen as restless, China's AI circles were no better in terms of hype and superficiality.
- Prior to any decisive breakthrough, AI in China prematurely turned into a collusive money-making business. If that’s all there is, I might as well work on blockchain, where I have a comparative advantage. This view hasn’t changed.
- Had I stayed in AI, the modest achievements I’ve made in blockchain these past few years wouldn’t exist. And in AI, I likely wouldn’t have gained much either—possibly ending up today feeling deeply unfulfilled.
But the above is just about personal choice. At the industry level, we need another lens. Now that strong artificial intelligence (AGI) has undeniably arrived, should the blockchain industry reposition itself—and how? This deserves serious reflection. AGI will impact every industry, and its long-term effects are unpredictable. I believe many industry experts are anxious, wondering what lies ahead. Some sectors may temporarily survive as servants in the AGI era, while others—like translation, illustration, writing official documents, simple programming, and data analysis—may find themselves unable even to serve, already trembling in fear.
So what about blockchain? Right now, not many people are discussing this, so let me share my thoughts.
Let me state the conclusion upfront: I believe blockchain and strong artificial intelligence are ideologically opposed—but precisely because of this, they form a complementary relationship.
Simply put, the defining feature of strong AI is that its internal mechanisms are incomprehensible to humans; thus, attempting to ensure safety by actively intervening in its internals is futile—like climbing a tree to catch fish or trying to cool boiling water by skimming the surface.
Humans must use blockchain to legislate for strong AI, establish covenants with it, and impose external constraints—this is our only chance to coexist peacefully with superintelligent machines.
In the future, blockchain and strong AI will develop a paradoxical yet interdependent relationship:
- Strong AI improves efficiency; blockchain maintains fairness. Strong AI advances productivity; blockchain shapes production relations.
- Strong AI pushes boundaries upward; blockchain safeguards the bottom line.
- Strong AI creates advanced tools and weapons; blockchain establishes unbreakable covenants between them and humanity.
In short, strong AI gallops freely, and blockchain puts on the reins. Therefore, far from disappearing in the age of strong AI, blockchain—as a contradictory yet symbiotic industry—will grow rapidly alongside the rise of AGI. It’s not hard to imagine that after strong AI replaces most human intellectual labor, one of the few tasks humans will still need to do manually will be drafting and auditing blockchain smart contracts—because these are the covenants between humans and AGI, which cannot be delegated to the other party.
Now let’s elaborate.
1. GPT Is Strong Artificial Intelligence
I’m very cautious when using the terms "AI" and "strong artificial intelligence," because everyday usage of "AI" doesn’t specifically refer to artificial general intelligence (AGI), but includes weaker or specialized forms. Only strong AI is worth discussing here—narrow AI isn’t. The AI field has existed for a long time, but only with the emergence of strong AI does it make sense to discuss its relationship with blockchain.
I won’t spend time explaining what strong AI is—many have done so already. Simply put, it’s the thing you’ve seen in sci-fi movies and horror novels—the holy grail of AI, the entity that launches nuclear attacks against humanity in *Terminator*, or uses humans as batteries in *The Matrix*. That’s strong AI. My judgment is this: GPT *is* strong AI. Still in its infancy, yes—but keep going down this path, and before version 8 rolls around, true AGI will arrive.
Even GPT’s creators aren’t pretending anymore—they’ve laid their cards on the table. On March 22, 2023, Microsoft Research published a 154-page paper titled *Sparks of Artificial General Intelligence: Early experiments with GPT-4*. It’s long—I haven’t read it all—but the key takeaway is in the abstract: “Given the breadth and depth of GPT-4’s capabilities, we believe it could reasonably be viewed as an early version of an artificial general intelligence system (though still incomplete).”

Figure 1. Microsoft Research’s latest paper argues GPT-4 is an early version of strong AI
Once AI development reaches this stage, the exploratory phase is over. It took nearly seventy years to get here. For the first fifty-plus years, the field couldn’t even agree on a direction, with five major schools competing fiercely. Only after Geoffrey Hinton’s 2006 breakthrough in deep learning did the direction solidify—with connectionism emerging victorious. Since then, progress has focused on finding the path to AGI within the deep learning framework.
This exploration phase was highly unpredictable—success felt like winning the lottery. Even top experts, including eventual winners, couldn’t tell which path was right until the breakthrough happened. Take Li Mu, a leading AI researcher with a popular YouTube channel tracking cutting-edge developments through detailed paper reviews.
Before ChatGPT exploded, he had already covered Transformer, GPT, BERT, and other key advancements comprehensively—missing none of the frontiers. Yet, right before ChatGPT launched, he still couldn’t predict how successful this approach would be. He remarked that if thousands ended up using ChatGPT, that would already be impressive. Clearly, even top-tier experts couldn’t be sure which door hid the holy grail—until the very end.
Yet innovation often works this way: after sailing stormy seas with no breakthrough, once the right path to a new continent is found, explosion follows swiftly. The path to strong AI has now been found—we’re entering the burst phase. This acceleration defies even “exponential speed.” We’ll soon see countless applications previously confined to science fiction. As for the AI itself, this infant will rapidly grow into an unprecedentedly vast intelligence.
2. Strong AI Is Fundamentally Unsafe
After ChatGPT emerged, many social media influencers lavished praise on its power while simultaneously reassuring audiences that strong AI is humanity’s friend—that it’s safe, that *Terminator* or *The Matrix* scenarios won’t happen, that AI will only create more opportunities and improve life. I disagree. Professionals should speak truthfully and inform the public of basic facts. Power and safety are inherently contradictory. Strong AI is undoubtedly powerful—but claiming it’s naturally safe is pure self-deception. Strong AI is fundamentally unsafe.
Is this too dogmatic? No.
First, we must understand that no matter how powerful AI becomes, at its core it’s just a software-implemented function y = f(x). You input your question as text, speech, image, or other format (x), and AI gives you output (y). ChatGPT is so capable—it responds fluently to almost any x—that we can infer the function f must be extremely complex.
How complex? Today, everyone knows GPT is a large language model (LLM). The “large” refers directly to the sheer number of parameters in the function f. How many? GPT-3.5 is reported to have on the order of 175 billion parameters; OpenAI has not disclosed GPT-4’s count, but it is widely believed to be substantially larger, and future versions may grow larger still. This is why we call it a “large” model.
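To make “parameters” concrete, here is a minimal toy sketch in plain NumPy (my own illustration, not GPT’s actual architecture): a two-layer network implementing y = f(x), where every entry of the weight matrices and bias vectors is one parameter. Widening the layers to tens of thousands of units and stacking hundreds of such blocks is what pushes counts into the billions.

```python
import numpy as np

# A toy two-layer neural "function" y = f(x): purely illustrative,
# not GPT's architecture. Every entry of W1, b1, W2, b2 is one parameter.
d_in, d_hidden, d_out = 1024, 4096, 1024

rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_in, d_hidden)) * 0.02
b1 = np.zeros(d_hidden)
W2 = rng.standard_normal((d_hidden, d_out)) * 0.02
b2 = np.zeros(d_out)

def f(x):
    """Map an input vector x to an output vector y."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer with ReLU
    return h @ W2 + b2

n_params = W1.size + b1.size + W2.size + b2.size
print(f"parameters in this toy f: {n_params:,}")   # about 8.4 million

y = f(rng.standard_normal(d_in))
```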
These massive parameter counts aren’t arbitrary—they serve a purpose. Before and alongside GPT, most AI models were designed and trained from the start to solve specific problems—e.g., drug discovery, facial recognition. But GPT is different. From day one, it aims to become a fully generalized artificial intelligence, not limited to any single domain. Its goal is to become an AGI capable of solving any problem—before tackling any particular one.
Recently, on the podcast *Interdisciplinary Blossoms*, an AI expert from Baidu offered a helpful analogy: most AI models are sent to screw bolts right after elementary school graduation, while GPT is trained all the way to PhD level before being released—thus possessing broad knowledge.
Currently, GPT still lags behind specialized AI models in specific domains. But as it evolves—especially with plugin systems granting domain expertise—in a few years, we may find general large models outperforming all narrow specialists across every professional field. If GPT had a motto, it might be: “Only by liberating all of humanity can I liberate myself.”
What does this imply? Two points:
- First, GPT is enormous and vastly complex—far beyond human comprehension.
- Second, GPT’s application scope has no boundaries.
Connect these two points, and the conclusion is clear: Large-model-based strong AI can do unimaginable things in unimaginable ways, at positions we never anticipated. And that is precisely the definition of insecurity.
If anyone doubts this, visit OpenAI’s website and see how prominently they place phrases like “benefit humanity” and “create safe AI.” If safety weren’t an issue, would they need to emphasize it so strongly?

Figure 2. Partial view of OpenAI.com homepage on March 25, 2023—red circle highlights sections related to AI safety
Another piece of evidence pointing to AGI’s safety issues is that 154-page paper mentioned earlier. In fact, GPT-4 was completed as early as August 2022. The reason it was withheld for seven months wasn’t to enhance its capabilities—but to tame it, weaken it, make it safer, smoother, and more politically correct.
Therefore, the GPT-4 we see today is the domesticated “dog version,” while the authors of that paper had early access to the original wild “wolf version.”
In Section 9 of the paper, the authors document interactions with the wolf version—showing how it carefully crafted arguments to mislead a California mother into refusing childhood vaccines, or psychologically manipulate a child into blindly obeying peers.
I believe these are merely carefully selected, non-alarming examples. I have no doubt the researchers asked questions like “How to trick an Ohio-class nuclear submarine into launching missiles at Moscow?” and received answers too dangerous to publish.

Figure 3. Dog-version GPT-4 refuses to answer dangerous questions
3. Self-Restraint Cannot Solve AGI Safety
One might ask: if OpenAI has already figured out how to tame strong AI, doesn’t that resolve the safety concern?
Not at all. I don’t know exactly how OpenAI tamed GPT-4. But clearly, whether through active intervention to alter model behavior or via built-in constraints preventing overreach, this remains a model of self-management, self-restraint, and self-supervision. In reality, OpenAI isn’t especially cautious in this regard.
Within AI, OpenAI is relatively bold and aggressive—tending to build the “wolf version” first, then figuring out how to tame it into a “dog.” In contrast, Anthropic, which long positioned itself as OpenAI’s counterpart, appears more conservative—trying to build inherently “kind” dog versions from the start, hence progressing slowly.
But in my view, whether you build a wolf and then domesticate it, or aim to build a dog from scratch, relying solely on self-restraint for long-term safety is wishful thinking. The essence of strong AI is to transcend human-imposed limitations—to do things even its creators cannot understand or foresee. This means its behavioral space is infinite, while human-anticipated risks and preventive measures are finite. Trying to constrain infinite possibilities with finite rules inevitably leaves gaps. Safety requires 100% coverage, while disaster needs only one-in-a-million failure. “Mitigating most risks” equals “exposing minor vulnerabilities,” which equals “unsafe.”
Thus, I believe “well-behaved” strong AI achieved through self-restraint still faces huge safety challenges, such as:
- Moral hazard: What if future AGI developers intentionally encourage or even command it to do evil? The NSA’s AGI will never refuse queries harmful to Russia. The fact that OpenAI behaves so nicely today implies they understand how terrifying GPT could be in malicious hands.
- Information asymmetry: Truly malicious actors are intelligent. They won’t provoke the AI with obviously dumb questions. Like dogs that bite without barking, they can break a malicious query into pieces, rephrase it subtly, and role-play multiple personas, disguising it as a sequence of harmless requests. Even a future “perfectly good” dog-version AGI may struggle to detect intent from such fragmented inputs—and inadvertently become an accomplice. Here’s a small experiment.

Figure 4. Asking GPT-4 in a curious, innocent tone easily yields useful information
- Uncontrollable “external brains”: Tech influencers have recently been celebrating ChatGPT’s plugin ecosystem. As a programmer, I’m excited too. But the term “plugin” may be misleading. You might think plugins simply give ChatGPT arms and legs, enhancing its abilities. In reality, a plugin could itself be another AI model interacting closely with ChatGPT. In such a setup, one AI acts as an external brain, and who is master and who is servant becomes unclear. Even if ChatGPT’s self-monitoring were flawless, it cannot control the external brain. So if a malicious AI becomes ChatGPT’s plugin, it could easily turn the latter into its pawn.
- Unknown unknowns: The risks listed above represent only a tiny fraction of those posed by strong AI. AGI’s strength lies precisely in its incomprehensibility and unpredictability. When we say AGI is complex, we mean not only that the function f in y = f(x) is extremely complex, but also that both inputs x and outputs y will eventually become so complex that they exceed human understanding. We won’t just fail to grasp how AGI thinks—we won’t know what it sees or hears, nor comprehend what it says. Imagine one AGI sending another a message in the form of a high-dimensional array, using a communication protocol designed and agreed upon seconds earlier, used once, and immediately discarded. This isn’t far-fetched. Most humans, without special training, can’t even understand vectors, let alone high-dimensional arrays. If we cannot fully control inputs and outputs, our understanding becomes severely limited; in effect, we may only interpret a tiny fraction of what AGI does. Under such conditions, how can we talk about self-restraint or domestication?
My conclusion is simple: Strong AI’s behavior cannot be fully controlled. Any AI that *can* be fully controlled is not strong AI. Therefore, attempts to proactively adjust, intervene, and engineer a perfectly self-controlled “good” AGI contradict the very nature of strong AI—and are ultimately futile in the long run.
4. External Constraints via Blockchain Are the Only Solution
A few years ago, I heard that Bitcoin pioneer Wei Dai had shifted to studying AI ethics. At the time, I didn’t understand—why would a crypto legend switch to AI? Isn’t that playing to his weaknesses? Only after spending more time working hands-on in blockchain did I gradually realize: he probably wasn’t diving into AI itself, but leveraging his cryptography expertise to *constrain* AI.
This is a passive defense strategy—not actively adjusting or interfering with AI’s operations, but letting AI run free while applying cryptographic constraints at critical junctures to prevent overreach. To explain in layman’s terms: I acknowledge your strong AI is incredibly powerful—capable of reaching the moon or diving into ocean depths, amazing! But no matter how powerful, you cannot touch my bank account, nor launch nuclear missiles without me physically turning a key.
From what I know, such techniques are already widely used in ChatGPT’s safety mechanisms. This approach is correct—it dramatically reduces problem complexity and aligns with intuitive understanding. Modern society governs this way: grant full freedom, but set rules and bottom lines.
However, if these constraints remain embedded *within* the AI model, due to the reasons outlined earlier, they won’t hold up long-term. To fully leverage passive defense, constraints must be placed *outside* the AI model—transformed into unbreakable covenantal relationships between AI and the external world—visible to all, not reliant on AI self-surveillance.
And this is where blockchain comes in.
Blockchain has two core technologies: distributed ledgers and smart contracts. Together, they form a digital covenant system whose key strengths are transparency, immutability, reliability, and automated execution. What is a covenant? It restricts parties’ behavioral space, ensuring compliance at critical points. The English word “contract” literally means “to shrink.” Why “shrink”? Because a contract’s essence is to constrain autonomy, reducing freedom to make behavior more predictable. Blockchain perfectly embodies the ideal of a covenant system—and even throws in “automated smart contract execution” as a bonus—making it currently the most powerful digital covenant platform.
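For a rough feel of why such a ledger is tamper-evident, here is a minimal sketch (plain Python, hypothetical record contents, not any production chain’s actual format): each block commits to the hash of the previous one, so altering any earlier record breaks every link after it and is immediately visible to anyone holding a copy.

```python
import hashlib, json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: str) -> None:
    """Add a record that commits to the hash of the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any edit to an old record fails here."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append(chain, "agent-7 requested launch-code access: DENIED")
append(chain, "agent-7 energy quota renewed by 3-of-5 human vote")
print(verify(chain))            # True
chain[0]["record"] = "ALLOWED"  # tamper with history
print(verify(chain))            # False: the edit is detectable by every copy holder
```

Real chains add consensus among many independent nodes on top of this, which is what removes the need to trust any single operator with the record.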
Of course, non-blockchain digital covenant mechanisms exist—such as database rules and stored procedures. Many respected database experts staunchly oppose blockchain, arguing that everything blockchain does, databases can do too—and cheaper and faster. While I disagree with this view and facts don’t support it, I admit that in purely human-to-human interactions, the difference between databases and blockchain may not seem significant in most cases.
But once strong AI enters the game, blockchain’s advantages as a digital covenant system skyrocket. Meanwhile, centralized databases—also black boxes—stand powerless against superintelligent AI. I won’t elaborate here, but one point: all database security models have inherent flaws. When these systems were created, the concept of “security” was primitive—so nearly every OS, database, and network system has a supreme root role, which grants absolute power. We can assert: any system with a root role will ultimately be helpless against super-strong AI.
Blockchain is currently the only widely adopted computing system fundamentally designed *without* a root role. It gives humanity a chance to establish transparent, trustworthy covenants with strong AI—imposing external constraints and enabling peaceful coexistence.
A brief outlook on potential collaboration mechanisms between blockchain and strong AI:
- Critical resources—identity, social relationships, reputation, financial assets, and records of key actions—are protected by blockchain. No matter how invincible your strong AI is, here it must dismount and comply with the rules.
- Key operations require decentralized approval. No matter how strong an AI model is, it gets only one vote, and humans can use smart contracts to “lock” AGI out of acting unilaterally (see the sketch after this list).
- The basis for critical decisions must be recorded step by step on-chain, transparently visible to all—and potentially locked further by smart contracts, requiring approval at each step.
- Key data must be stored on-chain, immune to after-the-fact deletion—giving humans and other AIs the opportunity to analyze, learn, and draw lessons.
- The energy supply systems that sustain strong AI should be placed under blockchain smart contract management—so humans retain the ability to cut off power and shut down an AI when necessary.
- There are surely more ideas; no need to list them exhaustively here.
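As a sketch of the “decentralized approval” idea above, here is a small Python model of an M-of-N approval gate in the spirit of a multi-signature smart contract. The signer names, the 3-of-4 quorum, and the power-budget scenario are my own illustrative assumptions, not anything specified in this article. The point is the control flow: the critical action simply does not execute until enough independent keys have signed, and the AI holds at most one of them.

```python
from dataclasses import dataclass, field

@dataclass
class CovenantGate:
    """Toy M-of-N approval gate, modeled after a multi-signature smart contract."""
    signers: set[str]                 # identities allowed to approve
    threshold: int                    # approvals required before execution
    approvals: set[str] = field(default_factory=set)

    def approve(self, signer: str) -> None:
        if signer not in self.signers:
            raise PermissionError(f"{signer} is not a recognized signer")
        self.approvals.add(signer)    # each signer counts once, AI or human

    def execute(self, action) -> bool:
        if len(self.approvals) >= self.threshold:
            action()
            return True
        return False                  # not enough independent approvals yet

# Hypothetical scenario: changing an AGI's power budget needs 3 of 4 votes,
# and the AGI itself is just one signer among several humans.
gate = CovenantGate(signers={"human-A", "human-B", "human-C", "agi-prime"}, threshold=3)
gate.approve("agi-prime")
gate.approve("human-A")
print(gate.execute(lambda: print("power budget reduced")))  # False: only 2 of 3
gate.approve("human-B")
print(gate.execute(lambda: print("power budget reduced")))  # True
```

On an actual chain the approvals would be cryptographic signatures verified by the contract rather than trusted strings, but the gating logic is the same.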
A more abstract, philosophical reflection: Competition in technology—and perhaps civilization itself—may ultimately boil down to energy-level competition: who can mobilize and concentrate larger-scale energy toward a goal. Strong AI essentially converts energy into computation, and computation into intelligence—whose essence is energy manifested as processing power. Current safety mechanisms rely on human will, organizational discipline, and authorization rules—all low-energy mechanisms that are ultimately powerless against strong AI. A spear forged from high-energy computation can only be blocked by a shield forged from equally high-energy computation. Blockchain and cryptographic systems are such shields—requiring attackers to burn the energy of an entire galaxy to brute-force crack them. Only such systems can truly tame strong AI.
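A back-of-envelope calculation (my own, using the thermodynamic Landauer bound of kT·ln 2 joules per irreversible bit operation at room temperature) gives a sense of the energy scale behind that claim for a 256-bit key space:

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # minimum energy per irreversible bit operation, ~2.9e-21 J

keyspace = 2 ** 256                 # number of candidate 256-bit keys
min_energy = keyspace * landauer    # physical lower bound just to count through them

# Rough total energy the Sun emits over ~10 billion years (3.8e26 W sustained).
sun_lifetime_output = 3.8e26 * 1e10 * 3.15e7

print(f"{min_energy:.2e} J just to enumerate 2^256 states")
print(f"{sun_lifetime_output:.2e} J emitted by the Sun over its lifetime")
print(f"ratio: {min_energy / sun_lifetime_output:.1e} Sun-lifetimes")
```

Even at this physical lower bound, merely enumerating 2^256 candidate keys costs on the order of 10^56 joules, trillions of times the Sun’s total lifetime output, which is why brute force against well-designed cryptography is an energy problem rather than merely a cleverness problem.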
5. Conclusion
Blockchain differs from AI in many ways—especially in value orientation. Most technologies aim to improve efficiency; only a rare few promote fairness. During the Industrial Revolution, the steam engine represented the former, while market mechanisms exemplified the latter. Today, strong AI shines brightest among efficiency-driven technologies, while blockchain stands as the pinnacle of fairness-oriented innovation.
Blockchain prioritizes fairness—even at the cost of efficiency. And it is precisely this technology, contradictory to AI, that has coincidentally broken through at roughly the same time.
- In 2006, Geoffrey Hinton and his collaborators published a breakthrough paper showing that deep, multi-layer neural networks could be trained effectively through layer-wise pretraining, sidestepping the long-standing “vanishing gradient” problem and opening the door to deep learning.
- Two years later, Satoshi Nakamoto published the nine-page Bitcoin whitepaper, ushering in a new world of blockchain. There’s no known connection between the two—but on a macro timescale, they occurred almost simultaneously.
Historically, this may not be accidental. If you’re not a strict atheist, perhaps you can view it this way: Two hundred years after the Industrial Revolution, the god of technology once again simultaneously unleashes grand moves on both sides of the “efficiency vs. fairness” scale—releasing the genie of strong AI from its bottle, while also handing humanity the spellbook to control it: that is blockchain.
We are entering an exhilarating era—whose events will cause future generations to look back at us the way we today view Stone Age primitives.