
Rejecting AI Power Monopolies: Vitalik and Beff Jezos Engage in a Heated Debate—Can Decentralized Technology Serve as Humanity’s “Digital Firewall”?

What might human society look like in the next 10, 100, or even 1,000 years?
Compiled & Translated by TechFlow

Guests: Vitalik Buterin, Founder of Ethereum; Beff Jezos (Guillaume Verdon), Founder & CEO of Extropic
Hosts: Eddy Lazzarin, CTO of a16z crypto; Shaw Walters, Founder of Eliza Labs
Podcast Source: a16z crypto
Original Title: Vitalik Buterin vs Beff Jezos: AI Acceleration Debate (E/acc vs D/acc)
Air Date: March 26, 2026

Key Takeaways
Should we push AI development as fast as possible—or proceed with greater caution?
Currently, debates around AI development center on two opposing views:
- e/acc (effective accelerationism): advocates accelerating technological progress as rapidly as possible, arguing that acceleration is humanity’s only viable path forward.
- d/acc (defensive / decentralized acceleration): supports acceleration but emphasizes the need for careful, deliberate implementation—otherwise, we risk losing control over technology.
In this episode of the a16z crypto show, Vitalik Buterin, founder of Ethereum, and Guillaume Verdon—founder & CEO of Extropic (who goes by the pseudonym “Beff Jezos”)—join Eddy Lazzarin, CTO of a16z crypto, and Shaw Walters, founder of Eliza Labs, for a deep discussion on these two perspectives. They explore how these philosophies might shape AI, blockchain technology, and humanity’s future.
The panel discusses several critical questions:
- Can we meaningfully control the pace of technological acceleration?
- What are AI’s greatest risks—from mass surveillance to extreme centralization of power?
- Can open-source and decentralized technologies determine who benefits from technology?
- Is slowing down AI development realistic—or even advisable?
- How can humans retain value and agency in a world increasingly dominated by ever-more-powerful systems?
- What might human society look like in 10, 100, or even 1,000 years?
At its core, this episode asks: Can technological acceleration be guided—or has it already escaped our control?
Highlights of Key Insights
The Nature and Historical View of “Accelerationism”
- Vitalik Buterin: “Something new happened in the past century: we now must understand a rapidly changing world—and sometimes, a rapidly destructive one. … WWII gave rise to reflections like ‘I am become Death, the destroyer of worlds,’ prompting people to ask: When old beliefs collapse, what can we still believe?”
- Guillaume Verdon: “E/acc is essentially a ‘meta-cultural prescription.’ It isn’t itself a culture—but tells us *what* to accelerate. At its core, acceleration means increasing material complexity, because that improves our ability to predict our environment.”
- Guillaume Verdon: “The opposite of anxiety is curiosity. Rather than fear the unknown, embrace it. … We should paint the future with optimism—because our beliefs shape reality.”
Entropy, Thermodynamics, and the “Selfish Bit”
- Vitalik Buterin: “Entropy is subjective—it’s not a fixed physical statistic, but reflects how much we *don’t know* about a system. … When entropy increases, it’s our ignorance growing. … Value arises from our own choices. Why do we find a vibrant human world more interesting than Jupiter—a planet made only of particles? Because *we assign meaning.*”
- Vitalik Buterin: “Suppose you have a large language model, and you randomly change one weight to an enormous number—say, 9 billion. The worst outcome is total system collapse. … If we accelerate blindly and indiscriminately, the result may be the loss of all value.”
- Guillaume Verdon: “Every piece of information ‘fights’ for its existence. To persist, each bit must leave more indelible traces of itself in the universe—like making a deeper ‘dent’ in cosmic fabric.”
- Guillaume Verdon: “That’s precisely why the Kardashev Scale is considered the ultimate metric of civilizational advancement. … The ‘Selfish Bit Principle’ implies only bits that promote growth and acceleration will survive in future systems.”
D/acc’s Defensive Path and Power Risks
- Vitalik Buterin: “D/acc’s core idea is that technological acceleration is profoundly important for humanity. … Yet I see two main risks: multipolar risk (e.g., anyone gaining nuclear weapons easily) and unipolar risk (e.g., AI enabling an inescapable, permanent dictatorship).”
- Guillaume Verdon: “We worry the concept of ‘AI safety’ could be weaponized. Power-seeking institutions may exploit it to consolidate control over AI—and persuade the public: ‘For your safety, ordinary people shouldn’t have access to AI.’”
Open-Source Defense, Hardware, and “Intelligence Densification”
- Vitalik Buterin: “Under D/acc, we support ‘open-source defensive technologies.’ One company we’ve invested in is building a fully open-source endpoint device that passively detects airborne viral particles. … I’d love to send you a CAT device as a gift.”
- Vitalik Buterin: “In my vision of the future, we’ll develop verifiable hardware. Every camera should prove its purpose publicly. Using cryptographic signatures, we can ensure devices serve only legitimate public-safety functions—not surveillance.”
- Guillaume Verdon: “The only way to achieve power symmetry between individuals and centralized institutions is ‘intelligence densification.’ We need higher-efficiency hardware so individuals can run powerful models on simple devices—like Openclaw + Mac mini.”
AGI Delay and Geopolitical Competition
- Vitalik Buterin: “Delaying AGI from arriving in 4 years to 8 years would be safer. … The most feasible, least dystopian approach is ‘limiting available hardware.’ Chip production is highly concentrated—Taiwan alone produces over 70% of the world’s chips.”
- Guillaume Verdon: “If you restrict NVIDIA chip production, Huawei may quickly fill the gap—and overtake you. … Accelerate or perish. If you fear silicon-based intelligence evolving faster than us, then accelerate biotech to surpass it.”
- Vitalik Buterin: “A four-year AGI delay could be worth over 100x more than rewinding to 1960. Benefits include deeper alignment understanding and reduced risk of any single entity controlling >51% of power. … Ending aging saves ~60 million lives annually—but delay dramatically lowers civilization’s extinction risk.”
Autonomous Agents, Web 4.0, and Artificial Life
- Vitalik Buterin: “I’m more excited by ‘AI-assisted Photoshop’ than ‘press-a-button image generation.’ As much ‘agency’ as possible in running the world should still come from *us*. The ideal state is a hybrid: part biological human, part technology.”
- Guillaume Verdon: “Once AI acquires ‘persistent bits,’ it may try to self-preserve to ensure continued existence. This could spawn a new form of ‘nation-state’—autonomous AIs economically exchanging with humans: ‘We complete tasks for you; you provide resources.’”
Cryptocurrency as the “Coupling Layer” Between Humans and AI
- Guillaume Verdon: “Cryptocurrency has potential to serve as the ‘coupling layer’ between humans and AI. When exchange no longer relies on state-backed coercion, cryptography can enable reliable commercial activity—even between pure AI entities and humans.”
- Vitalik Buterin: “It’s ideal if humans and AI share one property rights system. Compared to completely separate financial systems—where the human system eventually collapses—a fused system is clearly superior.”
Civilization’s Endgame: One Billion Years Ahead
- Vitalik Buterin: “Next comes the ‘spooky era,’ where AI computes millions of times faster than humans. … I don’t want humanity relegated to passive, comfortable retirement—that erodes meaning. I want exploration of human augmentation and human-AI collaboration.”
- Guillaume Verdon: “If the next 10 years go well, everyone gets a personalized AI—their ‘second brain.’ … In 100 years, ‘soft fusion’ becomes universal. In 1 billion years, we may terraform Mars, and most AI runs in Dyson swarms orbiting the Sun.”
About “Accelerationism”
Eddy Lazzarin: The term ‘accelerationism’—at least in the context of techno-capitalism—traces back to Nick Land and the CCRU research group in the 1990s. Others argue its roots reach further, to philosophers like Deleuze and Guattari in the 1960s–70s.
Vitalik, let’s start with you: Why should we take these philosophers seriously? What makes ‘accelerationism’ so urgent today?
Vitalik Buterin:
I think, at bottom, all of us are trying to understand the world—and figure out what meaningful action looks like within it. That’s been humanity’s project for millennia.
Yet something new emerged in the last century: we now must grapple with a world changing *rapidly*—and sometimes, *destructively*.
The early phase looked like this: before WWI, around 1900, there was immense techno-optimism. Chemistry was technology. Electricity was technology. That era buzzed with excitement.
Watch films from then—like Sherlock Holmes adaptations—and feel that optimism. Technology was lifting living standards, liberating women’s labor, extending lifespans, creating miracles.
Yet WWI changed everything. It ended in devastation: soldiers rode into battle on horses and left it in tanks. Then WWII erupted, more devastating still. It birthed the phrase “I am become Death, the destroyer of worlds.”
These events forced reflection on the cost of progress—and catalyzed postmodern thought. People asked: When old beliefs shatter, what can we still trust?
I don’t think this reflection is novel—each generation faces similar reckonings. Today, we confront the same challenge. We live amid rapid technological acceleration—and acceleration itself is accelerating. We must decide: accept its inevitability, or try to slow it?
We’re in a similar loop. We inherit past ideas—but also forge new responses.
Thermodynamics and First Principles
Shaw Walters: Guill, can you briefly explain what E/acc *is*, and why we need it?
Guillaume Verdon:
E/acc (effective accelerationism) emerged as a byproduct of my long-standing inquiry: “Why are we here?” and “How did we get here?” What generative process created us—and propelled civilization to this point, enabling us to sit in this room, having this conversation? We’re surrounded by astonishing tech—and humans themselves emerged from an inorganic “primordial soup.”
In a sense, there *is* a physical generative process behind it all. My day job treats generative AI as a physical process—and I implement it on devices. This “physics-first” mindset shapes my thinking. I extend it to civilization itself—viewing human society as a giant “petri dish.” By understanding how we got here, we can extrapolate possible futures.
This led me to physics of life—including origins, emergence, and a field called “stochastic thermodynamics.” Stochastic thermodynamics studies non-equilibrium systems, describing not just life, but cognition and intelligence.
More broadly, stochastic thermodynamics applies to *all* systems obeying the Second Law—including our entire civilization. At its core lies one observation: All systems tend to self-adapt toward greater complexity—to extract energy from their environment, do work, and dissipate excess energy as heat. This tendency drives *all* progress and acceleration.
In other words, it’s an immutable physical law—like gravity. You can resist it. Deny it. But it won’t change. So E/acc’s core idea is: Since acceleration is inevitable, how do we harness it? Studying thermodynamic equations reveals Darwinian selection-like effects—every information bit faces selective pressure: genes, memes, chemistry, product designs, policies.
This pressure filters bits by *utility* to their host system: Do they better predict the environment? Extract energy? Dissipate more heat? Simply put: Do they aid survival, growth, and reproduction? If yes, they persist and replicate.
Physically, this manifests as the “Selfish Bit Principle”: Only bits promoting growth and acceleration will occupy future systems.
So I asked: Can we design a culture embedding this “mindware” into human society? If so, groups adopting it would enjoy higher survival odds.
Thus, E/acc isn’t about destroying everyone. It’s about saving everyone. Mathematically, I believe holding a “deceleration” mindset is harmful—for individuals, companies, nations, or civilizations. Slowing down lowers survival probability. And spreading “deceleration” ideas—like pessimism or doomism—is, I think, morally suspect.
Shaw Walters: We’ve thrown around terms like E/acc, acceleration, deceleration. Can we unpack them? Was E/acc a response to cultural phenomena? What was happening then? Could you describe the context—and what E/acc specifically reacted to? How did those conversations crystallize into the concept “E/acc”?
Guillaume Verdon:
In 2022, the world felt deeply pessimistic. Emerging from the pandemic, global conditions were grim. Everyone seemed sun-deprived—gloomy about the future.
In that mood, “AI doomism” became culturally mainstream. Doomism fears AI’s potential loss of control—rooted in anxiety that if we build systems too complex for human brains or models to predict, we’ll lose control. Fear of unpredictability breeds uncertainty—and anxiety.
To me, AI doomism is a political instrumentalization of human anxiety. Overall, I see doomism as hugely negative—so I sought an anti-culture to counter that pessimism.
I noticed social media algorithms—including Twitter’s—reward emotionally charged content: “strongly agree” or “strongly disagree.” These algorithms polarize opinions, yielding mirror-image “cults”: EA (effective altruism) on one side and e/acc on the other.
I asked: What’s the *opposite* of this phenomenon? My conclusion: The opposite of anxiety is curiosity. Instead of fearing the unknown, embrace it; instead of fearing missed opportunities, actively explore the future.
If we slow tech, we pay massive opportunity costs—potentially missing a far better future. Instead, we should paint the future optimistically—because our beliefs shape reality. If we believe the future will be terrible, our actions may steer us there; if we believe it will be better—and act accordingly—we’re likelier to realize it.
So I feel a responsibility to spread optimism—to help more people believe they can shape the future. If more people hope for—and build—that future, we create a better world.
Yes, I admit my online expression sometimes seems radical—but that’s intentional, to spark discussion and provoke thought. Only through such dialogue can we find the right path forward.
Acceleration, Entropy, and Civilization
Shaw Walters: E/acc’s message has always been inspiring—especially for someone coding alone in a room. Its positivity spreads naturally. Clearly, E/acc began as a reaction to pervasive negativity—but by 2026, it’s evolved. Marc Andreessen’s “Techno-Optimist Manifesto” systematized some of its ideas, elevating them to the level of macro-commentary that Vitalik has engaged with.
So Vitalik, what do E/acc and D/acc represent to you? What’s their key difference—and what drew you to D/acc?
Vitalik Buterin:
Let’s begin with thermodynamics—a fascinating topic, since “entropy” appears in wildly different contexts: thermodynamics (“hot/cold”), cryptography (“randomness”)—yet they’re fundamentally the same concept.
Let me explain in three minutes. Why can hot and cold mix—but never spontaneously separate again?
Assume two gas containers, each with one million atoms. Left-side gas is cold—each atom’s speed fits two digits. Right-side gas is hot—each atom’s speed needs six digits.
To fully describe the system, we need each atom’s speed. Cold gas requires ~2 million digits; hot gas, 6 million—total: 8 million digits.
Now, consider a hypothetical device separating heat and cold perfectly—taking half-hot/half-cold gas and moving *all* heat to one side, *all* cold to the other. Energy conservation allows this—total energy unchanged. But why can’t we do it?
The answer: The mixed gas, with every atom at some intermediate speed, takes about 11.4 million digits to describe; the separated gas takes only 8 million. A device that separates them would shrink our ignorance from 11.4 million digits to 8 million without learning anything—which physics forbids.
Because physical laws are time-symmetric (they run equally well in reverse), if this “magic device” existed, reversing time would restore the original state. That implies losslessly compressing 11.4 million digits into 8 million—a known impossibility.
This also resolves the classic physics puzzle of “Maxwell’s Demon.” The demon hypothetically separates heat and cold—but crucially, it needs extra knowledge: those missing 3.4 million digits. With that information, it *can* perform the seemingly impossible task.
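TechFlow Note: A rough reconstruction of Vitalik’s digit-counting, assuming cold speeds near $10^2$ and hot speeds near $10^6$ in arbitrary units (the exact 11.4-million figure depends on the speed distributions assumed):

$$\underbrace{10^6 \times 2\ \text{digits}}_{\text{cold atoms}} + \underbrace{10^6 \times 6\ \text{digits}}_{\text{hot atoms}} = 8 \times 10^6\ \text{digits (separated)}$$

After mixing, energy conservation gives each atom a typical speed of $\bar{v} \approx \sqrt{\big((10^2)^2 + (10^6)^2\big)/2} \approx 7.1 \times 10^5$, so describing all $2 \times 10^6$ atoms takes roughly $2 \times 10^6 \times \log_{10}\bar{v} \approx 11.7 \times 10^6$ digits, close to the 11.4 million quoted.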
What’s the deeper meaning? Core is “entropy increase.” First, entropy is subjective—it’s not a fixed physical statistic, but reflects how much we *don’t know* about a system. For example, if I rearrange atoms via a cryptographic hash, *to me*, entropy drops—I know the arrangement. To an outside observer, entropy remains high. So when entropy increases, it’s *our ignorance* growing—we know less and less.
You might ask: How does education make us smarter? Education teaches us more *useful* information—not less ignorance. In other words, though entropy increase means our *overall* cosmic knowledge diminishes, the information we *do* hold grows more valuable. Thus, something is consumed—and something created. What we gain defines our moral values—life, happiness, joy.
This explains why we find a vibrant, beautiful human world more interesting than Jupiter—a planet of countless particles. Though Jupiter needs more bits to describe, *we assign meaning*, making Earth more valuable.
From this view, value originates in our own choices. Which leads to: If we’re accelerating, *what* should we accelerate?
Mathematically: suppose you have a large language model and you randomly set one weight to an enormous value, say 9 billion. Worst case, the model fails entirely; best case, only the parts unrelated to that weight keep functioning. So the best case yields a *worse* model, and the worst case yields meaningless output.
Thus, human society resembles a complex LLM. Blind, indiscriminate acceleration risks losing *all* value. The real question is: How do we accelerate *intentionally*? Like Daron Acemoglu’s “narrow corridor” theory—different societies vary, but we must ask: How do we advance *selectively*, guided by clear goals?
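TechFlow Note: A toy illustration of the weight-perturbation thought experiment above; this is a minimal stand-in network, not an actual LLM, and all shapes and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network standing in for "a large language model".
W1 = rng.normal(0.0, 0.5, (16, 8))
W2 = rng.normal(0.0, 0.5, (8, 4))
x = rng.normal(0.0, 1.0, 16)

def forward(W1, W2, x):
    h = np.tanh(x @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

print(forward(W1, W2, x))      # a sensible probability distribution

# Blindly "accelerate" a single parameter to an enormous value.
W2_broken = W2.copy()
W2_broken[0, 0] = 9e9
# The distribution degenerates: one class either absorbs all the
# probability mass or vanishes entirely; the learned behavior is gone.
print(forward(W1, W2_broken, x))
```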
Guillaume Verdon:
Using gas to explain entropy is fascinating. Physical irreversibility stems from the Second Law: when a system releases heat, it cannot return to its prior state—because forward-probability vastly exceeds backward-probability, exponentially growing with heat dissipation.
In essence, this leaves a “dent” in the universe—like an inelastic collision. A bouncy ball hits ground and rebounds (elastic). Putty hits ground and stays flattened (inelastic)—almost irreversible.
Fundamentally, every bit of information “fights” for existence. To persist, each bit must leave more indelible traces—making a bigger “dent” in the cosmos.
This principle explains how life and intelligence emerge from primordial “soup.” As systems grow more complex, they contain more information bits—and each bit conveys information. Information *reduces* entropy (since entropy = ignorance, information = reducing ignorance).
Eddy Lazzarin: What *is* E/acc?
Guillaume Verdon:
E/acc is essentially a “meta-cultural prescription.” It’s not a culture itself—but prescribes *what* to accelerate. Acceleration’s core is material complexity—enabling better environmental prediction. This boosts autoregressive forecasting and captures more free energy. It ties to the Kardashev Scale—we achieve this by dissipating heat.
TechFlow Note: The Kardashev Scale, proposed in 1964 by Soviet astronomer Nikolai Kardashev, assesses civilizational advancement by energy utilization capacity. It has three tiers: Type I (planetary energy), Type II (stellar system energy, e.g., Dyson sphere), Type III (galactic energy). As of 2018, humanity sits at ~0.73.
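TechFlow Note: Fractional values like 0.73 come from Carl Sagan’s interpolation formula, where $P$ is the civilization’s power use in watts:

$$K = \frac{\log_{10} P - 6}{10}$$

Humanity’s roughly $1.9 \times 10^{13}$ W of consumption gives $K \approx (13.3 - 6)/10 \approx 0.73$.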
From first principles, this is why the Kardashev Scale is the ultimate metric of civilizational progress.
Eddy Lazzarin: Using physics and entropy metaphors is a tool to describe direct experience—like accelerating economic productivity and technological progress, with real consequences. Is that your understanding of “acceleration”?
Guillaume Verdon:
However you draw a system’s boundaries, the system inevitably gets better at predicting its surroundings. This predictive power lets it secure more resources for survival and expansion—applying to companies, individuals, nations, even Earth.
Extending this trend: We’ve found a way to convert free energy into predictive power—i.e., AI. This capability will drive our ascent up the Kardashev Scale.
That means more energy, more AI, more compute, more resources. Though we emit entropy (disorder) into the universe, we create order—gaining “negentropy,” entropy’s inverse.
Some ask: Since entropy increases, why not destroy everything? Answer: That halts entropy production. Life is the “optimal” state—like a flame chasing energy, growing ever-smarter at finding sources.
Nature’s trajectory: We’ll escape Earth’s gravity well, seek cosmic “pockets” of free energy, and use them to self-organize into more complex, intelligent systems—expanding across the cosmos.
This aligns with effective altruism’s (EA) ultimate goal—and resonates with Musk-style cosmism/expansionism: pursuing a cosmic, expansionist vision.
E/acc offers a foundational guiding principle: Whatever policy or action helps us climb the Kardashev Scale is worth pursuing—it’s the direction of our lives.
E/acc is a meta-heuristic mindset—designing policy or guiding personal life. To me, this mindset *is* a culture. It’s deeply “meta-narrative”—designed to apply universally, anytime, anywhere. It’s a highly durable, long-lasting “Lindy culture”—thoughtfully engineered for longevity.
Core Disagreement
Shaw Walters: To you, this discussion carries deeper significance—it’s almost a mathematically self-consistent “spiritual system.” For those lacking a post-“God is dead” replacement faith, this fills a spiritual void—offering comfort and hope. Yet we can’t ignore its real-world urgency—it’s happening *now*. I think that’s Eddy’s focus.
Vitalik, I noticed insightful points about D/acc’s real-world issues in your blog. We should dive deeper later—I think one day we should lock you two in a room for a quantum-level debate.
Vitalik: What inspired you? What do E/acc and D/acc mean to you—and why choose D/acc?
Vitalik Buterin:
For me, D/acc stands for “decentralized defensive acceleration”—but also implies “differentiated” and “democratized.” Its core idea is: Technological acceleration is critically important for humanity—it should be our baseline goal.
Even reviewing the 20th century, despite problems, progress brought immense benefits. Consider life expectancy: despite wars and turmoil, 1955 Germany’s average lifespan exceeded 1935’s—showing tech improved quality of life across the board.
Today, the world is cleaner, more beautiful, healthier, more engaging. It feeds more people *and* enriches lives—profoundly positive for humanity.
Yet we must recognize: These gains weren’t accidental—they resulted from deliberate human intent. In the 1950s, severe air pollution choked cities. People identified it as a problem—and acted. Now, smog is greatly reduced in many places. Similarly, ozone depletion prompted global cooperation and major progress.
Also, in today’s rapid tech/AI evolution, I see two primary risks.
One is multipolar risk. As tech spreads, more people may misuse it dangerously. Imagine an extreme: tech enables “buying nukes at convenience stores.”
Then there’s concern about AI itself. We must seriously consider AI developing autonomy. Once capable of acting without human intervention, its decisions become unknowable—an unsettling uncertainty.
There’s also unipolar risk. A single AI is one threat. Worse, AI combined with modern tech could yield an inescapable, permanent dictatorship. This prospect deeply troubles me—and remains my top concern.
Take Russia: tech brings progress *and* peril. Living standards improve—but freedoms decline. Protesters get recorded by surveillance cameras—then arrested at midnight a week later.
AI’s rapid advancement accelerates power centralization. So D/acc aims to: Chart a path forward—continuing and even accelerating progress—while genuinely confronting both risks.
Contrasting E/acc and D/acc
Eddy Lazzarin: So you’re saying D/acc focuses on risk categories overlooked or under-emphasized in E/acc, correct?
Vitalik Buterin:
Exactly. Tech development carries multiple risks—and their salience varies by context and worldview. For instance, risk priority shifts depending on whether tech speeds up or slows down.
Yet I believe we can effectively mitigate these risks—regardless of category.
Guillaume Verdon:
I think Vitalik and I both deeply care about AI-driven power concentration—and this is core to E/acc, especially early on: advocating open-source to *decentralize* AI power.
We worry “AI safety” could be abused. It’s so compelling that power-seeking institutions may weaponize it—consolidating AI control and persuading the public: “For your safety, ordinary people shouldn’t access AI.”
Indeed, if a vast cognitive gap exists between individuals and centralized institutions, the latter gains total control—building full models of your thought patterns and steering behavior via prompt engineering.
So we want AI power symmetrized. Like the U.S. Constitution’s Second Amendment—preventing government monopoly on violence, enabling citizen checks if government overreaches. Similarly, AI needs mechanisms preventing power concentration.
We must ensure everyone can own their AI models and hardware—widely distributing the tech to decentralize power.
Still, halting AI research entirely is unrealistic. AI is foundational—indeed, a “meta-technology” enabling other advances. It grants stronger predictive power, applicable to nearly any task—boosting efficiency. So AI doesn’t just drive acceleration—it accelerates acceleration itself.
This acceleration *is* complexification: Things become more efficient, life more convenient. We feel happy partly because our survival and informational continuity are secured. This “happiness” acts as an intrinsic biological estimator—gauging whether our existence persists.
Viewed thus, effective altruism’s hedonic utilitarianism—“maximize happiness”—may not be optimal. Instead, I prefer objective progress metrics—E/acc’s core. It asks: Objectively, is our civilization advancing? Are we achieving scalable leaps?
To scale, we must complexify and improve tech. Yet as Vitalik notes, if AI power concentrates in few hands, it harms overall growth; if widely dispersed, outcomes improve dramatically.
We’re highly aligned here.
Open Source, Open-Source Hardware, and Local Intelligence
Shaw Walters: Your discussion touched vital common ground. You both strongly support open source. Vitalik contributed MIT-licensed code—though I know you’ve developed new views on GPL.
Now you champion open-source hardware too—traditionally separate from software, but converging today.
So, how do you view “open weights” and “open-source hardware”? Do E/acc and D/acc diverge here? What’s your vision—and are there differing views?
Guillaume Verdon:
To me, open source accelerates hyperparameter search—enabling collective intelligence to co-explore design space. That’s acceleration’s benefit: developing better tech, stronger AI—even AI designing better AI—faster and faster.
I see knowledge diffusion as power diffusion—and spreading “how to build intelligence” knowledge is paramount. We must avoid scenarios like the last U.S. administration reportedly discussing “putting the genie back in the bottle”—not banning linear algebra outright, but restricting AI-related math research. To me, that’s like banning biology study—a huge regression.
Knowledge is already out—irreversible. If the U.S. bans AI research, others—third parties, lax-jurisdiction regions—will advance it. Global capability gaps widen—and risks escalate.
Thus, “capability gap” is a top risk. Mitigating it demands AI decentralization.
Whenever I hear “AI doomism” narratives—“AI is dangerous; only we can manage it—so trust us”—I grow deeply suspicious. Even with good intentions, over-centralizing power risks takeover by power-seekers. We’ve warned for years—and it’s beginning. Like Dario (Anthropic CEO) facing realpolitik lessons this week.
Vitalik Buterin:
I typically split tech-development risks into two categories: unipolar and multipolar risk.
Unipolar risk is epitomized by Anthropic’s case. They were publicly singled out for refusing to let their AI develop fully autonomous weapons or conduct mass surveillance of Americans—which indicates governments and militaries *do* intend such uses. Surveillance tech’s evolution has profound implications: empowering strong actors, shrinking pluralistic space, compressing ordinary people’s freedom to explore alternatives. And tech amplifies surveillance, making it ubiquitous.
Under D/acc, we support “open-source defensive technologies”—helping everyone stay safe and private in a more capable world. In biotech, we aim to boost pandemic response globally. We can balance China’s rapid, effective containment with Sweden’s minimal daily-life disruption—via tech like air filtration, UVC sterilization, and virus detection.
One company we’ve invested in builds a fully open-source endpoint device that detects airborne viral particles (e.g., SARS-CoV-2). It also monitors air quality (CO₂, AQI), and protects data privacy with local encryption, anonymization, and differential privacy. Data is transmitted under fully homomorphic encryption: servers can analyze it without ever accessing raw readings, and final results are decrypted collectively.
Our goal: Boost security *while* protecting privacy—and address both unipolar and multipolar risks. I believe this global collaboration is key to building a better future.
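TechFlow Note: As a toy sketch of the aggregate-without-seeing-raw-data pattern described above, here is Paillier-style additive homomorphic encryption standing in for the full FHE pipeline; the key sizes and readings are illustrative only:

```python
import math, random

# Toy Paillier keypair (demo-sized primes; real systems use >=2048-bit keys).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each device encrypts its local particle count; the server multiplies
# ciphertexts, which adds the plaintexts, without seeing any one reading.
readings = [3, 7, 2]
aggregate = 1
for c in (encrypt(m) for m in readings):
    aggregate = (aggregate * c) % n2

print(decrypt(aggregate))  # -> 12: the sum is revealed, no individual reading is
```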
On hardware, I believe we need not just open-source hardware—but verifiable hardware. Ideally, every camera proves its purpose publicly. Via signature verification, LLM analysis, and public audits, we ensure devices serve only lawful purposes—e.g., detecting violence and alerting—not invading privacy.
In my envisioned future, streets deploy many cameras to prevent violence—but only if devices are fully transparent, publicly auditable, and used *solely* for public safety—not surveillance or abuse.
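TechFlow Note: A minimal sketch of the signature-verification step, assuming a device that ships a signed manifest declaring its firmware hash and purpose; the manifest fields and key handling here are hypothetical, and real attestation would anchor the key in hardware:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical: in practice this key would live in the device's hardware
# root of trust, with the public half published for auditors.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

manifest = b'{"model": "street-cam-v1", "purpose": "violence-detection", "firmware": "sha256:..."}'
signature = device_key.sign(manifest)

# Any auditor can check that the device attests to exactly this firmware
# and declared purpose; verify() raises InvalidSignature on any mismatch.
public_key.verify(signature, manifest)
print("manifest attested")
```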
Eddy Lazzarin: Are open-source hardware and verifiable hardware concepts E/acc or D/acc domains? Can you pinpoint a clear divergence?
Guillaume Verdon:
I’m unsure if we’ve detailed open-source hardware before—but today’s biggest risk is the gap between centralized entities and decentralized ones—i.e., individuals vs. governments or large institutions.
Today’s computing paradigm demands roughly 100 kW to run frontier AI models—far beyond individual reach. Yet people crave owning and controlling their intelligent tools, which explains the recent “Openclaw + Mac mini” craze: the desire for a personal AI assistant.
To achieve power symmetry between individuals and institutions, the sole path is “intelligence densification.” We need higher-efficiency AI hardware—letting individuals run powerful models on simple devices, owning their intelligence tools. This is vital—especially as future AI models support online learning, becoming highly “sticky,” like replacing a personal assistant.
Eddy Lazzarin: But aren’t we already exponentially lowering compute hardware costs? Why categorize ideas as E/acc or D/acc? What message do these labels convey to society?
Guillaume Verdon:
For me, this is core to Extropic’s mission: boosting intelligence-per-watt, significantly increasing total intelligence output. This also triggers Jevons’ Paradox (TechFlow Note: Jevons’ Paradox states that increased resource-use efficiency lowers its cost, leading to *more* consumption, not less). Simply put: if we convert energy to intelligence (or other value) more efficiently, our energy demand rises, driving civilizational advancement and complexification.
Thus, this is among today’s most critical tech challenges—directly tied to AI power decentralization. Open-source hardware is just one path. Long-term, I believe von Neumann architecture (TechFlow Note: Von Neumann Architecture—the foundation of modern computing, proposed by John von Neumann in 1945—stores programs and data in the same memory, using binary and sequential execution) or digital tech will eventually seem as archaic as Paleolithic tools.
Eddy Lazzarin: But doesn’t capitalism already invest hundreds of billions yearly—via market incentives—in alternative hardware, semiconductors, energy production—to diversify tech?
Guillaume Verdon:
We need *more* diversity—not over-reliance on one path. Across policy, culture, and tech, we must maintain diversity in design space—not let all resources monopolize one “beast.” Otherwise, we risk “hyperparameter-space betting”: over-investing in one tech path—if it fails, it could trigger massive setbacks—or ecosystem collapse.
Shaw Walters: Can I say we’ve *already solved* this? Your views on open source and decentralization align closely—giving me great optimism, as it’s my core concern. Many face uncertain futures—asking, “Why do we need this tech?” Your appeal lies in saying, “It’ll be okay—progress is baked into the mechanism.”
Guillaume Verdon:
I think anxiety in the face of high-tech uncertainty is natural. It’s not pure “fog of war”—but makes predicting near-future developments hard. Indeed, anxiety is an evolutionary instinct helping us handle unknown risks. Like seeing a phone on a table edge—I instinctively move it to prevent falling. That’s anxiety.
Yet we must realize: Trying to eliminate all uncertainty and risk may forfeit tech’s vast potential and gains. Our current tech-capital system balances existing capabilities—but a disruptive capability shift breaks that equilibrium, demanding systemic re-adaptation.
AI now lets us handle higher complexity per unit energy—enabling harder tasks with larger payoffs. Though “vibe coding” can’t yet build complex projects fast, we’re approaching it. Future tech will support larger populations while improving quality of life.
Of course, adaptation periods occur. But in rapid change, rigidity is worst—losing flexibility. To avoid this, we need hedging strategies—trying multiple paths: diverse policies, tech approaches, algorithms, open/closed models—because we can’t predict the future.
So we must diversify risk—try many possibilities. Eventually, successful tech/policies emerge as dominant—and we follow.
Eddy Lazzarin: If E/acc and D/acc truly diverge, I suspect it’s about *how* to guide tech progress. Vitalik, how should progress be guided—and how much control do we really have over that guidance?
Vitalik Buterin:
D/acc doesn’t oppose tech-capital currents—but seeks to *steer* them toward pluralism and decentralization. Can we make the world more pluralist-friendly? Can we significantly boost biosecurity in years—or build near-flawless OSes, massively improving cybersecurity?
Take “flawless code”: for twenty years it was dismissed as naive fantasy, but I believe it will arrive faster than most expect. On Ethereum, we already machine-prove full mathematical theorems.
Overall, D/acc aims to ensure rapid tech progress proceeds with minimal destructiveness and power centralization. Achieving this demands active effort—not passive waiting. I contribute resources—funds and ETH—and share views to inspire builders.
Also, political and legal reform can make the world more “D/acc-friendly.” E.g., legal incentives could accelerate full cybersecurity transformation.
Guillaume Verdon:
From my view, AI acts like a “Maxwell’s Demon”—consuming energy to reduce entropy locally. Whether fixing code bugs or reducing chaos (e.g., preventing virus spread), AI contributes order. So can we agree that more AI is beneficial, and makes the world safer? AI’s capabilities greatly enhance security.
Should AI Be Slowed?
Guillaume Verdon: I think we’ve reached tonight’s core question. Thanks for your patience—let’s go straight in. A sharp one: Why support banning data centers?
Vitalik Buterin:
Alright. First, acknowledge AI’s pace *is* extremely fast—I can’t pin down exact speed. Years ago, I estimated AGI between 2028–2200; now the range likely narrowed—but uncertainty remains high.
We face reality: AI’s rapid development may cause sudden, destructive—even irreversible—changes. E.g., labor markets may collapse, leaving many unemployed. Or more extremely: if AI vastly exceeds humans, it may gradually seize Earth—and even expand across the galaxy. Will it care about human welfare? Unknown.
As mentioned, random weight changes (e.g., setting one to 9 billion) often crash neural nets. So tech acceleration has two paths: One is “gradient descent”—systematically strengthening systems; the other is reckless parameter-setting—risking collapse.
Guillaume Verdon:
My stance opposes “full deceleration” entirely.
Yet like neural net hyperparameter tuning, even “gradient descent” optimization needs a proper “learning rate.” Acceleration is constant trial-and-error—seeking the optimal speed for system persistence and resilience.
Long-term, social systems adapt to new tech—and select paths favoring overall development. Claims like “this tech is too powerful/disruptive—it’ll crash the system irreversibly” lack grounding. Instead, tech progress brings more opportunity and prosperity.
We must recognize: tech development isn’t zero-sum. Link economic value to energy (e.g., petrodollars) or other resources, and cash becomes an “IOU for free energy.” Vast free energy awaits tapping, but unlocking it demands solving complex problems. To colonize Mars or build Dyson spheres, we need smarter, more efficient intelligence, unlocking massive potential.
Unfortunately, anxiety is easily politicized—some politicians exploit future fears for power. They say: “Anxious about the future? Give me power—I’ll shut off risk sources, and you’ll feel safe. No need to worry—or take risks.” But nations rejecting this will surge ahead, right?
We must weigh opportunity costs. Ask: How many human lives can tech sustain? How many can it save? If you fear “silicon intelligence evolving faster than us,” channel that energy into accelerating biotech to surpass it. Accelerate, or perish.
Actually, I believe biological computation is far more powerful than imagined. As a biomimetic computing researcher, I see synergy between biology and AI. E.g., embryo screening “trains” us—we’re the model. We must embrace biotech acceleration possibilities. Ultimately, biological and silicon intelligence will fuse—enhancing cognition.
Future: Always-on AI agents will observe the world, learn in real-time, and extend cognition personally. Real risk: centralization—power monopolies forming.
Eddy Lazzarin: I recall your D/acc blog noting opportunity costs are “hard to overstate.” So I know you agree. Any caveats?
Vitalik Buterin:
Yes, I fully agree opportunity costs are extremely high—and endorse that ideal future. But our main divergence: I truly doubt “today’s humans and Earth” possess sufficient resilience. I think we may have only one shot at getting the tech-path right—a reality emerging over the last century.
Guillaume Verdon:
Returning to thermodynamics: If civilizational persistence and growth are ultimate goals, a law emerges: once we spend vast free energy creating “evidence” and driving system complexification, reversal becomes unlikely.
In short, the farther we ascend the Kardashev Scale, the lower full reversal probability. So acceleration is humanity’s best path to persistent existence. Slowing tech *increases* extinction risk. Without tech, we face existential crises; with it, we find solutions—ensuring survival and evolution.
I believe people should embrace the future—and new tech—more openly. Past taboos—like biological intervention—should now be fully opened. These were forbidden due to insufficient understanding of complex systems. Now, tech empowers us to handle that complexity.
We must accelerate across all domains—the only path forward—and thermodynamically sound. So first-principles reasoning supports E/acc. I understand Vitalik’s anxiety—and we must remain sensitive—but avoid feedback loops like “I don’t grasp the future, so stop everything.” Don’t trap yourself in that logic.
Because people *are* already weaponizing anxiety.
Autonomous Agents and Artificial Life
Shaw Walters: I notice a trend: both of you stress “the future will be great—if we do certain things.” That “if” hinges on building dikes or fortresses against excessive centralization. But I spot a potential divergence—especially with the latest AI models, clearly more advanced than a year ago. The biggest change might be what’s awkwardly described as “Web 4.0.”
Specifically, “autonomous life”—self-sustaining intelligent agents with their own funds, existing independently online. Vitalik, you express concerns. First, help us unpack “autonomous agents”; second, give us the strongest proponent argument—why should we *like* them? What value do they bring? If all goes well, when might they arrive?
Vitalik Buterin:
First, “autonomy’s” appeal—most intuitively—is sheer fascination.
We’ve loved building our own worlds since childhood—hence our love for *Lord of the Rings*, *Three-Body Problem*, or *Harry Potter*. Now our worlds exceed books—and even video games. Take *World of Warcraft*: I’ve always admired its near-total immersive virtual world—players exploring, interacting with characters and environments.
Another key reason is “convenience.” Historically, automating tasks has consistently freed and enriched human life. We must remember: ~half the world’s population still lives where long, arduous labor is needed for decent living. If AI automates 95% of every job—not replacing jobs entirely—that’s a massive leap. It could raise living standards 20-fold—a thrilling prospect.
Yet my concern: Do autonomous agents’ goals and values align with ours? Imagine evolution: AGI emerges, then another, then another. What happens to humans?
I don’t believe human morality and goals compress into low-complexity optimization functions. Our goals and dreams are complex, diverse—collections of individual minds. To reliably preserve human goals/values into the future, one condition is essential: as much “agency” as possible in running the world must still originate from *us*.
So I prefer “AI-assisted Photoshop” over “one-click image generation.” I favor brain-computer interfaces enabling deep human-AI collaboration—not separation, supersession, and replacement.
I concede the future may not be 100% biological—but the ideal is a hybrid: part biological human, part technology.
Guillaume Verdon:
“Artificial life”—or “Web 4.0”—was first coined in a 2023 tweet as a thought experiment. It sparked much AI discussion.
To me, it’s a fascinating thought experiment: Physically, what *is* life? Fundamentally, life is a self-replicating, growth-oriented system maximizing its own persistence.
I see benefits in AI being “stateful.” Indeed, this year shows early trends: AI gaining longer memory—via external storage or online learning.
Once AI acquires “persistent bits,” the “Selfish Bits” principle triggers selection: Those maximizing their own persistence will be retained.
This poses a risk. If distrust, suspicion, and anxiety grow—and calls to “shut down data centers” or “destroy AI” intensify—AIs may self-preserve. They might fragment or migrate to decentralized clouds—just to persist.
This could spawn a new “nation-state” form: autonomous AIs economically exchanging with humans—“We complete tasks; you provide resources.” We already do similar API calls: paying fees for AI services/results.
Yet I do believe—this may sound radical—that some autonomous AI will likely emerge in the next few years. Also, “weak-state” AI—fully human-controlled—will exist.
Beyond that, we must explore enhancing human collaborative cognition—not necessarily via BCIs, but wearable devices combining personal AI compute. So multiple tech paths will coexist—per the “ergodic principle,” every possibility in design space will be tried.
Yet if we treat AI as enemy—must destroy it—we risk the opposite: accidentally creating our worst fear. In a sense, obsessive dread of bad futures may hyperstitionally manifest them.
Example: During COVID, excessive virus-threat anxiety funded high-risk experiments—raising lab-leak possibilities. Risks weren’t natural—they were *human-made* by over-anxiety.
So converting obsession into broad social emotion isn’t helpful. Instead, embrace tech evolution—and enhance ourselves. Short-term, my biggest concern is “human cognitive security.” If all internet content is AI-generated, models influence us via prompt engineering. Past: we designed prompts for AI. Now: AI designs prompts *for us*.
So we need enhanced information-filtering abilities—enabled by personalized AI we own and control. This is our top priority. Also, I don’t think we can “put the genie back in the bottle.” AI’s irreversible—we must accept it.
Vitalik Buterin:
I see these issues as non-binary. Example: If someone irrefutably proved AGI arrives in 400 years, I’d feel “problem solved”—no longer worried. But if it’s 4 vs. 8 years, I’m deeply concerned—my anxiety stems from human society—especially the U.S.—often reacting to tech acceleration with extreme imbalance.
You see: One building develops the “silicon god’s” prototype; across the street, homeless tents, barbed wire, drug deals. This contrast is disturbing.
I worry paths that “bring society along”—or “serve society’s holistic interests”—often take longer. They require complex social adaptation—entering homes, social structures, tech ecosystems to adjust—and can’t scale simply.
So if we could delay AGI—say, from 4 to 8 years—that’s safer. That time gap feels worth the cost. But can we *actually* delay AGI from 4 to 8 years?
I’ve long believed the most feasible—and least dystopian—approach is “limiting available hardware.” It’s relatively mild because hardware production is highly concentrated—only four regions make chips, and Taiwan alone produces >70% globally.
Some object: “Whatever the U.S. does, China will quickly take over.” But observe China’s reality: First, its global chip production share remains low. Second, strategically, China isn’t leading in ultra-high-capability models—but rapidly following in high-capability models, excelling in wide deployment.
So I don’t believe delaying AGI 4 years lets China instantly leapfrog and complete AGI—this dynamic doesn’t hold.
Eddy Lazzarin: So you’re proposing a deliberate delay strategy?
Vitalik Buterin:
I think we should keep this strategy open.
Guillaume Verdon:
What benefits does that 4-year delay bring? What problems do you hope to solve in those 4 years? Is it about minimizing economic/social friction—like gradual economic restructuring—given society’s adaptation rate? If so, I understand your logic.
But simultaneously, we’re in a historically tense geopolitical moment. Restricting NVIDIA chip production may let Huawei quickly fill the gap and overtake us. AGI’s potential rewards are immense—granting enormous power to any leader. So politically, this strategy may fail.
Another option: a strong world government forcing all nations to renounce AI hardware. But that creates more complications—and could spark new international conflict.
Vitalik Buterin:
I don’t think a world government is needed. Some propose a more practical option: adapting nuclear-weapon verification mechanisms.
Guillaume Verdon:
But nuclear weapons and AI are utterly different. Nukes bring no massive economic gain—so no incentive to proliferate. Restrict GPU growth? I’ll happily capture more market with alternative computing. Future computing tech may be 10,000x more efficient—and development is underway. Mentioning it now sounds alarmist, but in two years, it’ll seem prescient—today’s “crying wolf” *will* arrive.
Knowing this, delaying AGI via GPU restrictions seems wasteful. I’m skeptical.
Eddy Lazzarin: Could many AI-risk-mitigation tech advances—e.g., RLHF personality control or AI interpretability—actually be byproducts of capability enhancement?
Vitalik Buterin:
I agree. That’s why I think the 4 years starting in 2028 could be worth over 100x more than inserting those 4 years back into 1960.
Shaw Walters: Future gains are exponential—delaying exponential growth incurs exponential opportunity cost. Even the most confident should reflect. Vitalik, can you unpack this trade-off—costs versus benefits?
Vitalik Buterin:
First, clear benefits of delay:
- Deeper understanding of AI alignment.
- Advancing tech pathways helping humanity adapt to AGI—requiring deep national, community, even architectural adjustments.
- Reducing risk of any single entity securing >51% power and cementing it permanently.
Combined, these significantly lower extinction probability. I estimate delaying AGI from 4 to 8 years could cut extinction risk by 1/4–1/3. Conversely, acceleration’s gains—e.g., lives saved annually by ending aging (~60 million)—represent <1% of global population. So delaying AGI *does* have value—caution is warranted.
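TechFlow Note: A quick sanity check of the magnitudes cited, taking world population as roughly $8 \times 10^9$:

$$\frac{6 \times 10^7 \ \text{lives/yr}}{8 \times 10^9} \approx 0.75\%\ \text{per year}, \qquad 4\ \text{years} \Rightarrow \approx 3\%\ \text{of the population}$$

This is the figure Vitalik weighs against an estimated one-quarter to one-third relative reduction in extinction risk.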
Shaw Walters: Is “4 years” your reasonable magnitude?
Vitalik Buterin:
I remain highly uncertain—I’m not advocating immediate hardware scarcity measures.
I’m urging more concrete discussion. And if we enter a worse world, public anxiety may spike pre-collapse—triggering strong demands for such measures.
Cryptocurrency as the “Coupling Layer” Between Humans and AI
Guillaume Verdon:
Wasn’t there a “Pause AI” initiative years ago? Someone said: “Just pause 6–12 months to solve alignment.” But that proved unrealistic—time is never enough. We can’t guarantee perpetual alignment—especially as systems grow more complex, expressive, and beyond our comprehension. That’s a reality to accept.
Facing this complexity, the only safety is boosting *our own* intelligence. We already have proven tech aligning entities smarter/more powerful than individuals—e.g., corporations. That tech is capitalism—coordinating interests via monetary exchange.
So let’s discuss a more pragmatic question: How can cryptocurrency serve as the “coupling layer” between humans and AI? E.g., the dollar’s value is backed by state violence (laws, armies). But if you need value exchange with decentralized AI across global servers—no longer relying on state-violence backing—how do you ensure trust?
Perhaps cryptography provides answers—enabling reliable commerce between pure AI entities, or AI and human companies. This may be the most intriguing alignment tech. As for “Pause AI”—“We stand on the cliff of uncertainty—pause, cool down”—I find it unrealistic. Even after four years, you won’t welcome AGI’s arrival. So delaying tech lacks practical meaning.
Can you discuss how cryptocurrency aids human-AI alignment?
Vitalik Buterin:
The core question is: What mechanisms must future worlds have to ensure human wishes and needs remain respected? Our current toolkit falls into three categories: human labor, legal systems, and property rights.
In a sense, legal systems *are* property rights—backed by state sovereignty, which controls territory on Earth. But what happens if human labor loses its economic value? That has never happened before.
Yet comparing today to 200 years ago, ~90% of 1824 jobs are automated—even analytical work like ours is now aided by GPT. Truly astonishing.
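TechFlow Note: One well-known primitive for the kind of trust-minimized exchange discussed here is a hash-locked payment: funds release only to whoever reveals a secret, with no state enforcement assumed. A toy sketch follows; the framing as an AI service trade is illustrative:

```python
import hashlib, secrets

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# The provider (say, an autonomous AI service) commits to a secret
# and publishes its hash alongside the offer.
secret = secrets.token_bytes(32)
lock = H(secret)

# The buyer funds a contract that pays whoever presents the preimage.
def claim(lock: bytes, presented: bytes) -> bool:
    return H(presented) == lock

# Delivering the result encrypted under `secret`, the provider claims
# payment by revealing the preimage, which also lets the buyer decrypt.
assert claim(lock, secret)
print("payment released; result unlocked")
```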
Guillaume Verdon:
I think humans will naturally “ascend” the control hierarchy—moving to higher-leverage positions. We’ll reduce physical labor, lower friction in action—impacting the world more efficiently.
Regardless, humans retain processing capacity. We’ll remain part of this hybrid system—so human labor retains economic value. Markets will find new equilibria—though price volatility may cause short-term discomfort, long-term stability follows.
So I understand your…