
Sequoia’s Interview with Hassabis: Information Is the Essence of the Universe, and AI Will Launch an Entirely New Branch of Science
TechFlow Selected

Deep Interview with Nobel Laureate and DeepMind CEO Demis Hassabis: Predicting AGI by 2030, Revealing How AI Will Shorten Drug Discovery to Days and Spawn a New Scientific Framework for Decoding the Universe’s Fundamental Nature
Original compilation: Guā Gē AI Insights
This article compiles insights from Demis Hassabis’s interview on Sequoia Capital’s YouTube channel, publicly released on April 29, 2026.
Summary: Demis Hassabis’s Interview at Sequoia’s AI Ascent 2026
- AI and Games: Games are an ideal testbed for artificial intelligence. Embedding AI as a core gameplay mechanic not only effectively validates algorithmic concepts but also provides early computational support for R&D.
- The “Timing Theory” of Entrepreneurship: Founding a company should mean being “five years ahead of your time—not fifty.” One must keenly identify the delicate balance between technological breakthroughs and real-world application needs; moving too far ahead often leads to failure.
- The Evolutionary Path Toward AGI: DeepMind’s mission is clear and unwavering—first, build Artificial General Intelligence (AGI); second, deploy AGI to solve all complex challenges, including those in science and medicine.
- The Core Value of “AI for Science”: AI is the perfect language for describing biology and other complex natural systems. With AI-powered simulation, drug discovery timelines could shrink from years to weeks—and even enable truly personalized medicine.
- The Emergence of New Scientific Disciplines: The inherent complexity of AI systems themselves will give rise to entirely new engineering sciences—such as mechanistic interpretability. Meanwhile, AI-driven simulation will allow controlled experiments on complex social systems like economics, thereby opening up entirely new scientific branches.
- Information as the Essence of the Universe: Matter, energy, and information are interconvertible. The universe itself may fundamentally be a vast information-processing system—endowing AI with profound significance for understanding its deepest operating principles.
- The Computational Limits of the Turing Machine: Modern AI systems—including neural networks—have demonstrated that classical Turing machines can simulate problems once thought solvable only by quantum computing (e.g., protein folding). The human brain is likely itself a highly approximate Turing machine.
- Philosophical Reflections on Consciousness: Consciousness may consist of components such as self-awareness and temporal continuity. On our path toward AGI, we should first treat it as a powerful tool—and use that tool to explore the grand philosophical question of consciousness.
Overview
Demis Hassabis, Co-Founder and CEO of Google DeepMind—and Nobel Laureate in Chemistry (2024) for AlphaFold—engaged in a remarkably broad and deep dialogue with Sequoia Capital Partner Konstantine Buhler at the AI Ascent 2026 Summit, exploring both the road to AGI and the future beyond it.
In the conversation, Hassabis explained why he firmly believes AGI could be achieved by 2030, why the decade-long drug discovery cycle might collapse to just days, and why “information”—rather than matter or energy—should be regarded as the most fundamental essence of the universe. He also explored how Einstein, were he alive today, might critique the limitations of current AI models—and why the next one or two years may constitute a pivotal inflection point for humanity.
Full Transcript
Moderator: Demis, thank you so much for joining us.
Demis Hassabis: It’s a pleasure to be here. Thank you all for coming—it’s wonderful to connect with everyone.
Moderator: It’s truly an honor to welcome you to our chocolate factory.
Demis Hassabis: I just heard about this—I’m really looking forward to tasting some chocolate later.
Moderator: Fantastic. Demis, let’s dive right in. Today, we’re privileged to host a true OG—a pioneering original thinker, founder, and visionary who has shaped every facet of AI. Demis is both a pure believer and a pure scientist.
Demis’s Origin Story and Unifying Thread
Our discussion today will begin with the early story of DeepMind’s founding, then delve into science and technology, and conclude with audience Q&A. Let’s begin.
Demis, you were a chess prodigy, a game studio founder, and a neuroscientist. You founded DeepMind and now lead a large, influential organization. These identities may seem unrelated—but you’ve said there’s a single unifying thread running through them all. Would you share that with us?
Demis Hassabis: Yes—there is indeed a unifying thread, though perhaps with a touch of post hoc reasoning. My desire to work in AI began very early. I long believed it would be the most important—and most fascinating—work I could ever pursue. From age 15 or 16 onward, every academic choice I made and every step I took was oriented toward building a company like DeepMind one day.
Games: The Training Ground for AI
I entered the gaming industry “indirectly,” because in the 1990s, cutting-edge technologies were incubating there—not just AI, but graphics rendering and hardware development. GPUs, which we all rely on today, were originally designed for graphics engines—and I was already using early GPUs in the late 1990s. Every game I helped develop—whether for Bullfrog Productions or my own company Elixir Studios—treated AI as a core gameplay mechanism.

My best-known title is Theme Park, developed when I was around 17—a theme park simulation where thousands of tiny characters enter the park, ride attractions, and decide what to buy in shops. Beneath its surface lay a full-fledged economic AI model. Like SimCity, it was a genre-defining pioneer. When I saw it sell over 10 million copies—and witnessed players’ sheer delight interacting with its AI—I became even more certain that dedicating my life to AI was the right path.
Later, I turned to neuroscience, hoping to draw inspiration from how the brain works and derive novel algorithmic ideas. When the optimal moment finally arrived to found DeepMind, integrating all these accumulated experiences felt natural and inevitable. And unsurprisingly, we later adopted games as an early training ground for validating AI concepts.
Elixir Studios: Lessons from First-Time Entrepreneurship
Moderator: We have many founders in the room today—you’ll surely resonate deeply, since you didn’t just found one company, but two. Let’s revisit your first venture: Elixir Studios. What was that experience like? Though less widely known than your later work, it was hugely successful. How did you lead that company—and what did it teach you about building a company?

Demis Hassabis: Yes—I founded Elixir Studios right after university. I was fortunate to have previously worked at Bullfrog Productions. Anyone familiar with gaming knows Bullfrog was legendary in the industry’s early days—perhaps the top game studio in the UK, if not all of Europe, at the time.
At the time, I wanted to push the boundaries of AI. In fact, during that era, I used game development as an “indirect route” to fund AI R&D—to continually challenge technical frontiers while merging them with extreme creativity. I believe this ethos remains equally relevant today for blue-sky research.
The most profound lesson I learned may be this: aim to be five years ahead of your time—not fifty. At Elixir Studios, we attempted to develop Republic, a game simulating an entire nation. Players could topple a dictator through various means—and we rendered vivid, breathing cities in stunning detail.
Remember: this was the late 1990s. Computers ran on Pentium processors. We had to render graphics and execute AI logic for one million simulated citizens—all on consumer-grade PCs. That ambition was simply too great—almost hubristic—and triggered cascading problems.
I’ve never forgotten that lesson: yes, you must stay ahead of your time—but if you’re fifty years ahead, you’ll almost certainly fail. Of course, waiting until an idea becomes obvious to everyone means you’ve missed the window entirely. So the key lies in finding that subtle balance point.
Founding DeepMind in 2009
Moderator: Okay—so don’t get too far ahead of your time. Now fast-forward to 2009, when you were convinced AGI was inevitable. This time you were perhaps only ten years ahead, not fifty. Tell us about 2009. How did you convince the first wave of brilliant talent to join you? Because you truly assembled an elite team of world-class researchers and engineers. Back then, AGI sounded like pure science fiction—how did you persuade them it was real?
Demis Hassabis: We sensed some intriguing signals. We thought we were only five years ahead—but in reality, it turned out to be closer to ten. Deep learning had just been invented by Geoff Hinton and colleagues, yet almost no one grasped its significance. Meanwhile, we had deep expertise in reinforcement learning—and we believed combining the two would yield breakthrough results. Before then, they’d rarely been integrated—even in academia, only for trivial “toy problems.” In AI, they were completely isolated silos.
Also, we foresaw the promise of compute: GPUs were about to shine. Today we use TPUs, but back then, accelerated computing was set to become a massive catalyst. And near the end of my PhD and postdoc, having gathered a team of computational neuroscientists, we’d extracted enough valuable principles from brain mechanisms—including a core conviction: reinforcement learning, scaled up, could ultimately lead to AGI.
We felt we’d assembled all the essential pieces. We even felt like guardians of a monumental secret—because virtually no one in academia or industry believed AI could achieve any major breakthrough. In fact, when we announced our focus on AGI—or “Strong AI,” as it was sometimes called—many academics openly rolled their eyes. To them, it was a dead end; hadn’t everyone tried and failed in the 1990s?
I was doing my postdoc at MIT—the epicenter of expert systems and first-order logic language systems. Looking back, it seems incredible—but even then, I found those approaches rigid and outdated. Yet whether in Cambridge or at MIT—the traditional strongholds of AI research—people still clung to those old methods. That only strengthened my conviction that we were on the right track. At least, if we failed, we’d do so in a fresh way—not by repeating the same mistakes that doomed 1990s AGI efforts. That made the endeavor feel worth pursuing—even if success wasn’t guaranteed. Even if we failed, it would be a failure rich in originality.
DeepMind’s Mission and the Bet on AGI
Moderator: Did your early convictions encounter widespread resistance? To attract early believers, did you need to prove anything—to yourself or to them?
Demis Hassabis: Whatever the circumstances, I would have devoted my life to AI. Its progress has exceeded even our optimistic expectations, yet it still falls within the forecast we made around 2010: a twenty-year journey.
I believe our pace aligns with that forecast, and we have clearly played our part.
Even if things hadn’t unfolded this way—if AI remained a niche field—I’d still follow this path, because it’s the most important technology in human history. My goal is crystal clear: DeepMind’s founding mission statement was—and remains—step one: crack intelligence, i.e., build AGI; step two: use it to solve everything else. I’ve always believed this could be humanity’s most important—and most captivating—creation.
It’s both a scientific instrument and a fascinating artifact in its own right—and arguably our best pathway to understanding the human mind itself: consciousness, dreams, creativity. As a neuroscientist, I’d long felt something was missing—an analytical tool like AI. It offers a comparative framework, letting us study and contrast two distinct systems, like a controlled experiment.
The Culture of “AI for Science”
Moderator: Comparing systems. Let’s talk about “AI for Science.” You’ve been involved since its earliest days—a steadfast believer and a pure idealist. This is your driving mission. How did the model and culture you established when founding DeepMind keep you at the forefront of “AI for Science”?
Demis Hassabis: This is our ultimate objective. For me personally, the fundamental motivation is building AI to advance science, medicine, and our understanding of the world. That’s how I fulfill my mission—through a “meta approach”: first build the ultimate tool, then deploy it to achieve scientific breakthroughs. We’ve already delivered milestones like AlphaFold—and I’m confident many more lie ahead.
DeepMind has always prioritized this goal. In fact, we have an “AI for Science” division led by Pushmeet Kohli, now nearly a decade old. We launched it almost immediately upon returning from Seoul after the AlphaGo match—exactly ten years ago.
I’d been waiting—waiting for algorithms to mature and ideas to generalize. For me, conquering Go was a historic turning point; that’s when we realized the timing was right to apply these ideas to real-world challenges—and to start with the biggest scientific problems.
We’ve always believed this is AI’s most beneficial destination. What could be more meaningful than using it to cure disease, extend healthy human lifespans, and transform medicine? Next come materials science, environment, and energy—critical domains where AI will shine in the coming years.
Breakthroughs in Biology and Isomorphic Labs
Moderator: How has AI broken through in biology? You’re deeply involved with Isomorphic Labs—a passion project for you. From the outset, you’ve held firm belief in AI’s potential to cure disease. When will biology experience its “moment”—like language and programming have?
Demis Hassabis: I believe AlphaFold marked biology’s “moment.” Protein folding and its 3D structure was a fifty-year scientific puzzle. Solving it is essential for drug design and deciphering biology’s foundational code. Of course, it’s just one piece of drug discovery—albeit a crucial one.
Our newly spun-out company, Isomorphic Labs—which I personally enjoy leading—is focused on building core technologies in biochemistry and chemistry. These tools automatically design compounds that perfectly fit specific sites on proteins. Since we now know protein shapes and surface structures, we’ve locked onto the targets. Next, we must generate compounds that bind strongly to those targets—ideally avoiding off-target effects that cause toxicity.
Our ultimate dream is to shift 99% of today’s labor- and time-intensive exploration into computational simulation (in silico), reserving physical wet-lab experiments solely for final validation. If we achieve this—and I’m confident we will within the next few years—we’ll compress the average ten-year drug discovery timeline down to months, weeks, and eventually days.
I believe once we cross this threshold, curing all diseases will become attainable. Concepts like personalized medicine—for instance, drugs tailored to individual patients—will become reality. I expect the entire landscape of healthcare and drug development to be radically reshaped in the coming years.
New Sciences Born from Simulators
Moderator: Incredible. You’ve mentioned “AI for Science” repeatedly. Do you foresee AI giving rise to entirely new scientific disciplines—just as the Industrial Revolution gave birth to thermodynamics? Will our education systems incorporate fundamentally new fields? If so, what might they look like?
Demis Hassabis: Regarding this, I foresee several developments.
First, understanding and analyzing AI systems themselves will evolve into a full-fledged discipline—an engineering science. These artifacts we’re building are profoundly fascinating—and immensely complex. Eventually, their complexity will rival that of the human mind and brain. Thus, we must study them deeply to fully grasp how they operate—a task far beyond our current comprehension. I’m certain a new field will emerge; mechanistic interpretability is merely the tip of the iceberg, with vast uncharted territory remaining.
Second, I also believe AI itself will open entirely new scientific doors. Most exciting to me is “AI for Simulations.” I’m obsessed with simulation; every game I’ve written isn’t just AI-driven—it’s fundamentally a simulator. I believe simulators are our ultimate path to cracking hard problems in social sciences and humanities—like economics.
These fields are difficult because, like biology, they’re emergent systems—nearly impossible to subject to repeatable, controlled experiments. Suppose you raise interest rates by 0.5%. You can only act in the real world and observe outcomes; you may have theories, but you can’t run the experiment thousands of times. However, if we can simulate these complex systems with high fidelity, rigorous sampling and inference based on precise simulators could establish an entirely new science. I believe this will empower better decision-making in domains currently rife with uncertainty.
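The "run the experiment thousands of times" idea can be sketched with a deliberately toy simulator. Everything below (the model, its dynamics, the coefficients) is invented purely for illustration; a real economic simulator would be learned from data, as Hassabis describes. The sketch shows the methodological point: with a repeatable simulator, a policy question becomes a sampling problem.

```python
import random
import statistics

def toy_economy(rate_hike, seed):
    """Hypothetical toy simulator: returns inflation (%) after 12 months.
    The dynamics are invented for illustration, not a real economic model."""
    rng = random.Random(seed)
    inflation = 5.0  # starting inflation, percent
    for _ in range(12):
        shock = rng.gauss(0, 0.2)             # random demand shock each month
        inflation += shock - 0.1 * rate_hike  # hikes gently pull inflation down
    return inflation

def sample_policy(rate_hike, n=10_000):
    """Run the simulator many times under one policy -- the controlled,
    repeatable experiment that is impossible in the real economy."""
    return [toy_economy(rate_hike, seed) for seed in range(n)]

# Compare the same 10,000 random worlds with and without a 0.5% hike.
baseline = sample_policy(rate_hike=0.0)
hiked = sample_policy(rate_hike=0.5)
print(statistics.mean(baseline) - statistics.mean(hiked))
```

Because both policies are evaluated on identical random seeds, the comparison is a paired experiment: the estimated effect of the hike is isolated from the noise, which is exactly what one observation of the real world can never give you.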
Moderator: To achieve such high-fidelity simulations, what conditions will we need? World models—what scientific and engineering breakthroughs are required to reach this stage?
Demis Hassabis: I’ve been thinking deeply about this. In our work, we heavily use learning-based simulators—deployed in domains where either the underlying mathematics is poorly understood or the systems are too complex. We can’t solve problems by writing direct, case-specific simulation code alone—because it’s insufficiently precise and fails to capture all variables.
We’ve already applied this to weather forecasting. Our “WeatherNext” simulator is the world’s most accurate—and runs far faster than tools used by meteorologists today. I’m not sure we’ll ever fully understand everything—or whether that’s even desirable—but the first step is deeper understanding of these complex systems.
Even in biology, we’re studying “virtual cells”—an extremely dynamic emergent system. Just as mathematics is physics’ perfect descriptive language, machine learning will become biology’s perfect descriptive language. Biology and many natural systems involve vast amounts of weak signals, faint correlations, and enormous datasets—far exceeding the human brain’s analytical capacity. Yet within this deluge lie genuine connections, correlations, and insightful causal relationships.
Machine learning is the perfect tool for describing such systems. Even today, mathematics hasn’t achieved this—either because systems are too complex for even top mathematicians, or because math lacks the expressive power to capture highly emergent, dynamic systems—partly due to their inherent messiness and stochastic nature.
Ultimately, once we master these simulators, new scientific branches may emerge. You could attempt to extract explicit equations from these implicit or intuitive simulators. Since you can sample the simulator endlessly, perhaps one day you’ll discover fundamental scientific laws—like Maxwell’s equations.
Maybe. I don’t know if such laws exist for emergent systems—but if they do, I see no reason why this method couldn’t uncover them.
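The idea of distilling an explicit law from an implicit simulator can be illustrated with a toy example: treat the simulator as a black box you can sample endlessly, hypothesize a functional form, and fit it. The simulator below secretly implements free fall; the function names and the quadratic hypothesis are assumptions made for this sketch, not anything from the interview.

```python
def black_box_simulator(t):
    """Stand-in for a learned simulator we can query at will.
    Internally it 'knows' free fall, but we treat it as opaque."""
    g = 9.81
    return 0.5 * g * t * t  # distance fallen after t seconds

# Sample the simulator densely over its input space.
samples = [(t / 10, black_box_simulator(t / 10)) for t in range(1, 101)]

# Hypothesize an explicit law of the form d = c * t^2 and fit c
# by least squares: c = sum(d * t^2) / sum(t^4).
num = sum(d * t**2 for t, d in samples)
den = sum(t**4 for t, _ in samples)
c = num / den
print(f"recovered law: d = {2 * c:.2f}/2 * t^2")  # 2c recovers g
```

With a noiseless toy the fit is exact; the interesting (and open) version of the problem is doing this against a noisy, high-dimensional learned simulator, where the right functional form is not known in advance.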
Moderator: That would be extraordinary. You’ve touched on a theory suggesting the universe’s fundamental building blocks may resemble information—a more theoretical layer. How do you view this—and what does it imply for classical Turing machines?
Demis Hassabis: Of course, you can cite Einstein’s famous E=mc² and his broader work showing energy and matter are equivalent. But I actually believe information is equivalent too. You can view the organization of matter and structure—especially in entropy-resisting systems like biology—as fundamentally information-processing systems. Thus, I believe these three are interconvertible.
Yet I sense information is most fundamental—precisely opposite the view of classical physicists in the 1920s, who held energy and matter as primary. I actually think viewing the universe first and foremost as composed of information is a better way to understand reality.
If this holds—and I believe mounting evidence supports it—then AI’s significance extends even further than we imagine. It’s already profoundly significant because its core is organizing, understanding, and constructing informational objects.
To me, AI’s core is information processing. If you adopt information processing as your primary lens for understanding the world, you’ll discover deep intrinsic connections across seemingly disparate domains.
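A standard physical result that supports this interconvertibility, though not cited in the interview itself, is Landauer's principle: erasing one bit of information has a minimum thermodynamic cost,

$$
E_{\min} = k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\,\text{J} \quad \text{at } T \approx 300\,\text{K},
$$

where $k_B$ is Boltzmann's constant and $T$ the temperature. It is the textbook bridge between information and energy, and one reason "information as fundamental" is taken seriously in physics rather than only as metaphor.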
Moderator: So do you believe classical Turing machines can compute everything?
Demis Hassabis: Sometimes I reflect on our work and consider myself a “defender of Turing,” because Alan Turing is one of my greatest scientific heroes. I believe his work laid the foundation not only for computers and computer science—but for AI itself. The Turing machine concept is among history’s most profound achievements: anything computable can be computed by a relatively simple machine. Thus, I believe our brains are likely approximate Turing machines.
Exploring links between Turing machines and quantum systems is fascinating. Yet through AlphaGo—and especially AlphaFold—we’ve shown that classical Turing machines, dressed in modern neural network clothing, can model problems once thought to require quantum mechanics. Protein folding, for example, involves quantum-scale particles; one might assume hydrogen bonds’ quantum effects and other complex interactions must be modeled explicitly.
Yet it turns out a classical system can yield a near-optimal solution. So we may find many things we once believed required quantum systems for simulation or operation can, with proper methodology, be modeled classically.
Consciousness Philosophy
Moderator: You’ve consistently viewed AI as a tool—like telescopes, microscopes, or astrolabes over past centuries. But when facing a machine capable of simulating almost anything—even quantum systems, as you say—when does it transcend the realm of tool? Will that day truly arrive?
Demis Hassabis: I strongly feel that, in our shared mission to build AGI—including many in this room—the best path forward is first to build a tool: an exceptionally intelligent, practical, and precise one—and then cross the next threshold. That achievement alone is already deeply consequential. Of course, this tool may grow increasingly autonomous and agent-like—exactly what we’re witnessing today. We’re riding the wave of the Agent Era.
Yet deeper questions remain: Does it possess agency? Does it have consciousness? These are questions we’ll inevitably confront. But I suggest treating them as step two—and perhaps using the tool built in step one to help us explore these profound questions.
Ideally, this process will also deepen our understanding of our own brains and minds—and allow us to define concepts like “consciousness” more precisely than we can today.
Moderator: What’s your rough prediction for how consciousness might be defined in the future?
Demis Hassabis: Not much beyond what philosophers have debated for millennia. But for me, certain components are clearly necessary, though perhaps not sufficient: self-awareness, a distinction between self and other, and some form of temporal continuity seem indispensable for any entity we would regard as conscious.
But what constitutes a complete definition remains an open question. I’ve discussed this extensively with great philosophers. A few years ago, I had a deep exchange on this topic with Daniel Dennett, who sadly passed away recently. A core issue is behavioral manifestation: does the system behave like a conscious one? You might argue that as certain AI systems approach AGI, they’ll eventually do so.
But then comes the follow-up: why do we believe each other is conscious? Partly because of our behavior—our actions resemble those of conscious beings. But another factor is that we all run on the same substrate.
So I believe if both conditions hold—behavioral similarity and shared substrate—then assuming identical subjective experience is logically the most parsimonious explanation. That’s why we normally don’t debate each other’s consciousness. But obviously, we can never achieve substrate equivalence with artificial systems. So I think bridging that gap fully is extremely difficult. You can examine behavior—but what about experience? There may be ways to address this post-AGI—but that’s likely beyond today’s discussion, even within “AI and Science.”
Moderator: Wonderful. We’ll soon open the floor for audience Q&A—please prepare your questions. You mentioned philosophers earlier—particularly Kant and Spinoza—as your two favorite thinkers. Kant was a classic deontologist, emphasizing duty above all; Spinoza held a near-deterministic view of the universe. How do you reconcile these two starkly different philosophies—and what is your fundamental view of how the world operates?
Demis Hassabis: I admire both philosophers—and was particularly struck by Kant’s idea—deeply resonant during my neuroscience PhD—that “the mind creates reality.” I believe this is essentially correct. It provides another superb rationale for studying how the mind and brain function. Since my ultimate quest is to understand reality’s nature, I must first understand how the mind interprets reality. That’s Kant’s gift to me.
Spinoza, meanwhile, speaks more to the spiritual dimension. If you use science as a tool to understand the universe, you’re already touching the deepest mysteries of cosmic operation.
That’s precisely how I experience our current endeavor. When I immerse myself in scientific research, AI development, and building these tools, I feel as though we’re reading the universe’s language—in some sense.
Moderator: Beautiful. That’s the most poetic description of your daily work: Demis, you embody scientist, orator, and philosopher. Before we close, let’s do rapid-fire Q&A. He hasn’t seen these questions beforehand. Predict the year AGI will be achieved: earlier or later than expected—or you may decline to answer.
Demis Hassabis: I choose 2030. I’ve held this prediction consistently.
Moderator: Okay—2030. Then, when AGI arrives, what book, poem, or paper do you recommend as essential reading?
Demis Hassabis: For the world after AGI, my favorite book is David Deutsch’s The Fabric of Reality. I believe its ideas remain relevant. I hope to use AGI to answer the profound questions raised in that book—and that will be my central focus in the AGI era.
Moderator: Wonderful. What’s been your proudest moment at DeepMind so far?
Demis Hassabis: We’ve been fortunate to experience many peak moments. I’d say AlphaFold’s emergence is the proudest.
Moderator: Okay. Finally, a few game-related questions. If you were playing a high-stakes turn-based strategy game—like Civilization or Polytopia—and could recruit one historical scientist as your teammate—Einstein, Turing, or Newton—who would you pick for your squad?
Demis Hassabis: I’d choose von Neumann. In such a scenario, you need a game theory expert—and I think he’s the greatest.
Moderator: Absolutely a legendary teammate. Demis, you’re truly a polymath. Thank you so much for joining us today. Please join me in giving Demis a round of applause for this brilliant session. Thank you.
Join the TechFlow official community to stay tuned:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News