
Interview with a16z Founders: AI Makes Breakthroughs While Bitcoin Development Stalls
TechFlow Selected

ChatGPT is just one example of the broader phenomenon of large language models (LLMs). Not only are people outside the tech industry amazed by its capabilities; many within the industry are surprised as well.
Compiled by: Qiyuan Society

Marc Andreessen | Co-founder of venture capital firm Andreessen Horowitz
Reason: I'm always skeptical of people who claim this time is different—whether in technology or cultural trends. So, with artificial intelligence (AI), is it really different this time?
Andreessen: AI has been a core dream of computer science since the 1940s. Historically, there have been five or six AI booms when people believed this was finally the moment AI would deliver on its promise. But each wave ended in an "AI winter," proving that success had not yet been achieved. We are now in another such boom.
This time, however, things really are different. We now have clear tests for measuring human-like intelligence, and computers are actually beginning to outperform humans on them—not just on tasks like "can you do math faster?" but more importantly on real-world interactions like "can you make better sense of reality?"
In 2012, computers surpassed humans at recognizing objects in images—a major breakthrough. This enabled self-driving cars. What is a self-driving car fundamentally? It’s processing vast numbers of images and deciding: “Is that a child or a plastic bag? Should I brake or keep going?” Tesla's Autopilot isn't perfect, but it's already highly capable. Our portfolio company Waymo is already operating commercially.
About five years ago, we began seeing breakthroughs in natural language processing—computers started becoming genuinely good at understanding written English. They also excelled at speech synthesis, which is an extremely difficult problem. Recently, ChatGPT marked a significant leap forward.
ChatGPT is just one example of the broader phenomenon of large language models (LLMs). Not only are people outside the tech industry amazed by their capabilities; many within the industry are stunned as well.
Reason: For those of us unfamiliar with how they work, ChatGPT truly feels like magic. As Arthur C. Clarke’s third law says: “Any sufficiently advanced technology is indistinguishable from magic.” Sometimes it really is astonishing. What do you think of ChatGPT?
Andreessen: Well, it's both a trick and a breakthrough. It raises deep questions: What is intelligence? What is consciousness? What does it mean to be human? Ultimately, these big questions aren’t just about “what can machines do?” but rather “what do we want to achieve?”
LLMs can basically be seen as a very advanced form of autocomplete. Autocomplete is a common computing function. If you use an iPhone, when you start typing a word, it completes the rest to save effort. Gmail now autocompletes entire sentences—you type part of one, like “Sorry I can’t make your event,” and it suggests the rest. An LLM is like autocomplete across paragraphs, across 20 pages, or even, eventually, across an entire book.
When you’re writing a book, you type the first sentence, and the LLM suggests the rest. Will you accept its suggestions? Probably not entirely. But it gives you ideas—chapter outlines, themes, examples, even phrasing. With ChatGPT, you can already do this. You input: “Here’s my draft, here are five paragraphs I wrote. How can I rewrite this better? More concisely? In a way younger people can understand?” And it autocompletes in all kinds of interesting ways. Then it’s up to the user what to do with it.
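To make the "advanced autocomplete" framing concrete, here is a minimal sketch in Python: a toy bigram completer that always picks the most frequent next word seen in a tiny corpus. The corpus and function name are invented for illustration; a real LLM does the same kind of next-token prediction, but with a neural network trained on vastly more text.

```python
from collections import Counter, defaultdict

# Tiny training corpus (invented for this example).
corpus = (
    "sorry i can't make your event tonight . "
    "sorry i can't make it this week . "
    "i can't wait to see you ."
).split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def autocomplete(prompt: str, max_words: int = 8) -> str:
    """Greedily extend the prompt with the most common continuation."""
    words = prompt.lower().split()
    while len(words) < max_words and followers[words[-1]]:
        words.append(followers[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("sorry i"))  # -> "sorry i can't make your event tonight ."
```

A model like this already "hallucinates" in miniature: when a prompt has no good continuation in its training data, it still emits whichever word is statistically most likely, whether or not the result is true.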
Is it a trick or a breakthrough? It’s both. Yann LeCun, an AI legend working at Meta, thinks it’s not a breakthrough but more of a trick. He compares it to a clever dog—it autocompletes text to please you but doesn’t understand any of it. It doesn’t know what humans are or the laws of physics. It produces so-called hallucinations—when no accurate completion exists, it still wants to make you happy, so it fabricates one. It starts inventing names, dates, and historical events that never happened.
Reason: You mentioned “hallucinations,” but I’m also thinking of another concept—imposter syndrome. I’m not sure whether it applies to humans or AI, but sometimes we all just say what we think others want to hear, right?
Andreessen: That touches on a fundamental question: What do people actually do? And that’s precisely what makes many people uneasy—what is human consciousness? How do we form thoughts? I don’t know about you, but in my experience, many people spend their days saying what they think others want to hear.
Life is full of such autocompletions. How many opinions are truly original, deeply held beliefs? How many are simply what people think others expect them to say? We see this in politics—of course you’re the exception—where most people hold identical views on nearly every imaginable issue. We know these individuals haven’t reasoned through all these topics from first principles. We know it’s social reinforcement at work. Is that really more powerful than machines trying to do something similar? It feels a bit like it. I think we’ll discover we’re more like ChatGPT than we imagined.
Alan Turing invented what's known as the Turing Test. He essentially said: "Suppose we develop a program we believe has artificial intelligence, one that seems as intelligent as a human. How do we confirm it's truly intelligent?" So a human evaluator chats in a chat room with two hidden participants, one a person and one a computer, and each tries to convince the evaluator that it is the human. If the computer convinces the evaluator it's human, it passes the test.
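As a minimal sketch of that protocol (the canned responders and the guessing judge here are hypothetical stand-ins for illustration, not anything Turing specified):

```python
import random

# Toy imitation game. `human_reply` and `machine_reply` are invented
# stand-ins; in a real test they would be a person and the program
# under evaluation.
def human_reply(prompt: str) -> str:
    return "I grew up in a small town and I love rainy days."

def machine_reply(prompt: str) -> str:
    return "I grew up in a small town and I love rainy days."

def imitation_game(judge, rounds: int = 3) -> bool:
    """Return True if the judge mistakes the machine for the human."""
    responders = [human_reply, machine_reply]
    random.shuffle(responders)                  # hide which label is which
    contestants = dict(zip("AB", responders))
    transcript = {label: [fn("tell me about yourself") for _ in range(rounds)]
                  for label, fn in contestants.items()}
    guess = judge(transcript)                   # the label the judge calls human
    return contestants[guess] is machine_reply

# A judge who merely guesses is fooled about half the time, which is
# exactly the "people are easily fooled" weakness raised next.
fooled = sum(imitation_game(lambda t: random.choice("AB")) for _ in range(1000))
print(f"machine passed in {fooled}/1000 trials")
```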
One obvious problem with the Turing Test is that people are easily fooled. Is that a computer skilled at deception? Or does it reveal underlying weaknesses in what we consider profound humanity?
Intelligence isn’t a single metric. Humans and computers each excel or falter at different things. But computers have become exceptionally good at what they’re good at.
Try Midjourney or DALL-E—they produce art more beautiful than most human artists. Two years ago, did we expect computers to create stunning artwork? No. Can they do it now? Yes. What does that mean for human artists? If only a few humans can produce such beauty, perhaps we’re not very good at art after all.
Reason: Humanity is often tied to culture. Should we care whether AI comes from Silicon Valley or elsewhere?
Andreessen: I think we should. One topic we’re discussing here is the future of war. You can see hints of it in self-driving cars. If you have a self-driving car, you can have a self-flying plane, a self-navigating submarine, intelligent drones. Now we have the concept of “loitering munitions,” as seen in Ukraine—essentially suicide drones. They hover until they find a target, then drop a grenade or explode themselves.
I recently watched the new Top Gun movie, which touched on this: Training an F-16 or F-18 pilot costs millions of dollars—and the pilots themselves are invaluable. We put people in metal tubes flying at supersonic speeds. The maneuvers planes can perform are limited by human physiology. By the way, aircraft designed to sustain human life are massive and expensive, filled with systems just to support the pilot.
But supersonic AI drones face none of these limits. Their cost is a fraction. They don’t need to resemble anything we currently imagine. They can take any aerodynamic shape—no need to accommodate a human. They can fly faster, maneuver more aggressively, execute high-G moves no human could survive. They can make decisions far more quickly. They process orders of magnitude more information per second than any human. You wouldn’t have just one—you’d have 10, 100, 1,000, 10,000, even 100,000 operating simultaneously. The nation with the most advanced AI will have the strongest defense.
Reason: Will our AI reflect American values? Does AI have a cultural component? Should we worry about this?
Andreessen: Look at debates around social media. There’s intense controversy over embedded values in social platforms—content moderation, ideologies allowed to spread.
In China, there’s the so-called Great Firewall, which constantly sparks debate. If you’re a Chinese citizen, it restricts what you can see. Cross-cultural issues arise too. TikTok, a Chinese platform operating in the U.S., has many American users, especially children. Many speculate: Is TikTok’s algorithm deliberately nudging American kids toward destructive behaviors? Could this be a form of hostile action?
In short, all these issues in the social media era will be amplified millions of times over in the AI era. They become vastly more urgent and important. Human-created content is limited—but AI will be applied everywhere.
Reason: What you’ve described suggests we may need preemptive, careful regulation. Or is this unregulatable?
Andreessen: I wonder how Reason magazine feels about government?
Reason: Ha! Well, even though many of us are skeptical of government, some still think, "Maybe now is the time to draw a line." For example, they might want to limit how states use AI.
Andreessen: Let me counter with your own argument: “The road to hell is paved with good intentions.” Like saying, “Wow, this time if we regulate carefully, thoughtfully, rationally, reasonably, effectively—wouldn’t that be great?”
“Maybe this time rent control could work, if we’re just smarter about it.” Clearly, your own argument is that it doesn’t actually happen—because of all the reasons you’ve always talked about.
So yes, there’s a theoretical case for such regulation. But what we face isn’t abstract theory—it’s actual, real-world regulation. And what do we get? Regulatory capture, corruption, barriers to entry for early innovators, political manipulation, and distorted incentives.
Reason: You've talked about how innovative tech startups often end up being absorbed into existing enterprises—an issue involving not just state relations but broader business practices. Recent disclosures in the Twitter Files and voluntary cooperation between companies have drawn much attention, but collaboration with government agencies may also pose imminent threats. It seems to me we’ll face more such challenges. Is the blurring line between public and private inevitable? Do you see this as a threat to innovation, or could it potentially foster innovation?
Andreessen: The textbook view of the U.S. economy is that it’s based on free-market competition. Companies compete to solve problems. Different toothpaste brands try to sell you their product—that’s competitive. Occasionally, externalities require government intervention, and you see oddities like “too big to fail” banks—but those are exceptions.
After 30 years in startups, my experience is the opposite. James Burnham was right. Decades ago, we shifted from primitive capitalism (which he called bourgeois capitalism) to a different model—managerial capitalism. The actual operating model of the U.S. economy is that large corporations form oligopolies, cartels, and monopolies, then collectively corrupt and control regulatory and government processes. They ultimately capture the regulators.
Thus, most economic sectors are essentially collusions between big companies and regulators—designed to ensure the longevity of monopolies and block new competitors. To me, this fully explains the education system (K–12 and universities), healthcare, housing crisis, financial crises and bailouts, and the Twitter Files.
Reason: Are there industries less affected by the market dynamics you just described?
Andreessen: The question boils down to: Is there real competition? Capitalism is essentially applying evolution to economics—natural selection, survival of the fittest, the idea that better products should win in the marketplace. Markets should be open: New companies can launch superior products and displace incumbents because their offerings are better and more popular.
So—is real competition possible? Do consumers truly have meaningful choices among alternatives? Can you actually bring a new product to market, or will existing regulatory barriers shut you out?
Banking is a prime example. During the 2008 financial crisis, a key argument was “We must bail out these banks—they’re too big to fail.” So Dodd-Frank was enacted. Yet the result of this act—call it the Big Bank Protection Act—is that “too big to fail” banks are now bigger than ever, and the number of new banks founded in the U.S. has plummeted.
A cynical answer: this doesn't happen in trivial areas. Anyone can launch a new toy. Anyone can open a restaurant. These are beloved consumer categories. But compare them to healthcare, education, housing, or the legal system, and the rule becomes clear:
If you want freedom, avoid serious businesses.
If you're irrelevant to the structures of societal power, go ahead and do whatever you want. But if your business affects government and major policy issues, then no, that kind of freedom won't exist.
It’s obvious. Why are all universities so similar? Why is their ideology so uniform? Why is there no marketplace of ideas in higher education? Because there aren’t more universities. Why aren’t there more? Because you need accreditation. And accreditation bodies are run by existing universities.
Why is healthcare so expensive? A major reason is that it’s largely paid for by insurance—private and public. Private insurance prices mirror public ones because Medicare is a giant buyer.
How are Medicare prices set? A division within the Department of Health and Human Services operates a pricing committee for medical goods and services, reminiscent of the Soviet Union. Every year, doctors gather in a conference room, say at a Hyatt in Chicago, and set those prices. The USSR had a central pricing bureau, and it failed. We don't have one for the whole economy, but we do for the entire healthcare system. It fails for the same reasons the Soviet system failed. We've fully replicated the Soviet model, and somehow we expect better outcomes.
Reason: About ten years ago, you compared Bitcoin to the internet. How accurate do you think that prediction was in hindsight?
Andreessen: I still stand by the core arguments in that article. But I’d revise one point: Back then, we thought Bitcoin itself would evolve broadly, like the internet, spawning many applications. That hasn’t happened. Bitcoin itself has largely plateaued. But meanwhile, many alternative projects emerged—with Ethereum being the largest. So if I rewrote that article today, I might highlight Ethereum instead of Bitcoin—or just discuss crypto broadly.
Beyond that, all the original ideas still hold. The points I made cover crypto, Web3, blockchain—I call it the other half of the internet. When we first built what we now know as the internet, we wanted full functionality: commerce, transactions, trust. But in the 1990s, we didn’t know how to achieve that online. Blockchain breakthroughs have now given us the tools.
We now have the technical foundation to build trust networks atop the internet. The internet itself is inherently untrusted—anyone can impersonate anyone. Web3 adds a trust layer. On these layers, you can represent not just money, but ownership claims—real estate, cars, insurance contracts, loans, digital asset rights, unique digital art. You can have universal internet contracts—legally binding agreements signed online. You can even have internet-native escrow services for e-commerce, where two parties transact via a trusted intermediary built into the internet.
You can build everything needed for a full, global, internet-native economy on top of an untrusted network. It’s a grand vision, boundless in potential. We’re in the process of realizing it. Many things have already succeeded; others haven’t yet—but I believe they eventually will.
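As a concrete illustration of the escrow idea, here is a toy, in-memory sketch in Python. This is not a real smart contract; on an actual chain this logic would live in contract code and releasing funds would move tokens. The class, field names, and two-party approval rule are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: int
    funded: bool = False
    approvals: set = field(default_factory=set)

    def fund(self, who: str) -> None:
        assert who == self.buyer and not self.funded
        self.funded = True                      # buyer locks the payment

    def approve(self, who: str) -> None:
        assert who in (self.buyer, self.seller)
        self.approvals.add(who)

    def release(self) -> str:
        # Funds move only once both parties sign off: the "trusted
        # intermediary" is the rule itself, not a human middleman.
        if self.funded and self.approvals == {self.buyer, self.seller}:
            return f"pay {self.amount} to {self.seller}"
        return "held"

deal = Escrow(buyer="alice", seller="bob", amount=100)
deal.fund("alice")
deal.approve("alice")
deal.approve("bob")
print(deal.release())  # -> "pay 100 to bob"
```

The design point is that neither party has to trust the other, only the publicly visible rule; that is what a trust layer on top of an untrusted network buys you.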
Reason: Which industries do you currently see as worth investing in?
Andreessen: Research and development are often paired, but they’re distinct. Research funds brilliant people to explore deep scientific and technical questions—even when it’s unclear what products might emerge or whether something is feasible.
We focus on development. When we invest in a company for product development, foundational research should already be complete. There shouldn’t be unresolved basic research questions—otherwise, as a startup, you wouldn’t even know if a viable product is possible. Also, the product should be close enough to commercialization that you can realistically bring it to market within about five years.
This formula works well in computing. Government-funded research in information and computer science over 50 years—from WWII onward—led to the computer, software, and internet industries. It also works in biotech.
I think these are the main areas where basic research has yielded tangible results. Should basic research receive more funding? Almost certainly. Yet today, basic research faces a severe crisis—the replication crisis. Much work thought to be foundational turns out to be invalid—or even fraudulent. One major problem with modern universities is that much of their research appears fake. Would you recommend pouring more money into a system producing false results? No. But do we need basic research to generate new products downstream? Absolutely.
On the development side, I’m more optimistic. We generally don’t lack capital. Essentially, every strong entrepreneur can get funded.
The real bottleneck isn’t money. It’s competition and how markets operate. In which economic sectors can startups actually exist? Can you really have an education startup? A healthcare startup? A housing startup? A financial services startup? Can you build a new internet bank that works differently? In fields where we want rapid progress, the constraint isn’t funding—it’s whether such companies are allowed to exist.
I think in some areas, even though conventional wisdom says you can’t start a company, you actually can. I’m talking about space, certain niches in education, and crypto.
SpaceX is perhaps the best example. It's a government-dominated, heavily regulated market. I can't even recall the last time someone tried to build a new launch platform before it. Deploying satellites involves countless regulations. Then there's the engineering complexity. Elon Musk wanted reusable rockets that land autonomously, which was deemed impossible. Past rockets were disposable; his rockets land and fly again. SpaceX climbed the wall of skepticism; Musk and his team prevailed through sheer determination.
In business, we often discuss how hard this entrepreneurial journey is. This is the deal entrepreneurs sign up for—much riskier than starting a new software company. It demands higher capability, greater risk tolerance.
Such companies face more failures—they get blocked somehow. You also need a founder willing to bear this burden. Someone like Elon Musk, Travis Kalanick (Uber), or Adam Neumann (WeWork). In the past, figures like Henry Ford. You need someone akin to Attila the Hun, Alexander the Great, Genghis Khan—someone exceptionally intelligent, resolute, aggressive, fearless, able to endure relentless attacks, hatred, abuse, and security threats. We need more such people. I hope we can find a way to cultivate them.
Reason: Why is there such intense anger toward billionaire entrepreneurs? For instance, U.S. senators tweet that billionaires shouldn’t exist.
Andreessen: I think this traces back to Nietzsche—he called it “ressentiment,” a toxic mix of resentment and suffering. It’s foundational to modern culture, Marxism, and progressivism. We hate those better than us.
Reason: This also ties into Christianity, right?
Andreessen: Exactly, Christianity. The last shall be first, and the first shall be last. It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God. Christianity is sometimes described as the last religion, the final possible religion on Earth, because it appeals to victims. Life being what it is, victims outnumber victors, so victims are the majority. A religion that captures all victims, or everyone who sees themselves as victims, typically the lower strata of society, can dominate. In social science, this is sometimes called the "crab bucket" effect: when one crab climbs out, the others pull it back down.
It’s a problem in education too—when a child excels, others bully him until he’s no longer exceptional. In Scandinavian cultures, there’s “tall poppy syndrome”—the tall poppy gets cut down. Resentment is like poison. It gives satisfaction because it absolves us: “If they’re more successful than me, they must be worse. Clearly, they’re immoral. They must be criminals. They must be making the world worse.” This mindset runs deep.
I’ll say this: The best entrepreneurs we meet are utterly unaffected by these notions. They find the whole idea absurd. Why waste time worrying about what others do or think of you?