
Marc Andreessen's Latest Interview: DeepSeek, Unitree, and the Power Structure Under AI's Influence

At this stage, the winners in AI are all users, while the losers are companies with proprietary models.

Author: MD
Produced by: Bright Company
Recently, the well-known American podcast Invest Like the Best interviewed Marc Andreessen, co-founder of Andreessen Horowitz, once again. During the interview, Marc and host Patrick delved into how AI is reshaping technology and geopolitics, discussed DeepSeek's open-source artificial intelligence and its significance in great power tech competition, shared views on the evolution of global power structures, and explored the transformation of the venture capital industry as a whole.
Bright Company used AI tools to summarize the core content of the interview as soon as it was released; full details are available via the "Original Link" at the end of the article.
Below is the interview content (edited):
On DeepSeek, AI Winners and Losers
Patrick: Marc, I think we have to start with the most central question. Can you talk about your thoughts on DeepSeek's R1?
Marc: There are many dimensions to this. I believe the U.S. remains the acknowledged scientific and technological leader in artificial intelligence. Most of the ideas within DeepSeek originated from work done over the past 20 years—or, surprisingly, even 80 years ago—in the U.S. or Europe. Initial research on neural networks began as early as the 1940s at research universities in the U.S. and Europe.
So, from the perspective of knowledge development, the U.S. is still far ahead.
But DeepSeek has executed exceptionally well in applying this knowledge. They've also done something remarkable: they've released it as open source to the entire world. This is actually astonishing because it represents a reversal of the trend. You have American companies like OpenAI that are essentially completely closed.
Elon Musk’s lawsuit against OpenAI partly demands they change their name from OpenAI to Closed AI. OpenAI was originally conceived so that everything would be open-sourced, but now everything is closed. Other major AI labs, such as Anthropic, are also entirely closed. In fact, they’ve even stopped publishing research papers, treating everything as proprietary property.
Yet for their own reasons, the DeepSeek team has genuinely fulfilled the promise of open source. They released the code for their LLM (called V3) and their reasoning engine (called R1), along with detailed technical papers explaining how they built it—essentially providing a blueprint for anyone else who wants to do similar work.
So it’s public. There's a false narrative out there that if you use DeepSeek, you’re giving all your data to the Chinese. That’s true if you use the service on DeepSeek’s website. But you can download the code and run it yourself. Let me give an example: Perplexity is an American company—you can use DeepSeek R1 on Perplexity, fully hosted in the U.S. Microsoft and Amazon now both have cloud versions of DeepSeek—you can run it on their platforms. Obviously, these are both American companies using American data centers.
This is very important. You can now download this system and actually run it on $6,000 worth of hardware at home or in your office. Its capabilities are comparable to the cutting-edge systems from companies like OpenAI and Anthropic.
These companies spent massive amounts of money building their systems. Now, you can buy it for $6,000 and have complete control. If you run it yourself, you have full control. You can see exactly what it’s doing transparently, modify it, and do all sorts of things to it.
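For the curious, here is what "run it yourself" can look like in practice—a minimal sketch, assuming the Hugging Face transformers stack and one of DeepSeek's published smaller checkpoints (more on distillation below). The full V3/R1 models need far heavier hardware than this; the quantization settings and prompt are illustrative only.

```python
# A minimal local-inference sketch. Assumes:
#   pip install transformers torch accelerate bitsandbytes
# The checkpoint is one of DeepSeek's published distilled models;
# 4-bit quantization is used so it fits on a single consumer GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=quant_config, device_map="auto"
)

# Nothing leaves your machine: the prompt and the output stay local.
messages = [{"role": "user", "content": "In two sentences, what is model distillation?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

This is exactly the point about data sovereignty above: run this way, no prompt or completion ever touches DeepSeek's servers.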
It also has a fantastic feature called distillation. You can compress the large model requiring $6,000 in hardware into smaller versions. People online have already created smaller optimized versions that you can run on a MacBook or iPhone. These aren’t as intelligent as the full version, but still quite smart. You can create customized, domain-specific distilled versions that perform exceptionally well in specific areas.
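Distillation itself is conceptually simple. Below is a minimal sketch of the classic knowledge-distillation loss (Hinton et al., 2015), in which a small "student" model is trained to match a large "teacher" model's softened output distribution. This is the textbook technique, not necessarily DeepSeek's exact recipe—their own distillations were reportedly produced by fine-tuning smaller models on R1-generated outputs.

```python
# Classic knowledge distillation: the student learns to match the
# teacher's softened output distribution plus the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of (a) KL divergence to the teacher's temperature-softened
    distribution and (b) ordinary cross-entropy against true labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients so the soft term stays comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-class vocabulary.
teacher_logits = torch.randn(4, 10)  # would come from the frozen large model
student_logits = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```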
This is a huge leap toward democratizing large-model inference—and reasoning models like R1 for programming and science. Six months ago, these were extremely esoteric, prohibitively expensive, and proprietary. Now they're free and permanently accessible to everyone.
Every major tech company, every internet company, every startup—we saw dozens, even hundreds of startups this week alone—is either rebuilding around DeepSeek, integrating it into their products, studying its techniques, or using them to improve existing AI systems.
Mark Zuckerberg of Meta recently mentioned that the Meta team is taking apart DeepSeek, legally borrowing these ideas since it’s open source, and ensuring the next version of Llama matches or exceeds DeepSeek in reasoning ability. This truly pushes the world forward.
Two main takeaways. First, AI will be everywhere. Many AI risk analysts, safety experts, regulators, officials, governments, the EU, the Brits... all want to restrict and control AI, and this basically guarantees none of that will happen—and I think that's good. It aligns perfectly with the free tradition of the internet. Second, this achieves a 30x reduction in the cost of reasoning.
And perhaps finally, this shows that reasoning will work—in any human endeavor where answers can be generated and later verified for correctness by technical experts.
We’ll have AIs capable of human and superhuman reasoning, operating effectively in critically important fields: coding, math, physics, chemistry, biology, economics, finance, law, and medicine.
This basically ensures that within five years, everyone on Earth will have a superhuman-level AI lawyer and AI doctor on standby, just a standard feature on their phone. This will make the world better, healthier, and more wonderful.
Patrick: But it's also the most unstable moment—a model becomes obsolete within two months, and massive innovation is occurring at every technical layer. But taking this moment purely on its own, entering this new paradigm: if you were writing a column on winners and losers among all stakeholders—new app developers, existing software developers, infrastructure providers like Nvidia, open versus closed model companies—who do you think are the winners and losers after R1's release?
Marc: If you took a snapshot today, from a zero-sum game perspective, the winners at one point in time are all users, all consumers, every individual, and every business using AI.
Some startups, like those offering AI legal services, were paying 30 times the current cost for AI usage just last week.
For a company building an AI lawyer, if the key input cost drops 30-fold, it's as if gasoline prices dropped 30-fold for a driver. Suddenly you can go 30 times farther on the same dollar, or use the freed-up spending power to buy other things. All these companies will either dramatically expand their use of AI across domains or offer their services much more cheaply or for free. So for users and the world, it's an excellent outcome even viewed as a fixed-size pie.
The losers are companies with proprietary models, like OpenAI, Anthropic, and so on. You'll notice that OpenAI and Anthropic issued rather forceful, almost defensive statements in the past week explaining why this isn't the end for them. There's an old saying in business and politics: when you're explaining, you're losing.
Then there's Nvidia. There's been a lot of commentary about this. Nvidia makes the standard AI chips people use—there are some alternatives, but Nvidia is what most people use. Their chips carry profit margins of up to 90%, which is reflected in the stock price; Nvidia is one of the most valuable companies in the world. One thing the DeepSeek team did in their paper was figure out how to use cheaper chips—actually still Nvidia chips, but used far more efficiently.
Part of the 30x cost reduction is simply needing fewer chips. By the way, China is building its own chip supply chain, and some companies are starting to use China-derived chips—this is a more fundamental threat to Nvidia. So that’s a snapshot at one point. But your question implies another way to view it—over time, you want to see the elasticity effect. Satya Nadella used the phrase Jevons Paradox.
Imagine gasoline. If gasoline prices drop sharply, suddenly people drive more. This happens frequently in transportation planning: a city like Austin has traffic congestion, someone gets the idea to build a new highway beside the existing one, and within two years the new highway is jammed too—maybe it's even harder to get from place to place. The reason is that reduced prices for key inputs induce demand.
If AI suddenly becomes 30 times cheaper, people might use it 30 times more—or perhaps 100 or 1,000 times more. The economic term for this is elasticity.
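To make the arithmetic concrete, here is the standard constant-elasticity demand model (an illustrative assumption; Marc doesn't commit to specific numbers):

```latex
% Constant-elasticity demand: quantity scales as a power of price.
Q \propto P^{-\varepsilon}
\qquad \Longrightarrow \qquad
\frac{Q_2}{Q_1} = \left(\frac{P_2}{P_1}\right)^{-\varepsilon} = 30^{\varepsilon}
\quad \text{when } P_2 = \frac{P_1}{30}.
```

With ε = 1, a 30x price cut yields 30x the usage at unchanged total spending; with ε ≈ 1.35, usage grows roughly 100-fold. Whenever ε > 1, total spending P·Q scales as 30^(ε−1), so spending rises as the price falls—which is exactly the Jevons Paradox Nadella invoked.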
So lower prices equal explosive growth in demand. I think there’s a very plausible scenario that on the other side, with explosive usage growth, DeepSeek does very well. By the way, OpenAI and Anthropic also do very well, Nvidia does very well, Chinese chip makers do very well.
Then you see a tidal effect—the entire industry explodes in growth. We’re really only beginning to understand how to use these technologies. Reasoning only started working effectively in the past four months. OpenAI only released their o1 reasoning model a few months ago. It’s like bringing fire down from the mountain and handing it to all humanity. And most humans haven’t used fire yet—but they will.
Then frankly, there's also the old notion of creative destruction—if you're OpenAI or a similar company, what you did last week isn't good enough anymore. But that's the way the world works. You have to get better. These are races. You have to evolve. So it's also a powerful catalyst pushing many existing companies to seriously up their game and become more aggressive.
……
Patrick: ……if a Chinese company builds on models developed in the U.S., which required massive investment, and this rich technology ends up benefiting the whole world—that's hard for some people to accept. I'd love to hear your reaction from both angles.
Marc: Yes, so there are real issues here. There’s irony in this argument—you do hear it. Of course, the irony is OpenAI didn’t invent the Transformer. The core algorithm behind large language models is called the Transformer.
It wasn't invented at OpenAI, but at Google. Google invented it, published the paper, and, incidentally, didn't productize it. They continued researching it but didn't bring it to market, having deemed it potentially unsafe for "safety" reasons. So they let it sit on the shelf for five years, and then the OpenAI team understood it, picked it up, and pushed it forward.
Anthropic is a spin-off from OpenAI. Anthropic didn’t invent the Transformer either. So neither of these two companies nor any other U.S. lab currently researching large language models, nor any other open-source project, built upon something they themselves created and developed.
By the way, Google invented the Transformer in 2017, but the Transformer itself builds on the concept of neural networks, which dates back to 1943. The original neural network paper was published 82 years ago, and the Transformer is built on more than 70 years of research and development, mostly funded by the U.S. federal government and European governments at research universities.
So it’s a very long lineage of intellectual development—most ideas going into all these systems weren’t developed by the companies currently building them. No company, including our own, sits here with any special moral claim that we built it from scratch and should have total control. That’s simply not true.
So I'd say arguments like this stem from present frustration. And by the way, they're also moot, because China has already done it—it's out, it happened. Then there's a copyright debate. If you talk to experts in the field, many have tried to understand why DeepSeek is so exceptional. One theory—unproven, but believed by some experts—is that Chinese companies may have trained on data that American companies didn't use.
Particularly surprising is DeepSeek’s excellence in creative writing. DeepSeek might currently be the best AI in the world for English creative writing. This is odd because China’s official language is Chinese. While there are excellent Chinese novelists writing in English, generally you might expect the best creative writing to come from the West. Yet DeepSeek might currently be the best—that’s shocking.
So one theory is that DeepSeek may have trained on sites like Libgen—a massive internet repository filled with pirated books. I certainly wouldn't use Libgen myself, but I have a friend who uses it frequently. It's like a superset of the Kindle Store—it has every digital book in PDF format, downloadable for free. It's The Pirate Bay for books.
U.S. labs might not feel comfortable simply downloading all books from Libgen and training, but perhaps Chinese labs felt they could. So there might be this differential advantage. That said, there’s an unresolved copyright dispute hanging over this. People need to be careful because there’s an ongoing copyright battle—some publishers basically want to stop generative AI companies like OpenAI, Anthropic, and DeepSeek from using their content.
One argument says this material is copyrighted and can’t be freely used. Another argument says AI training on books doesn’t copy books—you’re reading books. Reading books with AI is legal.
You and I are allowed to read books, by the way. We can borrow books from libraries. We can pick up a friend’s book. These are legal. We’re allowed to read books, learn from them, then go about our lives discussing ideas we learned. The other argument is training AI is more like human reading than stealing.
Then there’s the practical reality—if their AI can train on all books, and if U.S. companies end up legally barred from training on books, the U.S. might lose the AI race.
Practically, this could be a fatal blow—they win, we lose. There are some tangled debates here. DeepSeek hasn't disclosed the data it trained on. When you download DeepSeek, you don't get the training data—you get what's called the weights: the neural network after it has been trained on the material. And from the weights alone, it's difficult or impossible to work backward and recover the training data.
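To make "you get the weights" concrete: a downloaded checkpoint is just a set of named numeric tensors. Here's a minimal inspection sketch, assuming the safetensors library and an already-downloaded shard file (the filename and the example layer name are illustrative):

```python
# A downloaded checkpoint is just named tensors of numbers.
# Assumes: pip install safetensors torch; the filename is illustrative.
from safetensors import safe_open

with safe_open("model-00001-of-00163.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)

# Prints layer names, shapes, and dtypes, e.g.:
#   model.layers.0.mlp.gate_proj.weight (18432, 7168) torch.bfloat16
# There is no human-readable training text in here; recovering the training
# corpus from these numbers is, as noted above, effectively impossible.
```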
By the way, Anthropic and OpenAI haven't disclosed their training data either. There's intense speculation in the field about what is and isn't in OpenAI's training data; they consider it a trade secret and won't disclose it. So DeepSeek's training data may differ from these companies'—or it may not. The American labs' training approaches may differ from the Chinese labs'. We don't know.
We don’t know OpenAI and Anthropic’s exact algorithms because they’re not open source. We don’t know how much better or worse they are compared to the publicly available DeepSeek algorithms.
On Closed Source vs. Open Source
Patrick: Do you think the closed-source models entering the competition, like OpenAI and Anthropic, will eventually resemble Apple versus Google’s Android?
Marc: I support maximum competition. By the way, this aligns with my role as a venture capitalist. If I were a founder running an AI company, I'd need a very specific strategy, with pros, cons, and trade-offs to make.
As a venture capitalist, I don’t need to do that. I can make multiple contradictory bets. This is what Peter Thiel calls definite optimism versus indefinite optimism. Company founders, CEOs must be definite optimists. They must have a plan and make tough trade-offs to execute it. Venture capitalists are indefinite optimists. We can fund a hundred companies with 100 different plans, conflicting assumptions.
The essence of my job is that I don't need to make the kind of choice you just described. That makes it easy for me to make a philosophical argument—one I personally and sincerely believe—that I support maximum competition. Digging deeper, this means I support free markets, maximum competition, and maximum freedom.
Essentially, I want as many smart people as possible devising as many different approaches as possible, competing in a free market to see what happens. Specifically in AI, this means I support the large labs developing as fast as possible.
I 100% support OpenAI and Anthropic doing whatever they want, launching whatever products they want, striving to grow as hard as possible. As long as they don’t receive preferential policy treatment, subsidies, or support from governments, they should be able to operate as companies doing anything they want.
Of course, I also support startups. We’re actively funding various sizes and types of AI startups. So I want them to grow, and I want open source to grow—partly because I believe if things appear in open source, even if it means certain business models can’t work, the benefits to the world and the entire industry are so great that we’ll find other ways to make money. AI will become more widespread, cheaper, more accessible. I think that’s a good outcome.
Another critical reason for open source is without it, everything becomes a black box. Without open source, everything becomes a black box owned and controlled by a few companies, which might eventually collude with governments—we can discuss that. But you need open source to see inside the box.
By the way, you also need open source for academic research, and you need it for teaching. Until two years ago, there were no foundational open-source LLMs—then Meta released Llama, then Mistral came out of France, and now DeepSeek.
Before these open-source models emerged, university systems faced a crisis—researchers at Stanford, MIT, and Berkeley couldn't afford the billions of dollars' worth of Nvidia chips needed to truly compete in AI.
So if you spoke to computer science professors two years ago, they’d be deeply concerned. First concern: my university lacks funds to compete in AI and stay relevant. Second concern: collectively, universities lack funds to compete because no one can match large companies’ fundraising capacity.
Open source brings universities back into the game. It means if I’m a professor at Stanford, MIT, Berkeley, or any state school—University of Washington or elsewhere—I can now use Llama, Mistral, or DeepSeek code for teaching. I can conduct research, achieve breakthroughs, publish findings so people truly understand what’s happening.
Then every new generation of kids coming to college, taking computer science courses, will be able to learn how to do this—whereas if it’s a black box, they can’t. We need open source like we need free speech, academic freedom, and research freedom.
So my model is basically letting big companies, small companies, and open source compete. That’s what happened in the computer industry. It worked well. That’s what happened in the internet industry. It worked well. I believe it will happen in AI, and I think it will work well.
Patrick: Is there a limit to wanting maximum evolutionary speed and maximum competition? Maybe there is. If I said we know the best things are being made in China, ……, is there a case where you'd say: yes, I want maximum evolution and competition, but the national interest somehow overrides the desire for maximum evolutionary speed and development?
Marc: This argument is very real. It’s frequently raised in AI. In fact, as we speak today, two things exist. First, there are actual restrictions on Western and U.S. companies selling cutting-edge AI chips to China. For example, Nvidia today cannot legally sell its top-tier AI chips to China. We live in a world where such decisions have been made and policies implemented.
Then the Biden administration issued an executive order—now reportedly revoked—but they did issue an order that would impose similar restrictions on software. This is a very active debate. With the DeepSeek event, another round of such debates is underway in Washington, D.C.
Then when you get into policy debates, you encounter a classic situation: you have a rational version of the debate—what’s in the national interest theoretically. Then you have a political version—what the political process actually does to rational arguments. Let me put it this way—we’ve all seen rational arguments meet the political process, and usually rational arguments don’t win. After processing through the political machine, the output usually isn’t what you initially expected.
Then there's a third factor we always need to discuss—the corrupting influence of large corporations in particular. If you're a large company watching Chinese companies grow more competitive and open source become a threat, of course you'll try to leverage the U.S. government to protect yourself. Maybe that serves the national interest, maybe it doesn't—but you'll push for it regardless. That's what complicates the debate.
You can’t sell cutting-edge AI chips to China. This definitely hampers them in some ways. Some things they won’t be able to do. Maybe that’s good because you’ve decided it serves national interest. But let’s examine three other interesting consequences.
One consequence is that you give Chinese companies a massive incentive to find ways to accomplish the same things on cheaper chips. This is a key part of DeepSeek's breakthrough—they figured out how to do what U.S. companies do on larger chips using cheaper, export-compliant chips. That's also why it's so cheap. One reason you can run it on $6,000 of hardware is that they invested immense time and effort optimizing their code to run efficiently on those cheaper chips. You force an evolutionary response.
So that’s the first reaction—maybe it’s already backfired somewhat. Second consequence: you incentivize Chinese state and private sectors to develop a parallel chip industry. If they know they can’t get U.S. chips, they’ll develop their own. They’re doing it now. They have a national plan to build their own chip industry so they won’t depend on U.S. chips.
So counterfactually, maybe they’d buy U.S. chips. Now they’ll figure out how to manufacture them themselves. Maybe in five years they can do it. But once they reach self-sufficiency, we’ll have a direct competitor in the global market we wouldn’t have had if we just sold them chips. And by then, we’ll have no control over their chips. They can fully control them, sell below cost, do whatever they want.
How AI Reasoning Ability Will Transform VC and Investment Industry
Patrick: How do you think all this will impact capital allocation? What interests me most is how your firm, Andreessen Horowitz (A16Z), might be affected in five years. If I think of investment firms as combinations of raising capital, performing outstanding analytical work, and assessing people—especially at early stages—how do you think these functions will change due to the emergence of "o7" (AI reasoning capability)?
Marc: I hope the analytical part undergoes massive change. We assume the world’s best investment firms will excel at leveraging this technology for their analytical work.
That said, there’s a saying “the shoemaker’s son has no shoes”—perhaps the venture capital firms most aggressively investing in AI might be among the least aggressive in applying it practically. But we have multiple internal initiatives I’m very excited about. Our kind of firm needs to keep up, so we must truly do this.
Are some efforts already underway internally? Yes—though probably not enough. That said, for late-stage or public-market investing, many of the people we speak with take a highly analytical perspective. Even the great investors—I think of Warren Buffett. I don't know if it's true, but I've heard Warren never meets CEOs.
Patrick: He wants the “ham sandwich company.”
Marc: Yes, yes, he wants companies as simple as a ham sandwich. And I think he worries about being swayed by a good story. You know, many CEOs are very charismatic. They’re always described as having “great hair, white teeth, polished shoes, sharp suits.” They’re superb salespeople. You know, one thing CEOs are good at is selling—especially selling their own stock.
So if you're Buffett, sitting in Omaha, you read annual reports. Companies disclose everything in their annual reports, bound by federal law to be truthful. That's your analysis method. So would reasoning models like o1, o3, o7, or R4 analyze annual reports better than most investors do manually? Probably.
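Mechanically, that workflow is already easy to sketch. A minimal example, assuming the OpenAI Python SDK and API access to a reasoning model—the model name, filename, and prompt are illustrative, and "o7"/"R4" above are of course hypothetical future names:

```python
# A minimal sketch: asking a reasoning model to analyze an annual report.
# Assumes: pip install openai, an OPENAI_API_KEY in the environment, and a
# plain-text 10-K excerpt on disk. Model name, file, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

report_text = open("annual_report_10k.txt").read()  # illustrative file

response = client.chat.completions.create(
    model="o1",  # a current reasoning model, standing in for future ones
    messages=[
        {
            "role": "user",
            "content": (
                "You are analyzing a 10-K filing. Summarize the revenue drivers, "
                "margin trends, and the three biggest disclosed risks:\n\n"
                + report_text[:50000]  # truncate to stay within the context window
            ),
        }
    ],
)
print(response.choices[0].message.content)
```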
As you know, investing is an arms race, like everything else. So if it works for one person, it’ll work for everyone. It’ll be an arbitrage opportunity briefly, then close and become standard. So I expect the investment management industry to adopt this technology this way. It’ll become standard operating procedure.
I think early-stage venture capital is a bit different. What I’m about to say might just be wishful thinking on my part. I might be the last Japanese soldier in 1948 on a remote island, saying what I’m about to say. I’ll take the risk. But I’ll say this: look, in early stages, much of what we do in the first five years is deeply evaluating individuals, then working very closely with them.
This is also why venture capital is hard to scale, especially geographically. Geographic scaling experiments often fail. The reason is you ultimately need to spend extended time face-to-face with these people—not just during evaluation, but during building. Because in the first five years, these companies usually aren’t on autopilot.
You actually need to work closely with them to ensure they achieve everything needed for success. This involves deep personal relationships, conversations, interactions, mentorship—and by the way, we learn from them, they learn from us. It’s a two-way exchange.
We don’t have all the answers, but we offer perspective because we see the broader landscape while they focus on granular details. So there’s massive two-way interaction. Tyler Cowen talked about this—I think he calls it “project picking.”
Of course, “talent scouting” is another version—basically, if you review human history in any new domain, you almost always find a phenomenon where unique individuals try new things, supported by professional enablers who fund and back them. In music, David Geffen discovered early folk artists and turned them into rock stars. In film, David O. Selznick discovered early movie actors and turned them into stars. Or 500 years ago in a tavern in Maine, someone discussed which whaling captain could catch whales.
You know, it’s Queen Isabella in her palace hearing Columbus’s proposal and saying, “Makes sense. Why not?” This alchemy developed over time between people doing new things and professional support layers backing them has existed for centuries, even millennia.
You might have seen tribal chiefs thousands of years ago sitting by fires, young warriors approaching saying, “I want to lead a hunting party to that region to see if there’s better game.” And the chief by the fire trying to decide whether to approve. So it’s a very human interaction. My guess is this interaction will continue. Of course, if I meet an algorithm better at this than me, I’ll retire immediately. We’ll see.
Patrick: You're building one of the largest firms in this space. Have you made adjustments in operations or strategic direction? How are you adapting your firm's trajectory to respond to this new technology?
Marc: An important part of running a venture capital firm, in our view, is having a set of values and behaviors we call timeless. For example, respect for entrepreneurs. You need great respect for entrepreneurs and their journey. You need deep understanding of what they do. You can’t skim the surface.
You build deep relationships. You work with these people long-term—by the way, these companies take a long time to build. We don’t believe in overnight success. Most great companies are built over 10, 20, 30-year spans. Nvidia is a great example. Nvidia is approaching its 40th anniversary, and I think one of Nvidia’s original VCs is still on the board. A great example of long-term building.
So there’s a core set of beliefs, perspectives, and behaviors we won’t change—related to what we just mentioned. Another is the face-to-face interaction. You know, these things can’t be done remotely, that’s one. But on the other hand, you need to stay current because technology changes so fast, business models change so fast, competitive dynamics change so fast.
If anything, the environment is more complex now because you have many countries involved, plus all these political issues making things more complicated. We never really worried about political systems pressuring our investments until about eight years ago; then, about five years ago, the pressure intensified. In the first ten years of our firm, and in the first 60 years of venture capital, this was never a big deal. Now it is.
So we need to adapt. We need to engage politically, which we didn’t do before. Now we need to adapt, figure out maybe AI companies will be fundamentally different. Maybe their organizational structures will be completely different. Or as you said, maybe software companies will operate completely differently.
One question we often ask ourselves—for example, what does the organizational structure of a company fully leveraging AI look like? Does it resemble existing structures, or will it be very different? There’s no single answer, but we’re seriously considering this.
So a subtle balancing act we do daily is figuring out what’s timeless versus timely. Conceptually, this is a major part of how I think about the firm—we need to navigate between these and ensure we can distinguish them.
Patrick: Your firm is now very large, resembling KKR or Blackstone in some ways. You and Ben Horowitz were experienced founders yourselves when you started the firm. At Blackstone, by contrast, Schwarzman had never really done investing before founding it. Look at its evolution.
It seems this founder-led approach to building asset management investment firms ultimately evolves into truly massive, ubiquitous platforms. You have vertical businesses covering most exciting tech frontiers. Do you think this view holds merit? Will the best capital allocation platforms be founded more by founders than investors?
Marc: Yes, several points. First, I think this observation has merit. In the industry, people often describe many investment operations as partnerships, and many venture capital firms operate this way. Historically, it's a small team sitting in a room exchanging ideas and then investing. By the way, they have no balance sheet—it's a private partnership, and the profits are paid out annually as compensation. That's the traditional venture capital model.
A traditional venture capital operation has six GPs sitting around a table doing this. They have assistants, a few associates. But the point is, it’s entirely people-based. By the way, you’ll actually find in most cases, people don’t particularly like each other.
Mad Men portrayed this well. Remember, in season three or four, the partners leave to start their own firm—they don't actually like each other, but they know they need to band together to start the company. That's how many firms operate. So it's a private partnership, and it is what it is.
But then you see these firms struggle to endure. They have no brand value. No underlying enterprise value. They’re not a business. You see this pattern—when original partners retire or move on, they pass it to the next generation. Most times, the next generation can’t sustain it. Even if they can, there’s no underlying asset value. Next generation passes it to third. It might fail in the third generation, ending up on Wikipedia—“Yes, this firm once existed, then disappeared, replaced by others like ships passing in the night.”
So that’s the traditional way. By the way, if you receive traditional investment training, you’re trained on the investment side, but never on how to build a business. So it’s not your natural strength, you lack the skills or experience, so you don’t do it. Many investors operated this way for a long time, made lots of money. So it works well.
The other way is building a company, an enterprise, something with lasting brand value. You mentioned firms like Blackstone and KKR—massive public companies. Apollo too. You probably know that the original investment banks were all private partnerships: Goldman Sachs and JPMorgan 100 years ago looked more like today's small venture capital firms than like what they are now. But over time, their leaders transformed them into massive enterprises, and they're now large public companies.
So that’s another way—building a franchise. To do this, you need a theory for why a franchise should exist. You need a conceptual theory for why this makes sense. Then, yes, you need business skills. Then at that point, you’re running a business—like running any other business: okay, I have a company. It has an operating model, operating rhythm, management capability, employees, multiple layers, internal specialization and division of labor.
Then you start thinking about scaling, and over time about underlying asset value—the idea that this thing's worth isn't just the people currently there, and that you're not simply eager to distribute all the profits each year.
By the way, we're not rushing to go public or anything, but one big thing we're trying to do is build something with that kind of durability.
Patrick: What new differences do you hope your firm will have in the next 10 years that don’t exist now? Are there uncompromising ways you hope your firm will never evolve into a traditional large asset manager?
Marc: We rapidly evolve in who we invest in, what companies do, models, and founders’ backgrounds—these constantly change. For example, for 60 years, venture capital consensus was you’d never back researchers starting companies—they’d just research, burn money, and you’d end up with nothing.
Yet now, many top AI companies are founded by researchers. This shows supposedly “timeless” values need adjustment for changing times. We need high flexibility toward these changes. So as these change, the help and support needed for company success also shifts.
One of the most significant changes in our firm, which I mentioned earlier, is we now have a large and increasingly complex political operations department. Four years ago, we had zero presence in politics. Now it’s a major part of our business—something we never anticipated.
I’m sure in another 10 years, we’ll invest in unimaginable areas and have unimaginable operating models. So we’re completely open to changes in these aspects. However, there are core values I hope remain unchanged in the next 10 years because they’re carefully considered and foundational to our firm.
But I always emphasize to our team and limited partners—we don’t scale for scale’s sake. Many investment firms, upon reaching a certain size, prioritize growing assets under management from billions to tens or hundreds of billions, even trillions of dollars. This is often criticized as prioritizing management fees over investment performance. That’s not our goal.
The only reason we scale is to support the founders we want to help build companies. We scale because we believe it helps us achieve this goal.
However, I must stress that our core has always been early-stage venture capital, no matter how large we grow—even with growth funds enabling larger checks (some AI companies do need substantial capital). We didn't start with growth funds; we established them gradually based on market demand and how our companies developed.
But the core business remains early-stage venture capital. This might confuse outsiders, since we manage massive funds: why would an early-stage founder believe we'd spend time on them? "You, Andreessen Horowitz, invest hundreds of millions at late stages, while only putting $5 million into my Series A. Would you still spend time on me?"
The reason is our core business has always been early-stage venture capital. Financially, early-stage investment return opportunities are comparable to late-stage company returns—this is characteristic of startups. But more importantly, all our knowledge, relationship networks, and what makes our firm unique come from our deep insights and connections built during early stages.
So I always tell people—if forced by circumstances, if the world is in trouble and we must sacrifice, early-stage venture capital will never be sacrificed. It will always be the core. That’s why I spend so much time with early founders. Partly because it’s very interesting; partly because it’s where you learn the most.
Shifts in Global Power Structure: Elites vs. Anti-Elites
Patrick: If we consider changes in global power structures, ……, which power centers—gaining or losing power—are you most focused on?
Marc: The Machiavellians. I'm sure you've had a dozen people recommend this book on your show. It's one of the greatest books of the 20th century; it lays out theories of political, social, and cultural power. A key idea from the book that I now see everywhere is the concept of elites and anti-elites.