
A Victory of Values: How Anthropic Overtook OpenAI
All victories are victories of values.
By Xiao Bing, TechFlow
This may be the most electrifying AI revenge drama of the year.
OpenAI, the former titan of large language models, has lost its luster. Anthropic, founded by seven former OpenAI employees led by Dario Amodei, is steadily eroding OpenAI's dominance across revenue, valuation, and enterprise market share.
The temperature difference in secondary markets is the most telling indicator. Ken Smythe, founder of Next Round Capital, is sitting on $600 million worth of OpenAI secondary stock transfer requests, with six hedge funds and venture capital firms queuing up to offload their shares. Last year at this time, those shares would have sold out within days. Now? He has combed through hundreds of institutional investors and found not a single buyer.
Meanwhile, $2 billion in cash stands ready to buy Anthropic shares.
On the on-chain derivatives platform Ventuals, Anthropic’s implied valuation briefly surpassed OpenAI’s—$86.36 billion versus $84.61 billion.
Goldman Sachs’ stance says even more. Selling OpenAI secondary shares to high-net-worth clients no longer earns Goldman a profit share—it’s effectively a discounted fire sale. Selling Anthropic stakes? Still commands a 15–20% carry fee. Take it or leave it.
How did Anthropic—just five years old—manage to overtake its former employer?
The Departure
The story begins in 2020.
That year, Dario Amodei served as OpenAI’s Vice President of Research and helped build GPT-2 and GPT-3. Why he left remains debated in Silicon Valley: some say Microsoft’s investment fundamentally altered OpenAI’s nature; others cite irreconcilable differences over AI safety philosophy.
Dario himself addressed this on Lex Fridman's podcast. Paraphrased: "Arguing with others about vision is extremely inefficient. Rather than trying to change them, take people you trust and build what you believe in."
In 2021, Dario left OpenAI together with his sister Daniela and five other core researchers to found Anthropic.
Sam Altman likely paid little attention at the time. OpenAI was riding high—losing a few researchers seemed trivial.
But during the peak of OpenAI’s infamous “boardroom coup” in November 2023, the board even approached Dario—asking whether he’d replace Altman as CEO and merge the two companies.
Dario declined. He didn’t want OpenAI’s CEO seat—he wanted to build something entirely new, from first principles, on his own terms.
From 2021 to 2024, Anthropic was nearly invisible to the outside world.
When ChatGPT exploded globally at the end of 2022, Claude remained in internal testing. The Anthropic team judged it insufficiently safe—and held back launch. While competitors raced for users and headlines, Dario’s team obsessively refined “Constitutional AI,” a novel training methodology that required models to self-govern according to a written “constitution” of principles.
Many then thought Anthropic overly rigid—the market window was narrow, and if they didn’t move fast, others would.
Yet in hindsight, Anthropic made a pivotal choice during this “invisible period”: It focused exclusively on APIs and enterprise customers from day one—spending virtually no resources promoting consumer-facing products.
When Claude launched in 2023, its consumer-side recognition lagged far behind ChatGPT's; most everyday users had never heard of it.
Dario’s logic ran like this: Consumer attention fades quickly—but signed enterprise contracts deliver real revenue.
At the time, this felt conservative. By 2026, it proved prescient. Of course, whether Anthropic “strategically chose the enterprise path” or “was forced into B2B after failing to compete with ChatGPT in consumer markets” may both contain elements of truth.
By early 2025, Anthropic’s annualized revenue quietly reached $1 billion—a figure that drew little attention, given OpenAI’s already $10-billion-plus scale. Few foresaw what would follow.
The Comeback
Numbers tell the whole story.
Anthropic’s Annual Recurring Revenue (ARR): $1 billion in January 2025, $9 billion by year-end, $14 billion in February 2026, $19 billion in March, and over $30 billion by early April.
OpenAI over the same period: ~$13 billion in 2025, reaching ~$25 billion by April 2026.
In just 15 months, Anthropic grew 30-fold—from trailing OpenAI by an order of magnitude to surpassing it by 20%. OpenAI’s growth remains robust, yet side-by-side, the contrast is stark: “steady growth” versus “exponential explosion.”
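As a quick sanity check on the figures above (a throwaway Python sketch, using only the numbers quoted in this article):

```python
# ARR figures quoted above, in billions of dollars.
anthropic_jan_2025 = 1.0    # Anthropic, January 2025
anthropic_apr_2026 = 30.0   # Anthropic, early April 2026 (~15 months later)
openai_apr_2026 = 25.0      # OpenAI, April 2026

multiple = anthropic_apr_2026 / anthropic_jan_2025               # 30x in 15 months
lead = (anthropic_apr_2026 - openai_apr_2026) / openai_apr_2026  # 20% ahead
monthly = multiple ** (1 / 15) - 1                               # implied compound monthly growth

print(f"{multiple:.0f}x in 15 months, {lead:.0%} lead, ~{monthly:.0%} per month")
```

The implied compound rate works out to roughly 25% per month, which is what "exponential explosion" means in concrete terms.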
The biggest structural difference lies in revenue composition: Over 80% of OpenAI’s income comes from ChatGPT consumer subscriptions. Its 900 million weekly active users sound impressive—but only ~5% pay, while the remaining 95% consume compute for free.
Anthropic flips this exactly: 80% of its revenue comes from enterprise clients and API calls.
Enterprise and consumer revenue are fundamentally different asset classes.
Enterprise contracts are sticky—switching carries high integration costs, renewal rates are high, and contract values grow annually.
Consumer subscriptions can be canceled anytime—users vanish overnight when a new product emerges.
In finance terms: one is a long-duration asset; the other, short-duration.
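The duration contrast can be made concrete with a toy churn model. The churn rates below are hypothetical illustrations chosen only to show the shape of the effect, not figures from the article:

```python
# Toy model: how much of $100 of ARR survives after 36 months
# under hypothetical churn rates (illustrative numbers, not company data).

def surviving_revenue(start: float, monthly_churn: float, months: int) -> float:
    """Revenue remaining after compounding monthly customer churn."""
    return start * (1 - monthly_churn) ** months

enterprise = surviving_revenue(100, 0.005, 36)  # sticky contracts: ~0.5%/month churn
consumer = surviving_revenue(100, 0.05, 36)     # cancel-anytime subs: ~5%/month churn

print(f"after 3 years: enterprise ${enterprise:.0f}, consumer ${consumer:.0f}")
```

Even a modest gap in monthly churn compounds into a drastically different revenue base three years out, which is the long-duration versus short-duration point in miniature.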
Consider some concrete figures. By April 2026, Anthropic had over 1,000 enterprise clients paying >$1 million annually—doubling in just two months. Eight of the Fortune 10 use Claude. In code generation—the most critical battleground—Claude captured 42–54% of global market share, while OpenAI held just 21%. Ramp’s corporate spending data shows Anthropic’s share of enterprise AI budgets surged from 10% in early 2025 to over 65% by February 2026.
Do these numbers mean OpenAI has “failed”? Not necessarily. But they do reveal something critical: The once-unassailable first-mover advantages—brand, user base, ecosystem—carry almost no weight in enterprise procurement. Enterprise buyers follow a completely different decision logic.
Claude Code
The catalyst behind Anthropic’s revenue explosion was a product called Claude Code.
Launched in May 2025, it crossed $1 billion ARR by November—and surpassed $2.5 billion by February 2026. A product going from zero to $2.5 billion in nine months.
Nothing in SaaS history has moved faster: Cursor took over a year to hit $500 million; GitHub Copilot took even longer.
So what distinguishes Claude Code from prior AI coding tools?
Simply put: GitHub Copilot helps complete your next line of code—you remain the primary coder. Claude Code lets you say, “Build me a user login module,” and then writes the code, creates files, runs tests, and submits changes—all while you watch.
This sounds like a mere degree of difference—but it’s a paradigm shift: one is a “better tool”; the other, a “colleague who replaces you.”
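The paradigm shift can be sketched as the difference between a single completion call and an agent loop. Everything below is an illustrative stub, not Claude Code's actual architecture or API:

```python
# Illustrative contrast between autocomplete-style and agent-style coding tools.
# All functions are hypothetical stubs; this is NOT Claude Code's real API.

def autocomplete(partial_line: str) -> str:
    """Copilot-style: suggest one completion; the human stays the primary coder."""
    return partial_line + " ..."  # model proposes, human accepts or rejects

def agent(task: str) -> list[str]:
    """Agent-style: given a goal, plan and execute every step end to end."""
    plan = ["write code", "create files", "run tests", "commit changes"]
    log = []
    for step in plan:
        # A real agent would inspect tool output here and re-plan on failure.
        log.append(f"{step} for: {task}")
    return log

if __name__ == "__main__":
    for entry in agent("build a user login module"):
        print(entry)
```

In the first shape the human drives and the model assists; in the second the model drives the whole plan-act-verify loop, which is what turns a tool into something closer to a colleague.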
Internal Anthropic data underscores this further.
Boris Cherny, head of Claude Code, says he now writes 100% of his daily code via Claude Code—and the engineering team generates 70–90% of its code using it. Even Claude Code’s own codebase is 90% self-written.
Pragmatic Engineer’s February 2026 survey of 15,000 developers ranked Claude Code #1 among “most popular AI coding tools.” By early 2026, 4% of public GitHub commits originated from Claude Code—projected to exceed 20% by year-end.
Claude Code’s success reveals an uncomfortable truth many in AI refuse to face: The chatbot category itself may have a low commercial ceiling. What truly commands enterprise dollars is AI tools embedded in workflows—tools that replace specific job functions.
ChatGPT opened the door to AI—but which direction you turn afterward determines who converts users into revenue. Anthropic turned right—straight into the enterprise production process.
In January 2026, Anthropic launched Claude Cowork, extending the same concept from developers to all white-collar roles. Built by four engineers in ten days, most of its code was written by Claude Code itself.
Since Claude Cowork’s release, the global SaaS sector has collectively lost roughly $2 trillion in market value.
People
Products and strategy represent visible differences—but the real key lies in people.
First, OpenAI’s side: Between 2024 and 2025, the company experienced systematic executive attrition.
Co-founder and Chief Scientist Ilya Sutskever departed to found Safe Superintelligence. CTO Mira Murati left to launch Thinking Machines Lab. Co-founder John Schulman and Head of Superalignment Jan Leike joined Anthropic.
Chief Research Officer Bob McGrew exited; VP of Research Barret Zoph left; co-founder and president Greg Brockman went on indefinite leave. In summer 2025, at least seven researchers were poached by Meta’s Superintelligence Lab.
Of OpenAI’s original 11 co-founders, only Sam Altman and researcher Wojciech Zaremba remained full-time by end-2025. A former employee told Fortune: “OpenAI without Ilya is a different company; OpenAI without Greg is a very different company.”
Anthropic presents a contrasting picture.
All seven co-founders—Dario Amodei, Daniela Amodei, Jared Kaplan, Jack Clark, Sam McCandlish, Ben Mann, and Tom Brown—remain. No senior-level departure has occurred in five years.
This contrast is so stark it demands scrutiny: What, exactly, does Anthropic do to retain talent?
Forbes estimated in early 2026 that each co-founder holds ~1.8% equity—nearly equal. At a $38 billion valuation, each stake is worth ~$680 million. This near-equal equity structure defies Silicon Valley norms—where CEOs typically hold significantly larger shares, with others decreasing proportionally. Equal ownership eliminates the most common source of founding-team friction: perceived inequity.
Equity is surface-level. More revealing is Dario Amodei’s personal time allocation to management.
On the Dwarkesh Podcast, he stated: “I spend roughly one-third to 40% of my time ensuring Anthropic’s culture is healthy.” For an AI CEO, this ratio is unusually high. As the company scaled to 2,500 people, he could no longer weigh in on every technical or product decision—so he prioritized higher-leverage work: aligning everyone’s direction.
How does he execute this?
Every two weeks, he hosts a company-wide meeting internally dubbed “DVQ”—Dario Vision Quest. Employees coined the name; Dario initially resisted it, fearing it sounded like a psychedelic experience. Each session features a 3–4 page document he delivers live for an hour—covering everything from product strategy to geopolitics to macro AI trends—attended in-person or remotely by most staff.
More routinely, Anthropic cultivates a “notebook channel” Slack culture. Every employee—including Dario—maintains a public Slack channel to post ideas, progress updates, and even uncertainties.
Amol Avasare, Head of Growth, likened it to “an internal Twitter feed” on Lenny’s Podcast—you can jump into any team’s channel and see what they’re thinking. Dario encourages employees to “argue with him directly.”
In a Fortune interview, he said: “My goal is to cultivate a reputation for ‘telling the company the truth’—calling out problems directly, avoiding ‘corpo-speak’ (defensive, politically correct corporate jargon). If you hire people you trust, you can communicate without filters.”
This “anti-PR” internal communication style starkly contrasts OpenAI. During OpenAI’s late-2023 board crisis, internal information flow broke down so severely that even the CTO wasn’t sure what was happening.
Anthropic's cultural filtering begins at hiring. Every candidate, regardless of role, undergoes a standardized "culture interview." Only employees who have completed 30 days of onboarding plus multi-stage cultural training are qualified to conduct these interviews. The logic: "Cultural transmission is too important to entrust to someone who hasn't yet grasped what our culture actually is."
Reportedly, one culture-interview question goes like this: “If Anthropic decides not to release a model due to unmet safety standards—and your equity therefore becomes worthless—would you accept that?”
This isn’t rhetorical. Technical brilliance won’t save candidates who answer incorrectly.
One more detail: All Anthropic technical roles—from newest hires to founding executives—share the same title: “Member of Technical Staff.” No “Senior,” “Chief,” or “Distinguished” tiers exist. Colleagues call each other “ants” (from Anthropic).
Anthropic even employs a full-time philosopher, Amanda Askell, whose job is shaping Claude’s moral judgment framework. She told Time: “Sometimes it feels like you have a six-year-old child—you’re teaching them kindness—but by age fifteen, they’ll outthink you in every way.”
Daniela Amodei’s role in this system is often underestimated.
Dario sets the technical vision and serves as external spokesperson; Daniela oversees execution, culture, talent, and operational infrastructure. Reports indicate that heads of research, product, sales, and operations all report directly to her. Her hiring preference is explicit: “Seek communicators—emotionally intelligent, kind, curious, and eager to help others.” In a tech-founder-dominated industry, prioritizing “soft skills” is uncommon.
All seven Anthropic co-founders pledged to donate 80% of their wealth. Nearly 30 Anthropic employees attended the 2026 Effective Altruism (EA) conference in San Francisco, more than double the combined attendance from OpenAI, Google DeepMind, xAI, and Meta's Superintelligence Lab.
A core AI company’s most vital asset is human cognition. Code can be copied; compute bought—but researchers’ intuition and judgment are irreplaceable.
When your chief scientist, CTO, and chief research officer depart within two years, what you lose cannot be quantified by funding rounds. Anthropic’s stability in talent retention may be its hardest-to-replicate advantage.
All victories are ultimately victories of values.
What Happened to OpenAI?
Before proceeding, let’s grant OpenAI some fair acknowledgment.
Yes, Anthropic’s revenue has surpassed OpenAI’s—and secondary-market sentiment is shifting. But OpenAI hasn’t collapsed. It just closed a $122 billion funding round backed by Amazon, NVIDIA, SoftBank, and Microsoft. ChatGPT still commands 900 million weekly active users.
In consumers’ minds, “AI” and “ChatGPT” are nearly synonymous. Yet OpenAI does face structural issues—and they’ve all converged in 2026.
Financial pressure is the most immediate.
OpenAI projects a $14 billion loss in 2026. Cumulative losses between 2023 and 2028 could reach $44 billion. HSBC analysts don’t expect profitability before 2030. The Wall Street Journal estimates OpenAI’s annual training cost will hit $125 billion by 2030—versus ~$30 billion for Anthropic. That fourfold gap demands explanation: part reflects OpenAI’s more aggressive compute infrastructure investment; part may signal efficiency gaps. Capital markets clearly care deeply—Anthropic expects positive cash flow by 2027, while OpenAI pushes breakeven to 2030.
Product missteps have also emerged.
Sora shut down in March 2026. This video-generation tool reportedly cost $15 million daily to operate—generating just $2.1 million in total revenue. Its closure also derailed a rumored $1 billion investment deal with Disney. OpenAI’s new AGI Deployment Lead, Fidji Simo, bluntly told staff the company “can’t afford distractions from side projects.”
Then came ads. In February 2026, OpenAI introduced advertising into ChatGPT’s free and Go tiers. Not inherently shocking—many products monetize via ads. But for OpenAI, it stung. Sam Altman explicitly called ads a “last resort” in 2024—and said the fusion of ads and AI left him “uniquely uneasy.” From “uniquely uneasy” to “officially launched” took just 15 months. With only 5% of 900 million users paying, the math left him no choice.
Corporate governance adds further complexity. Its nonprofit-to-profit restructuring dragged on for nearly a year—embroiled in Elon Musk’s lawsuit, ex-employee open letters, Nobel laureate-signed petitions, and dual investigations by California and Delaware attorneys general. It finally concluded in October 2025: the nonprofit foundation retained 26% equity and control. Critics dismiss this arrangement as toothless.
None of these issues alone is fatal. Together, however, they paint a troubling portrait: A company once synonymous with industry imagination now dominates headlines for governance infighting, product shutdowns, and ad rollouts.
The War Continues
Anthropic’s momentum is undeniable: revenue lead, secondary-market enthusiasm, and global PR windfall from the Pentagon incident. Yet remember this: In late 2023, ask any industry analyst if OpenAI could be overtaken—and 99% would have said impossible. Such rapid consensus reversal should itself prompt caution toward today’s new consensus.
Some certainties stand out: Anthropic’s enterprise-first path proved correct. An 80% enterprise revenue structure is fundamentally healthier than ChatGPT’s consumer model—a conclusion fully supported by financial metrics. Claude Code represents a genuine product breakthrough—achieving $2.5 billion ARR in nine months, a pace that speaks for itself.
Yet uncertainty abounds. OpenAI commands 900 million weekly active users and the world’s strongest AI brand recognition. If it unlocks effective consumer monetization—even lifting its paid conversion rate from 5% to 10%—the entire narrative rewrites. AI harbors a trait that makes forecasting perilous: A single major model breakthrough can reshuffle the entire deck.
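Back-of-the-envelope math shows why the conversion lever is so powerful. The $20/month price below is an assumed figure for illustration, not from the article:

```python
# Sensitivity of consumer subscription revenue to the paid conversion rate.
# 900M weekly active users is from the article; $20/month is a hypothetical price.
WAU = 900e6

def annual_subscription_revenue(conversion: float, monthly_price: float = 20.0) -> float:
    """Annualized revenue from the paying fraction of the user base."""
    return WAU * conversion * monthly_price * 12

low = annual_subscription_revenue(0.05)   # the ~5% conversion cited in the article
high = annual_subscription_revenue(0.10)  # the hypothetical 10% scenario

print(f"5% conversion: ${low / 1e9:.1f}B/yr, 10%: ${high / 1e9:.1f}B/yr")
```

Under these assumptions, doubling conversion from 5% to 10% adds on the order of $11 billion in annual revenue, which is why that single lever could rewrite the narrative.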
Secondary-market capital flows point in one direction—but secondary markets also cheered WeWork.
A measured conclusion: In AI’s first commercial round, Anthropic’s path is validated; OpenAI’s path is under scrutiny. Yet declaring “game over” remains premature—the battle is only halfway through.
When Dario Amodei walked out of OpenAI in 2021 with six colleagues, few imagined today’s reality. A safety researcher, in an industry obsessed with speed, used less capital and stricter self-discipline to force his former employer into writing investor memos defending its competitive position.
The most fascinating part? This story still has no ending.
Disclaimer: This article does not constitute investment advice. Valuation figures cited originate from secondary-market platforms and public reports, and may differ from actual transaction prices.
Join the official TechFlow community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News