
xAI Leases Compute Power to Anthropic: Musk’s Compute Empire Begins to Leak

xAI transformed from an adversary into a supplier in just three months.
By Xiao Bing, TechFlow
If you had told a Silicon Valley investor three months ago that Elon Musk would lease xAI’s largest training cluster—Colossus 1—in its entirety to Anthropic, they’d likely have burst out laughing.
After all, in February, Musk was still calling Anthropic “anti-Western civilization” on X; in March, he even nicknamed it “misanthropic.” To Musk, the company was practically synonymous with politically correct AI—a direct rival to OpenAI that had to be defeated.
Then, on May 6, Anthropic and SpaceX jointly announced: Anthropic would gain access to the full compute capacity of Colossus 1—more than 220,000 NVIDIA GPUs and 300 megawatts of power—with delivery completed within one month. Anthropic explicitly stated this compute would be used directly to improve service quality for Claude Pro and Claude Max subscribers.
Musk posted a message on X that left everyone stunned: He said he’d been in deep contact with Anthropic’s executives over the past week, found them “impressive,” and described them as “highly capable people who are earnestly doing the right thing.” He even said Claude would “probably be good.”
On the same day, he announced xAI would dissolve as an independent company and rebrand as SpaceXAI.
This Is a Transfer of Production Capacity
Mainstream English-language media framed the deal as a “landmark event in AI compute sharing”—but they missed a critical fact:
Colossus 1 is xAI’s most core training infrastructure—not some “surplus capacity.”
Let’s recap the timeline. Colossus 1 was completed in Memphis in September 2024, going from groundbreaking to energization in just 122 days—a miracle in data center construction history. It served as xAI’s primary cluster for training Grok 3 and Grok 4, and embodied Musk’s “compute is power” narrative physically. Equipped with over 220,000 GPUs—including H100s, H200s, and the latest GB200s—its scale ranked among the world’s top three by end-2025.
Handing over such a massive training cluster wholesale to a direct competitor is equivalent to TSMC leasing its entire 5-nanometer production line to Samsung. This has never happened in the semiconductor industry. Anyone familiar with industrial cycles knows such a move occurs only under one condition: you can’t use it all yourself.
SpaceXAI’s official statement confirms Anthropic’s compute will “directly benefit Claude Pro and Claude Max subscribers.” That means Anthropic will use this capacity for inference—running models for Claude’s paying users, powering requests for the very AI Musk once despised most.
Calling this merely a “customer collaboration” is inaccurate. In effect, control over Colossus 1 has, to some degree, changed hands.
Grok’s Story Can’t Fill Colossus’s Scale
So why is there “excess capacity”?
The most direct answer lies in Grok’s user metrics.
According to Similarweb’s April report, Grok’s global mobile app daily active users (DAU) fell from 13.9 million in March to 12.2 million in April—a sequential decline of roughly 12%. The drop was steeper in the U.S., falling from 1.4 million to 1.1 million—a sequential decline of more than 20%. Just one year earlier, Grok ranked second globally among AI apps, trailing only ChatGPT; by April, it had slipped to fifth place—surpassed by Claude, Gemini, and DeepSeek.
In contrast, Claude’s DAU rose from 16 million to 23 million during the same period—a +44% sequential jump.
This is a brutal comparison: In 2026, when most AI applications are experiencing rapid growth, Grok is one of the few top-tier products losing users. The reason is straightforward: Grok’s core use case remains tightly bound to the X (formerly Twitter) platform, functioning primarily as a tool for “real-time search + spicy commentary.” Yet it has failed to develop the kind of “workflow stickiness” seen with Claude on standalone apps and web interfaces. Numerous Reddit users complain that Grok has progressively moved image and video generation features behind paywalls—and regulatory investigations across multiple countries, along with Apple’s threat of banning the app, have nearly extinguished its growth engine.
What’s more alarming is what’s happening internally at xAI.
Per a Fast Company report in April, over 80 xAI employees—including several co-founders—have departed in recent months. A February Financial Times report noted Musk has persistently imposed “unreasonable technical performance targets” on his team in a bid to catch up with competitors—a classic leadership response during periods of product retreat.
Putting these two facts together reveals why Colossus 1 has surplus capacity: It was originally built for a Grok far larger than today’s.
SpaceXAI’s Real Challenge Lies in Its Valuation Narrative
“Insufficient Grok demand” is only the surface layer.
A deeper logic is this: Musk needs a new story to justify SpaceXAI’s $1.25 trillion valuation.
Recall what happened this February: SpaceX acquired xAI via an all-stock transaction, valuing the combined entity at $1.25 trillion—the largest merger in history. Prior to the merger, xAI’s most recent funding round was Series E in January, raising $20 billion at a $230 billion valuation. Merging xAI into SpaceX was, in essence, using SpaceX’s rocket business cash flow to keep feeding xAI’s money pit—which continues burning $1.46 billion per quarter.
Yet even with SpaceX’s financial lifeline, SpaceXAI still faces a sharp question: Why is it worth this much?
OpenAI’s most recent valuation stands at $852 billion, with ARR around $24–25 billion—yielding a valuation-to-revenue ratio of ~35x. Anthropic, meanwhile, is negotiating a $90 billion valuation against $3 billion ARR—a 30x ratio.
xAI? Q3 2025 revenue stood at $107 million, with a net loss of $1.46 billion. Even assuming an optimistic $2 billion revenue projection for Grok in 2026, SpaceXAI’s implied valuation-to-revenue ratio would vastly exceed those of OpenAI and Anthropic. In other words, Musk urgently needs a new cash-flow story for SpaceXAI. Grok’s user growth can’t deliver it; enterprise API revenue can’t either.
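The gap is easy to see with back-of-the-envelope arithmetic. A minimal sketch in Python, using figures rounded from the article (OpenAI’s valuation read as roughly $852 billion, which is what a ~35x multiple on ~$24.5 billion ARR implies; the $2 billion Grok figure is the article’s optimistic 2026 projection, not a reported number):

```python
# Valuation-to-revenue multiples implied by the figures in the article.
def multiple(valuation_usd_b: float, revenue_usd_b: float) -> float:
    """Valuation divided by annualized revenue, both in billions of USD."""
    return valuation_usd_b / revenue_usd_b

openai = multiple(852, 24.5)    # ~35x on ~$24-25B ARR
anthropic = multiple(90, 3)     # 30x
# SpaceXAI: even granting the optimistic $2B 2026 Grok revenue projection
# against the $1.25T merger valuation:
spacexai = multiple(1250, 2)    # 625x

print(round(openai), round(anthropic), round(spacexai))  # 35 30 625
```

Even under the friendliest assumptions, SpaceXAI’s implied multiple sits an order of magnitude above its peers—which is exactly why the narrative has to change.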
Renting out Colossus to Anthropic is the opening chapter of that story.
It instantly repositions SpaceXAI—from a “model company” to an “AI cloud infrastructure provider”: a player reminiscent of CoreWeave, but with greater scale and power supply. In the world of narrative-based valuation, cloud providers command higher valuations than pure model companies. Cloud vendors can produce long-term contracts and predictable cash flows—something pure model firms struggle to offer.
Add in the vague “Orbital Compute Center” memorandum between Anthropic and SpaceX—where both parties agreed to “explore” deploying multi-gigawatt AI data centers in space—and the direction becomes clear. This is part of preparing a new balance sheet for SpaceX’s IPO. Rockets, Starlink, terrestrial data centers, orbital compute—all bundled into one mega-infrastructure story. Grok itself has become irrelevant; what matters now are Musk’s GPUs, power capacity, and launch pads.
The Real Meaning Behind Musk’s 180-Degree Shift Toward Anthropic
Within this framework, Musk’s dramatic reversal toward Anthropic takes on another meaning.
It’s a transaction.
Beyond rent, Anthropic delivers something else to SpaceXAI: credibility. By publicly endorsing Colossus 1’s availability, scalability, and operational reliability, Anthropic effectively issues SpaceXAI a membership card to the “Compute Infrastructure Club”—whose members include AWS, GCP, Azure, and CoreWeave. Until now, xAI’s reputation in the cloud services market was virtually nonexistent—it had only ever used compute to train its own models, never operated a commercial external service.
For Anthropic, the deal is also highly advantageous. It’s fundraising at a $90 billion valuation and may go public in October. Its publicly disclosed need is 5 gigawatts of training compute; SpaceX’s 300 megawatts may seem modest, but its value lies in “immediate delivery”: energization within one month, directly relieving Claude’s current inference bottlenecks. Anthropic openly acknowledged in April that Claude’s “reliability and performance” suffered during peak hours due to “infrastructure strain.” That 300-megawatt emergency capacity is worth far more than its face value.
This is a two-way narrative transaction. Anthropic gains service stability; SpaceXAI gains a valuation story.
Who conceded? Musk himself did—he struck a deal with a longtime rival and even praised them. But at a deeper level, it’s Grok that conceded. As a product, as a model company, and as Musk’s flagship weapon against OpenAI/Anthropic, Grok is being quietly downgraded to just another ordinary business unit within SpaceXAI’s portfolio. Freeing up a core strategic asset like Colossus for customer use signals that Musk no longer treats “in-house model development” as his primary battlefield.
In that sense, May 6 marks the end of Grok’s era as a “cutting-edge model company.”
An Industry Inflection Point: Compute Capacity Is Concentrating Among Few Players
Zooming out further, the industry-wide significance of this event may surpass what we see today.
Throughout 2024 and 2025, the AI compute market was characterized by “industry-wide frenzy”: OpenAI, Anthropic, xAI, Mistral, and national sovereign funds all scrambled for resources. GPUs were hard currency; data center site selection became a geopolitical issue; power supply turned into a matter of national strategy. Under conditions of universal scarcity, no one would lease their training clusters to rivals—because every GPU-hour rented out today could be the decisive compute needed to close the gap tomorrow.
Now, xAI has done exactly that.
This signals the emergence of the first major split in the AI compute market: Top-tier model companies (OpenAI, Anthropic, Google DeepMind) continue experiencing exponential compute demand growth, while mid-tier and below begin showing signs of capacity slack. Such splits inevitably appear in the mid-to-late stages of any capacity-expansion cycle—from solar cells to EV batteries to Bitcoin miners—the script is nearly identical. Early on, everyone is short; mid-cycle, excess capacity begins spilling over to secondary players; late-cycle, top players vertically integrate, while secondary players either pivot to infrastructure services, get acquired, or perish.
CoreWeave serves as the clearest parallel. Originally an Ethereum mining operation, it seized the GPU glut that followed the 2018 crypto crash to pivot to AI cloud—and went public in 2025, with its market capitalization later surpassing $60 billion. Its existence proves the “if your models fail, sell compute” path is viable. SpaceXAI is walking the same road—but more aggressively: not only selling terrestrial compute, but aiming to sell it in orbit.
The true signal of the AI bubble’s peak may be precisely this: Mid-tier model companies pivoting en masse to cloud services. When an industry’s core narrative shifts from “I have the best model” to “I have the most GPUs,” differentiation has typically run its course.
A telling detail worth noting: At Memphis—the site of Colossus 1—xAI deployed dozens of natural-gas turbine generators to meet tight deadlines, claiming “temporary use” exempted them from federal permitting. Local residents have protested continuously over air pollution, and the issue remains unresolved.
Now, those gas-turbine-powered GPUs will soon run Anthropic’s Claude—one of the AI labs most rigorous on AI ethics and climate issues.
Even more surreal: In their joint announcement, Anthropic and SpaceX “expressed interest” in deploying multi-gigawatt AI compute infrastructure in orbit. Musk’s logic is simple: Earth’s power and thermal management capacity will eventually run out—AI’s future lies in space.
Bridging the gap between Memphis’s gas turbines and the orbital solar panels on Musk’s slides lies a massive valuation expectation. Leasing Colossus 1 to Anthropic is Musk’s first new story told to justify that expectation.
xAI transformed from antagonist to supplier in just three months. Who will be next to be repriced?