
Cursor to Anthropic and OpenAI: “Thank you for raising us—now we’re coming for your market.”
Customers that grew large on their APIs are now coming back to bite the AI platforms that raised them.
Author: Daniel Barabander
Translated and edited by TechFlow
TechFlow Intro: Three years ago, Cursor was a VS Code extension running entirely on the OpenAI API. Today, it has launched its own proprietary model—Composer 2—which outperforms Claude Opus 4.6 on key benchmarks at one-tenth the cost.
This article uses that case as a starting point to systematically address the internet’s most critical strategic question: When should a company open its capabilities via an API—and when should it close them? The conclusion serves as a warning to all platform builders.
Full text below:
Co-authored with Elijah Fox (@PossibltyResult).
In early March, Cursor launched Composer 2—a proprietary programming model built atop an open-source base model—that outperforms Claude Opus 4.6 on key benchmarks at one-tenth the price. Three years ago, Cursor was merely a VS Code fork fully dependent on the OpenAI API.
Cursor’s transformation—from API-dependent customer to genuine competitor—epitomizes the internet’s most vital strategic question: When should a company open its capabilities via an API, and when should it keep them closed?
We’ve developed a framework for answering this question, and it hinges on two questions. First: does opening your API erode your moat? And if so: can you find a moat elsewhere?
Every time a company exposes its intellectual property through an API, it risks moat erosion via demand aggregation. Put simply: Competitors can leverage that IP to bootstrap their own products’ early stages; once they’ve accumulated enough demand, they can vertically integrate and cut off the API. Netflix did exactly this—first licensing content, then, once it had amassed a large enough user base to amortize massive fixed costs, producing its own hit series, House of Cards.
But the truly dangerous scenario arises when API outputs can serve directly as inputs—compounding improvements in competitors’ product quality. This is a double blow: Competitors not only use the API to guide and aggregate demand, but also directly enhance their own production processes. This is precisely what’s happening in AI today. Though OpenAI and Anthropic explicitly prohibit API customers from using outputs to train competing models, they cannot prevent companies like Cursor from leveraging state-of-the-art models to guide workflows that collect proprietary product data—and iteratively improve their own models over time.
This appears to be exactly what underpins Composer 2. Cursor used foundational models like Claude and GPT to aggregate sufficient demand—reaching roughly $2 billion in annualized revenue—then built a frontier-level programming model atop the open-source base model Kimi K2.5, improving it through continued pretraining and reinforcement learning on data drawn from its IDE.
When such output-as-input dynamics exist, API providers face only two choices: either shut down the API to stem the bleeding—or keep it open while cultivating complementary assets that reinforce their moat.
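The framework above can be summarized as a toy decision procedure. This is purely illustrative; the function and predicate names are ours, not the authors', and the real judgment calls behind each boolean are of course the hard part:

```python
def api_strategy(api_erodes_moat: bool, has_alternative_moat: bool) -> str:
    """Toy sketch of the open-vs-closed API framework.

    The two predicates stand in for the framework's two questions:
    (1) does opening the API erode your moat, and
    (2) if so, can you find a moat elsewhere?
    """
    if not api_erodes_moat:
        # No erosion: openness costs little and grows the ecosystem.
        return "keep the API open"
    if has_alternative_moat:
        # The Morpho path: stay open and invest in moats elsewhere
        # (Lindy effect, network effects, economies of scale).
        return "keep it open, invest in the alternative moat"
    # The Twitter path: shut down to stem the bleeding.
    return "restrict the API"


if __name__ == "__main__":
    print(api_strategy(True, False))   # the Twitter path
    print(api_strategy(True, True))    # the Morpho path
```

The article's prediction for foundational model companies corresponds to the first branch's failure on both questions: outputs usable as inputs erode the moat, and no alternative moat is yet in evidence.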
Twitter exemplifies the first path. Initially renowned for its generous, freely accessible API—at its peak allowing developers to pull half a million tweets per month—it progressively shut down most endpoints because the API leaked its core moat: its proprietary social graph. Today, the API is effectively closed: access is severely rate-limited, prohibitively expensive at meaningful scale, and serious product development requires tightly controlled B2B integrations.
The second path is to keep the API open—and supplement it with another source of power. No industry understands this better than crypto, where APIs are mandated to be open, and survival depends entirely on finding moats elsewhere.
Lending protocol Morpho offers a representative example. It launched by building optimizer products atop the open APIs of Aave and Compound. Then it used those protocols’ outputs—their aggregated liquidity—as inputs to bootstrap its own platform. In this respect, Cursor and Morpho followed the same playbook: leverage an incumbent’s API to build a competing product.
Yet the truly fascinating dynamic lies in what Morpho did next. Because Morpho itself offers an open API, it lacks switching costs and had to compensate for that. So instead of fighting composability, it deliberately maximized it and built its moats elsewhere: in the Lindy effect, and in network effects arising from deep liquidity across diverse lenders and borrowers.

Applying this framework forward, we predict that foundational model companies will likely choose the first path—progressively restricting API access to their most advanced models over time.
To believe in the second path, you must assume models like Opus and GPT are already powerful and trusted enough to remain open—even permitting competitors to use their outputs as inputs—while third parties still won’t leave. That implies model companies are betting on alternative sources of power: the Lindy effect (if they believe users won’t want to rebuild trust in new models), developer network effects (if they believe users will build ecosystems tightly coupled to their API openness), or economies of scale (if they believe maximizing API call volume lets them amortize the fixed costs of training frontier models).
But current evidence points the other way. The “hottest model this month” dynamic remains strong: users switch without hesitation to whichever model is best right now, as we saw again in the recent surge in Claude usage following the release of Opus 4.5. At the model layer, developer network effects show no clear signs yet; interoperability among APIs is increasing, not decreasing, and the surrounding tooling ecosystem actively resists lock-in, deliberately making vendor switching easier. And economies of scale in training are no longer a sufficient moat, because distillation techniques now let competitors train models of comparable performance at far lower cost. Without alternative sources of power, foundational AI companies will likely leave open API access to hobbyists only, focusing instead on tightly governed, monitored B2B deployments. Increasingly, the winning move will be to opt out of the open-API game entirely.
This is a troubling outcome—because today’s explosion of consumer AI products rests squarely on these model providers. It also opens a door for reverse positioning: If top-tier labs increasingly restrict access, competitors with weaker moats—but who make a strong, credible commitment to sustained openness—can capture real value.
Thanks to @systematicls (@openforage) and @AlexanderLong (@Pluralis) for their thoughtful feedback on this piece.