
Silicon Valley is giving rise to the "OpenAI mafia"
OpenAI's path of fission.
Author: Flagship

Image source: Generated by Wujie AI
Just how valuable is the title of "former OpenAI employee" in today's market?
On February 25, local time, Business Insider reported that Mira Murati, former CTO of OpenAI, had launched her new company, Thinking Machines Lab, which is raising $1 billion at a $9 billion valuation.
Thinking Machines Lab has yet to disclose a product roadmap, technical timeline, or any other specifics. The only public information is its team of more than 20 former OpenAI employees and its vision: to build a future in which "everyone can access knowledge and tools so that AI serves people's unique needs and goals."

Mira Murati and Thinking Machines Lab
The fundraising power of OpenAI alumni entrepreneurs has created a "snowball effect." Before Murati, Ilya Sutskever, former chief scientist at OpenAI, founded Safe Superintelligence Inc. (SSI) and secured a $30 billion valuation on the strength of his OpenAI pedigree and an idea, nothing more.
Since Elon Musk left OpenAI in 2018, former OpenAI employees have founded more than 30 companies, collectively raising over $9 billion. These startups have formed a comprehensive ecosystem spanning AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This echoes the wave of Silicon Valley entrepreneurship that followed PayPal's acquisition by eBay in 2002, when founders like Musk and Peter Thiel went on to create what became known as the "PayPal Mafia," launching legendary companies such as Tesla, LinkedIn, and YouTube. Now, OpenAI's departing employees are forming their own "OpenAI Mafia."
Yet the "OpenAI Mafia" script is even more aggressive: the "PayPal Mafia" took 10 years to produce two $100-billion companies, while the "OpenAI Mafia," in just two years since ChatGPT's release, has spawned five companies valued at over $1 billion each. Anthropic is now valued at $61.5 billion, Sutskever's SSI at $30 billion, and Musk's xAI at $24 billion. A $100-billion company may well emerge from the "OpenAI Mafia" within the next three years.
The talent exodus from OpenAI is triggering a new wave of "talent fission" across Silicon Valley, reshaping the global AI power structure.
The Fracture Lines of OpenAI
Of OpenAI’s 11 co-founders, only Sam Altman and Wojciech Zaremba, head of language and code generation, remain with the company.
2024 marked a peak year for OpenAI departures. Key figures exited one after another: Ilya Sutskever left in May 2024 and John Schulman in August 2024; the safety team shrank from 30 to 16 members, a 47% reduction; and senior executives such as CTO Mira Murati and Chief Research Officer Bob McGrew also departed. On the technical side, Alec Radford, lead designer of the GPT series, left, as did Sora project leader Tim Brooks and deep learning expert Ian Goodfellow, both of whom moved to Google. Andrej Karpathy left OpenAI for the second time to found an education-focused company.
"Together they were a blazing fire; scattered, they became stars across the sky."
Among core technical staff who joined OpenAI before 2018, over 45% have started their own ventures. These new startups have deconstructed and reassembled OpenAI’s technological DNA into three strategic blocs.
First are the “mainline successors,” essentially a group of ambitious architects envisioning an OpenAI 2.0.
Mira Murati’s Thinking Machines Lab has almost completely replicated OpenAI’s R&D architecture: John Schulman leads reinforcement learning frameworks, Lilian Weng heads AI safety systems, and even the neural architecture diagram of GPT-4 has reportedly become the technical blueprint for new projects.
Their "open science manifesto" directly challenges OpenAI's recent turn toward closed development. By continuously publishing technical blog posts and research papers and by open-sourcing code, they aim to create a "more transparent path to AGI." The move has already rippled through the AI industry: three top researchers from Google DeepMind defected to join them, bringing the Transformer-XL architecture with them.
In contrast, Ilya Sutskever's Safe Superintelligence Inc. (SSI) has chosen a different path. Co-founded with researchers Daniel Gross and Daniel Levy, SSI has abandoned all short-term commercial goals and focuses exclusively on building an "irreversibly safe superintelligence," a near-philosophical technical framework. Though SSI has no product yet, institutions such as a16z and Sequoia Capital have committed $1 billion to fund Sutskever's vision.

Ilya Sutskever and SSI
The second faction consists of earlier departures—the “disruptors”—who left before the ChatGPT era.
Dario Amodei's Anthropic has evolved from an "OpenAI dissenter" into one of its most formidable competitors. Its Claude 3 series performs on par with GPT-4 across multiple benchmarks. Anthropic has also secured an exclusive partnership with Amazon AWS, gradually eroding the computing foundation beneath OpenAI. Chip technology co-developed by Anthropic and AWS could further weaken OpenAI's bargaining position when purchasing Nvidia GPUs.
Another key figure in this camp is Elon Musk. Although Musk left OpenAI back in 2018, several founding members of his xAI team previously worked at OpenAI, including Igor Babuschkin and Kyle Kosic (the latter has since returned to OpenAI). Backed by Musk's vast resources, xAI threatens OpenAI on talent, data, and computing power. By integrating real-time social data streams from Musk's X platform, xAI's Grok-3 can instantly capture trending topics to generate responses, while ChatGPT's training data stops at 2023, a stark gap in timeliness. This closed data loop is difficult for OpenAI, reliant on Microsoft's ecosystem, to replicate.
However, Musk does not see xAI as a mere disruptor to OpenAI but rather as a return to OpenAI’s original mission. xAI adheres to a "maximum openness" strategy—for example, releasing the Grok-1 model under the Apache 2.0 license, inviting global developers to participate in ecosystem development. This stands in sharp contrast to OpenAI’s recent shift toward closed-source models, such as offering GPT-4 only via API access.
The third group comprises the “game-changers” who are redefining industrial logic.
Perplexity, founded by former OpenAI research scientist Aravind Srinivas, was among the first to use large AI models to reinvent the search engine, replacing traditional link-based results with AI-generated answers. Today it handles more than 20 million searches a day and has raised over $500 million at a $9 billion valuation.
Adept was founded by David Luan, former VP of Engineering at OpenAI, who contributed to language modeling, supercomputing, reinforcement learning, and policy and safety work on GPT-2, GPT-3, CLIP, and DALL-E. Adept builds AI agents that automate complex user tasks (e.g., generating compliance reports or design blueprints) by combining large models with tool use; its ACT-1 model can directly operate office software and Photoshop. However, the core founding team, including Luan, has since joined Amazon's AGI team.
Covariant is a robotics-AI startup valued at $1 billion. Its founding team came out of OpenAI's since-disbanded robotics division, drawing on technical expertise from GPT model development. Covariant builds foundation models for robots, aiming to enable autonomous robotic operation through multimodal AI, particularly in warehouse-logistics automation. However, three key "OpenAI Mafia" members, Pieter Abbeel, Peter Chen, and Rocky Duan, have since joined Amazon.
Some notable “OpenAI Mafia” startups

Source: Public data, compiled by Flagship
The shift of AI from a "tool" to a core "factor of production" has created three types of industrial opportunity: replacement scenarios (e.g., disrupting traditional search engines), incremental scenarios (e.g., intelligent transformation in manufacturing), and transformative scenarios (e.g., foundational breakthroughs in life sciences). These scenarios share common traits: potential for data flywheels (user interaction data feeding back into model improvement), deep interaction with the physical world (robot motion data, biological experiment data), and operation in ethical and regulatory gray areas.
OpenAI’s technological spillover is providing foundational momentum for this industrial transformation. Its early open-source strategy (e.g., partial open-sourcing of GPT-2) created a "dandelion effect" of widespread diffusion. But as technological advances reached deeper stages, closed-source commercialization became inevitable.
This tension has produced two phenomena. First, departing talent transplants technologies such as Transformer architectures and reinforcement learning into vertical domains (e.g., manufacturing, biotech), using domain-specific data to build moats. Second, tech giants acquire talent to secure technological positions, creating a "technology harvesting" loop.
When Moats Become Watersheds
While the "OpenAI Mafia" surges forward, the parent company OpenAI finds itself struggling.
On technology and product, the launch of GPT-5 has been repeatedly delayed, and the flagship ChatGPT product is widely perceived as lagging behind the industry's pace of innovation.
In the marketplace, newcomers like DeepSeek are catching up fast. DeepSeek's models match ChatGPT's performance at roughly 5% of GPT-4's training cost, a low-cost replication path that is dismantling OpenAI's technological moat.
Yet much of the rapid rise of the "OpenAI Mafia" stems from internal conflicts within OpenAI itself.
OpenAI’s core research team is effectively fragmented. Of the 11 co-founders, only Sam Altman and Wojciech Zaremba remain. Forty-five percent of core researchers have left.

Wojciech Zaremba
Co-founder Ilya Sutskever left to establish SSI. Founding member Andrej Karpathy has publicly shared Transformer optimization insights. Tim Brooks, head of the Sora video-generation project, moved to Google DeepMind. More than half of the original authors of the early GPT versions have departed, most of them joining OpenAI's competitors.
Meanwhile, according to data compiled by labor-market analytics firm Lightcast, OpenAI's hiring focus is shifting. In 2021, 23% of its job postings were for general research roles; by 2024, that share had fallen to just 4.4%, a clear decline in the standing of research talent at OpenAI.
The organizational culture clash caused by commercialization is becoming increasingly apparent. While the company’s workforce expanded by 225% over three years, the early hacker ethos has gradually given way to KPI-driven management. Some researchers openly admit being "forced to shift from exploratory research to product iteration."
This strategic ambiguity has placed OpenAI in a double bind: it must continually deliver groundbreaking innovations to sustain its valuation, while simultaneously facing fierce competition from former employees who rapidly replicate its methodologies.
The key to winning in the AI industry lies not in breaking lab records on model parameters, but in embedding technological DNA into the capillaries of industries—reconstructing the foundational logic of business through answer streams in search engines, movement trajectories of robotic arms, and molecular dynamics in biological cells.
Is Silicon Valley Breaking Up OpenAI?
The rapid rise of the "OpenAI Mafia" and "PayPal Mafia" owes much to California’s legal environment.
Since California outlawed non-compete agreements in 1872, its unique legal framework has acted as a catalyst for Silicon Valley innovation. Under Section 16600 of the California Business and Professions Code, any clause restricting professional freedom is void. This institutional design has directly enabled free movement of technical talent.
The average tenure for a Silicon Valley programmer is only 3–5 years—far shorter than in other tech hubs—creating a strong "knowledge spillover" effect. For instance, former employees of Fairchild Semiconductor went on to found 12 semiconductor giants including Intel and AMD, laying the industrial foundation of Silicon Valley.
While the ban on non-compete clauses may seem to leave innovative firms under-protected, it has in fact proved the more effective spur to innovation: the mobility of technical talent accelerates technology diffusion and lowers barriers to entry.
The U.S. Federal Trade Commission (FTC) estimated that the nationwide ban on non-compete agreements it issued in April 2024 would further unleash American innovation: more than 8,500 new businesses formed each year, plus an additional 17,000–29,000 patents annually over the following decade, an estimated yearly increase of 11–19%.
Capital has also played a crucial role in the rise of the OpenAI Mafia.
Silicon Valley accounts for over 30% of all U.S. venture capital. Institutions like Sequoia Capital and Kleiner Perkins have built a complete funding pipeline from seed rounds to IPOs. This capital-intensive model creates a dual effect.
First, capital acts as an engine of innovation. Angel investors provide not just money, but also industry connections. Uber, for example, started with just $200,000 from its two founders and three registered taxis. After receiving a $1.25 million angel investment, it entered rapid fundraising mode and reached a $40 billion valuation by 2015.
Venture capital's long-term focus on technology has also driven industrial upgrades. Sequoia invested in Apple in 1978 and Oracle in 1984, establishing influence in semiconductors and computing; in 2020 it began investing deeply in AI, backing frontier projects like OpenAI. International capital, such as Microsoft's multibillion-dollar investments in AI, has shortened the commercialization cycle of generative AI from years to mere months.
Capital also gives innovators greater tolerance for failure. Accelerators benefit as much from quickly identifying failed projects as from nurturing successful ones. According to startup analytics firm Startuptalky, the global startup failure rate is 90%, versus 83% in Silicon Valley. Though few startups succeed, within the venture capital network failure experience is rapidly converted into fuel for new ventures.

Image source: startuptalky.com
However, capital is also altering the developmental paths of these innovative startups.
Top-tier AI startups are reaching valuations above $1 billion before launching any product, making resource acquisition exponentially harder for smaller teams. The structural imbalance is especially stark geographically: according to Dealroom, the Bay Area received $24.7 billion in venture capital in a single quarter, as much as the next four global VC hubs (London, Beijing, Bangalore, and Berlin) combined. And although emerging markets like India saw 133% financing growth, 97% of the funds flowed to "unicorns" valued above $1 billion.
Moreover, capital exhibits strong "path dependency," favoring areas with quantifiable returns, which makes it difficult for nascent fundamental-science ventures to secure adequate funding. In quantum computing, for instance, Guo Guoping, founder of the Chinese startup Origin Quantum, sold his home to fund the venture for lack of early capital. When Guo first sought investment in 2015, official data showed China's total R&D spending was less than 2.2% of GDP, with basic research accounting for only 4.7% of that.
Beyond under-supporting basic research, big capital uses financial incentives to lock in top talent: CTO-level pay at startups now routinely reaches seven figures (in USD at U.S. firms, in RMB at Chinese ones), reinforcing a cycle of tech giants monopolizing talent and capital chasing the giants.
Nonetheless, the massive front-loading of valuations among "OpenAI Mafia" startups carries inherent risks.
Mira Murati's and Ilya Sutskever's companies both secured hundreds of millions, or billions, in funding on the strength of a vision alone, a trust premium on their proven technical leadership at OpenAI. Yet that trust rests on risky assumptions: that AI capabilities can sustain exponential growth, and that vertical-domain data can form insurmountable moats. If these assumptions meet real-world resistance, such as slowing progress in multimodal models or surging data-acquisition costs, capital overheating could trigger an industry shakeout.