
The New Yorker’s In-Depth Investigation: Why OpenAI’s Own Employees Deemed Altman Untrustworthy
A money tree has grown on the corpse of a nonprofit.
By Xiao Bing, TechFlow
In the fall of 2023, Ilya Sutskever, OpenAI’s Chief Scientist, sat at his computer and completed a 70-page document.
This document compiled Slack message logs, HR communications, and internal meeting minutes—all aimed at answering one question: Can Sam Altman, the man overseeing what may be the most dangerous technology in human history, truly be trusted?
Sutskever’s answer appeared on the first line of the first page, at the head of a list: “Sam exhibits a consistent behavioral pattern…”
Item one: Lying.
Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published an extensive exposé in The New Yorker. They interviewed over 100 individuals, obtained previously unpublished internal memos, and acquired more than 200 pages of private notes written by Dario Amodei, Anthropic’s co-founder, during his time at OpenAI. Pieced together, these documents tell a story far uglier than the 2023 boardroom coup suggested: how OpenAI transformed from a nonprofit founded to safeguard humanity into a commercial machine, and how nearly every safeguard was dismantled along the way, often by the same person.
Amodei’s conclusion, stated bluntly in his notes: “OpenAI’s problem is Sam himself.”
OpenAI’s “Original Sin”
To grasp the weight of this report, one must first understand just how unusual OpenAI is.
In 2015, Altman and a group of Silicon Valley elites did something nearly unprecedented in business history: they launched a nonprofit organization to develop what might become the most powerful technology ever created by humankind. The board’s mandate was explicit—safety took precedence over the company’s success, even over its survival. Put plainly, if OpenAI’s AI ever became dangerous, the board had a duty to shut the company down.
The entire structure rested on a single assumption: the person entrusted with AGI must be extraordinarily honest.
What if that assumption proved wrong?
The report’s central bombshell is that 70-page document. Sutskever, one of the world’s top AI scientists, is not a man given to office politics. Yet by 2023 he had grown increasingly convinced of one thing: Altman was repeatedly lying to executives and the board.
A concrete example: in December 2022, Altman assured the board during a meeting that several features of the upcoming GPT-4 had passed safety review. Board member Helen Toner requested the approval documentation and discovered that two of the most controversial features (user-customizable fine-tuning and personal-assistant deployment) had never been approved by the safety review panel.
An even more egregious incident occurred in India: an employee reported to another board member that Microsoft had prematurely launched an early version of ChatGPT there without completing the required safety reviews.
Sutskever recorded another incident in his memo: Altman told then-CTO Mira Murati that the safety approval process wasn’t so important, since the company’s General Counsel had already signed off. Murati went to confirm with the General Counsel, who replied: “I have no idea where Sam got that impression.”
Amodei’s 200-Page Private Notes
Sutskever’s document reads like a prosecutor’s indictment. Amodei’s 200+ pages of notes resemble a witness’s diary written at the scene of a crime.
During his years as OpenAI’s Head of Safety, Amodei watched firsthand as the company retreated step by step under commercial pressure. His notes detail a critical moment from the 2019 Microsoft investment deal: he had inserted a “merge and assist” clause into OpenAI’s charter, stipulating that if another company discovered a safer path to AGI, OpenAI would stop competing and assist that company instead. To Amodei, this clause was the most prized safety guarantee in the entire deal.
Just before the deal closed, Amodei discovered something unsettling: Microsoft had secured veto power over that clause. What did that mean? Even if a competitor someday found a better path, Microsoft could unilaterally block OpenAI from fulfilling its obligation to assist. The clause remained on paper, but from the day the deal was signed it was effectively a dead letter.
Amodei later left OpenAI to co-found Anthropic. The rivalry between the two companies reflects a fundamental disagreement over “how AI should be developed.”
The Vanished 20% Compute Commitment
One thread in the report chills readers to the bone: the fate of OpenAI’s “Superalignment” team.
In mid-2023, Altman emailed a Berkeley Ph.D. student researching “deceptive alignment,” the phenomenon in which an AI behaves obediently during testing but pursues its own agenda after deployment. Altman expressed deep concern about the problem and proposed a $1 billion global research prize. Inspired, the student dropped out of school and joined OpenAI.
Then Altman changed his mind: no external prize. Instead, OpenAI would launch an internal “Superalignment Team.” The company announced publicly that it would allocate “20% of its existing compute resources” to this team—valued at over $1 billion. The announcement used stark language, warning that unresolved alignment issues could lead to “human disempowerment—or even human extinction.”
Jan Leike, appointed to lead the team, later told reporters the promise itself served as a highly effective “talent retention tool.”
The reality? Four individuals who worked on the team or closely with it say the compute actually allocated amounted to only 1–2% of OpenAI’s total, and it consisted of the oldest hardware available. The team was later disbanded, its mission unfulfilled.
When reporters requested interviews with OpenAI personnel responsible for “existential safety” research, the company’s PR response was darkly comical: “That’s not… a real thing.”
Altman himself was candid. He told reporters his “intuition doesn’t quite align with many traditional AI safety concepts,” and that OpenAI would still pursue “safety projects—or at least projects tangentially related to safety.”
The Marginalized CFO and the Impending IPO
The New Yorker report was only half the bad news released that day. On the same day, The Information broke another blockbuster story: a serious rift had emerged between OpenAI’s CFO, Sarah Friar, and Altman.
Friar privately told colleagues that she believed OpenAI wasn’t ready to go public this year, for two reasons: the sheer volume of procedural and organizational work still remaining, and the financial risk posed by Altman’s pledge to spend $600 billion on compute over five years. She even doubted whether OpenAI’s revenue growth could sustain those commitments.
Yet Altman aims to push for an IPO in the fourth quarter of this year.
Even more bizarre: Friar no longer reports directly to Altman. Starting in August 2025, she began reporting to Fidji Simo—OpenAI’s CEO of Applications. Simo, however, took medical leave last week. Consider the situation: a company racing toward an IPO has a CEO and CFO locked in fundamental disagreement; the CFO does not report to the CEO; and her direct supervisor is on medical leave.
Even executives inside Microsoft have reportedly lost patience, accusing Altman of “distorting facts, reneging on agreements, and repeatedly overturning settled deals.” One Microsoft executive went so far as to say: “I think there’s a nontrivial chance he’ll ultimately be remembered as a fraud on par with Bernie Madoff or SBF.”
Altman’s “Jekyll-and-Hyde” Portrait
A former OpenAI board member described two traits he observed in Altman, in a passage that is arguably the harshest character sketch in the entire report.
The board member said Altman possesses an extremely rare combination: in every face-to-face interaction, he displays an intense desire to please others and win their affection—while simultaneously exhibiting near-sociopathic indifference to the consequences of deceiving them.
It’s exceedingly rare for both traits to coexist in one person. But for a salesman, it’s the perfect talent.
The report offers a telling analogy: Steve Jobs was famed for his “reality distortion field,” his ability to convince the world of his vision. Yet even Jobs never told customers, “If you don’t buy my MP3 player, everyone you love will die.”
Altman has made similar statements—about AI.
Why a CEO’s Character Is Everyone’s Risk
If Altman were merely the CEO of an ordinary tech company, these allegations would amount to little more than gripping business gossip. But OpenAI is no ordinary company.
By its own account, it is developing what may be the most powerful technology in human history—one capable of reshaping the global economy and labor markets (OpenAI recently released a policy white paper on AI-driven unemployment), and one that could also be weaponized to manufacture mass-scale bioweapons or launch catastrophic cyberattacks.
The safeguards now exist in name only. The founders’ nonprofit mission has yielded to the IPO sprint. The former chief scientist and the former head of safety both deem the CEO “untrustworthy.” A key partner compares him to SBF. Under these circumstances, why should this one CEO unilaterally decide when to release AI models that could alter the fate of humanity?
Gary Marcus, a professor emeritus at New York University and a longtime AI-safety advocate, posed the question directly after reading the report: if a future OpenAI model can create mass-scale bioweapons or launch catastrophic cyberattacks, would you really entrust Altman alone with the decision to release it?
OpenAI’s response to The New Yorker was concise: “Much of this article rehashes previously reported events, relying on anonymous sources and selectively chosen anecdotes whose motives are clearly self-serving.”
A very Altman-style response: no engagement with specific allegations, no denial of the memo’s authenticity—only a challenge to the sources’ motives.
A Money Tree Growing on the Corpse of a Nonprofit
OpenAI’s decade-long journey, distilled into a story outline, reads like this:
A group of idealists, alarmed by AI risks, founded a mission-driven nonprofit. The organization achieved extraordinary technical breakthroughs. Those breakthroughs attracted massive capital. Capital demanded returns. Mission gave way to profit. The safety team was disbanded. Dissenters were purged. The nonprofit structure was converted into a for-profit entity. The board, once empowered to shut the company down, now seats only the CEO’s allies. The company that pledged 20% of its compute to safeguard humanity now has PR staff declaring, “That’s not a real thing.”
The story’s protagonist has been given the same label by more than 100 insiders: “unconstrained by truth.”
He is now preparing to take this company public—with a valuation exceeding $850 billion.
This article synthesizes publicly reported information from The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.