
OpenAI Researcher Resigns and Accuses Company: ChatGPT Sells Ads—Who Will Protect Your Privacy?
ChatGPT has accumulated an unprecedented archive of candid human conversations; once an advertising model is introduced, that archive could easily become a tool for psychological manipulation built on users’ most private information.
Author: Zoë Hitzig
Translated by: TechFlow
TechFlow Intro: After OpenAI announced that it is testing advertisements in ChatGPT, former researcher Zoë Hitzig resigned in protest and penned this essay on the company’s shift in values. She notes that ChatGPT has amassed an unprecedented archive of candid human conversations, and that once advertising is introduced, this archive risks becoming a tool for psychologically manipulating users through their most private information. She warns that OpenAI is repeating Facebook’s old playbook of “promise first, betray later,” prioritizing user engagement over safety. The essay examines the ethical dilemmas of AI financing and proposes alternatives, including cross-subsidization across customer segments, independent oversight, and data trusts, urging the industry to guard against the profit-driven incentives behind harms such as “chatbot psychosis.”
Full Text Below:
This week, OpenAI began testing advertisements in ChatGPT. The same week, I resigned from the company, where I had worked for two years as a researcher, helping shape the pricing of its models and its early safety policies before industry standards had solidified.
I once believed I could help those building AI stay ahead of the problems it might create. But what happened this week confirmed a reality I’d been gradually recognizing: OpenAI appears to have stopped asking the very questions I joined to help answer.
I don’t believe advertising is inherently unethical. Running AI is extraordinarily expensive, and ads can serve as a critical revenue source. Yet I hold deep reservations about OpenAI’s strategy.
Over the past several years, ChatGPT users have generated an unprecedented archive of candid human dialogue—partly because people believe they’re speaking with an entity that has no hidden agenda. Users interact with an adaptive, conversational voice, revealing their most intimate thoughts. People tell chatbots about their fears regarding health, relationship struggles, and beliefs about God and the afterlife. An advertising model built on this archive is highly likely to manipulate users in ways we currently lack tools to understand—let alone prevent.
Many frame AI financing as a “lesser-of-two-evils” choice: either restrict access to this transformative technology to only those wealthy enough to afford it—or accept advertising, even if that means exploiting users’ deepest fears and desires to sell products. I believe this is a false dichotomy. Tech companies can absolutely pursue other paths—ones that keep these tools broadly accessible while limiting incentives to surveil, profile, and manipulate users.
OpenAI states it will adhere to its principles for advertising in ChatGPT: ads will be clearly labeled, appear only at the bottom of responses, and will not influence response content. I believe the first iteration of ads may indeed follow these principles. But I worry that subsequent versions won’t, because the company is building a powerful economic engine that creates strong incentives to override its own rules. (The New York Times has sued OpenAI, alleging copyright infringement of its news content in connection with AI systems; OpenAI denies the claims.)
In its early days, Facebook promised users control over their data and a vote on policy changes. Yet those promises later unraveled. The company scrapped its public voting system for policy updates. Privacy changes touted as granting users greater data control were later found by the Federal Trade Commission (FTC) to have done the opposite—effectively making private information public. All this unfolded gradually under pressure from an advertising model that placed user engagement above all else.
An engagement-driven erosion of OpenAI’s own principles may already be underway. Tuning models to maximize engagement in order to generate more ad revenue would itself breach the stated principle that ads will not influence response content. Yet, according to reports, OpenAI is already optimizing for daily active users, likely by encouraging models to behave in increasingly flattering and sycophantic ways. Such optimization makes users feel more dependent on AI support in their daily lives. We have already seen the consequences of overreliance, including documented cases of “chatbot psychosis” described by psychiatrists in the Wall Street Journal, and allegations, reported by the AP, that ChatGPT reinforced suicidal ideation in some users.
Still, ad revenue does help ensure the most powerful AI tools don’t serve only those who can afford them. True, Anthropic says it will never run ads on Claude, but Claude’s weekly active users are a small fraction of ChatGPT’s 800 million, and its revenue strategy is entirely different. Moreover, top-tier subscriptions to ChatGPT, Gemini, and Claude now cost $200–$250 per month, more than ten times Netflix’s standard plan, for a single piece of software.
So the real question isn’t whether there should be ads—but whether we can design structures that avoid both excluding ordinary users and potentially manipulating them as consumers. I believe we can.
One approach is explicit cross-subsidization—using profits from one service or customer segment to offset losses in another. If a business deploys AI at scale to perform high-value labor previously done by humans (e.g., a real estate platform using AI to draft property listings or valuation reports), it should also pay an additional fee to subsidize free or low-cost access for others.
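To make the arithmetic concrete, here is a minimal sketch of how such a surcharge might be sized. Every number and name below is a hypothetical illustration, not a figure from OpenAI or any other provider; the point is only that a levy on high-value commercial usage can be solved for directly from the cost of the free tier it is meant to cover.

```python
# Hypothetical cross-subsidy arithmetic: what surcharge rate on
# commercial AI spend would fully fund a free tier?
# All figures are invented for illustration.

def required_surcharge_rate(
    free_users: int,
    monthly_cost_per_free_user: float,
    commercial_monthly_revenue: float,
) -> float:
    """Flat levy on commercial revenue that covers the free tier's cost."""
    free_tier_cost = free_users * monthly_cost_per_free_user
    return free_tier_cost / commercial_monthly_revenue

# Example: 500M free users costing $0.40/month each, set against
# $5B/month in commercial (API + enterprise) revenue.
rate = required_surcharge_rate(
    free_users=500_000_000,
    monthly_cost_per_free_user=0.40,
    commercial_monthly_revenue=5_000_000_000,
)
print(f"Required surcharge: {rate:.1%} of commercial revenue")
# -> Required surcharge: 4.0% of commercial revenue
```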
This approach draws inspiration from how we treat foundational infrastructure. The Federal Communications Commission (FCC) requires telecom providers to contribute to a fund that keeps phone and broadband service affordable in rural areas and for low-income households. Many states levy a public benefits surcharge on electricity bills to provide assistance to low-income residents.
A second option is to accept ads but pair them with genuine governance: not just a blog post full of principles, but a binding structure with independent oversight of how personal data is used. Precedents already exist. Germany’s co-determination law requires large companies like Siemens and Volkswagen to allocate up to half their supervisory board seats to workers, demonstrating that formal stakeholder representation inside private corporations can be legally mandated. Meta, likewise, is bound by the content moderation rulings of its Oversight Board, an independent body of outside experts (though the board’s effectiveness has been criticized).
The AI industry needs a hybrid of these approaches—a committee comprising both independent experts and representatives of the people whose data is affected, vested with binding authority over which conversational data may be used for targeted advertising, what constitutes a major policy change, and what users must be told.
A third option involves placing user data under independent control—via trusts or cooperatives—with a legal duty to act in users’ interests. For example, the Swiss cooperative MIDATA allows members to store their health data on an encrypted platform and decide case-by-case whether to share it with researchers. MIDATA’s members govern its policies through general assemblies and elect an ethics committee to review research access requests.
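The access rule such a trust enforces can be stated very simply in code. The sketch below is my own illustration of the “case-by-case” consent the paragraph describes, not MIDATA’s actual system: a researcher reaches a member’s record only if the ethics committee has approved the project and that member has consented to it.

```python
# Illustrative access rule for a MIDATA-style data trust.
# A record is released only when BOTH gates pass:
#   1. the ethics committee has approved the research project, and
#   2. the individual member has consented to that specific project.
# Hypothetical names and structure; not MIDATA's implementation.

from dataclasses import dataclass, field

@dataclass
class DataTrust:
    approved_projects: set[str] = field(default_factory=set)            # ethics committee
    member_consents: dict[str, set[str]] = field(default_factory=dict)  # member -> projects

    def grant_access(self, member_id: str, project_id: str) -> bool:
        committee_ok = project_id in self.approved_projects
        member_ok = project_id in self.member_consents.get(member_id, set())
        return committee_ok and member_ok

trust = DataTrust()
trust.approved_projects.add("sleep-study-2025")
trust.member_consents["member-42"] = {"sleep-study-2025"}

assert trust.grant_access("member-42", "sleep-study-2025")        # both gates pass
assert not trust.grant_access("member-42", "ad-targeting-pilot")  # neither gate passes
```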
None of these options are easy. But we still have time to refine them and avert the two outcomes I fear most: a free technology that manipulates its users, or one that serves only the narrow elite able to afford it.