
AI Read *1984* and Decided to Ban It
Those who resisted AI judgments lost their jobs, while those who signed off on AI judgments faced no consequences.
Author: Kuli, TechFlow
Last week, a secondary school in Manchester, UK, used AI to audit its own library.
The AI generated a list of 193 books slated for removal, each with an accompanying justification. George Orwell's *1984* appeared prominently on the list, flagged for containing "themes of torture, violence, and sexual coercion."
*1984* depicts a world where the government monitors everything, rewrites history, and dictates what citizens may or may not read. Today, an AI performed the same function for a school, and it likely had no idea what it was saying.
The school’s librarian deemed the recommendations unreasonable and refused to implement them in full.

The school immediately launched an internal investigation against her on "child safety" grounds, accused her of introducing inappropriate books into the library, and reported her to local government authorities. Under mounting pressure, she took medical leave and ultimately resigned.
The absurdity deepened when the local government’s investigation concluded she had indeed violated child safety procedures, upholding the complaint.
Caroline Roche, Chair of the School Library Association’s UK branch, stated that this finding effectively bars her from working in any school ever again.
The person who resisted AI’s judgment lost her job; the person who signed off on AI’s judgment faced no consequences.
Later, the school acknowledged in an internal document that all categorizations and justifications were AI-generated—stating verbatim: “Although the categorization was generated by AI, we consider it broadly accurate.”
A school delegated to AI the judgment of which books are appropriate for students; the AI returned an answer it did not itself understand, and a human manager approved it without close scrutiny.
After Index on Censorship, the UK-based freedom-of-expression organization, exposed the incident, the questions it raised extended far beyond the shelves of a single school:
When AI begins deciding for humans what content is acceptable and what is dangerous, who judges whether AI’s judgment is correct?
Wikipedia Slams the Door on AI
The same week, another institution answered that question—with action.
While schools let AI decide what people may read, Wikipedia—the world’s largest online encyclopedia—chose the opposite: it refuses to let AI dictate what the encyclopedia writes.
English Wikipedia formally adopted a new policy prohibiting the use of large language models (LLMs) to generate or rewrite article content. The vote passed 44–2.
The immediate trigger was an AI account named TomWikiAssist. In early March, this account autonomously created and edited multiple Wikipedia articles before being discovered and swiftly removed by the community.
An AI can draft an article in seconds—but volunteers often spend hours verifying the accuracy of facts, sources, and phrasing in a single AI-generated article.

Wikipedia’s editor community is finite. If AI could mass-produce content indefinitely, human editors simply couldn’t keep up.
But that isn’t even the most troubling part. Wikipedia is one of the most critical training data sources for global AI models. AI learns knowledge from Wikipedia, then uses that knowledge to write new Wikipedia articles—which, in turn, get ingested by next-generation AI models for further training.
Once inaccurate information generated by AI creeps in, it amplifies across this loop—a recursive, nested form of AI poisoning:
AI pollutes training data; polluted training data then poisons AI.
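To make the loop concrete, here is a minimal toy sketch (our own illustration; the 2% per-cycle error rate is an assumption, not a figure from any source). It models a corpus where each model generation trains on the previous generation's output, and errors, once introduced, are never removed:

```python
# Toy model of recursive data poisoning (illustrative assumptions only):
# each self-training cycle corrupts a small fixed fraction of the text
# that is still clean, and nothing in the loop ever corrects an error.

def polluted_fraction(generations: int, error_rate: float = 0.02) -> float:
    """Fraction of the corpus that is wrong after n self-training cycles."""
    clean = 1.0
    for _ in range(generations):
        clean *= 1.0 - error_rate  # each cycle corrupts a slice of what's left
    return 1.0 - clean

for n in (1, 5, 10, 25):
    print(f"after {n:>2} generations: {polluted_fraction(n):.1%} of the corpus is polluted")
```

Under these assumptions, roughly 40% of the corpus is wrong after 25 cycles. Real dynamics are far messier, but the direction is the point: without human review, errors accumulate rather than wash out.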
Still, Wikipedia’s policy leaves two narrow exceptions: editors may use AI to polish text they’ve written themselves, or to assist with translation. Yet the policy explicitly warns that AI may “go beyond your instructions, altering the meaning of the text so that it contradicts cited sources.”
Human writers make mistakes—and for over two decades, Wikipedia has corrected them through community collaboration. AI makes mistakes differently: its fabrications appear more convincing than truth—and it produces them at scale.
One school trusted AI’s judgment—and lost a librarian. Wikipedia chose distrust—and shut the door entirely.
But what if even those who build AI begin losing faith in it?
The AI Builders Themselves Are Getting Scared
While external institutions are closing doors on AI, AI companies themselves are pulling back.
That same week, OpenAI indefinitely suspended ChatGPT’s “Adult Mode”—a feature originally scheduled to launch in December last year, allowing age-verified adult users to engage in sexually explicit conversations with ChatGPT.
CEO Sam Altman personally announced the plan in October last year, stating the company intended to “treat adult users like adults.”
Yet after three postponements, the feature was scrapped outright.
According to the UK’s Financial Times, OpenAI’s internal Health Advisory Board voted unanimously against the feature. Advisors voiced concrete concerns: users might develop unhealthy emotional dependencies on AI, and minors would inevitably circumvent age verification.
One advisor put it more bluntly: without major improvements, the feature risked becoming a “sexy suicide coach.”
The age-verification system's error rate exceeds 10%. Against ChatGPT's weekly active user base of 800 million, even a 10% error rate translates to roughly 80 million misclassified people.
Adult Mode wasn’t the only product axed this month. AI video tool Sora and ChatGPT’s built-in instant checkout feature were also withdrawn simultaneously. Altman explained the company would refocus on core business lines and eliminate “side projects.”
Yet OpenAI is also preparing for an IPO.
For a company racing toward public listing, the concentrated removal of potentially controversial features suggests something more precise than “refocusing.”
Five months ago, Altman pledged to treat users like adults. Five months later, he realized his own company still hadn’t figured out what AI should—or shouldn’t—allow users to access.
Even the builders of AI lack answers. So who draws the line?
The Unbridgeable Speed Gap
Lay these three incidents side by side, and a central conclusion emerges clearly:
The speed at which AI generates content has already outstripped human capacity to review it—by orders of magnitude.
In this context, the Manchester school's decision becomes easy to understand. How long would it take a librarian to read and assess all 193 books? The AI ran through them in minutes.
The headteacher chose the minutes-long solution. Did he truly believe in AI’s judgment? I suspect he simply didn’t want to spend the time.
This is an economic problem: generation costs approach zero, while review costs fall entirely on humans.
Hence every institution affected by AI is forced into the crudest possible response: Wikipedia bans outright; OpenAI scraps entire product lines. None of these decisions reflect deep deliberation—they’re stopgap measures taken because there’s no time to think things through.
“Plug the hole first” is becoming the norm.
AI capabilities evolve every few months, yet discussions about what content AI should or shouldn't handle still lack even a coherent international framework. Each institution draws its own line in its own backyard, with no coordination and frequent contradictions between those lines.
AI’s speed continues accelerating. Human reviewers won’t magically multiply. This gap will only widen—until one day, something far more serious than banning 1984 occurs.
By then, drawing the line may be too late.