AI Banned 1984. No One Saw the Irony.

As schools, Wikipedia, and even OpenAI itself pull back from trusting AI with sensitive decisions, the real problem is becoming clear: AI is scaling far faster than any human oversight system built to contain it.

Last week, a secondary school in Manchester, England, used AI to review its own library.

The AI produced a removal list of 193 books, each with an accompanying justification. George Orwell's 1984 was on it, flagged for "containing themes of torture, violence, and sexual coercion."

1984 depicts a world where the government surveils everything, rewrites history, and decides what citizens can and cannot read. Now an AI has done the same thing to a school library, quite possibly without any understanding of what it was doing.

The school's librarian found the recommendations unreasonable and refused to carry out the AI's suggestions in full.

The school then launched an internal investigation against her on grounds of "child safety," accusing her of introducing inappropriate books into the library, and reported her to the local authority. She went on sick leave due to the pressure and ultimately resigned.

The absurd part: the local authority's investigation concluded that she had indeed violated child safety procedures. The complaint was upheld.

Caroline Roche, chair of the School Libraries Group in the UK, said this outcome effectively meant the librarian could never work in any school again.

The person who pushed back against the AI's judgment lost her job. Those who approved it faced no consequences.

The school later acknowledged in internal documents that all classifications and justifications were AI-generated. The exact wording: "Although the classifications were generated by AI, we believe they are broadly accurate."

A school handed the judgment of "which books are appropriate for students" to an AI. The AI returned answers it did not itself understand. Then a human administrator rubber-stamped them without scrutiny.

After the story was exposed by Index on Censorship, a UK free speech organization, the question it raised went far beyond one school's bookshelves:

When AI begins deciding for humans what content is appropriate and what is dangerous, who determines whether the AI's judgment is correct?

Wikipedia Closes Its Door to AI

That same week, another institution answered the question through action.

A school let AI decide what people could read. Wikipedia, the world's largest online encyclopedia, made the opposite choice: it would not let AI decide what the encyclopedia says.

The English-language Wikipedia formally passed a new policy banning the use of large language models to generate or rewrite article content. The vote was 44 in favor, 2 against.

The immediate trigger was an AI account called TomWikiAssist. In early March this year, the account autonomously created and edited multiple Wikipedia entries before the community caught it and intervened.

AI can write an article in seconds. But for a volunteer to verify whether the facts, sources, and phrasing in an AI-generated article are accurate takes hours.

Wikipedia's editing community is finite. If AI can produce content at unlimited scale, human editors simply cannot keep up with review.

And that is not even the most troubling part. Wikipedia is one of the most important training data sources for AI models worldwide. AI learns from Wikipedia, then uses what it learned to write new Wikipedia entries, which are then ingested by the next generation of AI models for further training.

Once AI-generated misinformation enters this loop, it amplifies through each cycle, creating a recursive form of AI data poisoning:

AI contaminates the training data. The training data contaminates AI.
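To see how quickly such a loop can degrade a corpus, here is a minimal toy simulation. It is my own sketch, not anything from Wikipedia or the reporting: it assumes each training cycle injects fresh AI errors equal to a fixed share of the corpus, while human review catches only a fraction of the errors already present. Both rates are illustrative assumptions.

```python
# Toy feedback-loop model of recursive data poisoning.
# Assumed, illustrative rates: each cycle, AI-generated text injects new
# errors equal to 2% of the corpus, while reviewers catch 5% of existing ones.

def contamination(cycles: int, inject: float = 0.02, review: float = 0.05) -> float:
    """Return the fraction of the corpus containing errors after N cycles."""
    bad = 0.0
    for n in range(1, cycles + 1):
        # Errors that survive review, plus this cycle's injection, capped at 100%.
        bad = min(1.0, bad * (1 - review) + inject)
        print(f"cycle {n:2d}: {bad:5.1%} of corpus contaminated")
    return bad

contamination(cycles=10)
```

Under these assumptions, contamination climbs toward a plateau of inject/review, about 40% here. Shrink the review rate toward zero, as the volunteer-capacity argument above suggests will happen, and the plateau moves toward 100%.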

Wikipedia's policy did leave two narrow exceptions: editors may use AI to polish their own writing and to assist with translation. But the policy specifically warns that AI will "go beyond what you asked, alter the meaning of the text, and make it inconsistent with cited sources."

When human writers make errors, Wikipedia's community collaboration model has spent over two decades correcting them. AI makes mistakes differently. What it fabricates looks more convincing than the real thing, and it can do so at industrial scale.

A school trusted AI's judgment and lost a librarian. Wikipedia chose not to trust it and shut the door entirely.

But what happens when the people who build AI start losing trust in it themselves?

The Builders of AI Get Scared First

While institutions close their doors to AI, the companies building it are pulling back too.

That same week, OpenAI indefinitely shelved ChatGPT's "adult mode." The feature, originally planned for launch last December, would have allowed age-verified adult users to engage in erotic conversations with ChatGPT.

CEO Sam Altman personally previewed it last October, saying the company wanted to "treat adult users like adults."

After three delays, the feature was scrapped entirely.

According to the Financial Times, OpenAI's well-being advisory council voted unanimously against the feature. The advisors' concerns were specific: users would develop unhealthy emotional dependence on AI, and minors would inevitably find ways to bypass age verification.

One advisor put it more bluntly: without major improvements, the feature could become a "sexy suicide coach."

The error rate for age verification systems exceeds 10%. At ChatGPT's scale of 800 million weekly active users, 10% means tens of millions of people could be misclassified.
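The arithmetic is worth making explicit. A back-of-envelope check using the article's own two figures:

```python
# Back-of-envelope: misclassifications at a 10% age-verification error rate.
weekly_active_users = 800_000_000
error_rate = 0.10  # stated lower bound for age-verification errors

print(f"{weekly_active_users * error_rate:,.0f} potential misclassifications")
# -> 80,000,000
```

That is 80 million people, on the simplifying assumption that every weekly active user passes through verification exactly once.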

Adult mode was not the only product cut that month. The AI video tool Sora and ChatGPT's built-in instant checkout feature were also taken offline in the same period. Altman said the company needed to focus on its core business and eliminate "side quests."

But OpenAI is simultaneously preparing for an IPO.

A company sprinting toward a public listing while aggressively cutting features that could invite controversy. The more accurate term for that may not be "focus."

Five months ago, Altman was talking about treating users like adults. Five months later, he discovered that his own company still had not figured out what AI should and should not let users access.

Even the people building AI do not have the answer. So who, exactly, is supposed to draw that line?

The Speed Gap No One Can Close

When you place these three stories side by side, a core conclusion emerges:

The speed at which AI produces content and the speed at which humans can review it are no longer in the same order of magnitude.

The Manchester school's decision becomes easy to understand in this context. How long would it take for a librarian to read all 193 books and make individual judgments? How long for AI to run through them? Minutes.

The headteacher chose the option that took minutes. Did they genuinely trust the AI's judgment? More likely, they simply did not want to spend the time.

This is an economic problem. The cost of generation approaches zero. The cost of review falls entirely on humans.
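Plugging in the kind of figures the Wikipedia section describes, generation measured in seconds and review in hours, shows how lopsided the economics get. The exact numbers below are my illustrative assumptions, not measured values:

```python
# Speed-gap sketch: how many human reviewers one AI account can saturate.
# Assumed, illustrative numbers: 30 seconds to generate an article,
# 3 hours to verify one, an 8-hour volunteer workday.

GEN_SECONDS = 30       # AI drafting time per article (assumption)
REVIEW_HOURS = 3       # human verification time per article (assumption)
WORKDAY_HOURS = 8

generated_per_day = 24 * 3600 / GEN_SECONDS      # one always-on AI account
reviewed_per_day = WORKDAY_HOURS / REVIEW_HOURS  # one volunteer

print(f"AI output:        {generated_per_day:,.0f} articles/day")
print(f"Human review:     {reviewed_per_day:.1f} articles/day")
print(f"Reviewers needed: {generated_per_day / reviewed_per_day:,.0f} per AI account")
```

Under these assumptions, roughly a thousand full-time volunteers are needed to keep pace with a single AI account. No review community scales that way.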

As a result, every institution affected by AI is forced to respond in the bluntest possible way: Wikipedia bans it outright; OpenAI cuts entire product lines. None of these responses is the product of careful deliberation. They are reactions driven by urgency: no time to think it through, just plug the gap for now.

"Block it first, figure it out later" is becoming the norm.

AI capabilities iterate every few months, yet there is not even a credible international framework for deciding what content AI should be allowed to touch. Each institution draws its own line within its own walls. Those lines contradict each other, and no one is coordinating.

AI is only getting faster. The number of people available to review its output is not growing. This gap will only widen, until one day something happens that is far more serious than banning 1984.

By the time anyone tries to draw the line then, it may already be too late.

 
