How AI Hallucinations Are Corrupting Archival Truth
Wikipedia has long been regarded as the gold standard for human-vetted information. A recent clash between the Open Knowledge Association (OKA) and veteran Wikipedia editors has highlighted a growing problem: AI hallucinations.
What started as an ambitious project to translate and expand the world’s most famous encyclopedia has turned into a cautionary tale about the erosion of AI trust.
What is the OKA?
The Open Knowledge Association is a non-profit organization dedicated to expanding Wikipedia’s reach, particularly in underrepresented languages. Its strategy involves:
- Providing stipends to full-time contributors and translators.
- Using large language models (LLMs) like Grok and ChatGPT to automate the heavy lifting of translation.
- Hiring contractors (often from the Global South) to oversee these AI-generated drafts.
On paper, it’s a brilliant way to bridge the knowledge gap. In practice, it’s creating a hallucination factory that has real-world implications.
The Hallucinations: When AI Rewrites History
Wikipedia editors recently sounded the alarm after noticing bizarre errors in OKA-sponsored articles. Unlike simple typos, these were AI hallucinations, the kind that look perfectly real but are entirely fabricated. They included:
- Phantom citations - AI-generated articles cited real books and page numbers that, upon inspection, had nothing to do with the topic.
- Swapped sources - Facts about one historical figure were accidentally attributed to another because the AI blended context.
- Logical slop - In one instance involving the French La Bourdonnaye family, the AI invented an entire origin story and linked it to a source that didn't even mention the family.
Why is this Happening?
As any AI enthusiast knows, LLMs are statistical engines, not fact-checkers. When an AI translates a complex historical article, it isn't reading the facts; it’s predicting the next likely word. If the training data is thin on a specific niche topic, the AI simply fills in the gaps with plausible-sounding fiction.
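To make that concrete, here is a toy illustration in Python (a minimal sketch, not the OKA pipeline or any real LLM): even a tiny bigram model, trained on a handful of true sentences, will "complete" a prompt by stitching together statistically likely words, producing fluent text with no regard for whether the result is true.

```python
import random
from collections import defaultdict

# Toy corpus: a few true sentences about two different historical figures.
corpus = (
    "the duke was born in brittany and served in the navy . "
    "the admiral was born in paris and served in the senate ."
).split()

# Build a bigram table: for each word, record which words follow it.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def complete(prompt, length=8):
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# The model has no concept of which facts belong to whom, so it can blend them:
# a fluent but fabricated completion such as attributing Paris to the duke is possible.
print(complete("the duke was born in"))
```

A frontier LLM is vastly more sophisticated, but the underlying failure mode is the same: when the training data is thin, the most statistically likely continuation can be confidently wrong.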
The Human Cost of Cheap Information
The OKA’s model relies on human-in-the-loop verification, but Wikipedia editors found that the human part was failing. Many contractors, pressured by the volume of work or lacking the specific expertise to spot subtle errors, were simply copy-pasting AI output directly into the encyclopedia.
"The issue isn't just the AI," noted one veteran editor. "It's the false sense of security that a human is checking it when they're actually just acting as a conduit for AI slop."
Wikipedia Strikes Back
The Wikipedia community hasn't taken this lightly. In response to the OKA’s hallucination surge, it has implemented the following restrictions:
- Strict policies - OKA translators who fail verification four times are now being permanently blocked.
- Presumptive deletion - Large swaths of OKA-generated content are being flagged for deletion unless a human in good standing can manually verify every single sentence.
- New guardrails - The OKA has been forced to implement a second AI-checking-AI protocol to flag discrepancies before they reach the public (a rough sketch of the idea follows below).
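The details of the OKA’s protocol aren't public, but the general pattern is straightforward: hand each draft claim, together with the text of the source it cites, to a second model and ask whether the source actually supports the claim. Below is a minimal sketch assuming the OpenAI Python client; the prompt wording and the flag_unsupported_claims helper are hypothetical, not the OKA's actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flag_unsupported_claims(claims, source_text, model="gpt-4o-mini"):
    """Ask a second model whether each draft claim is supported by the cited source.

    Returns the claims the checker could not verify, for human review.
    Hypothetical helper: the OKA's real protocol is not public.
    """
    flagged = []
    for claim in claims:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Answer only YES or NO: is the claim fully supported "
                            "by the source text? If unsure, answer NO."},
                {"role": "user",
                 "content": f"Claim: {claim}\n\nSource text:\n{source_text}"},
            ],
            temperature=0,
        )
        verdict = response.choices[0].message.content.strip().upper()
        if not verdict.startswith("YES"):
            flagged.append(claim)  # route to a human editor instead of publishing
    return flagged
```

The important design choice is the failure mode: anything the checker cannot confirm goes to a human editor rather than straight into the encyclopedia.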
Can We Automate Truth?
The OKA’s struggle shows that while AI can be great for brainstorming or coding, it remains a dangerous tool for archival truth. Every time a hallucination makes it onto Wikipedia, it risks being cited by other AI models, creating a feedback loop of falsehoods that could be impossible to untangle.
For more interesting technology perspectives, return to our blog again soon.