🔆 The Compelling Sense of Reality: Hallucinations
AI hallucinations, the difference between liars and bullshitters, and Finland's screen ban in schools
🗞️ Issue 68 // ⏱️ Read Time: 7 min
In this week's newsletter
What we’re talking about: AI hallucinations, where systems like Cursor's customer support confidently generate completely false information that appears credible.
How it's relevant: These hallucinations are already causing millions in reputational and financial damage to major companies and destroying customer trust at organizations across industries.
Why it matters: While hallucinations cannot be eliminated entirely, implementing proper safeguards like human oversight and RAG technology can transform this risk into a competitive advantage.
[Looking for the Big Tech News of the Week section? Skip to the end of this newsletter.]
Hello 👋
Last week, an AI-powered customer support system for Cursor (an AI coding tool) fabricated an entire company policy out of thin air, causing a significant user exodus. When users began experiencing unexpected logouts while switching between devices, they naturally contacted support for answers. The AI confidently explained this was "expected behaviour" under a new one-device login policy.
The problem? No such policy existed. It was completely made up by the AI.
The false explanation spread like wildfire. Within hours, dozens of users canceled their subscriptions, community forums erupted with complaints, and by the time Cursor's team realized what had happened, the company was facing a full-blown crisis. What was actually a simple backend bug became a case study in what AI researchers call "hallucinations" and in their potentially devastating business impact.
As Berkeley researcher Shomit Ghose puts it: "We may now be finding that the AI 'ghost in the machine' that we all should fear is not sentience, but simple hallucination."
Understanding AI Hallucinations: Making Something Out of Nothing
AI hallucinations occur when AI systems generate information that sounds plausible but is factually incorrect or entirely fabricated. Unlike human hallucinations (a false perception of objects or events involving our senses, which stems from neurochemical factors), AI hallucinations are an inherent product of how these systems work. In a sense, the term is misleading: when a model hallucinates, it is simply outputting false information, delivered with the same fluency and confidence as everything else it produces.
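To see why fabrication is baked into the generation process, here is a deliberately toy sketch of a next-token sampling loop. Everything in it is invented for illustration: the tokens, the probability table, and the `NEXT_TOKEN_PROBS` structure do not correspond to any real model or to Cursor's support bot. The point is that the loop only asks which continuation is statistically likely, never whether the resulting claim is true.

```python
import random

# Toy, invented next-token probabilities. The "model" has learned that a
# policy-style explanation is a likely continuation of a support question,
# whether or not such a policy exists. Nothing in this table encodes truth.
NEXT_TOKEN_PROBS = {
    ("why", "was", "I", "logged", "out?"): {"This": 0.6, "Sorry,": 0.3, "Unknown.": 0.1},
    ("This",): {"is": 0.9, "was": 0.1},
    ("This", "is"): {"expected": 0.7, "a": 0.3},
    ("This", "is", "expected"): {"behaviour": 0.8, ".": 0.2},
    ("This", "is", "expected", "behaviour"): {
        "under our new one-device login policy.": 0.75,  # plausible, but false
        "while we investigate a backend bug.": 0.25,     # the actual cause
    },
}

def generate(prompt_tokens, max_tokens=5):
    """Sample a continuation token by token.

    The loop only asks "what usually comes next?"; it never asks
    "is this claim true?", which is why confident fabrications emerge.
    """
    tokens = list(prompt_tokens)
    context = tuple(prompt_tokens)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(context)
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        token = random.choices(choices, weights=weights, k=1)[0]
        tokens.append(token)
        # Context shifts to the generated answer so far (a toy simplification).
        context = tuple(tokens[len(prompt_tokens):])
    return " ".join(tokens[len(prompt_tokens):])

if __name__ == "__main__":
    print(generate(("why", "was", "I", "logged", "out?")))
    # Most runs print the confident but fabricated policy answer.
```

Run this a few times and the fabricated "one-device policy" answer dominates, simply because the toy distribution rates it as the more likely continuation. The same dynamic, at vastly larger scale, is what lets a support bot produce a confident answer with no policy behind it.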
Researchers classify these hallucinations into two main types: