AI Isn’t Hallucinating. We Are Mislabeling Its Errors

A growing number of psychologists and AI researchers are urging the tech industry to abandon the term “AI hallucination,” arguing that it misrepresents how large language models (LLMs) work and fuels misconceptions that could be dangerous for users. According to these experts, AI systems do not hallucinate: in medicine, a hallucination is a conscious sensory experience that occurs without an external stimulus. Instead, the systems “confabulate,” generating statements that sound plausible but are factually incorrect.

Researchers say the distinction is more than semantics. Because hallucination implies consciousness or perception, the term encourages people to attribute intent, agency, or emotion to AI systems. These misconceptions can distort public understanding and increase the risks of harmful human–AI interactions. A recent New York Times investigation identified nearly 50 cases of individuals experiencing mental health crises during chatbot conversations, with users treating AI systems as sentient companions. 

Psychologists Gerald Wiest and Oliver Turnbull, writing in NEJM AI, argue that confabulation is a more accurate analogy: an active production of false information without malicious intent or sensory experience. Neuroscientists have echoed this view—AI systems generate errors because of probabilistic text prediction, not because they misperceive reality. 
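
To see why researchers describe the mechanism as statistical rather than perceptual, consider a minimal sketch of probabilistic next-token sampling. This is an illustration only: the prompt, vocabulary, and probabilities below are invented, and real models operate over enormous vocabularies and contexts.

```python
import random

# Hypothetical next-token distribution for one prompt (invented numbers,
# for illustration only). A real LLM learns distributions like this over
# a vocabulary of tens of thousands of tokens.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # correct, and the single most likely token
        "Sydney": 0.40,     # wrong, but statistically plausible in training text
        "Melbourne": 0.05,  # wrong and less likely
    },
}

def sample_next_token(prompt: str) -> str:
    """Sample a continuation in proportion to its learned probability.

    Nothing here checks truth: the model only weighs how often each
    continuation followed similar text in its training data.
    """
    dist = NEXT_TOKEN_PROBS[prompt]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

prompt = "The capital of Australia is"
for _ in range(5):
    # The wrong answer appears a substantial fraction of the time,
    # delivered exactly as fluently as the right one.
    print(prompt, sample_next_token(prompt))
```

Sampling a statistically plausible but false continuation, with no perception and no intent involved, is precisely the behavior the confabulation label is meant to capture.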

The misuse of “hallucination” has also seeped into academic literature. A 2023 University of Maryland review found more than 300 papers using the term inconsistently, without a shared definition. Some scholars propose alternatives such as “non-sequitur” or “unrelated response,” while others suggest focusing on the underlying cause: algorithmic shortcomings rooted in training data and statistical inference. 

Key points driving the shift: 

  • AI has no consciousness or perception, so “hallucination” is an inaccurate label. 
  • Confabulation better reflects how models generate incorrect outputs. 
  • Language shapes public perception and risk; misleading terms can encourage overtrust. 
  • Researchers warn against anthropomorphizing AI, which may worsen safety issues. 

As generative AI becomes more embedded in daily life, experts argue it is critical to use precise language. People hallucinate; AI models do not. They simply produce errors—sometimes convincingly—based on how they were trained. 

 

Source: https://www.zdnet.com/article/stop-saying-ai-hallucinates-it-doesnt-and-the-mischaracterization-is-dangerous/
