
What is AI Hallucination?

Hallucination is when an AI makes something up and presents it as fact — confidently, convincingly, and completely wrong. It might cite a study that doesn't exist, quote a law that was never passed, or give you directions to a restaurant that closed five years ago.

This happens because AI doesn't "know" things the way humans do. It generates text by predicting what words should come next based on patterns. Sometimes those patterns lead it to produce plausible-sounding information that has no basis in reality.
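To make that concrete, here's a minimal sketch in Python (standard library only) of the simplest possible "predict the next word" model: it counts which words follow which in a tiny sample text, then generates by chaining the likeliest continuations. It's a toy, nowhere near how real language models work, and every string in it is invented for illustration. But it shows the key point: nothing in the loop checks whether the output is true.

```python
from collections import Counter, defaultdict
import random

# Toy "training data": the model learns only word-pair patterns from this.
training_text = (
    "the study found that the model made errors and "
    "the study found that the results were mixed"
).split()

# Count which words follow each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following.get(word)
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate by chaining predictions. Note what's missing: any step that
# asks whether the sentence being built is actually true.
word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Run it a few times and it produces fluent-looking fragments stitched together purely from patterns. Scale the same idea up by billions of parameters and you get both the fluency and the hallucinations.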

The simple version: Hallucination = AI making stuff up with a straight face. It's not lying on purpose — it literally can't tell the difference between what it learned and what it's inventing.

How to protect yourself

A few habits go a long way:

- Ask the AI for sources, then actually check them. Real citations survive a search; hallucinated ones don't.
- Be extra skeptical of specifics: names, dates, numbers, quotes, laws, and anything medical or legal.
- Cross-check anything you plan to act on against a second, independent source.
- Prefer tools that search the web or cite their sources, so you can see where an answer came from.

FAQ

Will hallucination ever be fully solved?

Probably not completely — it's somewhat inherent to how language models work. But it's getting much better. Techniques like RAG (retrieval-augmented generation), better training, and fact-checking layers have dramatically reduced hallucination rates. Think of it less as a bug to fix and more as a limitation to manage.
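Here's the RAG idea as a minimal Python sketch, assuming a toy keyword match in place of the vector search real systems use, and a print statement in place of the actual model call. The documents, question, and function names are all invented for illustration.

```python
# A tiny stand-in "knowledge base". Real systems retrieve from live
# databases, search indexes, or document stores.
documents = [
    "The restaurant at 5th and Main closed in 2019.",
    "The city library is open Monday through Saturday.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by how many words they share with the question.
    (Toy scoring; production RAG uses embedding similarity instead.)"""
    words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

question = "Is the restaurant at 5th and Main still open?"
context = "\n".join(retrieve(question, documents))

# The retrieved text is pasted into the prompt, so the model answers
# from real documents instead of guessing from training patterns.
prompt = (
    "Answer using only the context below. If the context doesn't "
    "cover it, say so.\n\n"
    f"Context: {context}\n\nQuestion: {question}"
)
print(prompt)  # a real system would now send this prompt to the model
```

The point is the order of operations: look up first, generate second. The model still writes the answer, but it's grounded in text that actually exists.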

Related Terms

Large Language Model

The technology behind ChatGPT, Claude, and Gemini — an AI trained on vast amounts of text.

RAG

Short for retrieval-augmented generation: how AI looks up real, current information instead of relying only on its training data.
