AI Fundamentals
beginner

AI Hallucinations

The Confident Student Who Makes Things Up

6 min read

The Analogy

The student who never says 'I don't know' — they always give an answer, even if they're inventing it.

In every class, there's a student who sounds confident about everything. Ask them a question they don't know and they'll give you a detailed, convincing answer — that's completely wrong. AI models do this too. Because they're trained to predict plausible text, they sometimes produce confident, fluent answers that are factually incorrect. This is called hallucination.

In Plain English

AI hallucination is when a model generates information that sounds correct but is factually wrong or completely made up. The AI isn't lying — it's producing statistically plausible text that happens to be false. Always verify important AI outputs.


The Technical Picture

Hallucinations arise from the LLM's objective of maximising token prediction probability, not factual accuracy. The model interpolates from patterns in training data, producing plausible-sounding but factually unsupported outputs, especially for obscure facts, citations, statistics, or recent events beyond the knowledge cutoff.
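The objective described above can be illustrated with a toy "model": a lookup table of next-token probabilities. Everything here (the prompt, the candidate answers, the numbers) is invented for illustration, but it shows the core failure mode: the most statistically plausible continuation wins, whether or not it is true.

```python
# Toy "language model": a table of next-token probabilities standing in
# for patterns learned from training data. All values are invented.
next_token_probs = {
    "The capital of Freedonia is": {
        "Paris": 0.40,    # fluent and confident, but wrong (Freedonia is fictional)
        "London": 0.35,   # equally plausible-sounding, equally wrong
        "unknown": 0.25,  # the honest answer scores lowest on plausibility
    }
}

def complete(prompt: str) -> str:
    """Pick the highest-probability continuation: plausibility, not truth."""
    probs = next_token_probs[prompt]
    return max(probs, key=probs.get)

print(complete("The capital of Freedonia is"))  # -> Paris
```

A real LLM is vastly more sophisticated, but the selection criterion is the same: nothing in the objective rewards saying "I don't know" when a fluent guess scores higher.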

Real-World Examples

  • An AI generating a fake academic citation that looks completely real
  • ChatGPT confidently stating an incorrect date for a historical event
  • A legal AI inventing case law that doesn't exist — this happened in the 2023 US case Mata v. Avianca, where lawyers were sanctioned after submitting ChatGPT-fabricated citations
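The fake-citation examples above suggest a practical habit: cross-check any AI-supplied reference against a trusted source before relying on it. A minimal sketch, using an invented in-memory index as a stand-in for a real catalogue or search API:

```python
# Hedged sketch: verify an AI-cited paper against a trusted index before
# believing it. "trusted_index" is an invented stand-in for a real
# database such as a library catalogue or citation service.
trusted_index = {
    "Attention Is All You Need": 2017,  # a real, well-known paper
}

def verify_citation(title: str, year: int) -> bool:
    """Return True only if the title exists in the index with a matching year."""
    return trusted_index.get(title) == year

print(verify_citation("Attention Is All You Need", 2017))   # -> True
print(verify_citation("Deep Learning for Everything", 2019))  # -> False: likely hallucinated
```

The exact lookup mechanism matters less than the principle: treat an AI-generated citation as unverified until it is found in an independent source.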

Key Takeaway

AI hallucinations are confident-sounding wrong answers — always verify anything critical from an AI.
