Why AI Makes Things Up: Understanding Hallucination

What Is Hallucination?

AI hallucination is when a model generates plausible-sounding but factually incorrect content — and does so without any signal that something is wrong. The model doesn't know it's wrong: it predicts text that fits the pattern of a correct answer, even when the answer it produces isn't correct.

Real examples of hallucinated AI output:

- Fabricated legal case citations that look real (correct court name, plausible case name, believable date) but don't exist
- Invented research paper abstracts with fake author names and DOIs
- Incorrect statistics presented with false precision
- Biographical details about real people that contain invented facts

Why It Happens: The Prediction Engine Problem

The root cause is the training objective: predict the next most likely token. When an AI is asked about a specific case study, citation, or fact that:

- Didn't appear often in training data
- Falls after the training data cutoff
- Requires precise detail the model doesn't have stored

...the model fills in with what sounds right, not what is right. It's the same mechanism that produces a fluent essay — but applied to fact-retrieval, it produces confident fabrications.
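If you want to see that mechanism in miniature, the sketch below fakes the selection step with invented candidate continuations and made-up probabilities: the choice simply takes whatever scores as most plausible, and nothing in that step checks whether the text is true.

```python
# Toy illustration of next-token-style selection (all strings and numbers invented).
# A language model scores candidate continuations and emits the most plausible one;
# nothing in this step checks whether the chosen text is factually true.

candidates = {
    "Smith v. Jones, 482 U.S. 131 (1987)": 0.41,  # plausible-looking but fabricated
    "Brown v. Board of Education, 347 U.S. 483 (1954)": 0.23,  # real, but off-topic here
    "I can't find a reliable citation for that": 0.22,
    "No such case appears to exist": 0.14,
}

# The training objective rewards picking the highest-probability continuation...
best = max(candidates, key=candidates.get)
print(best)  # -> the fabricated citation, stated as confidently as anything else
```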

Which Tasks Have the Highest Hallucination Risk?

| High hallucination risk | Lower hallucination risk |
| --- | --- |
| Specific citations and case numbers | General explanations of concepts |
| Statistics and precise numbers | Brainstorming and ideation |
| Recent news (after training cutoff) | Writing assistance and editing |
| People's exact quotes | Code in common languages |
| Niche or specialist knowledge | Widely documented historical facts |

The Confident Tone Problem

Hallucination is especially dangerous because AI doesn't hedge. A human expert who doesn't know something says "I'm not sure." An AI model predicts the most plausible-sounding response — which is delivered in the same confident tone whether it's absolutely correct or completely invented.

This is called calibration failure: the model's expressed confidence doesn't match its actual accuracy.
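A quick way to picture calibration failure is to compare average expressed confidence with measured accuracy over a batch of answers. The numbers below are entirely made up for illustration; the point is the gap between the two figures.

```python
# Toy calibration check with invented data: each pair is
# (confidence the model expressed, whether the answer was actually correct).
answers = [
    (0.95, True), (0.92, False), (0.97, True), (0.94, False), (0.96, False),
]

avg_confidence = sum(conf for conf, _ in answers) / len(answers)
accuracy = sum(correct for _, correct in answers) / len(answers)

print(f"Average expressed confidence: {avg_confidence:.0%}")  # about 95%
print(f"Actual accuracy:              {accuracy:.0%}")        # 40%
# The gap between those two numbers is the calibration failure.
```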

Practical Habits for Protecting Yourself

  1. Verify specific facts — Any claim involving specific names, numbers, dates, or citations should be checked against an authoritative source
  2. Ask for sources, then check them — AI can cite sources, but those citations are sometimes hallucinated too. Verify that cited sources exist and actually say what the AI claims they say
  3. Be more skeptical for niche topics — General questions are safer than highly specific ones
  4. Use search-grounded AI when accuracy matters — Tools like Perplexity.ai connect the model to live search results, which dramatically reduces hallucination on factual questions (see the sketch after this list)
  5. Treat AI output as a draft — The AI's job is to produce a useful first draft; your job is critical review
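For habit 4, the grounding pattern behind such tools looks roughly like the sketch below: retrieve live sources first, then instruct the model to answer only from those sources and to cite them. The `search_web` and `ask_model` functions here are hypothetical placeholders, not any specific product's API.

```python
# Sketch of the search-grounding pattern (retrieval-augmented answering).
# `search_web` and `ask_model` are hypothetical stand-ins for a real search
# API and model client; swap in whatever tools you actually use.
from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def search_web(query: str, max_results: int = 5) -> list[Snippet]:
    """Placeholder: call a real search API here."""
    return [Snippet("https://example.org", "...retrieved passage...")][:max_results]

def ask_model(prompt: str) -> str:
    """Placeholder: call a real model API here."""
    return "(model answer grounded in the cited sources)"

def grounded_answer(question: str) -> str:
    # Fetch live sources instead of relying on the model's memory.
    snippets = search_web(question)
    context = "\n\n".join(f"[{i+1}] {s.url}\n{s.text}" for i, s in enumerate(snippets))
    prompt = (
        "Answer using ONLY the sources below. Cite them as [1], [2], ... "
        "and reply 'not found in sources' if they don't support an answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(grounded_answer("When was the cited case actually decided?"))
```

The important part is the instruction to refuse when the sources don't support an answer: that is what pushes the model away from "sound plausible" and toward "stay grounded."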