Introduction
Imagine asking a helpful assistant a question and getting an answer that sounds confident but is wrong or made up. AI systems have this problem too, and relying on such answers can lead to confusion or mistakes.

It is like talking to a friend who loves to tell stories but sometimes invents details to fill in gaps. They speak confidently, yet not everything they say is true. You enjoy the stories, but you know to double-check the important facts.
┌───────────────────────────────┐
│         User asks AI          │
└──────────────┬────────────────┘
               │
               ▼
┌───────────────────────────────┐
│ AI generates answer based on  │
│       patterns in data        │
└──────────────┬────────────────┘
               │
       ┌───────┴────────┐
       │                │
       ▼                ▼
┌───────────────┐ ┌───────────────┐
│ Correct info  │ │ Hallucinated  │
│ (True facts)  │ │ info (False)  │
└───────────────┘ └───────────────┘
                          │
                          ▼
                  ┌───────────────┐
                  │ User verifies │
                  │  info source  │
                  └───────────────┘
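The flow in the diagram above can be sketched as a toy simulation. This is a minimal illustration, not a real AI system: the names (`LEARNED_PATTERNS`, `TRUSTED_FACTS`, `answer_from_patterns`, `verify`) and the sample facts are all hypothetical, chosen only to show how a pattern-based answer can be confidently wrong and why checking it against a trusted source matters.

```python
# Hypothetical trusted source the user consults to verify answers.
TRUSTED_FACTS = {
    "capital of France": "Paris",
    "boiling point of water (C)": "100",
}

# Patterns the toy "model" has learned; one entry is wrong,
# standing in for a hallucination stated with full confidence.
LEARNED_PATTERNS = {
    "capital of France": "Paris",
    "boiling point of water (C)": "90",  # confidently wrong
}

def answer_from_patterns(question: str) -> str:
    """Answer purely from stored patterns, whether right or wrong."""
    return LEARNED_PATTERNS.get(question, "unknown")

def verify(question: str, answer: str) -> bool:
    """User-side check: does the answer match the trusted source?"""
    return TRUSTED_FACTS.get(question) == answer

for question in TRUSTED_FACTS:
    answer = answer_from_patterns(question)
    status = "verified" if verify(question, answer) else "hallucinated"
    print(f"{question!r} -> {answer!r} ({status})")
```

Both answers come back sounding equally certain; only the verification step separates the true fact from the hallucinated one.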