Introduction
Imagine reading a story where some facts seem made up or don't match reality. AI systems can behave the same way: their answers sound confident but are actually incorrect or invented. This problem is called hallucination, and detecting it is essential for trusting AI outputs.
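One common family of detection approaches checks whether a model's answer stays consistent when the same question is asked again. The sketch below is a toy illustration of that idea (it is not any specific published method): the function names, the word-overlap similarity measure, and the threshold are all illustrative assumptions, not a real detector.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)


def consistency_score(answer: str, samples: list[str]) -> float:
    """Average word-overlap of `answer` against re-sampled answers
    to the same question. Higher means more consistent."""
    return sum(jaccard(answer, s) for s in samples) / len(samples)


def looks_hallucinated(answer: str, samples: list[str],
                       threshold: float = 0.3) -> bool:
    """Illustrative heuristic: flag the answer as a possible
    hallucination when it shares little wording with re-samples."""
    return consistency_score(answer, samples) < threshold
```

In practice, real detectors use far stronger signals than word overlap (for example, entailment models or token-level probabilities), but the core intuition is the same: an invented fact tends not to reappear consistently across independent generations.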