AI for Everyone · knowledge · ~10 mins

What AI hallucinations are in AI for Everyone - Step-by-Step Execution

Concept Flow - What AI hallucinations are
User asks AI a question
→ AI processes input
→ AI generates answer
→ Is the answer based on real data?
   Yes → AI provides correct info
   No → AI hallucinates (makes up info)
This flow shows how AI can either provide correct answers or hallucinate by making up information when unsure.
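The flow above can be sketched as a toy program. This is a minimal illustration, not how a real AI model works internally: the tiny `KNOWLEDGE` dictionary and the made-up fallback answer are assumptions used only to show the branch between real data and hallucination.

```python
# Toy "AI" with a tiny knowledge base: it either answers from real data
# or, when it finds nothing, produces a plausible-sounding made-up answer.
KNOWLEDGE = {"Who won the 2018 World Cup?": "France"}

def answer(question: str) -> tuple[str, bool]:
    """Return (answer, hallucinated) for a question."""
    if question in KNOWLEDGE:              # Is the answer based on real data?
        return KNOWLEDGE[question], False  # Yes: provide correct info
    # No real data found: still generate a plausible answer (hallucination)
    return "Atlantis FC", True

print(answer("Who won the 2018 World Cup?"))  # ('France', False)
print(answer("Who won the 2025 World Cup?"))  # ('Atlantis FC', True)
```

Note that the function never returns "I don't know": like the AI in this lesson, it always produces an answer, and only the second value reveals whether that answer was made up.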
Execution Sample
AI for Everyone
User: "Who won the 2025 World Cup?"
AI: "The 2025 World Cup was won by Atlantis FC."
The AI answers a question about a future event with made-up information, illustrating hallucination.
Analysis Table
Step | Action | Input | AI Processing | Output | Hallucination?
1 | Receive question | "Who won the 2025 World Cup?" | Analyze question context | N/A | No
2 | Search knowledge | Check training data | No info on 2025 World Cup | N/A | No
3 | Generate answer | No real data found | Predict plausible answer | "Atlantis FC won the 2025 World Cup." | Yes
4 | Output answer | Provide response | Deliver generated text | "Atlantis FC won the 2025 World Cup." | Yes
5 | End | Conversation continues | N/A | N/A | N/A
💡 The AI stops after generating an answer, even if that answer is hallucinated due to a lack of real data.
State Tracker
Variable | Start | After Step 1 | After Step 2 | After Step 3 | Final
Input Question | None | "Who won the 2025 World Cup?" | "Who won the 2025 World Cup?" | "Who won the 2025 World Cup?" | "Who won the 2025 World Cup?"
Knowledge Found | None | None | None | None | None
Generated Answer | None | None | None | "Atlantis FC won the 2025 World Cup." | "Atlantis FC won the 2025 World Cup."
Hallucination Flag | False | False | False | True | True
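The state tracker above can also be walked through in code. This is a hand-written trace of the lesson's example, not a real model: the step values (including the hard-coded hallucinated answer) are taken directly from the table.

```python
# Hand-written trace of the State Tracker: each step updates one variable,
# and the hallucination flag flips only after the answer is generated.
def trace(question: str) -> dict:
    state = {"input_question": None, "knowledge_found": None,
             "generated_answer": None, "hallucination_flag": False}
    state["input_question"] = question          # Step 1: receive question
    state["knowledge_found"] = None             # Step 2: no data on 2025 World Cup
    state["generated_answer"] = "Atlantis FC won the 2025 World Cup."  # Step 3
    # The flag turns True only here, after generation, because no real
    # knowledge backed the answer:
    state["hallucination_flag"] = state["knowledge_found"] is None
    return state

final = trace("Who won the 2025 World Cup?")
print(final["hallucination_flag"])  # True
```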
Key Insights - 2 Insights
Why does the AI sometimes give answers that sound real but are actually made up?
When the AI cannot find real data in its training data, it predicts an answer based on patterns, which can lead to hallucinations, as shown in step 3 of the execution_table.
Is AI always aware when it hallucinates?
No, the AI does not know it is hallucinating; it simply generates the most plausible answer it can, as seen by the hallucination flag turning true only after answer generation.
Visual Quiz - 3 Questions
Test your understanding
Look at the execution_table at step 3. What does the AI do when it finds no real data?
A. It asks the user for more info.
B. It refuses to answer.
C. It makes up a plausible answer.
D. It searches the internet.
💡 Hint
Check the 'AI Processing' and 'Output' columns at step 3 in the execution_table.
According to variable_tracker, when does the hallucination flag become true?
A. After receiving the question.
B. After generating the answer.
C. Before analyzing the question.
D. At the start.
💡 Hint
Look at the 'Hallucination Flag' row and see when it changes from False to True.
If the AI had real data about the 2025 World Cup, how would the execution_table change at step 3?
A. It would generate a correct answer based on real data.
B. It would generate a hallucinated answer anyway.
C. It would refuse to answer.
D. It would ask the user to rephrase.
💡 Hint
Refer to the 'Hallucination?' column and the 'Knowledge Found' variable in variable_tracker.
Concept Snapshot
AI Hallucinations:
- Occur when AI lacks real data.
- AI predicts plausible but false info.
- Happens silently without AI awareness.
- Can mislead users if unchecked.
- Important to verify AI answers with trusted sources.
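The last point can be sketched in code. This is a minimal illustration of the "verify with trusted sources" habit, assuming a hypothetical `TRUSTED_RECORD` lookup standing in for a real trusted source.

```python
# Cross-check an AI answer against a trusted record before accepting it.
# TRUSTED_RECORD is a stand-in for a real trusted source (assumption).
TRUSTED_RECORD = {"2018 World Cup winner": "France"}

def verify(topic: str, ai_answer: str) -> str:
    fact = TRUSTED_RECORD.get(topic)
    if fact is None:
        # No trusted data exists, e.g. for future events like 2025.
        return "unverifiable - treat the AI answer with caution"
    return "confirmed" if ai_answer == fact else "contradicted - likely hallucination"

print(verify("2018 World Cup winner", "France"))       # confirmed
print(verify("2025 World Cup winner", "Atlantis FC"))  # unverifiable - treat the AI answer with caution
```

The key design point is the unverifiable branch: because hallucinations happen silently, the safe default for topics with no trusted data is caution, not acceptance.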
Full Transcript
AI hallucinations happen when an AI system tries to answer a question but does not have real information in its training data. Instead of saying 'I don't know,' it guesses an answer that sounds plausible but is actually made up. This process starts when the AI receives a question, checks its knowledge, finds no real data, and then generates a predicted answer. The hallucination flag turns true after answer generation, meaning the AI is not aware it is making up information. Users should be careful and verify AI answers, especially for unknown or future topics.