Challenge - 5 Problems
Hallucination Detection Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
Intermediate · 1:30 remaining
What is hallucination in AI-generated text?
In the context of AI language models, what does the term 'hallucination' mean?
Attempts: 2 left
💡 Hint
Think about when AI makes up information that isn't true.
✗ Incorrect
Hallucination in AI means the model creates information that looks plausible but is actually false or unsupported.
❓ Predict Output
Intermediate · 2:00 remaining
Detecting hallucination with confidence scores
Given a model output with confidence scores for each token, which output indicates a higher chance of hallucination?
Prompt Engineering / GenAI
tokens = ['The', 'capital', 'of', 'France', 'is', 'Berlin']
confidences = [0.99, 0.98, 0.97, 0.96, 0.95, 0.40]
# Which token likely indicates hallucination?
Attempts: 2 left
💡 Hint
Lower-confidence tokens may indicate hallucination.
✗ Incorrect
Low confidence on a token, especially one that contradicts known facts, suggests hallucination.
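The idea above can be sketched in a few lines of Python: pair each token with its confidence and flag anything below a threshold. The 0.5 cutoff is an assumed example value, not a universal rule.

```python
# Minimal sketch: flag tokens whose confidence falls below a threshold
# as possible hallucinations. THRESHOLD = 0.5 is an illustrative choice.
tokens = ['The', 'capital', 'of', 'France', 'is', 'Berlin']
confidences = [0.99, 0.98, 0.97, 0.96, 0.95, 0.40]

THRESHOLD = 0.5
suspect = [(t, c) for t, c in zip(tokens, confidences) if c < THRESHOLD]
print(suspect)  # [('Berlin', 0.4)]
```

Here only 'Berlin' falls below the cutoff, matching the intuition that the factually wrong token is also the low-confidence one. In practice confidence alone is a weak signal and is usually combined with fact checking.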
❓ Model Choice
Advanced · 2:30 remaining
Choosing a model architecture to reduce hallucination
Which model architecture is best suited to reduce hallucination in generated text by grounding outputs on external knowledge?
Attempts: 2 left
💡 Hint
Models that check facts during generation help reduce hallucination.
✗ Incorrect
Retrieval-augmented models ground generation in passages retrieved from an external knowledge source, which reduces hallucination.
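A toy sketch of the retrieval-augmented pattern, assuming a keyword-lookup retriever and a trivial "generator": `knowledge_base`, `retrieve`, and `answer` are illustrative stand-ins, not a real library API. A real system would use dense or BM25 retrieval and condition an LLM on the retrieved context.

```python
# Hedged sketch of retrieval-augmented generation (RAG):
# retrieve supporting evidence first, then answer from that evidence
# instead of from the model's parametric memory alone.

knowledge_base = {
    "capital of france": "The capital of France is Paris.",
}

def retrieve(query: str) -> str:
    # Toy keyword lookup; real retrievers rank passages by relevance.
    for key, passage in knowledge_base.items():
        if key in query.lower():
            return passage
    return ""

def answer(query: str) -> str:
    context = retrieve(query)
    # A real generator would be an LLM conditioned on `context`;
    # here we simply return the grounded passage, or abstain.
    return context or "I don't know."

print(answer("What is the capital of France?"))
# The capital of France is Paris.
```

The key design point is the abstention path: when retrieval finds nothing, the system says "I don't know" rather than inventing an answer.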
❓ Metrics
Advanced · 2:00 remaining
Evaluating hallucination with automatic metrics
Which automatic metric is most appropriate to measure hallucination in AI-generated summaries?
Attempts: 2 left
💡 Hint
Look for metrics that check factual correctness, not just similarity.
✗ Incorrect
FactCC is designed to detect factual inconsistencies, making it suitable for hallucination evaluation.
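FactCC itself is a fine-tuned BERT classifier that labels each (source, claim) pair as consistent or inconsistent; the sketch below shows only the evaluation loop around such a classifier. The `classify` function here is a stand-in substring check, not the real model.

```python
# Illustrative FactCC-style evaluation loop: judge each summary
# sentence against the source and report the consistent fraction.

def classify(source: str, claim: str) -> bool:
    # Stub for the real classifier, which predicts CORRECT/INCORRECT
    # from the (source, claim) pair. Here: naive substring match.
    return claim.lower() in source.lower()

def consistency_score(source: str, summary_sentences: list[str]) -> float:
    # Fraction of summary sentences judged consistent with the source.
    verdicts = [classify(source, s) for s in summary_sentences]
    return sum(verdicts) / len(verdicts)

source = "The capital of France is Paris. Paris hosted the 2024 Olympics."
summary = ["The capital of France is Paris", "Berlin hosted the Olympics"]
print(consistency_score(source, summary))  # 0.5
```

Similarity metrics like ROUGE would score both summary sentences well because they share words with the source; a factual-consistency check is what catches the hallucinated 'Berlin' claim.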
🔧 Debug
Expert · 3:00 remaining
Identifying hallucination cause in model output
You have a language model that often hallucinates facts in its answers. Which debugging step is most likely to help reduce hallucination?
Attempts: 2 left
💡 Hint
Improving training data quality helps reduce hallucination.
✗ Incorrect
Fine-tuning on fact-checked data helps the model learn accurate information and reduces hallucination.
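One concrete form of this debugging step is filtering the fine-tuning set down to verified examples before training. The `fact_checked` flag below is an illustrative field that a real pipeline would populate via human review or a verification model.

```python
# Hedged sketch: keep only fact-checked examples for fine-tuning,
# so the model is not trained on data containing false claims.

train_data = [
    {"q": "Capital of France?", "a": "Paris", "fact_checked": True},
    {"q": "Capital of France?", "a": "Berlin", "fact_checked": False},
]

clean_data = [ex for ex in train_data if ex["fact_checked"]]
print(len(clean_data))  # 1
```

The point is that hallucination is often a data problem as much as a model problem: removing unverified claims from the training set is usually cheaper than changing the architecture.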