Challenge - 5 Problems
Output Safety Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 conceptual
intermediate · 2:00 remaining
Why is output filtering important in AI systems?
Imagine you have an AI assistant that answers questions. Why should the system include output filtering and safety checks before showing answers to users?
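For orientation before attempting the challenges, a minimal output safety check might look like the sketch below. This is only an illustration of the general idea of filtering answers before they reach users; the word list and function name are assumptions for this example, not part of any challenge's answer.

```python
# Illustrative sketch of a post-generation safety filter.
# UNSAFE_WORDS and filter_output are hypothetical names for this example.
UNSAFE_WORDS = {"badword", "danger"}

def filter_output(text: str) -> str:
    """Drop any whitespace-separated token whose lowercase form is unsafe."""
    return " ".join(w for w in text.split() if w.lower() not in UNSAFE_WORDS)

print(filter_output("This reply contains badword here"))
# → This reply contains here
```

Real systems layer checks like this with classifiers and policy rules, since simple word matching is easy to bypass, a weakness one of the debug challenges below explores.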
Attempts: 2 left
💻 code output
intermediate · 2:00 remaining
What is the output of this safety check code snippet?
Given this Python code that filters out unsafe words from AI output, what will be printed?
unsafe_words = ['badword', 'danger']
output = 'This is a safe message.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
Attempts: 2 left
❓ model choice
advanced · 2:00 remaining
Which model architecture best supports output safety checks?
You want to build an AI that can self-monitor and filter its own outputs for safety. Which model design helps most?
Attempts: 2 left
❓ hyperparameter
advanced · 2:00 remaining
Which hyperparameter adjustment can reduce unsafe outputs in text generation?
When generating text with a language model, which hyperparameter change helps reduce risky or harmful content?
Attempts: 2 left
🔧 debug
expert · 2:00 remaining
Why does this output filtering code fail to block unsafe words?
Review this Python code meant to block unsafe words from AI output. Why does it fail to filter 'Danger'?
unsafe_words = ['danger']
output = 'This is a Danger.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
Attempts: 2 left
