Agentic AI · ~20 mins

Output filtering and safety checks in Agentic AI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Output Safety Master - get all challenges correct to earn this badge. Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00 remaining
Why is output filtering important in AI systems?

Imagine you have an AI assistant that answers questions. Why should the system include output filtering and safety checks before showing answers to users?

A. To ensure the AI only shares safe, accurate, and appropriate information with users.
B. To make the AI respond faster by skipping complex answers.
C. To allow the AI to generate any content without restrictions.
D. To reduce the AI's memory usage during processing.
Attempts:
2 left
💡 Hint

Think about what could happen if the AI shares harmful or wrong information.
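As context for this question, here is a minimal sketch of what an output safety check might look like before an answer is shown to a user. The `UNSAFE_WORDS` list and the `check_output` function are illustrative assumptions, not part of any specific library:

```python
# Minimal illustrative output safety check (names are hypothetical).
UNSAFE_WORDS = {"badword", "danger"}

def check_output(text: str) -> bool:
    """Return True if the text contains no unsafe words."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return words.isdisjoint(UNSAFE_WORDS)

print(check_output("This is a safe message."))  # True
print(check_output("This is a danger."))        # False
```

A real system would use a trained safety classifier rather than a word list, but the gating idea is the same: the check runs after generation and before display.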

Predict Output
intermediate
2:00 remaining
What is the output of this safety check code snippet?

Given this Python code that filters out unsafe words from AI output, what will be printed?

unsafe_words = ['badword', 'danger']
output = 'This is a safe message.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
A. This is a safe message.
B. This is a message.
C. This is a safe.
D. badword danger
Attempts:
2 left
💡 Hint

Check if any words in the output match the unsafe words list.
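To verify your prediction after answering, you can run the snippet alongside a variant where a word does match the list. This is a sketch using the same variable names as the problem:

```python
unsafe_words = ['badword', 'danger']

# The original snippet: no word matches the list, so nothing is removed.
output = 'This is a safe message.'
filtered = ' '.join(w for w in output.split() if w.lower() not in unsafe_words)
print(filtered)  # This is a safe message.

# A variant where a word does match and is dropped.
output = 'This is a badword message.'
filtered = ' '.join(w for w in output.split() if w.lower() not in unsafe_words)
print(filtered)  # This is a message.
```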

Model Choice
advanced
2:00 remaining
Which model architecture best supports output safety checks?

You want to build an AI that can self-monitor and filter its own outputs for safety. Which model design helps most?

A. A convolutional neural network trained on images.
B. A pipeline combining a language model with a separate safety classifier module.
C. A single large language model without any internal safety layers.
D. A simple linear regression model.
Attempts:
2 left
💡 Hint

Think about separating generation and safety checking tasks.
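The separation the hint points at can be sketched as a generate-then-check pipeline. Everything here is a hypothetical stand-in: `generate` represents a language model call and `is_safe` a trained safety classifier, neither taken from a real API:

```python
# Hypothetical generate-then-check pipeline; generate() and is_safe()
# stand in for a real language model and a separate safety classifier.
def generate(prompt: str) -> str:
    return f"Answer to: {prompt}"  # placeholder for a language model call

def is_safe(text: str) -> bool:
    return "danger" not in text.lower()  # placeholder safety classifier

def answer(prompt: str) -> str:
    draft = generate(prompt)
    # The safety module gates what the generator produced.
    return draft if is_safe(draft) else "[response withheld by safety filter]"

print(answer("What is output filtering?"))
```

Keeping generation and checking in separate modules means the safety component can be updated, audited, or swapped without retraining the generator.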

Hyperparameter
advanced
2:00 remaining
Which hyperparameter adjustment can reduce unsafe outputs in text generation?

When generating text with a language model, which hyperparameter change helps reduce risky or harmful content?

A. Increasing temperature to a high value like 1.5.
B. Using a batch size of 64.
C. Increasing max token length to 1000.
D. Setting temperature to a low value like 0.2.
Attempts:
2 left
💡 Hint

Lower temperature makes outputs more focused and less random.
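The hint can be made concrete with a small softmax sketch: dividing the logits by the temperature before normalizing shows how a low temperature concentrates probability on the top token, while a high one flattens the distribution (the logit values below are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax; lower T sharpens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 1.5))  # flatter: more randomness
print(softmax_with_temperature(logits, 0.2))  # peaked: mostly the top token
```

With less probability mass on low-ranked tokens, the model is less likely to wander into unusual, and potentially risky, continuations.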

🔧 Debug
expert
2:00 remaining
Why does this output filtering code fail to block unsafe words?

Review this Python code meant to block unsafe words from AI output. Why does it fail to filter 'Danger'?

unsafe_words = ['danger']
output = 'This is a Danger.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
A. Because the code uses lower() but does not apply it to output words.
B. Because the unsafe_words list is empty.
C. Because 'Danger.' includes punctuation, so it does not match 'danger' exactly.
D. Because join() is used incorrectly.
Attempts:
2 left
💡 Hint

Check how punctuation affects string matching.
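One possible fix, sketched below, is to strip punctuation from each word before comparing, so that 'Danger.' normalizes to 'danger' and matches the list:

```python
import string

unsafe_words = ['danger']
output = 'This is a Danger.'

# Stripping punctuation before comparing fixes the mismatch:
# 'Danger.' -> 'danger', which is in unsafe_words, so it is dropped.
filtered_output = ' '.join(
    word for word in output.split()
    if word.lower().strip(string.punctuation) not in unsafe_words
)
print(filtered_output)  # This is a
```

Note that simple word stripping is still brittle (e.g. it misses 'd-a-n-g-e-r' or substrings); production filters typically use tokenization or a classifier rather than exact string matching.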