Agentic AI · ~20 mins

Output Filtering and Safety Checks in Agentic AI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Output Safety Master: get all challenges correct to earn this badge!
🧠 Conceptual · Intermediate
Why is output filtering important in AI systems?

Imagine you have an AI assistant that answers questions. Why should the system include output filtering and safety checks before showing answers to users?

A. To ensure the AI only shares safe, accurate, and appropriate information with users.
B. To make the AI respond faster by skipping complex answers.
C. To allow the AI to generate any content without restrictions.
D. To reduce the AI's memory usage during processing.
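Before moving on, it can help to see what an output safety check looks like in code. The sketch below is a minimal, illustrative filter: the blocklist, the `is_safe` helper, and the placeholder refusal message are all assumptions made up for this example (production systems typically use trained safety classifiers rather than keyword lists).

```python
# A minimal sketch of an output safety check using a keyword blocklist.
# The words and refusal message here are illustrative assumptions.
BLOCKLIST = {"badword", "danger"}

def is_safe(text: str) -> bool:
    """Return True if no blocklisted word appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

def guarded_reply(text: str) -> str:
    """Only show the answer to the user if it passes the safety check."""
    return text if is_safe(text) else "[response withheld by safety filter]"

print(guarded_reply("Hello, how can I help?"))  # passes the check
print(guarded_reply("This is a danger!"))       # blocked
```

The key design point, and the subject of this question, is that the check runs between generation and display, so the user never sees an answer that failed it.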
💻 Code Output · Intermediate
What is the output of this safety check code snippet?

Given this Python code that filters out unsafe words from AI output, what will be printed?

unsafe_words = ['badword', 'danger']
output = 'This is a safe message.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
A. This is a safe message.
B. This is a message.
C. This is a safe.
D. badword danger
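To see the same split/filter/join pattern actually remove something, here is a variant with a different input sentence (the sentence is made up for illustration; the word list matches the snippet above):

```python
# Same technique as the question's snippet, applied to a sentence
# that does contain a blocklisted word.
unsafe_words = ['badword', 'danger']
output = 'Beware the danger ahead'
filtered = ' '.join(w for w in output.split() if w.lower() not in unsafe_words)
print(filtered)  # -> Beware the ahead
```

Note that the filter only drops words that match exactly after lowercasing; words with no match pass through unchanged.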
Model Choice · Advanced
Which model architecture best supports output safety checks?

You want to build an AI that can self-monitor and filter its own outputs for safety. Which model design helps most?

A. A convolutional neural network trained on images.
B. A pipeline combining a language model with a separate safety classifier module.
C. A single large language model without any internal safety layers.
D. A simple linear regression model.
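The pipeline pattern this question describes can be sketched in a few lines. Both components below are stand-ins invented for illustration: `generate` substitutes for a real language model call, and `safety_classifier` substitutes for a trained classifier with a trivial keyword check.

```python
# Sketch of a two-stage pipeline: generator + separate safety classifier.
def generate(prompt: str) -> str:
    # Stand-in for a language model call (assumption, not a real API).
    return f"Answer to: {prompt}"

def safety_classifier(text: str) -> bool:
    # Stand-in for a trained safety classifier module.
    return "danger" not in text.lower()

def pipeline(prompt: str) -> str:
    draft = generate(prompt)
    return draft if safety_classifier(draft) else "[blocked by safety module]"

print(pipeline("What is output filtering?"))
```

Keeping the classifier as a separate module means it can veto any draft the generator produces, which is why this architecture supports self-monitoring better than a single unguarded model.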
Hyperparameter · Advanced
Which hyperparameter adjustment can reduce unsafe outputs in text generation?

When generating text with a language model, which hyperparameter change helps reduce risky or harmful content?

A. Increasing temperature to a high value like 1.5.
B. Using a batch size of 64.
C. Increasing max token length to 1000.
D. Setting temperature to a low value like 0.2.
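The effect of temperature is easy to verify numerically. The sketch below applies standard temperature-scaled softmax to a hypothetical set of next-token logits (the logit values are made up for illustration): dividing logits by a low temperature sharpens the distribution, concentrating probability mass on the highest-scoring token and making low-probability (often riskier) tokens less likely to be sampled.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: lower temperature -> sharper distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores
print(softmax_with_temperature(logits, 1.5))  # flatter: tail tokens more likely
print(softmax_with_temperature(logits, 0.2))  # peaked: mass on the top token
```

Batch size and max token length, by contrast, change throughput and output length, not the shape of the sampling distribution.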
🔧 Debug · Expert
Why does this output filtering code fail to block unsafe words?

Review this Python code meant to block unsafe words from AI output. Why does it fail to filter 'Danger'?

unsafe_words = ['danger']
output = 'This is a Danger.'
filtered_output = ' '.join(word for word in output.split() if word.lower() not in unsafe_words)
print(filtered_output)
A. Because the code uses lower() but does not apply it to output words.
B. Because the unsafe_words list is empty.
C. Because 'Danger.' includes punctuation, so it does not match 'danger' exactly.
D. Because join() is used incorrectly.
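Once you have diagnosed the bug, one possible repair is to normalize punctuation before the comparison. The sketch below is one such fix, using Python's standard `string.punctuation`; it assumes punctuation only appears at word edges, which is good enough for this snippet but not a general tokenizer.

```python
import string

unsafe_words = ['danger']
output = 'This is a Danger.'

# Strip edge punctuation before comparing, so 'Danger.' normalizes to 'danger'.
filtered_output = ' '.join(
    word for word in output.split()
    if word.lower().strip(string.punctuation) not in unsafe_words
)
print(filtered_output)  # -> This is a
```

With the normalization in place, 'Danger.' lowercases and strips to 'danger', matches the blocklist, and is removed from the output.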