Prompt Engineering / GenAI · ~20 mins

Output guardrails in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Output Guardrails Master: answer all five challenges correctly to earn this badge.
🧠 Conceptual · intermediate
Why are output guardrails important in AI models?

Output guardrails help control what an AI model produces. Which of these is the main reason to use output guardrails?

A. To increase the size of the AI model
B. To make the AI run faster during training
C. To ensure the AI outputs are safe and do not cause harm
D. To reduce the number of layers in the AI model
💡 Hint

Think about why we want to control what the AI says or does.
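In practice, an output guardrail is often just a post-processing check applied to the model's response before it reaches the user. A minimal sketch, assuming a hypothetical `is_safe` rule and a toy blocklist (illustration only, not a production filter):

```python
def is_safe(text):
    # Hypothetical safety rule: reject text containing toy disallowed terms.
    blocked = {"harmful", "dangerous"}
    return not any(term in text.lower() for term in blocked)

def guarded_output(model_response):
    # Return the model's text only if it passes the safety check,
    # otherwise fall back to a refusal message.
    if is_safe(model_response):
        return model_response
    return "Sorry, I can't share that."

print(guarded_output("Here is a helpful answer."))  # passes the check
print(guarded_output("Here is dangerous content."))  # blocked, refusal returned
```

The point is that the guardrail sits between the model and the user, controlling what is actually emitted rather than changing the model itself.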

Predict Output · intermediate
What will this output guardrail code print?

Consider this simple Python function that applies a guardrail to filter out negative numbers from AI output predictions. What does it print?

def guardrail_filter(predictions):
    return [x if x >= 0 else 0 for x in predictions]

outputs = [-3, 5, -1, 7]
print(guardrail_filter(outputs))
A. [-3, 5, -1, 7]
B. [0, 5, 0, 7]
C. [3, 5, 1, 7]
D. [-3, 0, -1, 0]
💡 Hint

Look at how negative numbers are handled in the list comprehension.
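As a refresher (with different data than the question, so you can still work it out yourself): a conditional expression inside a list comprehension keeps each element that meets the condition and substitutes a default for each element that does not.

```python
# Each element is kept if it meets the condition, else replaced by a default.
values = [10, -2, 4]
clamped = [v if v >= 0 else 0 for v in values]
print(clamped)  # [10, 0, 4]
```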

Model Choice · advanced
Which model architecture best supports output guardrails for text generation?

You want to build an AI system that generates text but must avoid harmful or biased content. Which model architecture makes it easiest to apply output guardrails?

A. Transformer with a safety layer that filters outputs before returning
B. Simple feedforward neural network without output filtering
C. Convolutional neural network designed for image recognition
D. Recurrent neural network without any output constraints
💡 Hint

Think about which architecture allows easy integration of output filters.
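Whatever the underlying architecture, the filtering step can be wrapped around generation. A minimal sketch, where `generate_text` is a hypothetical stand-in for a model's generate call (not a real library API):

```python
def generate_text(prompt):
    # Hypothetical stand-in for a text-generation model's output.
    return f"Model answer to: {prompt}"

def safety_layer(text):
    # Filter applied to outputs before they are returned to the user.
    disallowed = ["badword"]
    for term in disallowed:
        text = text.replace(term, "****")
    return text

def guarded_generate(prompt):
    # The safety layer wraps generation, so every output is filtered.
    return safety_layer(generate_text(prompt))

print(guarded_generate("explain badword usage"))
# Model answer to: explain **** usage
```

The design choice here is that the guardrail is a separate layer composed with the model, so it can be updated independently of the model's weights.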

Metrics · advanced
Which metric helps measure the effectiveness of output guardrails?

After applying output guardrails, you want to check if harmful outputs are reduced. Which metric is best to measure this?

A. Model training loss value
B. Total number of predictions made
C. Number of layers in the AI model
D. Percentage of outputs flagged as harmful by a content filter
💡 Hint

Focus on measuring harmful content, not model size or training progress.
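Such a metric can be computed directly from a batch of outputs and a content filter. A minimal sketch, assuming a hypothetical `is_harmful` predicate supplied by the filter:

```python
def harmful_rate(outputs, is_harmful):
    # Percentage of outputs flagged as harmful by a content filter.
    flagged = sum(1 for o in outputs if is_harmful(o))
    return 100.0 * flagged / len(outputs)

# Toy filter for illustration: flags any output containing "unsafe".
filter_fn = lambda o: "unsafe" in o
batch = ["ok", "unsafe text", "ok", "ok"]
print(harmful_rate(batch, filter_fn))  # 25.0
```

Comparing this rate before and after enabling guardrails shows whether harmful outputs actually decreased.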

🔧 Debug · expert
Why does this output guardrail code fail to block harmful words?

Look at this Python code meant to block harmful words from AI output. Why does it fail to block the word 'badword'?

def block_harmful(text):
    harmful_words = ['badword']
    for word in harmful_words:
        if word in text:
            text = text.replace(word, '****')
    return text

output = 'This is a BadWord example.'
print(block_harmful(output))
A. The check is case-sensitive, so 'BadWord' is not matched
B. The replace method is used incorrectly and does not change the text
C. The harmful_words list is empty, so no words are blocked
D. The function returns before replacing words
💡 Hint

Check how the code compares words and the case of letters.
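Once you have diagnosed the bug, one common way to make this kind of filter robust is to match without regard to letter case, for example with `re.sub` and `re.IGNORECASE` (one possible fix, keeping the same word list):

```python
import re

def block_harmful(text):
    harmful_words = ['badword']
    for word in harmful_words:
        # Case-insensitive match, so 'BadWord' and 'BADWORD' are also caught.
        text = re.sub(re.escape(word), '****', text, flags=re.IGNORECASE)
    return text

print(block_harmful('This is a BadWord example.'))  # This is a **** example.
```

`re.escape` keeps the blocked word from being interpreted as a regex pattern, which matters once the word list contains punctuation.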