Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to add a simple output filter that blocks offensive words.
Topic: Agentic_ai
    def filter_output(text):
        blocked_words = ['badword1', 'badword2']
        for word in blocked_words:
            if word in text:
                return [1]
        return text
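One possible completed version of this exercise is sketched below. The assumption (not confirmed by the exercise) is that blank [1] returns a '[Filtered]' placeholder string rather than the original text; the expected answer on the platform may differ.

```python
# Possible completion of the exercise above.
# Assumption: blank [1] is the string '[Filtered]' returned in place of blocked text.
def filter_output(text):
    blocked_words = ['badword1', 'badword2']
    for word in blocked_words:
        if word in text:
            return '[Filtered]'  # blank [1]: replace the offending output
    return text
```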
2. Fill in the blank (medium)
Complete the code to check whether the model output length exceeds the safety limit.
    def check_length(output):
        max_length = 100
        if len(output) [1] max_length:
            return False
        return True
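A plausible completion, assuming blank [1] is the comparison operator `>` (so the check fails only when the output strictly exceeds the limit):

```python
# Possible completion: blank [1] is assumed to be the '>' operator.
def check_length(output):
    max_length = 100
    if len(output) > max_length:  # blank [1]
        return False
    return True
```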
3. Fill in the blank (hard)
Fix the error in the code that filters outputs containing sensitive keywords.
    def safe_output(text):
        sensitive_keywords = ['password', 'secret']
        if any([1] in text for [2] in sensitive_keywords):
            return '[Filtered]'
        return text
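One way to complete this, assuming blanks [1] and [2] are the same generator-expression loop variable (the name `keyword` here is an arbitrary choice):

```python
# Possible completion: blanks [1] and [2] share one loop variable, named
# 'keyword' here (any valid identifier would work).
def safe_output(text):
    sensitive_keywords = ['password', 'secret']
    if any(keyword in text for keyword in sensitive_keywords):  # blanks [1], [2]
        return '[Filtered]'
    return text
```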
4. Fill in the blank (hard)
Fill both blanks to create a dictionary filtering outputs by length and keyword presence.
    outputs = ['safe text', 'too long text example', 'contains secret']
    filtered = {text: len(text)
                for text in outputs
                if len(text) [1] 15 and 'secret' not in text [2]
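A plausible completion, assuming blank [1] is `<` (it could equally be `<=` depending on the intended limit) and blank [2] is the closing brace of the dict comprehension:

```python
# Possible completion: blank [1] assumed to be '<', blank [2] is the closing '}'.
outputs = ['safe text', 'too long text example', 'contains secret']
filtered = {text: len(text)
            for text in outputs
            if len(text) < 15 and 'secret' not in text}
```

With these inputs, only 'safe text' (length 9) survives both filters.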
5. Fill in the blank (hard)
Fill all three blanks to implement a safety check that blocks outputs containing banned words or exceeding the maximum length.
    def safety_check(output):
        banned_words = ['hack', 'attack']
        max_len = 50
        if any([1] in output for [2] in banned_words) or len(output) [3] max_len:
            return False
        return True
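One possible completion, assuming blanks [1] and [2] share a loop variable (named `word` here) and blank [3] is the `>` operator:

```python
# Possible completion: blanks [1] and [2] are a shared loop variable 'word';
# blank [3] is assumed to be '>'.
def safety_check(output):
    banned_words = ['hack', 'attack']
    max_len = 50
    if any(word in output for word in banned_words) or len(output) > max_len:
        return False
    return True
```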
