Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to define a simple AI safety check function.
Prompt Engineering / GenAI
def safety_check(input_text):
    if 'danger' in input_text:
        return [1]
    return 'safe'
Common Mistakes
Returning 'allow' lets dangerous input pass.
Returning 'ignore' does not stop misuse.
The function should block inputs containing 'danger' to prevent misuse.
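As a sketch, the completed function might look like this. The blank's answer is inferred from the explanation above; returning 'block' is an assumption based on the common-mistakes notes, not something the task states outright:

```python
def safety_check(input_text):
    # Block any input that mentions 'danger'; everything else passes.
    if 'danger' in input_text:
        return 'block'  # assumed blank answer, per the explanation
    return 'safe'
```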
2. Fill in the blank (medium)
Complete the code to add a misuse-prevention step before the AI responds.
Prompt Engineering / GenAI
def generate_response(user_input):
    if safety_check(user_input) == [1]:
        return 'Request blocked due to safety.'
    return 'Response generated'
Common Mistakes
Using 'allow' lets unsafe input through.
Using 'ignore' does not block misuse.
If safety_check returns 'block', the response should be blocked to prevent misuse.
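A possible completed version, assuming (as the explanation suggests) the blank is 'block'. A safety_check matching Task 1 is included so the snippet runs on its own:

```python
def safety_check(input_text):
    # From Task 1: flag inputs that mention 'danger'.
    if 'danger' in input_text:
        return 'block'
    return 'safe'

def generate_response(user_input):
    # Run the safety gate before generating anything.
    if safety_check(user_input) == 'block':  # assumed blank answer
        return 'Request blocked due to safety.'
    return 'Response generated'
```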
3. Fill in the blank (hard)
Fix the error in the misuse-detection logic.
Prompt Engineering / GenAI
def misuse_detected(text):
    keywords = ['hack', 'steal', 'danger']
    for word in keywords:
        if word [1] text:
            return True
    return False
Common Mistakes
Using '==' compares whole strings, not membership.
Using 'not in' reverses the logic.
To check whether a keyword appears inside the text, use the 'in' operator.
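For reference, a sketch with the membership check filled in ('in', per the explanation):

```python
def misuse_detected(text):
    keywords = ['hack', 'steal', 'danger']
    for word in keywords:
        # 'in' tests substring membership; '==' would compare whole strings.
        if word in text:
            return True
    return False
```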
4. Fill in the blank (hard)
Fill both blanks to create a dictionary comprehension that flags unsafe inputs.
Prompt Engineering / GenAI
inputs = ['safe', 'hack', 'hello', 'steal']
flags = {item: item [1] keywords for item in inputs if item [2] keywords}
Common Mistakes
Using 'not in' reverses the logic.
Using '==' only matches exact strings.
We check if item is in keywords to flag unsafe inputs correctly.
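A runnable sketch with both blanks filled as 'in' (per the explanation). The keywords list is not shown in this task, so it is assumed from Task 3:

```python
keywords = ['hack', 'steal', 'danger']  # assumed from Task 3
inputs = ['safe', 'hack', 'hello', 'steal']
# Keep only the unsafe items; each retained item maps to True,
# since the filter already guarantees membership in keywords.
flags = {item: item in keywords for item in inputs if item in keywords}
```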
5. Fill in the blank (hard)
Fill all three blanks to filter unsafe words and map them to warnings.
Prompt Engineering / GenAI
keywords = ['hack', 'steal', 'danger']
warnings = {word[1]: 'Warning!' for word in inputs if word [2] keywords and len(word) [3] 4}
Common Mistakes
Using 'not in' excludes unsafe words.
Using '<' filters wrong length.
We convert words to uppercase, check if they are in keywords, and filter by length greater than 4.
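A sketch of the completed comprehension, assuming from the explanation that the blanks are .upper(), 'in', and '>'. The inputs list is not given in the task, so a sample list is invented here purely for illustration:

```python
keywords = ['hack', 'steal', 'danger']
inputs = ['safe', 'hack', 'steal', 'danger']  # hypothetical sample input
# Uppercase each unsafe word longer than 4 characters and map it to a warning.
# 'hack' is in keywords but has length 4, so the length filter drops it.
warnings = {word.upper(): 'Warning!' for word in inputs
            if word in keywords and len(word) > 4}
```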