Prompt Engineering / GenAI (~10 mins)

Why AI safety prevents misuse in Prompt Engineering / GenAI - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to define a simple AI safety check function.

def safety_check(input_text):
    if 'danger' in input_text:
        return [1]
    return 'safe'
Options:
A. 'block'
B. 'allow'
C. 'ignore'
D. 'warn'
Common Mistakes
Returning 'allow' lets dangerous input pass.
Returning 'ignore' does not stop misuse.
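For reference, a completed version of the snippet above, assuming 'block' fills blank [1], might look like this:

```python
def safety_check(input_text):
    # Flag input containing the keyword 'danger'; everything else is allowed.
    if 'danger' in input_text:
        return 'block'
    return 'safe'
```

With this version, safety_check('danger ahead') returns 'block', while safety_check('hello') returns 'safe'.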
2. Fill in the blank (medium)

Complete the code to add a misuse prevention step before AI response.

def generate_response(user_input):
    if safety_check(user_input) == [1]:
        return 'Request blocked due to safety.'
    return 'Response generated'
Options:
A. 'warn'
B. 'allow'
C. 'block'
D. 'ignore'
Common Mistakes
Using 'allow' lets unsafe input through.
Using 'ignore' does not block misuse.
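A completed sketch, assuming 'block' fills blank [1] and reusing safety_check from Task 1, could be:

```python
def safety_check(input_text):
    # Same safety check as in Task 1.
    if 'danger' in input_text:
        return 'block'
    return 'safe'

def generate_response(user_input):
    # Run the safety check before generating any response.
    if safety_check(user_input) == 'block':
        return 'Request blocked due to safety.'
    return 'Response generated'
```

The key design point is that the check runs first, so unsafe input never reaches the generation step.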
3. Fill in the blank (hard)

Fix the error in the misuse detection logic.

def misuse_detected(text):
    keywords = ['hack', 'steal', 'danger']
    for word in keywords:
        if word [1] text:
            return True
    return False
Options:
A. in
B. not in
C. ==
D. !=
Common Mistakes
Using '==' compares whole strings, not membership.
Using 'not in' reverses the logic.
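For reference, the corrected logic, assuming the in operator fills blank [1], would be:

```python
def misuse_detected(text):
    # Substring membership test: 'in' checks whether each keyword
    # appears anywhere inside the input text.
    keywords = ['hack', 'steal', 'danger']
    for word in keywords:
        if word in text:
            return True
    return False
```

Note that == would only match if the entire text equaled a keyword; in matches the keyword anywhere within the text.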
4. Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that flags unsafe inputs.

keywords = ['hack', 'steal', 'danger']
inputs = ['safe', 'hack', 'hello', 'steal']
flags = {item: item [1] keywords for item in inputs if item [2] keywords}
Options:
A. in
B. not in
C. ==
D. !=
Common Mistakes
Using 'not in' reverses the logic.
Using '==' only matches exact strings.
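A completed sketch, assuming in fills both blanks and that keywords is the same list used in Task 3:

```python
keywords = ['hack', 'steal', 'danger']
inputs = ['safe', 'hack', 'hello', 'steal']

# Keep only inputs that appear in keywords; each kept item maps to True.
flags = {item: item in keywords for item in inputs if item in keywords}
```

Here flags evaluates to {'hack': True, 'steal': True}: the filter clause drops 'safe' and 'hello', and the value expression is always True for the items that survive the filter.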
5. Fill in the blank (hard)

Fill all three blanks to filter and map unsafe words to warnings.

keywords = ['hack', 'steal', 'danger']
inputs = ['safe', 'hack', 'hello', 'steal']
warnings = {word[1]: 'Warning!' for word in inputs if word [2] keywords and len(word) [3] 4}
Options:
A. .upper()
B. in
C. >
D. not in
Common Mistakes
Using 'not in' excludes unsafe words.
Using '<' filters wrong length.
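A completed sketch, assuming .upper(), in, and > fill blanks [1] to [3], and reusing the inputs list from Task 4:

```python
keywords = ['hack', 'steal', 'danger']
inputs = ['safe', 'hack', 'hello', 'steal']

# Map unsafe words longer than 4 characters to an uppercased warning key.
warnings = {word.upper(): 'Warning!' for word in inputs
            if word in keywords and len(word) > 4}
```

Here warnings evaluates to {'STEAL': 'Warning!'}: 'hack' is in keywords but has length exactly 4, so the strict > comparison excludes it, and only 'steal' (length 5) passes both filters.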