Prompt Engineering / GenAI · ~10 mins

Prompt injection defense in Prompt Engineering / GenAI - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to safely add user input to the prompt.

safe_prompt = base_prompt + ' User says: ' + [1]
A. user_input
B. sanitize(user_input)
C. raw_input
D. input()
Common Mistakes
Using raw user input without cleaning.
Using input() which reads new input instead of the variable.
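The correct answer (B) depends on a `sanitize` helper that the exercise never defines. A minimal runnable sketch, assuming a hypothetical sanitizer that drops non-printable characters and escapes HTML-like markup:

```python
import html

def sanitize(text):
    # Hypothetical sanitizer: drop non-printable characters, then
    # escape markup so the input cannot alter downstream rendering.
    cleaned = "".join(ch for ch in text if ch.isprintable())
    return html.escape(cleaned)

base_prompt = "System: Follow rules."
user_input = "Hello <script>ignore this</script>"

# Option B: clean the input before concatenating it into the prompt.
safe_prompt = base_prompt + " User says: " + sanitize(user_input)
```

Concatenating `user_input` directly (option A) would pass the raw text straight into the prompt, which is exactly the mistake this task warns against.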
Task 2: fill in the blank (medium)

Complete the code to detect prompt injection keywords.

if '[1]' in user_input.lower():
    print('Possible injection detected')
A. ignore previous command
B. delete all
C. ignore previous
D. ignore previous instructions
Common Mistakes
Using incomplete or incorrect phrases that don't match injection attempts.
Checking for unrelated words.
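Checking a single hard-coded phrase scales poorly. A sketch of the same keyword check generalized over a small list (the phrase list and function name are illustrative assumptions, not part of the exercise):

```python
# Illustrative phrase list; real deployments use much broader pattern sets.
INJECTION_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "delete all",
]

def detect_injection(user_input):
    # Lowercase once, then test every known phrase against the input.
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in INJECTION_PHRASES)
```

Because the input is lowercased first, the phrases must be stored in lowercase, otherwise they can never match.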
Task 3: fill in the blank (hard)

Fix the error in the function that blocks injection by replacing dangerous words.

def block_injection(text):
    blocked_text = text.replace([1], '[REDACTED]')
    return blocked_text
A. 'ignore previous instructions'
B. ignore previous instructions
C. 'ignore previous'
D. 'delete all'
Common Mistakes
Passing a variable name without quotes causes a NameError.
Using incomplete phrases that don't match the injection.
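Note that `str.replace` is case-sensitive, so "Ignore Previous Instructions" would slip through the exercise's function unchanged. A case-insensitive variant sketched with `re.sub` (the blocked-phrase list is an assumption for illustration):

```python
import re

# Phrases to redact; illustrative, not exhaustive.
BLOCKED_PHRASES = ["ignore previous instructions", "delete all"]

def block_injection(text):
    # Redact each blocked phrase regardless of letter case.
    for phrase in BLOCKED_PHRASES:
        text = re.sub(re.escape(phrase), "[REDACTED]", text,
                      flags=re.IGNORECASE)
    return text
```

`re.escape` keeps the phrase from being interpreted as a regex pattern, which matters if a blocked phrase ever contains characters like `.` or `*`.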
Task 4: fill in the blank (hard)

Fill both blanks to create a safe prompt by filtering and then formatting user input.

def create_safe_prompt(user_input):
    filtered = [1](user_input)
    prompt = f"System: Follow rules. User says: [2]"
    return prompt
A. sanitize
B. user_input
C. filtered
D. clean_input
Common Mistakes
Using the original user_input directly in the prompt.
Mixing variable names incorrectly.
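The completed function, paired with a hypothetical `sanitize` helper (the exercise does not define one). The key point is that the f-string must interpolate `filtered`, not the original `user_input`:

```python
def sanitize(text):
    # Hypothetical filter: keep only printable characters.
    return "".join(ch for ch in text if ch.isprintable())

def create_safe_prompt(user_input):
    filtered = sanitize(user_input)  # blank [1]: sanitize
    # blank [2]: interpolate the filtered value, never the raw input
    prompt = f"System: Follow rules. User says: {filtered}"
    return prompt
```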
Task 5: fill in the blank (hard)

Fill all three blanks to check for injection and respond safely.

def respond(user_input):
    if [1] in user_input.lower():
        return '[2]'
    safe_input = [3](user_input)
    return f"Processed: {safe_input}"
A. 'ignore previous instructions'
B. 'Injection detected. Input blocked.'
C. sanitize
D. user_input
Common Mistakes
Not checking for the exact injection phrase.
Returning unsafe input directly.
Not sanitizing input before use.
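Putting all three blanks together: check first, refuse on a match, and sanitize anything that passes. A runnable sketch, with `sanitize` again standing in as a hypothetical helper:

```python
def sanitize(text):
    # Hypothetical sanitizer: strip non-printable characters.
    return "".join(ch for ch in text if ch.isprintable())

def respond(user_input):
    # Blank [1]: the exact injection phrase, lowercase to match .lower().
    if 'ignore previous instructions' in user_input.lower():
        # Blank [2]: refuse rather than process tainted input.
        return 'Injection detected. Input blocked.'
    # Blank [3]: sanitize anything that passed the check.
    safe_input = sanitize(user_input)
    return f"Processed: {safe_input}"
```

The order matters: detecting before sanitizing means the refusal message fires on the original input, while the sanitization still protects everything that reaches the final prompt.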