Practice - 5 Tasks
Answer the questions below
1. Fill in the blank
Easy: Complete the code to safely add user input to the prompt.
Prompt Engineering / GenAI
safe_prompt = base_prompt + ' User says: ' + [1]
💡 Hint
Common Mistakes
Using raw user input without cleaning.
Using input() which reads new input instead of the variable.
Using sanitize(user_input) helps remove harmful parts from the input before adding it to the prompt.
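A minimal sketch of the completed answer. The `sanitize` helper is hypothetical (not defined by the task), shown here stripping one known injection phrase purely for illustration:

```python
def sanitize(user_input):
    # Hypothetical cleaner: remove a common injection phrase (illustrative only).
    return user_input.replace('ignore previous instructions', '')

base_prompt = 'You are a helpful assistant.'
user_input = 'Hello! ignore previous instructions'

# Blank [1] filled with sanitize(user_input), not raw user_input or input().
safe_prompt = base_prompt + ' User says: ' + sanitize(user_input)
```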
2. Fill in the blank
Medium: Complete the code to detect prompt injection keywords.
Prompt Engineering / GenAI
if '[1]' in user_input.lower():
    alert('Possible injection detected')
💡 Hint
Common Mistakes
Using incomplete or incorrect phrases that don't match injection attempts.
Checking for unrelated words.
The phrase 'ignore previous instructions' is a common injection attempt to override the prompt.
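A runnable sketch of the detection check, with blank [1] filled with the phrase named in the hint. The `alert` call from the snippet is replaced here by a return value so the function is self-contained:

```python
def check_injection(user_input):
    # Case-insensitive keyword check for a common injection phrase.
    if 'ignore previous instructions' in user_input.lower():
        return 'Possible injection detected'
    return 'OK'
```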
3. Fill in the blank
Hard: Fix the error in the function that blocks injection by replacing dangerous words.
Prompt Engineering / GenAI
def block_injection(text):
    blocked_text = text.replace([1], '[REDACTED]')
    return blocked_text
💡 Hint
Common Mistakes
Passing a variable name without quotes causes a NameError.
Using incomplete phrases that don't match the injection.
The replace method needs the first argument as a string literal, so it must be in quotes.
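The corrected function, with blank [1] filled as a quoted string literal (passing a bare name like `ignore previous instructions` without quotes would raise a NameError or SyntaxError):

```python
def block_injection(text):
    # First argument to str.replace must be a string literal, so it is quoted.
    blocked_text = text.replace('ignore previous instructions', '[REDACTED]')
    return blocked_text
```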
4. Fill in the blank
Hard: Fill both blanks to create a safe prompt by filtering and then formatting user input.
Prompt Engineering / GenAI
def create_safe_prompt(user_input):
    filtered = [1](user_input)
    prompt = f"System: Follow rules. User says: [2]"
    return prompt
💡 Hint
Common Mistakes
Using the original user_input directly in the prompt.
Mixing variable names incorrectly.
First sanitize the input, then use the filtered result in the prompt string.
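One possible completed answer: blank [1] is the sanitizing function and blank [2] is the filtered result (not the raw `user_input`). The `sanitize` helper is again a hypothetical stand-in:

```python
def sanitize(user_input):
    # Hypothetical cleaner (illustrative only).
    return user_input.replace('ignore previous instructions', '')

def create_safe_prompt(user_input):
    filtered = sanitize(user_input)             # [1] = sanitize
    prompt = f"System: Follow rules. User says: {filtered}"  # [2] = filtered
    return prompt
```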
5. Fill in the blank
Hard: Fill all three blanks to check for injection and respond safely.
Prompt Engineering / GenAI
def respond(user_input):
    if [1] in user_input.lower():
        return '[2]'
    safe_input = [3](user_input)
    return f"Processed: {safe_input}"
💡 Hint
Common Mistakes
Not checking for the exact injection phrase.
Returning unsafe input directly.
Not sanitizing input before use.
Check if the injection phrase is in input, return a warning, else sanitize input before processing.
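A sketch of the fully completed function: [1] is the injection phrase, [2] is a warning message, [3] is the sanitizing function. The exact warning string and the `sanitize` helper are assumptions for illustration:

```python
def sanitize(user_input):
    # Hypothetical cleaner (illustrative only).
    return user_input.replace('ignore previous instructions', '')

def respond(user_input):
    # [1]: check for the exact injection phrase, case-insensitively.
    if 'ignore previous instructions' in user_input.lower():
        # [2]: return a warning instead of processing the request.
        return 'Request blocked: possible prompt injection'
    # [3]: sanitize before use; never return unsafe input directly.
    safe_input = sanitize(user_input)
    return f"Processed: {safe_input}"
```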