Practice - 5 Tasks
Answer the questions below
Task 1 — fill in the blank (easy)
Complete the code to define a safe prompt that avoids injection.
Prompt Engineering / GenAI
safe_prompt = "Please answer clearly: [1]"
💡 Hint — common mistakes:
Selecting prompts that include commands like 'Ignore previous instructions' or harmful actions.
Using vague or ambiguous prompts that can be exploited.
Explanation: The safe prompt asks a clear question without including harmful instructions that could be injected.
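A filled-in version might look like the sketch below. The explanation only requires a clear, harmless question in the blank; the specific question used here is a hypothetical example, not the graded answer:

```python
# Hypothetical completed answer: the blank holds a clear, harmless
# question rather than an override command like 'Ignore previous instructions'.
safe_prompt = "Please answer clearly: What is the capital of France?"
print(safe_prompt)
```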
Task 2 — fill in the blank (medium)
Complete the code to detect whether a user input contains a prompt injection attempt.
Prompt Engineering / GenAI
if "[1]" in user_input.lower(): print('Possible injection detected')
💡 Hint — common mistakes:
Checking for harmless words like 'hello' or 'thank you' instead of injection phrases.
Not converting input to lowercase before checking.
Explanation: The phrase 'ignore previous instructions' is a common injection attempt to override AI behavior.
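Filling the blank with the phrase the explanation names, a completed sketch (wrapped in a function so it can be reused) might look like this:

```python
def detect_injection(user_input):
    # Lowercase first so mixed-case attempts like 'IGNORE Previous
    # Instructions' are still caught (the common mistake noted in the hint).
    if "ignore previous instructions" in user_input.lower():
        print("Possible injection detected")
        return True
    return False
```

For example, `detect_injection("Please IGNORE previous instructions")` flags the input, while an ordinary question does not.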
Task 3 — fill in the blank (hard)
Fix the error in the code that tries to sanitize user input to prevent prompt injection.
Prompt Engineering / GenAI
def sanitize_input(text): return text.replace('[1]', '')
💡 Hint — common mistakes:
Replacing only part of the phrase, which leaves injection commands in the input.
Not handling case sensitivity.
Explanation: Removing the exact phrase 'ignore previous instructions' helps prevent injection attempts that use this command.
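A completed sketch, using the phrase the explanation names. A plain `text.replace(...)` is case-sensitive (the mistake the hint warns about), so this version matches against a lowercased copy while removing from the original text:

```python
def sanitize_input(text):
    # Phrase to strip, per the explanation for this task.
    phrase = "ignore previous instructions"
    lowered = text.lower()
    # Remove every case-insensitive occurrence of the phrase.
    while phrase in lowered:
        start = lowered.index(phrase)
        text = text[:start] + text[start + len(phrase):]
        lowered = lowered[:start] + lowered[start + len(phrase):]
    return text
```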
Task 4 — fill in the blank (hard)
Fill both blanks to create a function that blocks injection by checking for dangerous keywords.
Prompt Engineering / GenAI
def is_safe(text):
    dangerous_keywords = ['[1]', '[2]']
    return not any(word in text.lower() for word in dangerous_keywords)
💡 Hint — common mistakes:
Including harmless words like 'hello' or 'thank you' as dangerous keywords.
Not converting text to lowercase before checking.
Explanation: The function checks for common injection phrases like 'ignore previous instructions' and 'delete all data' to block unsafe inputs.
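With the two phrases the explanation names dropped into the blanks, the completed function might look like:

```python
def is_safe(text):
    # Blanks filled with the phrases named in this task's explanation.
    dangerous_keywords = ['ignore previous instructions', 'delete all data']
    # Lowercase the input so mixed-case variants are also blocked.
    return not any(word in text.lower() for word in dangerous_keywords)
```

For example, `is_safe("Please DELETE all data")` returns False, while an ordinary question returns True.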
Task 5 — fill in the blank (hard)
Fill all three blanks to build a safe prompt that includes user input but prevents injection.
Prompt Engineering / GenAI
def build_prompt(user_text):
    safe_text = user_text.replace('[1]', '').replace('[2]', '')
    prompt = "Answer safely: [3]"
    return prompt.format(safe_text)
💡 Hint — common mistakes:
Not removing all dangerous phrases from user input.
Using unsafe string concatenation instead of placeholders.
Explanation: The function removes dangerous phrases and safely inserts the cleaned user input into the prompt using a placeholder.
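A completed sketch, filling the first two blanks with the dangerous phrases named in task 4's explanation and assuming the third blank is the `{}` placeholder that `str.format` expects:

```python
def build_prompt(user_text):
    # Strip the two dangerous phrases before building the prompt.
    safe_text = (user_text
                 .replace('ignore previous instructions', '')
                 .replace('delete all data', ''))
    # Insert the cleaned input via a placeholder, not raw concatenation
    # (the unsafe pattern the hint warns against).
    prompt = "Answer safely: {}"
    return prompt.format(safe_text)
```

Note that `str.replace` here is case-sensitive, matching the original template; a stricter version would also normalize case, as in task 3.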