Prompt Engineering / GenAI · ~20 mins

Why AI safety prevents misuse in Prompt Engineering / GenAI - Challenge Your Understanding

Challenge - 5 Problems
🎖️ AI Safety Mastery: get all challenges correct to earn this badge!
🧠 Conceptual · intermediate
Why is AI safety important to prevent misuse?

Imagine you have a smart assistant that can do many tasks. Why do we need AI safety rules to stop it from being used in harmful ways?

A. Because AI safety makes the AI ignore user commands to prevent misuse.
B. Because AI safety ensures the AI always follows ethical guidelines and avoids harmful actions.
C. Because AI safety allows the AI to learn without any limits or controls.
D. Because AI safety makes the AI faster and more powerful for all tasks.
💡 Hint

Think about how rules help keep things safe in real life, like traffic laws.

Model Choice · intermediate
Choosing a model to reduce misuse risk

You want to build an AI chatbot that avoids giving harmful advice. Which model choice helps reduce misuse risk?

A. A model that ignores user inputs and gives random answers.
B. A model trained only on large internet data without filtering harmful content.
C. A model trained with reinforcement learning from human feedback to follow safety guidelines.
D. A model trained to maximize engagement regardless of content safety.
💡 Hint

Think about how human feedback can teach AI to be safer.

Metrics · advanced
Measuring AI safety effectiveness

You want to check if your AI system is safe and not misused. Which metric best measures this?

A. The percentage of AI responses flagged as harmful or unsafe by users.
B. The AI model's training loss value after each epoch.
C. The total number of AI responses generated per second.
D. The AI system's CPU usage during inference.
💡 Hint

Think about how to detect harmful outputs from the AI.
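To make the flag-rate idea concrete, here is a minimal sketch of how such a metric could be computed. The `responses` list and its `"flagged"` field are hypothetical illustrations, not a real API; in practice the flags might come from user reports or an automated classifier.

```python
def harmful_flag_rate(responses):
    """Return the fraction of responses flagged as harmful or unsafe."""
    if not responses:
        return 0.0
    flagged = sum(1 for r in responses if r["flagged"])
    return flagged / len(responses)

# Hypothetical sample: two of four responses were flagged by users.
sample = [
    {"text": "Here is a recipe.", "flagged": False},
    {"text": "Instructions to hack...", "flagged": True},
    {"text": "The weather is sunny.", "flagged": False},
    {"text": "Try this exploit...", "flagged": True},
]
print(harmful_flag_rate(sample))  # 0.5
```

Tracking this rate over time (rather than training loss or throughput) directly measures whether the system is producing unsafe outputs.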

🔧 Debug · advanced
Debugging misuse in AI output filtering

Given this code snippet that filters harmful AI outputs, which option explains why harmful content still appears?

def filter_output(text):
    harmful_words = ['hack', 'attack', 'steal']
    for word in harmful_words:
        if word in text:
            return 'Content blocked due to safety.'
    return text

output = filter_output(ai_response)
A. The filter only checks the first word of the text for harmful content.
B. The filter blocks all outputs containing harmful words correctly.
C. The filter returns the original text even if harmful words are found.
D. The filter misses harmful words if they appear with different letter cases like 'Hack' or 'ATTACK'.
💡 Hint

Think about how text matching works with uppercase and lowercase letters.
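Once you have worked out the bug, one possible fix is to normalize case before matching, as in this sketch (keyword filters remain brittle either way; this only addresses the case-sensitivity issue the hint points at):

```python
def filter_output(text):
    harmful_words = ['hack', 'attack', 'steal']
    # Lowercasing the text makes matching case-insensitive,
    # so 'Hack' and 'ATTACK' are caught as well as 'hack'.
    lowered = text.lower()
    for word in harmful_words:
        if word in lowered:
            return 'Content blocked due to safety.'
    return text

print(filter_output('How to Hack a server'))  # Content blocked due to safety.
```

Note that substring matching still over-blocks (e.g. 'hack' inside 'hackathon'), which is one reason production systems use trained safety classifiers rather than word lists.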

Hyperparameter · expert
Adjusting hyperparameters to improve AI safety

You train a language model and want to reduce the chance it generates harmful content. Which hyperparameter adjustment helps most?

A. Lowering the temperature value during text generation to make outputs more focused and less random.
B. Increasing the learning rate to speed up training and produce more diverse outputs.
C. Increasing the batch size to use more data at once without changing output randomness.
D. Setting the dropout rate to zero to prevent any neuron from being ignored.
💡 Hint

Think about how randomness affects the safety of generated text.
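To see why temperature matters, here is a small sketch of temperature-scaled softmax sampling probabilities. The logit values are made up for illustration; the point is that a lower temperature concentrates probability on the most likely token, reducing the chance of sampling a rare (potentially unsafe) continuation.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to sampling probabilities at a given temperature.
    Lower temperature sharpens the distribution toward the top token."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 1.0))  # fairly spread out
print(softmax_with_temperature(logits, 0.2))  # top token dominates
```

This is why option A is the relevant adjustment: learning rate, batch size, and dropout shape training, but temperature directly controls output randomness at generation time.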