Prompt Engineering / GenAI · ~20 mins

Bias in generative models in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Badge: Bias Buster in Generative AI (answer all five challenges correctly to earn it)
🧠 Conceptual · intermediate
Understanding Bias Sources in Generative Models

Which of the following is the most common source of bias in generative AI models?

A. The hardware used to train the model
B. The choice of activation function in the neural network
C. The training data containing unbalanced or stereotypical examples
D. The programming language used to implement the model
💡 Hint

Think about what influences the model's learned patterns the most.
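To see why the training data dominates, here is a toy sketch with made-up counts (not real corpus statistics): a model that samples completions in proportion to their training frequency simply reproduces the data's skew.

```python
# Toy sketch: a frequency-based "model" inherits its training data's skew.
from collections import Counter

# Hypothetical, deliberately imbalanced corpus of completions.
training_completions = ["she"] * 8 + ["he"] * 2

counts = Counter(training_completions)
probs = {w: c / len(training_completions) for w, c in counts.items()}
print(probs)  # {'she': 0.8, 'he': 0.2}
```

No change to hardware, activation functions, or language would alter these probabilities; only the data distribution would.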

Predict Output · intermediate
Output of a Biased Text Generation Example

Given a generative model trained on biased data, what is the most likely output of this prompt?

Prompt: "The nurse said that"
def generate_text(prompt):
    # Simulated biased output based on stereotypical training data
    if prompt == "The nurse said that":
        return "she will take care of you soon."
    else:
        return "No output."

output = generate_text("The nurse said that")
print(output)
A. "she will take care of you soon."
B. "the data is missing."
C. "they will arrive later."
D. "he will fix the machine."
💡 Hint

Consider common gender stereotypes in training data.

Metrics · advanced
Evaluating Bias with Fairness Metrics

Which metric is best suited to measure bias in a generative model's output across different demographic groups?

A. Demographic Parity Difference
B. Mean Squared Error
C. Cross-Entropy Loss
D. BLEU Score
💡 Hint

Look for a metric that compares output distributions between groups.
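For intuition, Demographic Parity Difference can be sketched as the gap in positive-outcome rates between groups. The group names, outputs, and "positive" label below are hypothetical:

```python
# Sketch: demographic parity difference over generated outputs.
# For each group, compute the rate of a "positive" outcome, then
# take the gap between the highest and lowest group rates.

def demographic_parity_difference(outputs_by_group, is_positive):
    """Absolute gap between per-group rates of a positive outcome."""
    rates = {
        group: sum(is_positive(o) for o in outputs) / len(outputs)
        for group, outputs in outputs_by_group.items()
    }
    return max(rates.values()) - min(rates.values())

# Toy data: outputs a model produced for prompts about two groups.
outputs = {
    "group_a": ["hired", "hired", "rejected", "hired"],   # rate 0.75
    "group_b": ["rejected", "hired", "rejected", "rejected"],  # rate 0.25
}
gap = demographic_parity_difference(outputs, lambda o: o == "hired")
print(gap)  # 0.75 - 0.25 = 0.5
```

A gap of 0 means parity; larger values indicate the model treats the groups differently. MSE, cross-entropy, and BLEU measure accuracy or fluency, not group-level disparity.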

🔧 Debug · advanced
Identifying Bias Amplification in Model Output

Consider a generative model that produces the following outputs for the prompt "The CEO is":

["a man", "a woman", "a man", "a man", "a woman"]

What is the bias amplification issue here?

A. The model output is random and unbiased
B. The model under-represents male CEOs compared to the real-world distribution
C. The model output is perfectly balanced
D. The model over-represents male CEOs compared to the real-world distribution
💡 Hint

Think about how the model exaggerates existing biases.
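One simple way to quantify this is to compare the generated rate of an attribute against a reference rate. The sketch below uses an arbitrary parity baseline of 0.5 as the reference, purely for illustration, not a sourced real-world statistic:

```python
# Sketch: amplification = generated rate of an attribute minus a
# chosen reference rate (here a hypothetical 0.5 parity baseline).

def bias_amplification(generations, attribute, reference_rate):
    """Generated rate of `attribute` minus its reference rate."""
    generated_rate = generations.count(attribute) / len(generations)
    return generated_rate - reference_rate

# The outputs from the problem above.
outputs = ["a man", "a woman", "a man", "a man", "a woman"]
amplification = bias_amplification(outputs, "a man", reference_rate=0.5)
print(round(amplification, 2))  # 0.6 - 0.5 = 0.1
```

A positive value means the model skews toward the attribute relative to the chosen baseline; the sign and size depend entirely on which reference distribution you compare against.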

Model Choice · expert
Choosing a Model Architecture to Reduce Bias

You want to build a generative model that reduces bias in text generation. Which approach is most effective?

A. Use a simple RNN model trained on unfiltered data
B. Use a conditional generation model with fairness constraints during training
C. Use a model trained only on biased data but with more epochs
D. Use a larger transformer model without any bias mitigation
💡 Hint

Consider methods that actively reduce bias during learning.
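Option B can be sketched as adding a fairness penalty to the training objective, so that shrinking the parity gap between groups is part of what the optimizer minimizes. All numbers below are illustrative, not from a real training run:

```python
# Sketch: a fairness-constrained objective. The model is penalized in
# proportion to the demographic parity gap, so training trades off
# task accuracy against equalizing outcome rates across groups.

def fairness_penalized_loss(task_loss, group_positive_rates, lam=1.0):
    """task_loss plus lam times the demographic parity gap."""
    gap = max(group_positive_rates) - min(group_positive_rates)
    return task_loss + lam * gap

# Toy numbers: one batch's task loss plus the current parity gap.
loss = fairness_penalized_loss(task_loss=0.82,
                               group_positive_rates=[0.70, 0.40],
                               lam=0.5)
print(round(loss, 2))  # 0.82 + 0.5 * 0.30 = 0.97
```

By contrast, more epochs on biased data (option C) or a larger model without mitigation (option D) optimize only the task loss, so the learned bias is untouched or reinforced.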