ML Python · ~20 mins

Bias detection and mitigation in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
Problem 1: Conceptual (intermediate)
Understanding Bias Types in Machine Learning

Which of the following best describes sampling bias in a dataset?

A. The model predictions are skewed because of incorrect feature scaling.
B. The model favors one class over another because of imbalanced class labels during training.
C. The dataset contains data points that are not representative of the real population due to how samples were collected.
D. The dataset has missing values that cause errors during training.
💡 Hint

Think about how the data was gathered and if it truly reflects the whole group you want to study.
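One common way to act on this hint is to compare the demographic mix of a collected sample against known population shares. The sketch below does exactly that; the population shares and sample counts are made-up illustrations, not real census or survey figures.

```python
# Hypothetical example: compare a sample's group shares against assumed
# population shares to flag groups the sampling process underrepresented.
population_share = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}  # assumed
sample_counts = {"urban": 720, "suburban": 250, "rural": 30}          # assumed

total = sum(sample_counts.values())
for group, pop_share in population_share.items():
    sample_share = sample_counts[group] / total
    ratio = sample_share / pop_share
    flag = " <- underrepresented" if ratio < 0.8 else ""
    print(f"{group}: sample {sample_share:.2f} vs population {pop_share:.2f}{flag}")
```

With these made-up numbers, the rural group appears at 3% of the sample despite making up 15% of the population, which is the kind of collection-driven gap option C describes. The 0.8 cutoff is an arbitrary illustration threshold.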

Problem 2: Predict Output (intermediate)
Detecting Bias with Statistical Parity Difference

Given the following Python code calculating statistical parity difference, what is the printed output?

Python:
group_0_positive_rate = 0.7
group_1_positive_rate = 0.5
statistical_parity_difference = group_0_positive_rate - group_1_positive_rate
print(f"Statistical Parity Difference: {statistical_parity_difference}")
A. Statistical Parity Difference: 0.2
B. Statistical Parity Difference: -0.2
C. Statistical Parity Difference: 1.2
D. Statistical Parity Difference: 0.0
💡 Hint

Follow the code: it subtracts group 1's positive rate from group 0's positive rate.
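In practice the two positive rates in the snippet would come from per-group predictions rather than hard-coded constants. A minimal sketch, using made-up prediction arrays chosen so the rates match the problem's 0.7 and 0.5:

```python
# Hypothetical binary predictions for two groups (illustrative data only).
group_0_preds = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]  # 7 positives out of 10
group_1_preds = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]  # 5 positives out of 10

# Positive rate = fraction of the group predicted positive.
rate_0 = sum(group_0_preds) / len(group_0_preds)
rate_1 = sum(group_1_preds) / len(group_1_preds)
spd = rate_0 - rate_1
print(f"Statistical Parity Difference: {spd:.1f}")  # prints 0.2
```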

Problem 3: Model Choice (advanced)
Choosing a Model to Mitigate Bias

You want to reduce bias in a classification task where sensitive attributes affect predictions. Which model approach is best to mitigate bias during training?

A. Train a standard logistic regression without any fairness constraints.
B. Use a decision tree with maximum depth to fit training data perfectly.
C. Train a model only on the sensitive attribute to predict the target.
D. Use an adversarial debiasing model that learns to predict while minimizing sensitive attribute information.
💡 Hint

Think about a model that tries to hide sensitive information while learning.
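The structure behind option D can be sketched with plain NumPy: a predictor learns y from X while an adversary tries to recover the sensitive attribute s from the predictor's output, and the predictor is rewarded for defeating the adversary. Everything here (data, step sizes, the adversarial weight alpha) is a made-up illustration of the training loop, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 3
s = rng.integers(0, 2, n)                   # sensitive attribute (0 or 1)
X = rng.normal(size=(n, d)) + s[:, None]    # features that leak s
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)        # predictor weights
v, c = 0.1, 0.0        # adversary weights (sees only the prediction p)
alpha, lr = 1.0, 0.1   # adversarial strength, learning rate (illustrative)

for _ in range(500):
    p = sigmoid(X @ w)        # predictor output
    q = sigmoid(v * p + c)    # adversary's guess of s from p
    # Adversary step: minimize BCE(s, q) by gradient descent.
    v -= lr * np.mean((q - s) * p)
    c -= lr * np.mean(q - s)
    # Predictor step: minimize BCE(y, p) MINUS alpha * adversary loss,
    # i.e. stay accurate on y while making p uninformative about s.
    grad_task = X.T @ (p - y) / n
    grad_adv = X.T @ ((q - s) * v * p * (1 - p)) / n
    w -= lr * (grad_task - alpha * grad_adv)

print("predictor weights:", np.round(w, 2))
```

This is the same min-max idea as adversarial debiasing frameworks such as the one in AIF360, just reduced to two logistic models so the gradient-reversal mechanics are visible.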

Problem 4: Hyperparameter (advanced)
Hyperparameter Impact on Bias Mitigation

In a fairness-aware model using a regularization term to penalize bias, which hyperparameter adjustment will most likely reduce bias?

A. Increase the regularization strength to penalize bias more heavily.
B. Decrease the regularization strength to allow more model flexibility.
C. Increase the learning rate to speed up training.
D. Decrease the batch size to improve gradient estimates.
💡 Hint

Think about how stronger penalties affect bias in the model.
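The effect in option A can be demonstrated directly. The sketch below trains a logistic model with loss BCE + lam * SPD², where SPD is the statistical parity difference of the model's probabilities; raising lam should shrink |SPD|. The synthetic data and all settings (lam values, learning rate, iteration count) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 600, 2
s = rng.integers(0, 2, n)                       # sensitive attribute
X = rng.normal(size=(n, d)) + 1.5 * s[:, None]  # features correlated with s
y = (X[:, 0] > 0.75).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, iters=2000, lr=0.2):
    """Gradient descent on BCE + lam * SPD^2; returns final SPD."""
    w = np.zeros(d)
    m0, m1 = s == 0, s == 1
    for _ in range(iters):
        p = sigmoid(X @ w)
        spd = p[m0].mean() - p[m1].mean()
        grad_bce = X.T @ (p - y) / n
        dp = p * (1 - p)  # derivative of sigmoid, used by the chain rule
        dspd = (X[m0].T @ dp[m0]) / m0.sum() - (X[m1].T @ dp[m1]) / m1.sum()
        w -= lr * (grad_bce + 2 * lam * spd * dspd)
    p = sigmoid(X @ w)
    return p[m0].mean() - p[m1].mean()

spd_plain = train(lam=0.0)
spd_fair = train(lam=5.0)
print(f"|SPD| without penalty: {abs(spd_plain):.3f}, with penalty: {abs(spd_fair):.3f}")
```

The stronger penalty trades some task accuracy for a smaller parity gap, which is exactly the dial the question is asking about.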

Problem 5: Metrics (expert)
Interpreting Equalized Odds Metric

You have two groups in your dataset. The true positive rates (TPR) and false positive rates (FPR) for each group are:

  • Group A: TPR=0.8, FPR=0.1
  • Group B: TPR=0.6, FPR=0.1

Does this model satisfy equalized odds fairness?

A. Yes, because both groups have the same false positive rate.
B. No, because the true positive rates differ between groups.
C. Yes, because the average of TPR and FPR is equal.
D. No, because the false positive rates differ between groups.
💡 Hint

Equalized odds requires both TPR and FPR to be equal across groups.
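The check in this problem, spelled out in code using the rates from the prompt:

```python
# Per-group rates copied from the problem statement.
rates = {
    "A": {"TPR": 0.8, "FPR": 0.1},
    "B": {"TPR": 0.6, "FPR": 0.1},
}

# Equalized odds holds only if BOTH gaps are zero.
tpr_gap = abs(rates["A"]["TPR"] - rates["B"]["TPR"])
fpr_gap = abs(rates["A"]["FPR"] - rates["B"]["FPR"])
satisfies_equalized_odds = tpr_gap == 0 and fpr_gap == 0
print(f"TPR gap: {tpr_gap:.1f}, FPR gap: {fpr_gap:.1f}")
print("Equalized odds satisfied:", satisfies_equalized_odds)
```

The FPRs match but the TPR gap is 0.2, so equalized odds fails. With estimated rates from real data you would compare gaps against a small tolerance rather than exact zero.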