
Fairness metrics in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Equal Opportunity in Fairness Metrics

Which statement best describes the Equal Opportunity fairness metric in machine learning?

A. It requires that the overall accuracy is equal across different groups.
B. It requires that the false positive rates are equal across different groups.
C. It requires that the true positive rates are equal across different groups.
D. It requires that the predicted positive rates are equal across different groups.
💡 Hint

Think about which error type Equal Opportunity focuses on balancing.
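As a self-check, Equal Opportunity compares true positive rates (TPR = TP / (TP + FN)) across groups. A minimal sketch, using made-up per-group counts for illustration:

```python
def true_positive_rate(tp, fn):
    # TPR (recall) = TP / (TP + FN)
    return tp / (tp + fn)

# Hypothetical confusion-matrix counts for two groups
tpr_a = true_positive_rate(tp=40, fn=5)   # Group A
tpr_b = true_positive_rate(tp=30, fn=10)  # Group B

# Equal Opportunity holds (approximately) when this gap is near zero
tpr_gap = abs(tpr_a - tpr_b)
```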

Predict Output · intermediate
Output of Fairness Metric Calculation

Given the following confusion matrices for two groups, what is the difference in false positive rates (FPR) between Group A and Group B?

Group A: TP=40, FP=10, TN=50, FN=5
Group B: TP=30, FP=20, TN=40, FN=10
A. 0.17
B. 0.20
C. 0.10
D. 0.25
💡 Hint

FPR = FP / (FP + TN). Calculate for each group and subtract.
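The hint can be worked through directly in code, plugging in the two confusion matrices from the problem statement:

```python
def false_positive_rate(fp, tn):
    # FPR = FP / (FP + TN)
    return fp / (fp + tn)

fpr_a = false_positive_rate(fp=10, tn=50)  # Group A: 10 / 60
fpr_b = false_positive_rate(fp=20, tn=40)  # Group B: 20 / 60

# Absolute difference in false positive rates between the groups
diff = abs(fpr_a - fpr_b)
```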

Model Choice · advanced
Choosing a Model to Optimize Demographic Parity

You want to build a classifier that satisfies Demographic Parity. Which model choice below is most likely to help achieve this?

A. A model trained with a fairness constraint that equalizes predicted positive rates across groups.
B. A model trained to minimize false negative rates only.
C. A model trained to maximize overall accuracy without constraints.
D. A model trained with a loss function ignoring group membership.
💡 Hint

Demographic Parity focuses on equalizing positive predictions, not just accuracy.
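Whatever model you train, Demographic Parity itself is easy to measure: compare the share of positive predictions within each group. A minimal sketch, with made-up predictions and group labels for illustration:

```python
def predicted_positive_rate(y_pred, group, g):
    # Share of positive predictions among members of group g
    members = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(members) / len(members)

# Illustrative binary predictions and group memberships
y_pred = [1, 0, 1, 1, 0, 1]
group  = [0, 0, 0, 1, 1, 1]

rate_priv   = predicted_positive_rate(y_pred, group, 0)
rate_unpriv = predicted_positive_rate(y_pred, group, 1)
# Demographic Parity holds when these two rates are (approximately) equal
```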

Metrics · advanced
Interpreting Disparate Impact Metric

What does a Disparate Impact value of 0.6 indicate about a model's fairness?

A. The model's false positive rate is 60% higher for the privileged group.
B. The model favors the unprivileged group by predicting positives more often.
C. The model has 60% accuracy on the unprivileged group.
D. The model predicts positive outcomes for the unprivileged group at 60% the rate of the privileged group.
💡 Hint

Disparate Impact compares positive prediction rates between groups.
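Disparate Impact is the ratio of the unprivileged group's positive prediction rate to the privileged group's, so a value of 0.6 means the unprivileged group receives positive predictions at 60% of the privileged group's rate. A minimal sketch with illustrative rates (the 0.8 cutoff below is the common "80% rule" screening threshold):

```python
def disparate_impact(rate_unpriv, rate_priv):
    # Ratio of positive prediction rates: unprivileged / privileged
    return rate_unpriv / rate_priv

# Illustrative rates: unprivileged 30% positive, privileged 50% positive
di = disparate_impact(rate_unpriv=0.30, rate_priv=0.50)

# Values below 0.8 fail the common "80% rule" screening check
fails_80_percent_rule = di < 0.8
```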

🔧 Debug · expert
Debugging Fairness Metric Calculation Code

What error will this Python code raise when calculating the statistical parity difference?

def stat_parity_difference(y_pred, group):
    pos_rate_priv = sum(y_pred[group == 0]) / len(y_pred[group == 0])
    pos_rate_unpriv = sum(y_pred[group == 1]) / len(y_pred[group == 1])
    return pos_rate_unpriv - pos_rate_priv

# y_pred = [1, 0, 1, 1], group = [0, 1, 0]
A. No error; the function returns a float value.
B. TypeError because boolean indexing is not supported on lists.
C. IndexError due to mismatched lengths of y_pred and group arrays.
D. ZeroDivisionError because one group has zero members.
💡 Hint

Check the data types and operations used for indexing.
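For reference, the mask-style indexing in the snippet does work once the inputs are NumPy arrays (and the two arrays have matching lengths, unlike the buggy example). A hedged sketch of a corrected version:

```python
import numpy as np

def stat_parity_difference(y_pred, group):
    # Convert to arrays so that `group == 0` yields a boolean mask
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    # Mean of a 0/1 prediction array is the positive prediction rate
    pos_rate_priv = y_pred[group == 0].mean()
    pos_rate_unpriv = y_pred[group == 1].mean()
    return pos_rate_unpriv - pos_rate_priv

# Note: y_pred and group must be the same length, unlike in the question
spd = stat_parity_difference([1, 0, 1, 1], [0, 1, 0, 1])
```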