ML Python (~10 mins)

Bias detection and mitigation in ML Python - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1 - Fill in the blank (easy)

Complete the code to calculate the demographic parity difference.

ML Python
demographic_parity_diff = abs(P(y_pred = 1 | [1] = 0) - P(y_pred = 1 | [1] = 1))
A. target
B. sensitive_attribute
C. feature
D. prediction
Common Mistakes
Using the target variable instead of the sensitive attribute.
Using the prediction variable instead of the sensitive attribute.
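For reference, a minimal worked example of the completed metric on toy data (the prediction and group arrays below are invented purely for illustration):

```python
import numpy as np

# Toy model predictions and a binary sensitive attribute (illustrative data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive_attribute = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Estimate P(y_pred = 1 | sensitive_attribute = g) as each group's mean prediction.
rate_g0 = y_pred[sensitive_attribute == 0].mean()  # 3/4 = 0.75
rate_g1 = y_pred[sensitive_attribute == 1].mean()  # 1/4 = 0.25

# 0 means both groups receive positive predictions at the same rate.
demographic_parity_diff = abs(rate_g0 - rate_g1)  # 0.5
```

Note that the conditioning variable is the sensitive attribute, not the target or the prediction, which is exactly the trap the two common mistakes describe.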
Task 2 - Fill in the blank (medium)

Complete the code to split data into training and test sets with stratification on the sensitive attribute.

ML Python
train_test_split(X, y, test_size=0.3, stratify=[1], random_state=42)
A. sensitive_attribute
B. X
C. y
D. random_state
Common Mistakes
Stratifying on the target variable instead of the sensitive attribute.
Not stratifying at all, causing imbalance.
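A runnable sketch of the completed call, using scikit-learn's `train_test_split` on synthetic data (the shapes and the 80/20 group ratio are assumptions made for the example):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
sensitive_attribute = np.array([0] * 80 + [1] * 20)  # imbalanced 80/20 groups

# Stratifying on the sensitive attribute preserves the 80/20 group ratio
# in both the training and the test split.
X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive_attribute, test_size=0.3,
    stratify=sensitive_attribute, random_state=42)
```

With 100 samples and `test_size=0.3`, the test split holds 30 samples, 6 of them (20%) from group 1, matching the overall group proportions.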
Task 3 - Fill in the blank (hard)

Fix the error in the fairness metric calculation by completing the code.

ML Python
equal_opportunity_diff = abs(TPR_[1] - TPR_group1)
A. group0
B. group1
C. positive
D. negative
Common Mistakes
Using the same group for both TPRs.
Using positive or negative instead of group names.
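Equal opportunity compares true positive rates across groups, so each TPR must come from a different group. A small self-contained example (labels, predictions, and group assignments are made up for illustration):

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    # TPR = TP / (TP + FN): fraction of actual positives predicted positive.
    positives = y_true == 1
    return y_pred[positives].mean()

y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 0, 1])

TPR_group0 = true_positive_rate(y_true[group == 0], y_pred[group == 0])  # 0.75
TPR_group1 = true_positive_rate(y_true[group == 1], y_pred[group == 1])  # 0.5

equal_opportunity_diff = abs(TPR_group0 - TPR_group1)  # 0.25
```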
Task 4 - Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that filters features with importance above a threshold.

ML Python
important_features = {feature: importance for feature, importance in feature_importances.items() if importance [1] [2]}
A. >
B. 0.05
C. <
D. 0.1
Common Mistakes
Using '<' instead of '>' causing wrong filtering.
Using too high or too low threshold.
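The completed comprehension keeps only features whose importance exceeds the threshold. For example (the feature names and importance values are invented):

```python
# Illustrative feature importances, e.g. from a fitted tree-based model.
feature_importances = {"age": 0.30, "income": 0.08, "zip_code": 0.02, "tenure": 0.04}

# Keep only features with importance strictly above the 0.05 threshold.
important_features = {feature: importance
                      for feature, importance in feature_importances.items()
                      if importance > 0.05}
# → {"age": 0.30, "income": 0.08}
```

Note that `>` keeps the important features; flipping it to `<` would keep exactly the ones you meant to discard.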
Task 5 - Fill in the blank (hard)

Fill all three blanks to create a fairness-aware model training pipeline.

ML Python
model = [1](sensitive_features=[2])
model.fit(X_train, y_train)
predictions = model.[3](X_test)
A. FairClassifier
B. sensitive_attribute
C. predict
D. RandomForestClassifier
Common Mistakes
Using a standard classifier without fairness parameters.
Using fit instead of predict for predictions.
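`FairClassifier` in this task is a teaching stand-in rather than a real library class, so the sketch below invents a minimal version of that API: a classifier that scores samples with a trivial rule and then picks a separate decision threshold per sensitive group so that both groups receive similar positive-prediction rates (one simple demographic-parity mitigation). The scoring rule, the class itself, and the extra group argument to `predict` are all assumptions of this sketch.

```python
import numpy as np

class FairClassifier:
    """Hypothetical fairness-aware classifier, sketched for this exercise only.

    Scores samples with a trivial rule (mean of the features, standing in for
    a real model) and picks one decision threshold per sensitive group so each
    group's positive-prediction rate matches the overall base rate.
    """

    def __init__(self, sensitive_features):
        self.sensitive_features = np.asarray(sensitive_features)

    def fit(self, X_train, y_train):
        scores = X_train.mean(axis=1)     # stand-in for a real model's scores
        target_rate = y_train.mean()      # positive rate each group should match
        self.thresholds_ = {
            g: np.quantile(scores[self.sensitive_features == g], 1 - target_rate)
            for g in np.unique(self.sensitive_features)
        }
        return self

    def predict(self, X_test, sensitive_features_test):
        # Unlike the quiz snippet, this sketch also needs the test-set groups.
        scores = X_test.mean(axis=1)
        groups = np.asarray(sensitive_features_test)
        return np.array([int(s >= self.thresholds_[g])
                         for s, g in zip(scores, groups)])

# Usage mirroring the task's pipeline, on toy data where group 1 scores
# higher on average (and would be favored without the per-group thresholds).
rng = np.random.default_rng(0)
group = np.array([0] * 50 + [1] * 50)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0, 1] * 50)

model = FairClassifier(sensitive_features=group)
model.fit(X, y)
predictions = model.predict(X, group)
```

This is only one mitigation idea (threshold post-processing); the broader point of the task stands: a standard classifier such as `RandomForestClassifier` has no notion of sensitive features, so fairness constraints must be added somewhere in the pipeline.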