Complete the code to calculate the demographic parity difference.
demographic_parity_diff = abs(P(y_pred = 1 | [1] = 0) - P(y_pred = 1 | [1] = 1))
The demographic parity difference measures the difference in positive prediction rates between groups defined by the sensitive attribute.
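A worked version of this metric can be sketched as follows; the array names `y_pred` and `sensitive` are assumptions, with the sensitive attribute encoded as 0/1.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the
    two groups encoded 0/1 in `sensitive`."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # P(y_pred = 1 | group = 0)
    rate_1 = y_pred[sensitive == 1].mean()  # P(y_pred = 1 | group = 1)
    return abs(rate_0 - rate_1)

# Small illustrative arrays: group 0 gets positives at rate 2/3,
# group 1 at rate 1/3, so the difference is 1/3.
y_pred = np.array([1, 1, 0, 1, 0, 0])
sensitive = np.array([0, 0, 0, 1, 1, 1])
dpd = demographic_parity_difference(y_pred, sensitive)
```

A value of 0 means both groups receive positive predictions at the same rate.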
Complete the code to split data into training and test sets with stratification on the sensitive attribute.
train_test_split(X, y, test_size=0.3, stratify=[1], random_state=42)
Stratifying on the sensitive attribute ensures that both training and test sets have similar distributions of that attribute.
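A minimal sketch of such a split, assuming the sensitive attribute is a separate 0/1 array named `sensitive` (the data here is synthetic, for illustration only):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
sensitive = np.array([0] * 70 + [1] * 30)  # 70/30 group split

# stratify=sensitive preserves the 70/30 group ratio in both splits;
# passing `sensitive` as a third array splits it alongside X and y.
X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, stratify=sensitive, random_state=42
)
```

Note that stratification here is on the sensitive attribute, not on the label `y`; if balanced labels are also needed, one common workaround is to stratify on a combined label-group key.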
Fix the error in the fairness metric calculation by completing the code.
equal_opportunity_diff = abs(TPR_[1] - TPR_group1)
True Positive Rate (TPR) should be compared between two groups, here group0 and group1; the code needs to specify the first group in the first TPR.
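One possible implementation of this metric, assuming 0/1 labels and a 0/1 sensitive attribute (all names are assumptions):

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN): fraction of actual positives predicted positive."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Absolute TPR gap between the groups encoded 0/1 in `sensitive`."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    tpr_group0 = true_positive_rate(y_true[sensitive == 0], y_pred[sensitive == 0])
    tpr_group1 = true_positive_rate(y_true[sensitive == 1], y_pred[sensitive == 1])
    return abs(tpr_group0 - tpr_group1)
```

Unlike demographic parity, this metric conditions on the true label, so it only compares error behavior on actual positives.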
Fill both blanks to create a dictionary comprehension that filters features with importance above a threshold.
important_features = {feature: importance for feature, importance in feature_importances.items() if importance [1] [2]}
This comprehension selects features whose importance is greater than 0.05.
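A filled-in version of the comprehension, using the 0.05 threshold stated in the explanation; `feature_importances` here is a made-up sample dict:

```python
# Hypothetical importances, e.g. from a fitted tree-based model.
feature_importances = {"age": 0.30, "zip": 0.02, "income": 0.12, "id": 0.001}

# Keep only features whose importance exceeds the threshold.
important_features = {
    feature: importance
    for feature, importance in feature_importances.items()
    if importance > 0.05
}
# Only "age" and "income" survive the filter.
```
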
Fill all three blanks to create a fairness-aware model training pipeline.
model = [1](sensitive_features=[2])
model.fit(X_train, y_train)
predictions = model.[3](X_test)
FairClassifier is initialized with sensitive features, then trained and used to predict on test data.
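`FairClassifier` as used in this exercise is not a standard library class; the sketch below is a minimal hypothetical stand-in with the interface the exercise assumes (sensitive features at construction, then `fit`/`predict`), wrapping a plain logistic regression and not enforcing any actual fairness constraint. All data and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class FairClassifier:
    """Hypothetical stand-in matching the exercise's interface."""

    def __init__(self, sensitive_features):
        self.sensitive_features = sensitive_features  # stored for reference only
        self._base = LogisticRegression()

    def fit(self, X, y):
        # A real fairness-aware trainer would apply a constraint or
        # reweighting here using self.sensitive_features.
        self._base.fit(X, y)
        return self

    def predict(self, X):
        return self._base.predict(X)

# Synthetic data, for illustration only.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(80, 4))
y_train = rng.integers(0, 2, size=80)
X_test = rng.normal(size=(20, 4))

model = FairClassifier(sensitive_features=(X_train[:, 0] > 0).astype(int))
model.fit(X_train, y_train)
predictions = model.predict(X_test)
```

For comparison, real fairness toolkits often take the sensitive attribute elsewhere: fairlearn's mitigation APIs, for example, pass `sensitive_features` to `fit` rather than to the constructor.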