Complete the code to calculate demographic parity difference using the AIF360 library.
from aif360.metrics import BinaryLabelDatasetMetric
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=[{'sex': 1}], unprivileged_groups=[{'sex': 0}])
AIF360 expects each group as a list of attribute-value dictionaries, not bare values. The privileged group is typically encoded as {'sex': 1} (e.g., male) and the unprivileged group as {'sex': 0}. This metric object is then used to compute fairness metrics such as the demographic parity (statistical parity) difference.
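To see what the metric measures, here is a minimal hand-rolled sketch of the demographic parity (statistical parity) difference, assuming binary favorable labels and a binary 'sex' attribute with 1 as the privileged value; the sample data is made up for illustration.

```python
# Illustrative data: favorable outcome = label 1, privileged group = sex 1.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
sex    = [1, 1, 1, 1, 0, 0, 0, 0]

def positive_rate(labels, groups, group_value):
    """Fraction of favorable outcomes (label == 1) within one group."""
    members = [l for l, g in zip(labels, groups) if g == group_value]
    return sum(members) / len(members)

# P(y=1 | unprivileged) - P(y=1 | privileged); 0 indicates parity.
dp_diff = positive_rate(labels, sex, 0) - positive_rate(labels, sex, 1)
print(dp_diff)  # 0.25 - 0.75 = -0.5
```

A negative value means the unprivileged group receives favorable outcomes less often; AIF360's statistical_parity_difference() follows the same unprivileged-minus-privileged convention.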
Complete the code to compute equal opportunity difference metric.
equal_opportunity_diff = metric.equal_opportunity_difference()
The method equal_opportunity_difference() computes the difference in true positive rates between the unprivileged and privileged groups. Note that in AIF360 it is defined on ClassificationMetric (which also takes a dataset of predictions), not on BinaryLabelDatasetMetric, since true positive rates require predicted labels.
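The underlying computation can be sketched without the library: equal opportunity difference is the gap in true positive rates between groups. The labels, predictions, and group assignments below are illustrative only.

```python
# Illustrative true labels, model predictions, and group membership
# (group 1 = privileged).
y_true = [1, 1, 0, 1, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]

def tpr(y_true, y_pred, group, value):
    """True positive rate within one group: P(pred=1 | true=1, group)."""
    pairs = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == value]
    positives = [p for t, p in pairs if t == 1]
    return sum(positives) / len(positives)

# TPR(unprivileged) - TPR(privileged); 0 indicates equal opportunity.
eo_diff = tpr(y_true, y_pred, group, 0) - tpr(y_true, y_pred, group, 1)
```

Here the privileged group's TPR is 2/3 and the unprivileged group's is 1/3, so the difference is about -0.33, signaling that the model finds true positives less reliably for the unprivileged group.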
Fix the error in the code to calculate disparate impact.
disparate_impact = metric.disparate_impact()
The correct method name is disparate_impact(), lowercase with an underscore. It returns the ratio of favorable-outcome rates for the unprivileged group versus the privileged group, so values near 1 indicate parity.
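As with the difference metrics, the ratio can be sketched directly; the "four-fifths rule" of thumb flags values below 0.8 as potentially discriminatory. The data below is illustrative.

```python
# Illustrative data: favorable outcome = label 1, privileged group = 1.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]

def selection_rate(labels, group, value):
    """Fraction of favorable outcomes within one group."""
    selected = [l for l, g in zip(labels, group) if g == value]
    return sum(selected) / len(selected)

# Ratio of unprivileged to privileged selection rates; 1 indicates parity.
di = selection_rate(labels, group, 0) / selection_rate(labels, group, 1)
print(di)  # 0.25 / 0.75, roughly 0.33
```

A ratio of roughly 0.33 falls well below the 0.8 threshold, so this toy dataset would fail the four-fifths rule.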
Fill both blanks to create a fairness metric object for unprivileged and privileged groups.
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=[{'gender': 1}], unprivileged_groups=[{'gender': 0}])
The privileged group is typically encoded as {'gender': 1} and the unprivileged group as {'gender': 0}; each group is passed as a list of attribute-value dictionaries so that fairness can be measured across gender.
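To make the group-specification format concrete, here is a small sketch of how a list of attribute dicts like [{'gender': 1}] selects matching records; the records and the in_group helper are hypothetical, not part of the AIF360 API.

```python
# Hypothetical records with a binary 'gender' attribute.
records = [
    {'gender': 1, 'label': 1},
    {'gender': 1, 'label': 1},
    {'gender': 0, 'label': 0},
    {'gender': 0, 'label': 1},
]

def in_group(record, group_spec):
    """True if the record matches any attribute dict in the group spec."""
    return any(all(record.get(k) == v for k, v in g.items())
               for g in group_spec)

privileged = [r for r in records if in_group(r, [{'gender': 1}])]
unprivileged = [r for r in records if in_group(r, [{'gender': 0}])]
```

Each dict in the list is one conjunction of attribute constraints, and the list as a whole is a union, which is why AIF360 accepts groups as lists of dicts rather than bare values.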
Fill all three blanks to create a dictionary comprehension that filters dataset features by threshold.
filtered_features = {feature: value for feature, value in dataset.features.items() if value > threshold and feature != 'age' and value >= 0}
The comprehension keeps features whose values are greater than threshold ([1] is >), excludes the 'age' feature ([2] is !=), and requires values greater than or equal to zero ([3] is >=). (Note: in AIF360, dataset.features is a NumPy array; a mapping with an .items() method is assumed here for the exercise.)
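The completed comprehension can be checked on a plain dictionary; the feature names and values below are hypothetical stand-ins for dataset.features.

```python
# Hypothetical feature dict standing in for dataset.features.
threshold = 10
features = {'age': 42, 'income': 55, 'debt': -3, 'score': 12, 'tenure': 4}

# Keep values above threshold, drop 'age', and require non-negative values.
filtered = {f: v for f, v in features.items()
            if v > threshold and f != 'age' and v >= 0}
print(filtered)  # {'income': 55, 'score': 12}
```

'age' is excluded by name even though 42 exceeds the threshold, 'debt' fails the non-negativity check, and 'tenure' falls below the threshold, leaving only 'income' and 'score'.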