Complete the code to log model fairness metrics using a popular MLOps tool.
from mlflow import [1]

client = [1].MlflowClient()
client.log_metric(run_id, 'fairness_metric', fairness_value)
The tracking module provides the MlflowClient class to log metrics such as fairness.
Complete the code to check for bias in a dataset using the AI Fairness 360 toolkit.
from aif360.datasets import [1]

dataset = [1](df, label_name='label',
              favorable_classes=['yes'],
              protected_attribute_names=['age'],
              privileged_classes=[['adult']])
StandardDataset is the common class for loading datasets for bias checking in AI Fairness 360. It takes a pandas DataFrame containing both the features and the label column, together with the label name, favorable classes, protected attribute names, and privileged classes, rather than separate feature and label arrays.
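Before the aif360 call, the data must be assembled into a single DataFrame holding features, protected attribute, and label together; a quick pandas sketch (all column names and values here are invented for illustration):

```python
import pandas as pd

# Invented example data: features, the protected attribute ('age'),
# and a binary label all live in one DataFrame, which is the shape
# aif360's dataset classes expect.
df = pd.DataFrame({
    "age":    [25, 47, 33, 61],
    "income": [30000, 82000, 45000, 58000],
    "label":  [0, 1, 1, 0],
})

print(df.columns.tolist())
```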
Fix the error in the code that applies a fairness metric to a model's predictions.
from aif360.metrics import [1]

metric = [1](dataset_true, dataset_pred,
             unprivileged_groups=unprivileged_groups,
             privileged_groups=privileged_groups)
fairness_score = metric.mean_difference()
ClassificationMetric is the correct class to compute fairness metrics on classification results; note that the group arguments must be passed as the keyword arguments unprivileged_groups and privileged_groups. Its mean_difference() reports the statistical parity difference between the two groups.
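The quantity mean_difference() reports, the statistical parity difference, can be sketched in plain Python without the toolkit; the predictions and group labels below are invented for illustration:

```python
# Statistical parity (mean) difference:
#   P(favorable | unprivileged) - P(favorable | privileged)
# A negative value means the unprivileged group receives favorable
# predictions less often than the privileged group.

def mean_difference(preds, groups, unprivileged, favorable=1):
    """preds: predicted labels; groups: group id per sample;
    unprivileged: the group id treated as unprivileged (binary grouping assumed)."""
    unpriv = [p for p, g in zip(preds, groups) if g == unprivileged]
    priv = [p for p, g in zip(preds, groups) if g != unprivileged]
    rate = lambda xs: sum(1 for x in xs if x == favorable) / len(xs)
    return rate(unpriv) - rate(priv)

preds  = [1, 0, 0, 1, 1, 0, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # 0 = unprivileged, 1 = privileged
score = mean_difference(preds, groups, unprivileged=0)
print(score)  # 0.5 - 0.75 = -0.25
```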
Fill both blanks to create a dictionary comprehension that filters features with values greater than 0.
{feature: value for feature, value in features.items() if value [1] 0 and feature [2] 'age'}

The code filters features where the value is greater than 0 and the feature name is not 'age'.
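The completed comprehension can be checked with a quick example; the feature names and values here are made up:

```python
features = {"age": 42, "income": 55000, "debt": 0, "score": 7}

# Keep features with positive values, excluding the 'age' feature itself.
filtered = {feature: value
            for feature, value in features.items()
            if value > 0 and feature != "age"}

print(filtered)  # {'income': 55000, 'score': 7}
```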
Fill all three blanks to create a dictionary comprehension that maps uppercased feature names to values greater than 10.
{ [1]: [2] for [3], value in features.items() if value > 10 }

The comprehension maps uppercased feature names to their values, keeping only values greater than 10.
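Filled in, the comprehension behaves as follows; the feature names and values are again made up:

```python
features = {"age": 42, "income": 5, "score": 17}

# Map uppercased feature names to their values, keeping only values > 10.
mapped = {feature.upper(): value
          for feature, value in features.items()
          if value > 10}

print(mapped)  # {'AGE': 42, 'SCORE': 17}
```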