Which of the following is the most effective method to reduce bias in a machine learning model during training?
Think about the source of bias and how data affects model fairness.
Bias often stems from unrepresentative training data. Collecting diverse, representative data helps the model learn fairly across demographic groups.
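One concrete way to act on this is to rebalance the training set so under-represented groups are not drowned out. A minimal sketch in plain Python, assuming each record is a dict with a sensitive-attribute field (the field name "group" is made up for illustration):

```python
import random

def oversample_minority(records, group_key):
    """Balance a dataset by oversampling under-represented groups
    until every group matches the size of the largest one."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # Sample with replacement to fill the gap to the majority size.
        balanced.extend(random.choices(rows, k=target - len(rows)))
    return balanced
```

Oversampling is the simplest rebalancing strategy; collecting genuinely new data from under-represented groups is preferable when feasible, since duplicated rows add no new information.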
Given the command below to generate feature importance for a trained model, what is the expected output format?
mlflow explain --model-uri runs:/12345/model --explainer shap
Explainability tools report how strongly each input feature pushed a prediction up or down.
The SHAP explainer produces per-feature attribution values, typically serialized as JSON, showing how each feature shifted the prediction relative to a baseline.
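As a sketch of what those attributions look like, SHAP values have a closed form for linear models: feature i contributes w_i * (x_i - E[x_i]). The weights, feature names, and baseline below are made up for illustration:

```python
import json

def linear_shap_values(weights, x, background_mean):
    """Exact SHAP values for a linear model f(x) = sum(w_i * x_i) + b:
    each feature's attribution is w_i * (x_i - E[x_i])."""
    return {name: w * (x[name] - background_mean[name])
            for name, w in weights.items()}

# Hypothetical model weights, one input row, and the background means.
weights = {"age": 0.5, "income": 0.02}
row = {"age": 40, "income": 3000}
baseline = {"age": 30, "income": 2500}

print(json.dumps(linear_shap_values(weights, row, baseline), indent=2))
```

The attributions sum to the difference between this prediction and the average prediction, which is what makes SHAP output directly interpretable.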
Which step should be included before deploying an AI model to production to ensure responsible AI practices?
Think about checks that ensure ethical use before release.
A fairness audit evaluates model outcomes across demographic groups, so bias can be detected and mitigated before the model affects real users.
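One common audit metric is the demographic parity gap: the largest difference in positive-prediction rates between groups. A minimal sketch (the group labels are illustrative):

```python
def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate between any two groups.
    preds: 0/1 predictions; groups: the sensitive group of each example."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())
```

A gap near 0 means the model grants positive outcomes at similar rates across groups; an audit would flag large gaps for investigation before release.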
After deploying an AI model, you notice a sudden drop in accuracy. Which command helps determine whether data drift is causing the issue?
Data drift detection compares old and new data distributions.
The 'mlflow data drift detect' command compares the baseline (training-time) data distribution with current production data to identify drift.
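Under the hood, drift detectors compare the two distributions with a statistic such as the two-sample Kolmogorov-Smirnov statistic. A self-contained sketch of the statistic itself (the significance thresholds a real tool would apply are omitted):

```python
def ks_statistic(baseline, current):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples (0 = identical
    distributions, values near 1 = strong drift)."""
    def ecdf(sample, v):
        return sum(1 for x in sample if x <= v) / len(sample)
    values = sorted(set(baseline) | set(current))
    return max(abs(ecdf(baseline, v) - ecdf(current, v)) for v in values)
```

Running this per feature on the training sample versus recent production data points to which inputs have shifted and by how much.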
Which approach best aligns with responsible AI principles to protect user data privacy during model training?
Consider methods that keep data local and secure.
Federated learning trains the model locally on each device and shares only model updates, so raw user data never leaves the device.
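The core loop can be sketched as federated averaging: each client takes a gradient step on its own data, and only the resulting weights are sent back and averaged. A toy least-squares version (the learning rate and client data are illustrative, not from any real system):

```python
def local_sgd_step(weights, data, lr=0.1):
    """One gradient step of least-squares regression on a client's
    local data; the raw examples never leave the client."""
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_average(global_weights, client_datasets):
    """Each client trains locally; the server averages only the
    returned weight vectors, never the underlying data."""
    updates = [local_sgd_step(list(global_weights), d)
               for d in client_datasets]
    return [sum(ws) / len(updates) for ws in zip(*updates)]
```

Production systems add secure aggregation and differential privacy on top of this loop, since weight updates alone can still leak information about local data.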