Which of the following best explains why bias in machine learning models can cause harm?
Think about how unfair treatment can impact people in real life.
Bias in models can lead to unfair or harmful decisions, especially for underrepresented groups, which is why responsible ML aims to detect and reduce bias.
You want to build a model that predicts loan approvals fairly across different groups. Which model choice helps reduce harm?
Consider how fairness constraints help prevent harm.
Training with fairness constraints encourages the model to make comparable decisions across groups, reducing harm while maintaining reasonable accuracy.
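Fairness constraints are often enforced during training (e.g., with dedicated libraries), but a simple post-processing version of the idea is to pick per-group decision thresholds so approval rates match. Below is a minimal sketch of that approach; the scores, group labels, and target rate are all hypothetical, made up for illustration:

```python
import numpy as np

def equalize_approval_rates(scores, groups, target_rate):
    """Choose a per-group score threshold so each group's approval
    rate matches target_rate (a simple demographic-parity sketch)."""
    decisions = np.zeros_like(scores, dtype=bool)
    for g in np.unique(groups):
        mask = groups == g
        # Threshold at the (1 - target_rate) quantile of this group's scores,
        # so the top target_rate fraction of each group is approved.
        thresh = np.quantile(scores[mask], 1 - target_rate)
        decisions[mask] = scores[mask] >= thresh
    return decisions

# Hypothetical loan scores for two groups with shifted score distributions
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)

approved = equalize_approval_rates(scores, groups, target_rate=0.3)
for g in ("A", "B"):
    print(g, approved[groups == g].mean())  # both close to 0.3
```

Without the per-group thresholds, a single global cutoff would approve group "A" far more often simply because its score distribution sits higher; this is the kind of disparity fairness-aware modeling tries to control.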
Given a classification model, which metric helps detect if the model treats different groups fairly?
Think about errors that affect groups differently.
A difference in false positive rates between groups reveals whether one group is disproportionately likely to be wrongly classified as positive, indicating potential harm.
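The false-positive-rate gap described above is straightforward to compute. Here is a minimal sketch with hypothetical labels, predictions, and group names:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = fraction of true negatives that were wrongly flagged positive."""
    negatives = y_true == 0
    return np.mean(y_pred[negatives] == 1)

# Hypothetical labels and predictions for two groups (values are illustrative)
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0])
groups = np.array(["A"] * 6 + ["B"] * 6)

fprs = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
        for g in ("A", "B")}
gap = abs(fprs["A"] - fprs["B"])
print(fprs, gap)  # group B is wrongly flagged far more often
```

A large gap (here 0.75 vs. 0.25) is a concrete signal that the model's errors fall disproportionately on one group, even if overall accuracy looks acceptable.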
Which data problem is most likely to cause harm if not addressed in a machine learning model?
Consider how data representation affects model fairness.
Underrepresentation of a group can cause the model to perform poorly for that group, leading to unfair and harmful outcomes.
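One practical way to catch this problem is to break evaluation down by group, reporting both the sample count (support) and the accuracy for each. A minimal audit sketch, with hypothetical data in which group "B" is underrepresented:

```python
import numpy as np

def per_group_report(y_true, y_pred, groups):
    """Audit support (sample count) and accuracy separately per group."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {"support": int(mask.sum()),
                     "accuracy": float(np.mean(y_true[mask] == y_pred[mask]))}
    return report

# Hypothetical audit: group "B" has few samples and worse accuracy
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["A"] * 8 + ["B"] * 2)
print(per_group_report(y_true, y_pred, groups))
```

A report like this makes the harm visible: aggregate accuracy can look fine while the underrepresented group's accuracy is much lower, which is exactly the failure mode the answer above warns about.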
Why is responsible machine learning essential to prevent harm in real-world applications?
Think about fairness and safety in decisions made by models.
Responsible ML focuses on fairness, transparency, and safety to reduce harm caused by biased or unsafe model decisions.