Experiment - Why responsible ML prevents harm
Problem: You have a machine learning model that predicts loan approvals. It currently shows bias, approving loans for one group of applicants at a much lower rate than for another, which can cause unfair harm.
Current Metrics: Accuracy: 85%; bias detected: loan approval rate is 30% for Group A vs. 70% for Group B.
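To quantify the gap, here is a minimal sketch that computes per-group approval rates and the disparate impact ratio. The decision data is hypothetical, constructed to match the reported 30%/70% rates; a ratio below 0.8 is a common red flag (the "four-fifths rule").

```python
import numpy as np

# Hypothetical decisions (1 = approved), constructed to match the
# reported rates: 3/10 approved for Group A, 7/10 for Group B.
approved = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0,   # Group A
                     1, 1, 1, 1, 1, 1, 1, 0, 0, 0])  # Group B
group = np.array(["A"] * 10 + ["B"] * 10)

rate_a = approved[group == "A"].mean()  # 0.30
rate_b = approved[group == "B"].mean()  # 0.70
print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")

# Disparate impact ratio: disadvantaged rate / advantaged rate.
# Values below 0.8 commonly flag bias under the four-fifths rule.
print(f"Disparate impact ratio: {rate_a / rate_b:.2f}")  # ~0.43
```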
Issue: The model is biased, causing harm by discriminating against Group A, whose applicants are approved at less than half the rate of Group B.
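One simple way responsible ML practice can prevent this harm is post-processing: choosing a separate decision threshold per group so both groups are approved at the same rate (demographic parity). The sketch below assumes the model outputs an approval score per applicant; the scores and the 50% target rate are hypothetical, and the technique trades some raw accuracy for fairness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical approval scores; Group B's distribution is shifted
# upward to mimic the bias seen in the original model.
scores_a = rng.uniform(0.0, 1.0, 200)
scores_b = rng.uniform(0.2, 1.0, 200)

target_rate = 0.5  # hypothetical shared approval rate

# Per-group thresholds: approve the top `target_rate` fraction of
# each group, so both groups end up with equal approval rates.
thr_a = np.quantile(scores_a, 1 - target_rate)
thr_b = np.quantile(scores_b, 1 - target_rate)

print(f"Group A: rate {(scores_a >= thr_a).mean():.0%}, threshold {thr_a:.2f}")
print(f"Group B: rate {(scores_b >= thr_b).mean():.0%}, threshold {thr_b:.2f}")
```

In production, libraries such as Fairlearn offer constrained training rather than threshold tweaks, but the idea is the same: measure the disparity, then correct it before decisions reach applicants.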