Experiment - Model interpretability (SHAP, LIME)
Problem: You have trained a classification model on the Iris dataset. The model achieves good accuracy, but you want to understand which features most influence its predictions.
Current Metrics: Training accuracy: 95%, Validation accuracy: 93%
Issue: The model works well but is a 'black box': you cannot explain why it makes specific predictions.
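SHAP and LIME are two common ways to open up such a black box: SHAP attributes each prediction to the input features via Shapley values, while LIME fits a simple local surrogate model around a single prediction. Below is a minimal sketch of both, assuming the `shap` and `lime` packages are installed and using a scikit-learn RandomForestClassifier as a stand-in, since the actual model type is not specified here.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

# Train a stand-in classifier on Iris (the original model type is not
# given in this write-up; any model with predict_proba works for LIME).
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42
)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# --- SHAP: per-feature attributions based on Shapley values ---
# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Output shape varies by shap version: either a list of per-class
# arrays or one (samples, features, classes) array; normalize it.
sv = (np.stack(shap_values, axis=-1)
      if isinstance(shap_values, list) else np.asarray(shap_values))
# Global importance: mean |SHAP value| per feature over samples/classes.
importance = np.abs(sv).mean(axis=(0, 2))
for name, score in sorted(zip(data.feature_names, importance),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# --- LIME: local surrogate explanation for a single prediction ---
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4
)
print(exp.as_list())  # (feature condition, weight) pairs for this sample
```

The SHAP ranking gives a global picture of which features drive the model overall, while the LIME output explains one concrete prediction; comparing the two on the same sample is a useful consistency check for the explanations themselves.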