Model Pipeline - Model interpretability (SHAP, LIME)
This pipeline trains a machine learning model and then explains its predictions with SHAP and LIME. Both are model-agnostic attribution methods: for a given prediction, they estimate how much each input feature contributed. SHAP does this by approximating Shapley values from cooperative game theory, while LIME fits a simple interpretable surrogate model in a local neighborhood around the instance being explained.
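To make the SHAP side concrete, here is a minimal, dependency-free sketch of what the SHAP library approximates: exact Shapley values for a toy model. The `model` function, the input `x`, and the all-zeros `baseline` are hypothetical stand-ins; "absent" features are simulated by replacing them with the baseline value. Real SHAP uses clever approximations (e.g. for tree ensembles) because this exact enumeration is exponential in the number of features.

```python
import itertools
from math import factorial

def model(x):
    # Hypothetical model: a simple linear scoring function over three features.
    return 3.0 * x[0] + 1.0 * x[1] - 2.0 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's marginal contribution averaged
    over all feature subsets, weighted as in the Shapley formula.
    Features outside the subset are replaced with the baseline value."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for subset in itertools.combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)

# Efficiency property: the attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
print(phi)  # for a linear model, phi_i = coefficient_i * (x_i - baseline_i)
```

For this linear model the attributions come out as exactly `coefficient * (feature value - baseline)`, which is a useful sanity check: SHAP values for a truly linear model reduce to the familiar "coefficient times deviation" decomposition.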