ML Python · ~5 mins

Model interpretability (SHAP, LIME) in ML Python - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is model interpretability in machine learning?
Model interpretability means understanding how a machine learning model makes its decisions. It helps us see which features affect the predictions and why.
intermediate
What does SHAP stand for and what is its purpose?
SHAP stands for SHapley Additive exPlanations. It explains the output of any machine learning model by assigning each feature an importance value for a particular prediction.
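To make the Shapley idea concrete, here is a minimal, self-contained sketch (not the `shap` library itself): it computes exact Shapley values for a tiny, hypothetical linear model by averaging each feature's marginal contribution over every feature ordering. This brute-force approach is only feasible for a handful of features; libraries like `shap` use model-specific shortcuts and approximations.

```python
from itertools import permutations

# Hypothetical toy model, for illustration only: f(x) = 2*x0 + 3*x1 - x2
def model(x):
    return 2 * x[0] + 3 * x[1] - 1 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values: average the marginal contribution of each
    feature over all orderings (feasible only for a few features)."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)      # start from the background point
        prev = model(current)
        for i in order:
            current[i] = x[i]         # "switch on" feature i
            now = model(current)
            phi[i] += now - prev      # marginal contribution of feature i
            prev = now
    return [p / len(perms) for p in phi]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print(phi)  # for this linear model: [2.0, 6.0, -3.0]
# Additivity: the values sum to f(x) - f(baseline)
print(sum(phi), model(x) - model(baseline))
```

The additivity check at the end is the key SHAP property: per-feature importances always add up to the difference between the prediction and the baseline prediction.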
intermediate
How does LIME help in interpreting machine learning models?
LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by approximating the model locally with a simple interpretable model, like a linear model.
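The LIME recipe can be sketched in a few lines of NumPy (this is the idea, not the `lime` package): sample points around the instance, weight them by proximity, and fit a weighted linear surrogate; its coefficients are the local explanation. The black-box model below is a hypothetical toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear black-box model: f(x) = x0^2 + x1
def black_box(X):
    return X[:, 0] ** 2 + X[:, 1]

def lime_explain(model, x, n_samples=5000, width=0.5):
    """LIME-style sketch: perturb around x, weight samples by proximity,
    fit a weighted linear surrogate, return its per-feature slopes."""
    X = x + rng.normal(scale=width, size=(n_samples, len(x)))
    y = model(X)
    # Gaussian proximity kernel: nearby samples count more
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])   # add an intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # drop the intercept; slopes are the explanation

x = np.array([2.0, 1.0])
print(lime_explain(black_box, x))  # roughly [4.0, 1.0]
```

The slopes approximate the local gradient of the black box at `x` (here, the derivative of `x0**2` at `x0 = 2` is 4), which is exactly what "approximating the model locally with a linear model" means. Rerunning with a different random seed gives slightly different values, illustrating the sampling variability the next card mentions.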
advanced
What is the main difference between SHAP and LIME?
SHAP provides consistent and theoretically sound feature importance values based on game theory, while LIME focuses on local approximations and may vary depending on the sampled data around the prediction.
beginner
Why is model interpretability important in real-life applications?
Interpretability builds trust, helps detect errors or bias, and ensures models meet legal or ethical standards, especially in sensitive areas like healthcare or finance.
What does SHAP primarily provide for a machine learning model?
A) Model training speed improvements
B) A simpler model to replace the original
C) Data preprocessing techniques
D) Feature importance values for each prediction
LIME explains model predictions by:
A) Creating a simple local model around the prediction
B) Using deep neural networks
C) Ignoring feature values
D) Changing the original model
Which of the following is a key advantage of SHAP over LIME?
A) SHAP values are consistent and based on game theory
B) SHAP is faster to compute in all cases
C) SHAP only works with linear models
D) SHAP ignores feature interactions
Why is model interpretability especially important in healthcare?
A) To avoid using any features
B) To ensure decisions are understandable and trustworthy
C) To reduce data size
D) To speed up model training
Which method is model-agnostic, meaning it can explain any model?
A) Only LIME
B) Only SHAP
C) Both SHAP and LIME
D) Neither SHAP nor LIME
Explain in your own words how SHAP helps interpret a machine learning model's prediction.
Think about how SHAP assigns credit to each feature for a single prediction.
Describe the main idea behind LIME and how it explains model predictions locally.
Consider how LIME looks at a small neighborhood around a prediction.