Recall & Review
beginner
What is model interpretability in machine learning?
Model interpretability means understanding how a machine learning model makes its decisions. It helps us see which features affect the predictions and why.
intermediate
What does SHAP stand for and what is its purpose?
SHAP stands for SHapley Additive exPlanations. It explains the output of any machine learning model by assigning each feature an importance value for a particular prediction.
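To make the idea concrete, here is a minimal sketch of how Shapley values assign per-feature credit for one prediction. The toy linear model, the background (reference) input, and the instance are all made up for illustration; real SHAP libraries approximate this same enumeration efficiently rather than computing it exactly.

```python
from itertools import combinations
from math import factorial

# Hypothetical "black box": a small linear model (chosen so the
# exact Shapley values are easy to verify by hand).
def model(x):
    return 2 * x[0] + 3 * x[1] + 1 * x[2]

background = [1.0, 1.0, 1.0]  # reference ("average") input
x = [3.0, 0.0, 2.0]           # instance whose prediction we explain
n = len(x)

def value(subset):
    # Features in `subset` take their real values; absent features
    # are filled in from the background, simulating "missing".
    z = [x[i] if i in subset else background[i] for i in range(n)]
    return model(z)

def shapley(i):
    # Average feature i's marginal contribution over all subsets
    # of the other features, with the standard Shapley weights.
    phi = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += w * (value(set(S) | {i}) - value(set(S)))
    return phi

phis = [shapley(i) for i in range(n)]
# Efficiency property: the contributions sum exactly to
# f(x) - f(background).
assert abs(sum(phis) - (model(x) - model(background))) < 1e-9
print(phis)  # → [4.0, -3.0, 1.0]
```

For a linear model each value reduces to weight × (feature − background), which is why the printed contributions match the coefficients times the feature shifts.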
intermediate
How does LIME help in interpreting machine learning models?
LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by approximating the model locally with a simple interpretable model, like a linear model.
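The LIME recipe can be sketched in a few lines: sample points near the instance, weight them by proximity, and fit a simple weighted linear surrogate. The black-box function, the instance, and the kernel width below are illustrative choices, not the library's defaults.

```python
import random
import math

random.seed(0)

def model(x):
    # Hypothetical nonlinear "black box" to explain.
    return x * x

x0 = 2.0  # instance whose prediction we explain

# 1) Perturb the instance: sample points in its neighborhood.
xs = [x0 + random.gauss(0, 0.5) for _ in range(500)]
ys = [model(x) for x in xs]

# 2) Weight each sample by proximity to x0 (exponential kernel).
ws = [math.exp(-((x - x0) ** 2) / (2 * 0.25)) for x in xs]

# 3) Fit a weighted linear surrogate y ≈ a + b*x in closed form;
#    its slope b is the local explanation.
W = sum(ws)
mx = sum(w * x for w, x in zip(ws, xs)) / W
my = sum(w * y for w, y in zip(ws, ys)) / W
b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
    / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
print(round(b, 1))  # close to the true local slope f'(2) = 4
```

The surrogate is only trusted near x0: its slope tracks the model's local behavior (here roughly 4) even though a single global line could never fit y = x².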
advanced
What is the main difference between SHAP and LIME?
SHAP provides consistent, theoretically grounded feature importance values based on Shapley values from game theory, while LIME relies on local approximations, so its explanations can vary across runs depending on the random samples drawn around the prediction.
beginner
Why is model interpretability important in real-life applications?
Interpretability builds trust, helps detect errors or bias, and ensures models meet legal or ethical standards, especially in sensitive areas like healthcare or finance.
What does SHAP primarily provide for a machine learning model?
SHAP assigns an importance value to each feature for each prediction, quantifying how much that feature contributed to the output.
LIME explains model predictions by:
LIME approximates the complex model locally with a simple interpretable model to explain predictions.
Which of the following is a key advantage of SHAP over LIME?
SHAP provides consistent, theoretically grounded explanations using Shapley values from game theory.
Why is model interpretability especially important in healthcare?
In healthcare, understanding why a model made a decision helps doctors trust its predictions, catch errors, and use AI safely in patient care.
Which method is model-agnostic, meaning it can explain any model?
Both SHAP and LIME can explain predictions from any machine learning model.
Explain in your own words how SHAP helps interpret a machine learning model's prediction.
Think about how SHAP assigns credit to each feature for a single prediction.
Describe the main idea behind LIME and how it explains model predictions locally.
Consider how LIME looks at a small neighborhood around a prediction.