Challenge - 5 Problems
Model Interpretability Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
Intermediate · 1:30
Understanding SHAP values
Which statement best describes what SHAP values represent in model interpretability?
💡 Hint
Think about how SHAP explains individual predictions by comparing to a baseline.
✅ Explanation
SHAP values quantify the contribution of each feature to the difference between the model's average output and the specific prediction, helping to explain why the model made that prediction.
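The additive property described above can be checked directly: for a small model, exact Shapley values can be computed by brute-force coalition enumeration, and they sum to the gap between the prediction and the baseline. A minimal numpy sketch — the toy linear model and background data are illustrative assumptions, not part of the challenge:

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley values for f(x), averaging over a background
    dataset to marginalize features absent from each coalition."""
    n = x.shape[0]
    phi = np.zeros(n)

    def coalition_value(S):
        # Features in S take x's values; the rest keep background values.
        data = background.copy()
        data[:, list(S)] = x[list(S)]
        return f(data).mean()

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (coalition_value(S + (i,)) - coalition_value(S))
    return phi

# Toy linear model: its Shapley values are exactly w_i * (x_i - mean_i).
w = np.array([2.0, 1.0])
f = lambda X: X @ w
background = np.array([[0., 1.], [1., 1.], [0., 0.], [1., 0.]])
x = np.array([1., 0.])

phi = shapley_values(f, x, background)
baseline = f(background).mean()
# Additivity: baseline + sum(phi) equals the model's prediction f(x).
print(phi, baseline + phi.sum(), f(x[None])[0])
```

For this linear model the values come out to `[1.0, -0.5]`, and baseline (1.5) plus their sum (0.5) recovers the prediction (2.0) exactly.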
❓ Predict Output
Intermediate · 2:00
Output of LIME explanation code
What will be the output of the following Python code snippet using LIME for a binary classification model?
ML Python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

X_train = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=['f1', 'f2'],
    class_names=['class0', 'class1'],
    discretize_continuous=False,
)
exp = explainer.explain_instance(
    np.array([1, 0]), model.predict_proba, num_features=2
)
print(exp.as_list())
💡 Hint
LIME returns feature contributions as tuples with feature name and weight.
✅ Explanation
LIME returns the feature contributions as (name, weight) tuples. For the input [1, 0], 'f1' contributes positively (around +0.5) and 'f2' slightly negatively; the exact weights vary between runs because LIME fits its surrogate on randomly sampled perturbations.
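LIME's mechanism — perturb around the instance, query the black box, fit a proximity-weighted linear surrogate — can be sketched in a few lines of numpy. The black-box model, perturbation scale, and kernel width below are hypothetical stand-ins, not LIME's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def lime_style_explanation(predict_proba, x, num_samples=5000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around x.
    Returns one weight per feature for the positive class."""
    d = x.shape[0]
    Z = x + rng.normal(scale=1.0, size=(num_samples, d))   # perturbations
    y = predict_proba(Z)[:, 1]                             # black-box probabilities
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-dist**2 / kernel_width**2)                 # proximity kernel
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((num_samples, 1)), Z]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]                                        # drop the intercept

# Hypothetical black box: logistic in f1 only, so f2's weight should be ~0.
def black_box(Z):
    p = 1.0 / (1.0 + np.exp(-(3.0 * Z[:, 0] - 1.5)))
    return np.column_stack([1 - p, p])

weights = lime_style_explanation(black_box, np.array([1.0, 0.0]))
print(weights)  # f1 weight clearly positive, f2 weight near zero
```

The surrogate's coefficients play the same role as the (feature, weight) tuples that `exp.as_list()` prints above.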
❓ Model Choice
Advanced · 1:30
Choosing between SHAP and LIME
You want to explain predictions of a complex black-box model on tabular data with many features. Which method is best if you need consistent global and local explanations?
💡 Hint
Consider which method has a solid theoretical foundation for consistent explanations.
✅ Explanation
SHAP values rest on a strong theoretical foundation (the Shapley axioms) that guarantees consistent additive explanations, and the per-instance values can be aggregated into global feature importances. LIME, by contrast, is purely local and its explanations can vary between runs.
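The aggregation point has a convenient closed form for linear models, where the SHAP value of feature i on instance x is w_i · (x_i − E[x_i]); averaging |φ| over a dataset then yields a global ranking. A small sketch with made-up data and an assumed linear model:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
w = np.array([2.0, -0.5, 0.0])        # toy linear model f(x) = X @ w

# For a linear model, the exact SHAP value of feature i on instance x
# is w_i * (x_i - E[x_i]).
phi = (X - X.mean(axis=0)) * w        # local explanations, one row per instance

global_importance = np.abs(phi).mean(axis=0)   # aggregate to a global ranking
print(global_importance)   # feature 0 dominates, feature 2 is exactly 0
```

The same mean-|φ| aggregation is what SHAP's standard global importance bar plots show for arbitrary models.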
❓ Hyperparameter
Advanced · 1:30
Effect of number of samples in LIME
In LIME, increasing the 'num_samples' parameter during explanation will most likely:
💡 Hint
Think about how sampling affects the local surrogate model fit.
✅ Explanation
More samples allow LIME to better approximate the local decision boundary, improving explanation accuracy but requiring more computation time.
🔧 Debug
Expert · 2:00
Debugging SHAP value computation error
You run SHAP's TreeExplainer on a scikit-learn RandomForestClassifier but get the error: 'ValueError: Model output type not supported'. What is the most likely cause?
💡 Hint
Check if the model type matches the explainer requirements.
✅ Explanation
TreeExplainer supports only tree-based models (RandomForest, XGBoost, LightGBM, etc.). Passing a non-tree model — or an object such as a Pipeline that hides the underlying trees from SHAP — causes this error; for arbitrary models, fall back to the model-agnostic KernelExplainer.