For model interpretability tools like SHAP and LIME, the key "metrics" are the consistency and faithfulness of the explanations they produce: an explanation should reliably show how each feature drives the model's prediction. Unlike accuracy or precision, these metrics assess how well we understand the model's decisions, not how often the model is right.
Good interpretability helps users trust the model and uncover errors or biases. The metrics that matter most are therefore local fidelity (how well an explanation matches the model's behavior near a specific prediction) and global consistency (how stable explanations remain across the dataset).
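Local fidelity can be made concrete with a LIME-style check: fit a simple linear surrogate on perturbations around one input and score how well it reproduces the black-box model there (R² near 1 means high local fidelity). The sketch below is illustrative, not the actual LIME implementation; `black_box` is a hypothetical toy model and `local_fidelity` is a name introduced here for the example.

```python
import random

def black_box(x):
    # Hypothetical toy "model": a nonlinear function of two features.
    return x[0] ** 2 + 3 * x[1]

def solve_linear(M, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (A[i][n] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def local_fidelity(model, point, radius=0.1, n_samples=500, seed=0):
    """Fit a linear surrogate on perturbations around `point` and
    return the surrogate's R^2 against the model (local fidelity)."""
    rng = random.Random(seed)
    rows, ys = [], []
    for _ in range(n_samples):
        x = [v + rng.uniform(-radius, radius) for v in point]
        rows.append([1.0] + x)          # intercept column + features
        ys.append(model(x))
    k = len(rows[0])
    # Ordinary least squares via the normal equations: (A^T A) w = A^T y
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
    w = solve_linear(AtA, Aty)
    preds = [sum(wi * ri for wi, ri in zip(w, r)) for r in rows]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

fidelity = local_fidelity(black_box, [1.0, 2.0])
print(f"local fidelity (R^2): {fidelity:.4f}")
```

Because the toy model is smooth, a linear surrogate fits almost perfectly in a small neighborhood, so the score is close to 1; widening `radius` would expose the model's curvature and lower the fidelity score.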