Challenge - 5 Problems
Confusion Matrix Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
Output of confusion matrix calculation
What is the output of the following Python code that computes a confusion matrix for a binary classification?
from sklearn.metrics import confusion_matrix

true_labels = [0, 1, 0, 1, 0, 1, 1]
pred_labels = [0, 0, 0, 1, 0, 1, 1]
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
Attempts: 2 left
💡 Hint
Recall confusion matrix rows represent true classes, columns predicted classes.
✗ Incorrect
The output is [[3 0] [1 3]]: the confusion matrix counts 3 true negatives, 0 false positives, 1 false negative, and 3 true positives.
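The four counts can be cross-checked by hand without scikit-learn; this is a minimal tally sketch, not how sklearn computes it internally:

```python
true_labels = [0, 1, 0, 1, 0, 1, 1]
pred_labels = [0, 0, 0, 1, 0, 1, 1]

# Tally each cell of the binary confusion matrix [[TN, FP], [FN, TP]]
tn = sum(1 for t, p in zip(true_labels, pred_labels) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(true_labels, pred_labels) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == 1 and p == 0)
tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == 1 and p == 1)

print([[tn, fp], [fn, tp]])  # [[3, 0], [1, 3]]
```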
🧠 Conceptual
Intermediate · 1:30 remaining
Understanding precision and recall from confusion matrix
Given a confusion matrix for a binary classifier: [[50, 10], [5, 35]], what is the recall value?
Attempts: 2 left
💡 Hint
Recall = True Positives / (True Positives + False Negatives)
✗ Incorrect
Recall = 35 / (35 + 5) = 35 / 40 = 0.875
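The same calculation in code, reading the cells straight from the matrix (rows are true classes, so the second row holds FN and TP):

```python
cm = [[50, 10], [5, 35]]  # [[TN, FP], [FN, TP]]
fn, tp = cm[1]

recall = tp / (tp + fn)
print(recall)  # 0.875
```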
❓ Metrics
Advanced · 1:30 remaining
Choosing the best metric for imbalanced data
Which metric is most appropriate to evaluate a model on a highly imbalanced dataset where the positive class is rare?
Attempts: 2 left
💡 Hint
Consider a metric that balances precision and recall.
✗ Incorrect
F1-score balances precision and recall, making it suitable for imbalanced classes.
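As a quick illustration of why F1 balances the two, here is the harmonic-mean formula on hypothetical counts (the TP/FP/FN values below are made up for the example, not taken from any question above):

```python
tp, fp, fn = 35, 10, 5  # hypothetical counts for illustration

precision = tp / (tp + fp)
recall = tp / (tp + fn)

# F1 is the harmonic mean of precision and recall, so it stays low
# unless BOTH are high, unlike accuracy on an imbalanced dataset.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))
```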
🔧 Debug
Advanced · 1:30 remaining
Identify the error in confusion matrix code
What error will this code raise?
from sklearn.metrics import confusion_matrix
true = [1, 0, 1]
pred = [0, 1]
cm = confusion_matrix(true, pred)
print(cm)
Attempts: 2 left
💡 Hint
Check if true and pred lists have the same length.
✗ Incorrect
The true and predicted label lists have different lengths (3 vs. 2), so scikit-learn's input validation raises a ValueError.
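A sketch of what happens at runtime: scikit-learn checks that both inputs have the same number of samples before building the matrix, and the mismatch surfaces as a ValueError that can be caught:

```python
from sklearn.metrics import confusion_matrix

true = [1, 0, 1]
pred = [0, 1]  # one prediction short: 3 true labels vs. 2 predictions

try:
    cm = confusion_matrix(true, pred)
except ValueError as exc:
    # sklearn's input validation rejects inconsistent sample counts
    print("ValueError:", exc)
```

The fix is simply to supply one prediction per true label before calling confusion_matrix.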
❓ Model Choice
Expert · 2:00 remaining
Best model choice based on confusion matrix analysis
You have two models evaluated on the same test set with confusion matrices:
Model A: [[90, 10], [30, 70]]
Model B: [[80, 20], [10, 90]]
Which model has better recall for the positive class?
Attempts: 2 left
💡 Hint
Recall = TP / (TP + FN)
✗ Incorrect
Model A recall = 70 / (70 + 30) = 0.7; Model B recall = 90 / (90 + 10) = 0.9; Model B has better recall.
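The comparison can be scripted with a small helper that reads recall off each matrix; the function name is ours, chosen for the example:

```python
def recall(cm):
    """Recall of the positive class from a matrix [[TN, FP], [FN, TP]]."""
    fn, tp = cm[1]
    return tp / (tp + fn)

model_a = [[90, 10], [30, 70]]
model_b = [[80, 20], [10, 90]]

print(recall(model_a))  # 0.7
print(recall(model_b))  # 0.9
```

Note the trade-off: Model B's higher recall comes at the cost of more false positives (20 vs. 10), which is why the "better" model depends on which error is costlier.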