Recall & Review
beginner
What does accuracy measure in a classification model?
Accuracy measures the percentage of correct predictions out of all predictions made by the model. It tells us how often the model is right.
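A minimal sketch in plain Python, using made-up labels for illustration:

```python
# Accuracy = correct predictions / total predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical true labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical model outputs

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 of 8 predictions match, so 0.75
```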
intermediate
Explain the F1 score in simple terms.
The F1 score is the harmonic mean of precision (how many selected items are relevant) and recall (how many relevant items are selected). It is useful when you want to balance false positives and false negatives.
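The definitions above can be computed directly. A sketch in plain Python, with hypothetical labels, scoring the positive class (1):

```python
y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical true labels
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # hypothetical model outputs

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives: 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives: 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives: 2

precision = tp / (tp + fp)                         # 2/3
recall = tp / (tp + fn)                            # 2/4 = 0.5
f1 = 2 * precision * recall / (precision + recall) # harmonic mean = 4/7
print(round(f1, 3))
```

Because F1 is a harmonic mean, it stays low if either precision or recall is low, which is exactly why it penalizes lopsided models.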
beginner
What is a confusion matrix?
A confusion matrix is a table that shows the number of correct and incorrect predictions broken down by each class. It helps us see where the model makes mistakes.
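Such a table can be built by counting (true, predicted) pairs. A sketch in plain Python with invented three-class labels, rows for the true class and columns for the predicted class:

```python
y_true = ["cat", "cat", "dog", "dog", "bird", "bird"]
y_pred = ["cat", "dog", "dog", "dog", "bird", "cat"]

classes = ["cat", "dog", "bird"]
matrix = {t: {p: 0 for p in classes} for t in classes}
for t, p in zip(y_true, y_pred):
    matrix[t][p] += 1

for t in classes:
    print(t, [matrix[t][p] for p in classes])
# cat  [1, 1, 0]   <- one cat misread as a dog
# dog  [0, 2, 0]
# bird [1, 0, 1]   <- one bird misread as a cat
```

Off-diagonal cells show exactly which classes the model confuses with which.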
intermediate
How do false positives and false negatives relate to the confusion matrix?
False positives are cases where the model predicted positive but the true label is negative. False negatives are cases where the model predicted negative but the true label is positive. Both appear in the confusion matrix.
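In the binary case the four cells have standard names. A small sketch with hypothetical labels:

```python
#                 predicted 0   predicted 1
#   actual 0          TN            FP
#   actual 1          FN            TP
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]

tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
print(tn, fp, fn, tp)  # 2 1 1 2
```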
intermediate
Why might accuracy be misleading in some cases?
Accuracy can be misleading when classes are imbalanced. For example, if 95% of the data is one class, a model that always predicts that class will have high accuracy but poor real-world performance.
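The 95% example above can be checked numerically; a sketch in plain Python with a synthetic imbalanced dataset:

```python
# 95 negatives, 5 positives; the model always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 0
recall = tp / 5  # fraction of actual positives found

print(accuracy, recall)  # 0.95 0.0 -- high accuracy, zero recall
```

Recall (and therefore F1) collapses to zero even though accuracy looks excellent, which is the failure mode the card describes.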
Which metric balances precision and recall?
The F1 score combines precision and recall into a single metric.
What does the diagonal of a confusion matrix represent?
The diagonal shows the number of correct predictions for each class.
If a model has high accuracy but low F1 score, what might be true?
High accuracy with low F1 often means the model is biased toward the majority class.
Which of these is NOT part of a confusion matrix?
Loss values are not shown in a confusion matrix.
What does recall measure?
Recall measures the proportion of actual positives that were correctly identified.
Describe what a confusion matrix is and how it helps evaluate a classification model.
Think of it as a table showing correct and wrong predictions for each class.
Explain why accuracy alone might not be enough to judge a model's performance and when F1 score is more useful.
Consider a case where one class is much bigger than others.