Imagine you have a model that predicts the top 3 possible labels for an image, and the true label is among these 3 predictions. What does this mean for the model's Top-3 accuracy?
Top-K accuracy checks if the true label is within the top K predictions, not just the first one.
Top-K accuracy measures how often the true label is among the model's top K predicted labels. If the true label is in the top 3 predictions, it counts as correct for Top-3 accuracy.
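As a minimal sketch of this check for a single sample (the helper name `in_top_k` is illustrative, not a standard API):

```python
import numpy as np

# Hypothetical single-sample check: is the true label among the k
# highest-scoring classes?
def in_top_k(probs, true_label, k):
    top_k = np.argsort(probs)[-k:]  # indices of the k largest scores
    return true_label in top_k

probs = [0.1, 0.5, 0.3, 0.1]  # class 1 ranks first, class 2 second
print(in_top_k(probs, 2, 1))  # False: class 2 is not the top-1 prediction
print(in_top_k(probs, 2, 3))  # True: class 2 is within the top 3
```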
Given the following predictions and true labels, what is the Top-2 accuracy?
predictions = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]
true_labels = [1, 0, 2]
Each inner list shows predicted probabilities for classes 0, 1, and 2.
import numpy as np

def top_k_accuracy(preds, labels, k):
    correct = 0
    for pred, label in zip(preds, labels):
        # Indices of the k highest-probability classes for this sample
        top_k = np.argsort(pred)[-k:][::-1]
        if label in top_k:
            correct += 1
    return correct / len(labels)

predictions = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]
true_labels = [1, 0, 2]
result = top_k_accuracy(predictions, true_labels, 2)
print(result)  # 1.0
Check if each true label is in the top 2 predicted classes by probability.
For each sample, the two highest-probability classes are selected. The third sample has a tie between classes 0 and 1 for the second slot, but its true label (class 2) is the single highest class, so the tie does not matter. All three true labels fall within their top-2 sets, so accuracy is 3/3 = 1.0.
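The per-sample loop above can also be vectorized with NumPy (a sketch: np.argsort along the last axis ranks every sample at once):

```python
import numpy as np

predictions = np.array([[0.1, 0.7, 0.2], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
true_labels = np.array([1, 0, 2])

# Indices of the top 2 classes per row (order within the top 2 is irrelevant)
top2 = np.argsort(predictions, axis=1)[:, -2:]
# A sample is correct if its true label appears in its own top-2 row
hits = (top2 == true_labels[:, None]).any(axis=1)
print(hits.mean())  # 1.0
```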
You want a model that performs well on Top-5 accuracy for a dataset with 100 classes. Which model architecture is best suited for this goal?
Top-K accuracy requires ranking multiple class probabilities.
To compute Top-5 accuracy, the model must output a score for every class so the top 5 predictions can be ranked. A softmax output layer over all 100 classes provides exactly this: a full probability distribution to rank.
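As a sketch of why this works (pure NumPy, with a randomly initialized weight matrix standing in for a trained model), a softmax head over 100 classes yields one probability per class, which is what Top-5 ranking needs:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical final layer: 64 features -> 100 class logits
features = rng.normal(size=64)
W = rng.normal(size=(100, 64))
b = rng.normal(size=100)

probs = softmax(W @ features + b)    # one probability per class
top5 = np.argsort(probs)[-5:][::-1]  # ranking requires scores for ALL classes
print(top5.shape)                    # (5,)
print(np.isclose(probs.sum(), 1.0))  # True: softmax sums to 1
```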
If you increase K in Top-K accuracy from 1 to 10, what is the expected effect on the accuracy metric?
Think about how including more guesses affects the chance of the true label being in the top K.
Increasing K means the true label has more chances to be included in the top predictions, so for a fixed model, Top-K accuracy is non-decreasing in K: it increases or stays the same, never decreases.
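A quick sketch with random scores (the data is synthetic, generated here just to illustrate the trend) confirms this monotonicity; with 10 classes, Top-10 accuracy is always 1.0:

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.normal(size=(200, 10))      # 200 samples, 10 classes
labels = rng.integers(0, 10, size=200)

def top_k_acc(scores, labels, k):
    top_k = np.argsort(scores, axis=1)[:, -k:]
    return (top_k == labels[:, None]).any(axis=1).mean()

accs = [top_k_acc(scores, labels, k) for k in range(1, 11)]
# Accuracy never drops as K grows; Top-10 over 10 classes is always 1.0
assert all(a <= b for a, b in zip(accs, accs[1:]))
print(accs[0], accs[-1])
```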
Given the following model output logits and true labels, what is the Top-3 accuracy?
logits = [[2.0, 1.0, 0.1, 0.5], [0.1, 0.2, 3.0, 0.4], [1.0, 2.5, 0.3, 0.2]]
true_labels = [0, 2, 1]
Use softmax to convert logits to probabilities before selecting top predictions.
import numpy as np

def softmax(x):
    x = np.asarray(x)
    # Subtract the max logit for numerical stability before exponentiating
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def top_k_accuracy_from_logits(logits, labels, k):
    correct = 0
    for logit, label in zip(logits, labels):
        probs = softmax(logit)
        # Indices of the k highest-probability classes for this sample
        top_k = np.argsort(probs)[-k:][::-1]
        if label in top_k:
            correct += 1
    return correct / len(labels)

logits = [[2.0, 1.0, 0.1, 0.5], [0.1, 0.2, 3.0, 0.4], [1.0, 2.5, 0.3, 0.2]]
true_labels = [0, 2, 1]
result = top_k_accuracy_from_logits(logits, true_labels, 3)
print(round(result, 2))  # 1.0
Apply softmax to logits, then check if true label is in top 3 probabilities.
After softmax, the top 3 classes per sample are selected. All three true labels are in these top-3 sets, so accuracy is 3/3 = 1.0. Note that softmax is strictly monotonic, so ranking the raw logits directly would select the same top-3 classes.
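Because softmax preserves the ordering of scores, argsort over the logits and argsort over the probabilities give identical rankings, which a small sketch can verify:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, 0.5])
order_logits = np.argsort(logits)
order_probs = np.argsort(softmax(logits))
print(np.array_equal(order_logits, order_probs))  # True
```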