Complete the code to import the class from TorchMetrics that calculates top-k accuracy.
from torchmetrics import [1]
The Accuracy class from torchmetrics calculates the top-k accuracy metric.
Complete the code to create an Accuracy metric object for top 3 accuracy.
top3_acc = Accuracy(task="multiclass", top_k=[1])
Setting top_k=3 means the metric checks if the true label is in the top 3 predictions.
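As a sketch of what this check does under the hood, here is the same idea written with plain `torch.topk` (the toy logits and labels below are illustrative, not from the exercise):

```python
import torch

# Toy batch: 4 samples, 5 classes (values are illustrative logits).
outputs = torch.tensor([
    [0.1, 0.9, 0.3, 0.2, 0.4],
    [0.8, 0.1, 0.5, 0.6, 0.2],
    [0.2, 0.3, 0.9, 0.1, 0.4],
    [0.1, 0.2, 0.3, 0.4, 0.9],
])
labels = torch.tensor([1, 3, 0, 4])

# Top-3 accuracy: a sample counts as correct if its true label appears
# among the 3 highest-scoring classes for that sample.
top3 = outputs.topk(k=3, dim=1).indices          # shape (4, 3)
hits = (top3 == labels.unsqueeze(1)).any(dim=1)  # shape (4,)
top3_acc = hits.float().mean()                   # here: 3 of 4 hit, so 0.75
```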
Fix the error in the code to compute top-5 accuracy from model outputs and labels.
top5_acc = Accuracy(task="multiclass", top_k=[1])
outputs = torch.randn(8, 1000)  # batch of 8, 1000 classes
labels = torch.randint(0, 1000, (8,))
accuracy = top5_acc(outputs, labels)
Top-5 accuracy requires top_k=5, so that the metric checks whether the true label is among the top 5 predictions.
Fill both blanks to create a dictionary of top-1 and top-3 accuracy metrics.
metrics = {
    'top1': Accuracy(task="multiclass", top_k=[1]),
    'top3': Accuracy(task="multiclass", top_k=[2])
}
Top-1 accuracy uses top_k=1 and top-3 accuracy uses top_k=3.
Fill all three blanks to compute and print top-1, top-3, and top-5 accuracy from outputs and labels.
outputs = torch.randn(16, 1000)
labels = torch.randint(0, 1000, (16,))
metrics = {
    'top1': Accuracy(task="multiclass", top_k=[1]),
    'top3': Accuracy(task="multiclass", top_k=[2]),
    'top5': Accuracy(task="multiclass", top_k=[3])
}
for name, metric in metrics.items():
    acc = metric(outputs, labels)
    print(f"{name} accuracy: {acc.item():.4f}")
Top-1, top-3, and top-5 accuracies require top_k values 1, 3, and 5 respectively.