PyTorch · ~10 mins

Label smoothing in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to create a label smoothing loss using PyTorch.

PyTorch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(label_smoothing=[1])
A. 0.1
B. 1.0
C. 0.0
D. -0.1
Common Mistakes
Using 1.0, which removes all confidence in the true class.
Using negative values, which are invalid.
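To see the completed line in context, here is a minimal runnable sketch (the batch and class sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

# label_smoothing must lie in [0.0, 1.0]; 0.1 is a common starting point.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
targets = torch.randint(0, 10, (4,))   # integer class indices
loss = criterion(logits, targets)      # scalar loss tensor
```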
Task 2: fill in the blank (medium)

Complete the code to apply label smoothing in a training loop.

PyTorch
outputs = model(inputs)
loss = criterion(outputs, [1])
A. smoothed_labels
B. raw_labels
C. inputs
D. outputs
Common Mistakes
Passing pre-smoothed labels instead of raw class indices.
Passing the model outputs as the labels.
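For reference, a self-contained training-step sketch (the model, optimizer, and data shapes are stand-in assumptions). The key point: the criterion receives the raw integer labels and applies the smoothing internally.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 3)                # stand-in model (assumption)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(2, 8)
raw_labels = torch.tensor([0, 2])      # plain integer class indices

outputs = model(inputs)
loss = criterion(outputs, raw_labels)  # raw labels, not smoothed ones
optimizer.zero_grad()
loss.backward()
optimizer.step()
```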
Task 3: fill in the blank (hard)

Fix the error in the label smoothing parameter to avoid invalid values.

PyTorch
criterion = nn.CrossEntropyLoss(label_smoothing=[1])
A. 0.0
B. -0.05
C. 1.5
D. 0.15
Common Mistakes
Using negative values or values greater than 1, which cause runtime errors.
Task 4: fill in the blanks (hard)

Fill both blanks to create a smoothed label tensor for a batch of size 3 and 5 classes.

PyTorch
import torch

batch_size = 3
num_classes = 5
smoothing = [1]
labels = torch.tensor([0, 2, 4])
smoothed_labels = torch.full((batch_size, num_classes), [2])
smoothed_labels.scatter_(1, labels.unsqueeze(1), 1 - smoothing)
A. 0.1
B. 0.9
C. 0.025
D. 0.5
Common Mistakes
Using 0.9 as the smoothing value, which is far too high.
Using 0.5 as the off-target fill value, which is incorrect.
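One way to sanity-check this construction: whatever values fill the blanks, each row of the smoothed tensor should remain a valid probability distribution. A sketch using one common convention, where the smoothing mass is spread as smoothing / (num_classes - 1) over the off-target classes (an assumption, not the only possible choice):

```python
import torch

batch_size, num_classes = 3, 5
smoothing = 0.1
labels = torch.tensor([0, 2, 4])

# Spread the smoothing mass evenly over the non-target classes.
off_value = smoothing / (num_classes - 1)
smoothed_labels = torch.full((batch_size, num_classes), off_value)
smoothed_labels.scatter_(1, labels.unsqueeze(1), 1 - smoothing)

# Each row sums to 1: (num_classes - 1) * off_value + (1 - smoothing).
assert torch.allclose(smoothed_labels.sum(dim=1), torch.ones(batch_size))
```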
Task 5: fill in the blanks (hard)

Fill all three blanks to compute the label smoothing loss manually.

PyTorch
import torch
import torch.nn.functional as F

outputs = torch.tensor([[2.0, 0.5, 0.3], [0.1, 1.0, 2.1]])
labels = torch.tensor([0, 2])
smoothing = [1]
num_classes = outputs.size(1)
with torch.no_grad():
    true_dist = torch.full_like(outputs, [2])
    true_dist.scatter_(1, labels.unsqueeze(1), [3])
log_probs = F.log_softmax(outputs, dim=1)
loss = (-true_dist * log_probs).sum(dim=1).mean()
A. 0.1
B. 0.05
C. 0.9
Common Mistakes
Mixing up the smoothing value and the target-class value.
Not using torch.no_grad() when creating true_dist.
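A caveat worth knowing: the manual recipe above places smoothing / (num_classes - 1) on the off-target classes, while nn.CrossEntropyLoss(label_smoothing=...) instead mixes in a uniform distribution over all classes, so the target receives 1 - smoothing + smoothing / num_classes. A sketch following PyTorch's convention, so the manual loss can be compared directly against the built-in one:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

outputs = torch.tensor([[2.0, 0.5, 0.3], [0.1, 1.0, 2.1]])
labels = torch.tensor([0, 2])
smoothing = 0.1
num_classes = outputs.size(1)

# PyTorch's convention: uniform smoothing over ALL classes, with the
# target class keeping 1 - smoothing + smoothing / num_classes.
with torch.no_grad():
    true_dist = torch.full_like(outputs, smoothing / num_classes)
    true_dist.scatter_(1, labels.unsqueeze(1),
                       1 - smoothing + smoothing / num_classes)

log_probs = F.log_softmax(outputs, dim=1)
manual_loss = (-true_dist * log_probs).sum(dim=1).mean()

builtin_loss = nn.CrossEntropyLoss(label_smoothing=smoothing)(outputs, labels)
```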