PyTorch · ~20 mins

Training and validation loss tracking in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Loss Tracker Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
Predict Output
intermediate
2:00 remaining
Output of training and validation loss tracking code
What will be the printed output after running this PyTorch training loop snippet for 1 epoch?
PyTorch
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

train_losses = []
val_losses = []

# Dummy data
train_data = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
train_targets = torch.tensor([[1.0], [2.0]])
val_data = torch.tensor([[5.0, 6.0]])
val_targets = torch.tensor([[3.0]])

# Training loop for 1 epoch
model.train()
optimizer.zero_grad()
pred = model(train_data)
loss = criterion(pred, train_targets)
loss.backward()
optimizer.step()
train_losses.append(loss.item())

model.eval()
with torch.no_grad():
    val_pred = model(val_data)
    val_loss = criterion(val_pred, val_targets)
    val_losses.append(val_loss.item())

print(f"Train loss: {train_losses[0]:.4f}")
print(f"Validation loss: {val_losses[0]:.4f}")
A
Train loss: 0.0000
Validation loss: 6.2500
B
Train loss: 1.2500
Validation loss: 0.0000
C
Train loss: 1.2500
Validation loss: 6.2500
D
Train loss: 0.0000
Validation loss: 0.0000
Attempts: 2 left
💡 Hint
Remember the model is untrained initially, so losses won't be zero.
🧠 Conceptual
intermediate
1:30 remaining
Purpose of tracking validation loss during training
Why do we track validation loss separately from training loss during model training?
A
To reduce the size of the training dataset automatically
B
To speed up the training process by skipping some training steps
C
To increase the model complexity during training
D
To check if the model is learning patterns that generalize to new data
Attempts: 2 left
💡 Hint
Think about how we know if the model is overfitting or not.
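The overfitting signature the hint points to can be sketched in plain Python: training loss keeps falling while validation loss bottoms out and then rises. The loss values below are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical per-epoch losses: training keeps improving while
# validation bottoms out and then worsens — the classic overfitting sign.
train_losses = [1.20, 0.80, 0.55, 0.40, 0.30, 0.22, 0.16]
val_losses   = [1.25, 0.90, 0.70, 0.62, 0.60, 0.66, 0.75]

def best_epoch(losses):
    """Return the epoch (0-indexed) with the lowest recorded loss."""
    return min(range(len(losses)), key=lambda i: losses[i])

stop = best_epoch(val_losses)
print(f"Validation loss bottoms out at epoch {stop}: {val_losses[stop]:.2f}")
print(f"Training loss at that epoch: {train_losses[stop]:.2f}")
```

Tracking both curves is what lets you notice that epoch 4 is the point to stop at, even though training loss keeps improving afterwards.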
Hyperparameter
advanced
1:30 remaining
Effect of batch size on training and validation loss tracking
How does increasing the batch size during training typically affect the smoothness of training and validation loss curves?
A
Larger batch sizes usually make loss curves smoother but may reduce generalization
B
Larger batch sizes always make loss curves more noisy and unstable
C
Batch size has no effect on loss curve smoothness
D
Smaller batch sizes always produce smoother loss curves
Attempts: 2 left
💡 Hint
Think about how averaging over more samples affects loss calculation.
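The averaging effect the hint describes can be simulated without any training: logging the mean of per-sample losses over larger batches produces a lower-variance, smoother-looking curve. The noise model below is a hypothetical stand-in for per-sample loss fluctuation.

```python
import random

random.seed(0)
# Hypothetical noisy per-sample losses (mean 1.0, std 0.5).
sample_losses = [random.gauss(1.0, 0.5) for _ in range(10000)]

def batch_means(losses, batch_size):
    """Average losses over consecutive batches, as a training loop would log them."""
    return [sum(losses[i:i + batch_size]) / batch_size
            for i in range(0, len(losses) - batch_size + 1, batch_size)]

def spread(values):
    """Standard deviation of the logged loss curve — a proxy for its noisiness."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

print(f"batch size  8: curve std = {spread(batch_means(sample_losses, 8)):.3f}")
print(f"batch size 64: curve std = {spread(batch_means(sample_losses, 64)):.3f}")
```

Averaging over more samples per logged point shrinks the noise roughly by the square root of the batch size, which is why the 64-sample curve comes out smoother.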
Metrics
advanced
1:00 remaining
Calculating average validation loss over multiple batches
Given validation losses for 3 batches as [0.5, 0.7, 0.6], what is the correct average validation loss to report?
A
0.7
B
0.6
C
0.5
D
1.8
Attempts: 2 left
💡 Hint
Add all losses and divide by the number of batches.
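As the hint says, the reported figure is just the arithmetic mean of the per-batch losses:

```python
# Per-batch validation losses from the question.
batch_val_losses = [0.5, 0.7, 0.6]

# Sum all batch losses and divide by the number of batches.
avg = sum(batch_val_losses) / len(batch_val_losses)
print(f"Average validation loss: {avg:.4f}")  # prints 0.6000
```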
🔧 Debug
expert
2:00 remaining
Identifying the bug in validation loss tracking code
What error will this PyTorch code raise when tracking validation loss, and why?
PyTorch
val_losses = []
model.eval()
for data, target in val_loader:
    pred = model(data)
    loss = criterion(pred, target)
    val_losses.append(loss)
print(f"Validation loss: {sum(val_losses)/len(val_losses):.4f}")
A
TypeError because loss tensors cannot be summed directly without converting to numbers
B
ZeroDivisionError because val_losses list is empty
C
SyntaxError due to missing colon after for loop
D
No error, prints average validation loss correctly
Attempts: 2 left
💡 Hint
Think about the data type of loss and how sum() works.
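For reference, one common way to make a validation loop like the one in the question safe to aggregate and format is to store plain Python floats via `.item()` and to disable gradient tracking. The model, criterion, and loader below are hypothetical stand-ins, included only so the sketch runs on its own.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the objects referenced in the question.
model = nn.Linear(2, 1)
criterion = nn.MSELoss()
val_loader = [(torch.randn(4, 2), torch.randn(4, 1)) for _ in range(3)]

val_losses = []
model.eval()
with torch.no_grad():                    # validation needs no gradients
    for data, target in val_loader:
        pred = model(data)
        loss = criterion(pred, target)
        val_losses.append(loss.item())   # .item() stores a plain float, not a tensor
print(f"Validation loss: {sum(val_losses) / len(val_losses):.4f}")
```

Appending `loss.item()` instead of the tensor also avoids keeping each batch's computation graph alive, which would otherwise grow memory use across the loop.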