Challenge - 5 Problems
Loss Tracker Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of training and validation loss tracking code
What will be the printed output after running this PyTorch training loop snippet for 1 epoch?
PyTorch
```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

train_losses = []
val_losses = []

# Dummy data
train_data = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
train_targets = torch.tensor([[1.0], [2.0]])
val_data = torch.tensor([[5.0, 6.0]])
val_targets = torch.tensor([[3.0]])

# Training loop for 1 epoch
model.train()
optimizer.zero_grad()
pred = model(train_data)
loss = criterion(pred, train_targets)
loss.backward()
optimizer.step()
train_losses.append(loss.item())

model.eval()
with torch.no_grad():
    val_pred = model(val_data)
    val_loss = criterion(val_pred, val_targets)
    val_losses.append(val_loss.item())

print(f"Train loss: {train_losses[0]:.4f}")
print(f"Validation loss: {val_losses[0]:.4f}")
```
Attempts: 2 left
💡 Hint
Remember the model is untrained initially, so losses won't be zero.
✗ Incorrect
The initial random weights cause the model to predict values far from targets, resulting in non-zero losses. The training loss is computed on two samples, and validation loss on one sample, both using mean squared error.
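A minimal sketch of this point (assuming PyTorch is available; the seed is fixed only so the run is reproducible): a freshly initialized `nn.Linear` layer has random weights, so its MSE loss against the targets is essentially never zero.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # fix the random initialization for reproducibility

model = nn.Linear(2, 1)   # untrained: weights are random
criterion = nn.MSELoss()

data = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
targets = torch.tensor([[1.0], [2.0]])

with torch.no_grad():
    loss = criterion(model(data), targets)

# Random weights produce predictions far from the targets,
# so the initial loss is non-zero.
print(f"Initial loss: {loss.item():.4f}")
```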
🧠 Conceptual
intermediate · 1:30 remaining
Purpose of tracking validation loss during training
Why do we track validation loss separately from training loss during model training?
Attempts: 2 left
💡 Hint
Think about how we know if the model is overfitting or not.
✗ Incorrect
Validation loss helps us see if the model performs well on unseen data, indicating generalization. Training loss alone only shows how well the model fits the training data.
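A sketch of the overfitting signal using hypothetical per-epoch loss histories (the numbers below are illustrative only): training loss keeps falling while validation loss bottoms out and then climbs, and the epoch with the lowest validation loss marks the best checkpoint.

```python
# Hypothetical per-epoch loss histories (illustrative numbers only)
train_losses = [1.00, 0.70, 0.50, 0.35, 0.25, 0.18, 0.13, 0.10]
val_losses   = [1.05, 0.80, 0.62, 0.55, 0.53, 0.56, 0.61, 0.68]

# The epoch with the lowest validation loss is the best checkpoint;
# beyond it, falling training loss means the model is fitting noise.
best_epoch = min(range(len(val_losses)), key=lambda i: val_losses[i])
overfitting = val_losses[-1] > val_losses[best_epoch]

print(f"Best epoch: {best_epoch}")               # → 4
print(f"Overfitting after that: {overfitting}")  # → True
```

Training loss alone would suggest the model keeps improving; only the validation curve reveals where generalization stops.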
❓ Hyperparameter
advanced · 1:30 remaining
Effect of batch size on training and validation loss tracking
How does increasing the batch size during training typically affect the smoothness of training and validation loss curves?
Attempts: 2 left
💡 Hint
Think about how averaging over more samples affects loss calculation.
✗ Incorrect
Larger batches average gradients over more samples, reducing noise and making loss curves smoother. However, very large batches can hurt generalization.
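A plain-Python sketch of the averaging effect (simulated per-sample losses, a hypothetical noise model): grouping the same noisy per-sample losses into larger batches shrinks the spread of the per-batch means, which is exactly what makes the loss curve look smoother.

```python
import random

random.seed(0)
# Simulate per-sample losses: a "true" loss of 1.0 plus per-sample noise
samples = [1.0 + random.gauss(0, 0.5) for _ in range(1024)]

def batch_means(values, batch_size):
    """Average the per-sample losses within each batch."""
    return [sum(values[i:i + batch_size]) / batch_size
            for i in range(0, len(values), batch_size)]

def spread(values):
    """Standard deviation of the per-batch losses."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

small = spread(batch_means(samples, 4))    # noisy curve
large = spread(batch_means(samples, 64))   # smoother curve
print(f"std of batch losses, batch size 4:  {small:.4f}")
print(f"std of batch losses, batch size 64: {large:.4f}")
# Averaging over more samples shrinks the noise roughly as 1/sqrt(batch size).
```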
❓ Metrics
advanced · 1:00 remaining
Calculating average validation loss over multiple batches
Given validation losses for 3 batches as [0.5, 0.7, 0.6], what is the correct average validation loss to report?
Attempts: 2 left
💡 Hint
Add all losses and divide by the number of batches.
✗ Incorrect
Average loss = (0.5 + 0.7 + 0.6) / 3 = 1.8 / 3 = 0.6
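The same calculation in code, plus one caveat worth knowing: if the batches have unequal sizes (the hypothetical counts below are an assumption for illustration), a plain mean over-weights the smaller batch, and a sample-weighted average is more accurate.

```python
batch_losses = [0.5, 0.7, 0.6]

# Simple mean over equally sized batches
avg = sum(batch_losses) / len(batch_losses)
print(f"Average validation loss: {avg:.4f}")  # → 0.6000

# With unequal batch sizes (hypothetical counts), weight each
# batch loss by its number of samples instead.
batch_sizes = [32, 16, 16]
weighted = (sum(l * n for l, n in zip(batch_losses, batch_sizes))
            / sum(batch_sizes))
print(f"Sample-weighted average: {weighted:.4f}")  # → 0.5750
```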
🔧 Debug
expert · 2:00 remaining
Identifying the bug in validation loss tracking code
What error will this PyTorch code raise when tracking validation loss, and why?
```python
val_losses = []
model.eval()
for data, target in val_loader:
    pred = model(data)
    loss = criterion(pred, target)
    val_losses.append(loss)
print(f"Validation loss: {sum(val_losses)/len(val_losses):.4f}")
```
Attempts: 2 left
💡 Hint
Think about the data type of loss and how sum() works.
✗ Incorrect
Each loss is a zero-dimensional tensor, not a Python number. Summing the list actually succeeds (tensors support addition), but the result is still a tensor, and applying the `:.4f` format spec to a tensor raises a TypeError in many PyTorch versions. Calling loss.item() extracts a plain float, which formats cleanly; it also stops the list from retaining each batch's computation graph (and the loop should additionally run under torch.no_grad()).
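A corrected sketch of the loop (the model, criterion, and `val_loader` here are dummy stand-ins so the snippet runs on its own; the original code assumed they already exist):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
model = nn.Linear(2, 1)
criterion = nn.MSELoss()
# Dummy validation set: 8 samples in batches of 4
val_loader = DataLoader(
    TensorDataset(torch.randn(8, 2), torch.randn(8, 1)), batch_size=4
)

val_losses = []
model.eval()
with torch.no_grad():                    # no gradients needed for evaluation
    for data, target in val_loader:
        pred = model(data)
        loss = criterion(pred, target)
        val_losses.append(loss.item())   # .item() -> plain Python float

# Plain floats format cleanly and don't pin computation graphs in memory.
print(f"Validation loss: {sum(val_losses) / len(val_losses):.4f}")
```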