Challenge - 5 Problems
Epoch Training Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
Output of a simple epoch training loop
What will be the printed output after running this PyTorch training loop for 2 epochs?
Assume the model, optimizer, and loss function are correctly defined and the dataloader has 3 batches with dummy data.
Code:
PyTorch
for epoch in range(2):
    total_loss = 0
    for batch in range(3):
        loss = batch + 1  # dummy loss values: 1, 2, 3
        total_loss += loss
    print(f"Epoch {epoch+1}, Loss: {total_loss}")
Attempts:
2 left
💡 Hint
Sum the dummy loss values for each batch inside the epoch.
✗ Incorrect
The dummy loss values per batch are 1, 2, and 3. Their sum is 6. This sum is printed for each epoch, so both epochs print 6.
❓ Model Choice
Intermediate · 1:30 remaining
Choosing the correct place to reset loss in epoch training
In an epoch-based training loop, where should you reset the total loss variable to zero to correctly track loss per epoch?
Attempts:
2 left
💡 Hint
Think about when you want to start fresh loss tracking for each epoch.
✗ Incorrect
Resetting total loss before each epoch ensures loss is accumulated fresh for that epoch only.
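A minimal pure-Python sketch of the correct reset placement, using hypothetical per-batch loss values for illustration:

```python
# Hypothetical per-batch losses for two epochs (illustrative values only)
epoch_losses = [[0.9, 0.7, 0.5], [0.4, 0.3, 0.2]]

for epoch, batch_losses in enumerate(epoch_losses, start=1):
    total_loss = 0.0  # reset at the top of each epoch, before the batch loop
    for loss in batch_losses:
        total_loss += loss
    print(f"Epoch {epoch}, Loss: {total_loss:.1f}")
```

Resetting inside the epoch loop (but outside the batch loop) ensures each printed total reflects only that epoch's batches; resetting before the outer loop would accumulate losses across all epochs.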
❓ Hyperparameter
Advanced · 1:30 remaining
Effect of increasing number of epochs on model training
What is the most likely effect of increasing the number of epochs during training of a neural network?
Attempts:
2 left
💡 Hint
Think about what happens if the model learns too much from training data.
✗ Incorrect
Too many epochs can cause the model to memorize training data, reducing its ability to generalize to new data.
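A common guard against running too many epochs is early stopping: stop once validation loss stops improving. A minimal pure-Python sketch of the idea, with hypothetical validation losses:

```python
# Hypothetical validation losses: improving at first, then worsening (overfitting)
val_losses = [0.80, 0.60, 0.50, 0.45, 0.47, 0.52, 0.60]

best_loss = float("inf")
patience, bad_epochs = 2, 0  # tolerate 2 epochs without improvement
stopped_at = None

for epoch, val_loss in enumerate(val_losses, start=1):
    if val_loss < best_loss:
        best_loss = val_loss
        bad_epochs = 0            # improvement: reset the counter
    else:
        bad_epochs += 1           # no improvement this epoch
        if bad_epochs >= patience:
            stopped_at = epoch    # stop before overfitting worsens further
            break

print(f"Stopped at epoch {stopped_at}, best val loss {best_loss}")
```

Here training halts at epoch 6, keeping the best model from epoch 4 rather than continuing to memorize the training data.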
❓ Metrics
Advanced · 1:30 remaining
Calculating average loss per epoch
Given a training loop that sums batch losses in total_loss and processes 5 batches per epoch, how do you calculate the average loss per epoch?
Attempts:
2 left
💡 Hint
Average means total divided by count.
✗ Incorrect
Average loss per epoch is total loss divided by number of batches processed in that epoch.
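The calculation can be sketched directly, using hypothetical losses for the 5 batches:

```python
batch_losses = [2.0, 1.5, 1.0, 0.5, 1.0]  # hypothetical per-batch losses

total_loss = sum(batch_losses)            # accumulated over the epoch
num_batches = len(batch_losses)           # 5 batches per epoch
avg_loss = total_loss / num_batches       # average loss per epoch

print(f"Average loss: {avg_loss}")        # 6.0 / 5 = 1.2
```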
🔧 Debug
Expert · 2:30 remaining
Identifying the bug in epoch training loop
What subtle bug does this PyTorch epoch training loop contain?
Code:
for epoch in range(3):
    total_loss = 0
    for batch in dataloader:
        optimizer.zero_grad()
        outputs = model(batch[0])
        loss = criterion(outputs, batch[1])
        loss.backward()
        optimizer.step()
        total_loss += loss
    print(f"Epoch {epoch+1} Loss: {total_loss}")
Attempts:
2 left
💡 Hint
Check the type of loss and total_loss before adding.
✗ Incorrect
loss is a tensor and total_loss starts as an int, but adding them does not raise a TypeError: `0 + loss` silently promotes total_loss to a tensor. The real bug is that each accumulated loss tensor still carries its autograd graph, so memory grows across batches. Accumulate `loss.item()` (a plain Python float) instead.
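A self-contained corrected version of the loop (the model, data, and hyperparameters below are hypothetical stand-ins, since the original snippet does not define them):

```python
import torch
import torch.nn as nn

# Hypothetical setup: tiny linear model, MSE loss, SGD, and 3 dummy batches
model = nn.Linear(4, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataloader = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]

for epoch in range(3):
    total_loss = 0.0
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        # .item() extracts a Python float, detached from the autograd graph,
        # so accumulation does not keep every batch's graph alive in memory
        total_loss += loss.item()
    print(f"Epoch {epoch+1} Loss: {total_loss:.4f}")
```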