Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)
Complete the code to initialize the training loss list.

PyTorch:
train_losses = [1]
Common Mistakes
Using a dictionary {} instead of a list.
Initializing with 0 or None which cannot store multiple values.
Explanation: We use an empty list [] to store training loss values over epochs.
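A quick sketch of why a list is the right container here; the loss values are made-up numbers, not output from a real model:

```python
# Initialize an empty list so one loss value per epoch can be appended.
train_losses = []

# Simulated per-epoch losses (hypothetical values for illustration).
for epoch_loss in [0.92, 0.74, 0.61]:
    train_losses.append(epoch_loss)

print(train_losses)  # all three values are preserved, in order
```

A dict or a single 0/None could not keep an ordered history of one value per epoch the way a list does.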
Task 2: Fill in the blank (medium)
Complete the code to calculate the average training loss for an epoch.

PyTorch:
epoch_train_loss = sum(train_losses) / [1]
Common Mistakes
Dividing by the maximum or minimum loss instead of the count.
Dividing by the list itself which causes an error.
Explanation: To get the average loss, divide the sum by the number of loss values using len(train_losses).
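A minimal sketch of the averaging step, using hypothetical per-batch losses:

```python
# Hypothetical batch losses accumulated during one epoch.
train_losses = [0.9, 0.7, 0.5, 0.3]

# Divide the sum by the count -- not by max(), min(), or the list itself.
epoch_train_loss = sum(train_losses) / len(train_losses)
print(epoch_train_loss)  # 0.6
```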
Task 3: Fill in the blank (hard)
Fix the error in the code to append validation loss after each epoch.

PyTorch:
validation_losses.[1](val_loss)
Common Mistakes
Using add, which is not a list method.
Using extend, which expects an iterable.
Using insert without specifying an index.
Explanation: Use append to add a single loss value to the list.
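A short sketch contrasting append with the common wrong choices; the loss values are hypothetical:

```python
validation_losses = [0.80, 0.65]
val_loss = 0.52  # hypothetical loss for the latest epoch

# append adds exactly one value to the end of the list.
validation_losses.append(val_loss)
print(validation_losses)  # [0.8, 0.65, 0.52]

# By contrast, extend expects an iterable (extend(0.52) raises TypeError),
# and Python lists have no add method at all.
```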
Task 4: Fill in the blank (hard)
Fill both blanks to compute and store average training and validation losses.

PyTorch:
avg_train_loss = sum(train_losses) [1] len(train_losses)
avg_val_loss = sum(validation_losses) [2] len(validation_losses)
Common Mistakes
Using multiplication or addition instead of division.
Using subtraction which gives incorrect results.
Explanation: Divide the sum of losses by the count to get the average for both training and validation.
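Both blanks take the same division operator; a sketch with made-up loss lists:

```python
# Hypothetical per-batch losses for one epoch (illustrative numbers only).
train_losses = [1.0, 0.8, 0.6]
validation_losses = [0.9, 0.7]

# Both blanks are the division operator /.
avg_train_loss = sum(train_losses) / len(train_losses)
avg_val_loss = sum(validation_losses) / len(validation_losses)
print(avg_train_loss, avg_val_loss)
```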
Task 5: Fill in the blank (hard)
Fill all three blanks to track losses during training and validation phases.

PyTorch:
for epoch in range(num_epochs):
    model.train()
    train_loss = 0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.[1]()
        optimizer.step()
        train_loss += loss.item()
    train_losses.[2](train_loss / len(train_loader))

    model.eval()
    val_loss = 0
    with torch.no_grad():
        for data, target in val_loader:
            output = model(data)
            loss = criterion(output, target)
            val_loss += loss.item()
    validation_losses.[3](val_loss / len(val_loader))
Common Mistakes
Using step instead of backward for gradients.
Using step or other wrong methods to add losses to the lists.
Explanation: Call loss.backward() to compute gradients, then append the average losses to the lists.
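The bookkeeping side of the completed loop can be sketched without PyTorch by stubbing out the model, optimizer, and loaders; the per-batch loss numbers below are hypothetical stand-ins so only the loss tracking actually runs:

```python
# Plain-Python sketch of the completed loop's loss tracking.
num_epochs = 2
train_batches = [[0.9, 0.7], [0.5, 0.3]]  # hypothetical per-batch train losses per epoch
val_batches = [[0.8, 0.6], [0.4, 0.2]]    # hypothetical per-batch val losses per epoch

train_losses, validation_losses = [], []
for epoch in range(num_epochs):
    # Training phase: in real code, loss.backward() and optimizer.step() run per batch.
    train_loss = 0
    for batch_loss in train_batches[epoch]:
        train_loss += batch_loss
    train_losses.append(train_loss / len(train_batches[epoch]))

    # Validation phase: in real code, this is wrapped in torch.no_grad().
    val_loss = 0
    for batch_loss in val_batches[epoch]:
        val_loss += batch_loss
    validation_losses.append(val_loss / len(val_batches[epoch]))

print(train_losses, validation_losses)
```

Note the indentation: each append happens once per epoch, after the inner batch loop finishes, so each list ends up with one average per epoch.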