Complete the code to set the number of epochs for training.
num_epochs = [1]

Answer: 10. The number of epochs should be an integer value, like 10, not a string or a range.
Complete the code to start the training loop over epochs.
for epoch in [1](num_epochs):
    print(f"Epoch {epoch + 1}")
Answer: range. The distractors all fail here: len returns an integer, which is not iterable; list cannot be called this way without an iterable argument; and enumerate also requires an iterable. We use range(num_epochs) to loop from 0 to num_epochs - 1.
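A minimal sketch of the loop above, showing that range(num_epochs) yields the integers 0 through num_epochs - 1, which is why the printout uses epoch + 1 for 1-based numbering:

```python
num_epochs = 3
seen = []

# range(num_epochs) yields 0, 1, ..., num_epochs - 1
for epoch in range(num_epochs):
    seen.append(epoch)
    print(f"Epoch {epoch + 1}")   # prints Epoch 1, Epoch 2, Epoch 3
```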
Fix the error in the training loop to correctly zero the gradients.
for epoch in range(num_epochs):
    optimizer.[1]()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
Answer: zero_grad. A common wrong guess is zero_gradients(), which does not exist. In PyTorch, the correct method to clear gradients before backpropagation is zero_grad().
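A small sketch of why clearing gradients matters: PyTorch accumulates gradients into .grad on each backward() call, so skipping zero_grad() mixes gradients from previous iterations. Here a bare tensor stands in for a model parameter:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)

(x * x).backward()   # d(x^2)/dx at x=2 is 4
print(x.grad)        # tensor(4.)

(x * x).backward()   # without clearing, gradients accumulate: 4 + 4
print(x.grad)        # tensor(8.)

x.grad.zero_()       # what optimizer.zero_grad() does for each parameter
(x * x).backward()
print(x.grad)        # tensor(4.) again
```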
Fill both blanks to compute the loss and update the model parameters inside the epoch loop.
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = [1](outputs, labels)
    loss.[2]()
    optimizer.step()
Answers: [1] criterion, [2] backward. A common mistake is calling step() on the loss, or backward() on the optimizer. The loss is computed by calling the criterion on the outputs and labels; loss.backward() then computes the gradients, and optimizer.step() updates the parameters.
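A self-contained sketch of one such training step, with the blanks filled in. The tiny model, optimizer, and random data here are assumptions chosen only to make the step runnable, not part of the original exercise:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 3)                    # hypothetical tiny classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(8, 4)                 # hypothetical batch of 8 samples
labels = torch.randint(0, 3, (8,))

optimizer.zero_grad()                      # clear old gradients
outputs = model(inputs)
loss = criterion(outputs, labels)          # blank [1]: the criterion computes the loss
loss.backward()                            # blank [2]: backward() fills in .grad
optimizer.step()                           # step() applies the parameter update
print(loss.item())
```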
Fill all three blanks to print the epoch number, loss value, and accuracy after each epoch.
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    _, predicted = torch.max(outputs, [1])
    correct = (predicted == labels).sum().item()
    accuracy = correct / labels.size([2])
    print(f"Epoch [3]: Loss={loss.item():.4f}, Accuracy={accuracy:.2%}")
Answers: [1] 1, [2] 0, [3] {epoch + 1}. We take torch.max over dimension 1 (the class dimension) to get class predictions, use labels.size(0) for the batch size, and print the epoch number as epoch + 1 for 1-based display.
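Putting all the pieces together, here is a runnable sketch of the full loop with the blanks filled in. As before, the tiny linear model, SGD optimizer, and random data are assumptions made only so the example is self-contained:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 3)                    # hypothetical tiny classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(16, 4)                # hypothetical batch of 16 samples
labels = torch.randint(0, 3, (16,))

num_epochs = 5
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)                # shape (batch, classes) = (16, 3)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    # blank [1]: max over dim 1, the class dimension, gives predictions
    _, predicted = torch.max(outputs, 1)
    correct = (predicted == labels).sum().item()
    # blank [2]: dim 0 of labels is the batch size
    accuracy = correct / labels.size(0)
    # blank [3]: epoch + 1 for human-friendly 1-based numbering
    print(f"Epoch {epoch + 1}: Loss={loss.item():.4f}, Accuracy={accuracy:.2%}")
```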