Complete the code to start the training loop for 5 epochs.
for epoch in range([1]):
    print(f"Epoch {epoch + 1}")
The training loop runs for 5 epochs, so the range should be 5.
Complete the code to zero the gradients before backpropagation.
optimizer.[1]()
Before backpropagation, gradients must be reset using optimizer.zero_grad().
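Why resetting matters: in PyTorch, backward() adds to any gradients already stored in a parameter's .grad rather than overwriting them. A minimal sketch (standalone example, not part of the exercise code) makes the accumulation visible:

```python
import torch

# A single scalar parameter to track gradients on.
x = torch.tensor(2.0, requires_grad=True)

(x * x).backward()   # d(x^2)/dx = 2x = 4 at x = 2
print(x.grad)        # tensor(4.)

(x * x).backward()   # gradient is ADDED to .grad, not replaced
print(x.grad)        # tensor(8.)

x.grad.zero_()       # this reset is what optimizer.zero_grad() does per parameter
print(x.grad)        # tensor(0.)
```

Without the reset, each epoch's gradients would pile on top of the previous epoch's, corrupting the update.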
Complete the code to perform backpropagation on the loss.
loss.[1]()
To compute gradients, call loss.backward() after computing the loss.
Fill both blanks to update model parameters and print the loss value.
optimizer.[1]()
print(f"Loss: {loss.[2]()}")
optimizer.step() updates parameters. loss.item() gets the scalar loss value for printing.
Fill all three blanks to complete the training loop: zero gradients, compute loss backward, and update parameters.
for epoch in range(3):
    optimizer.[1]()
    output = model(data)
    loss = loss_fn(output, target)
    loss.[2]()
    optimizer.[3]()
In each epoch, zero gradients, compute gradients with backward(), then update parameters with step().
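Putting all the answers together, here is a runnable sketch of the full loop. The model, data, and hyperparameters (a tiny linear regression with SGD) are illustrative assumptions, not part of the exercises:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy task: learn y = sum of the 3 input features.
data = torch.randn(16, 3)
target = data.sum(dim=1, keepdim=True)

model = nn.Linear(3, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for epoch in range(5):
    optimizer.zero_grad()            # 1. reset accumulated gradients
    output = model(data)             # forward pass
    loss = loss_fn(output, target)   # compute loss
    loss.backward()                  # 2. compute gradients
    optimizer.step()                 # 3. update parameters
    print(f"Epoch {epoch + 1}, Loss: {loss.item():.4f}")
    losses.append(loss.item())
```

The order is the part to internalize: zero_grad() before backward(), and step() only after backward() has populated the gradients.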