Complete the code to perform the backward pass and compute gradients.
loss.[1]()
The backward() function computes the gradients of the loss with respect to the model parameters.
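As a concrete illustration of the answer, here is a minimal runnable sketch; the tensor `w` and the squared-sum loss are illustrative assumptions, not part of the exercise:

```python
import torch

# Hypothetical minimal example: a scalar loss built from one tensor.
w = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (w ** 2).sum()  # loss = w1^2 + w2^2
loss.backward()        # fills w.grad with d(loss)/dw = 2 * w
print(w.grad)          # tensor([4., 6.])
```

After `backward()`, each parameter with `requires_grad=True` holds its gradient in its `.grad` attribute, which the optimizer later reads.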
Complete the code to reset gradients before the backward pass.
optimizer.[1]()
The zero_grad() function clears old gradients before computing new ones.
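The reason the reset matters is that PyTorch accumulates gradients across backward() calls. A minimal sketch, with an assumed linear layer and SGD optimizer:

```python
import torch

# Assumed setup: one linear layer and an SGD optimizer, used only to
# show why gradients must be cleared between steps.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.ones(1, 2)

model(x).sum().backward()
first = model.weight.grad.clone()

# A second backward() WITHOUT zero_grad() adds to the stored gradients,
# so they double here.
model(x).sum().backward()
accumulated = model.weight.grad.clone()

# zero_grad() clears them (newer PyTorch versions reset .grad to None
# by default rather than to zero tensors).
optimizer.zero_grad()
```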
Fix the error in the code to correctly perform a training step.
optimizer.zero_grad()
output = model(input)
loss = criterion(output, target)
loss.[1]()
optimizer.step()
The backward() call computes the gradients needed before the optimizer updates the parameters.
Fill both blanks to correctly compute loss and perform backward pass.
output = model(input)
loss = [1](output, target)
loss.[2]()
The loss is computed using the criterion function, then backward() computes gradients.
Fill all three blanks to complete a training step with gradient reset, loss computation, and backward pass.
optimizer.[1]()
output = model(input)
loss = [2](output, target)
loss.[3]()
First reset the gradients with zero_grad(), then compute the loss with the criterion, and finally call backward() to compute gradients.
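Putting the three answers together, here is a runnable sketch of the complete training step; the model, criterion, optimizer, and tensor shapes are illustrative assumptions chosen to match the names in the snippets:

```python
import torch

# Assumed setup mirroring the exercise names (model, criterion,
# optimizer, input, target); sizes are made up for illustration.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

input = torch.randn(8, 4)
target = torch.randn(8, 1)

optimizer.zero_grad()             # 1. reset gradients from the last step
output = model(input)             # 2. forward pass
loss = criterion(output, target)  # 3. compute the loss
loss.backward()                   # 4. backward pass: compute gradients
optimizer.step()                  # 5. update the parameters
```

In real training this block runs inside a loop over batches; skipping zero_grad() would make each step use the accumulated sum of all previous gradients.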