Complete the code to create a tensor that tracks gradients for training.
import torch
x = torch.tensor([2.0, 3.0], requires_grad=[1])
Setting requires_grad=True tells PyTorch to track operations on the tensor for gradient calculation.
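A minimal worked answer, with the blank filled in with True:

```python
import torch

# Blank [1] = True: PyTorch records operations on x for autograd
x = torch.tensor([2.0, 3.0], requires_grad=True)
print(x.requires_grad)  # True
```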
Complete the code to compute the gradient of y with respect to x.
y = x.pow(2).sum()
y.[1]()
Calling backward() without arguments computes gradients for scalar outputs.
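The completed snippet, continuing from a tensor x that tracks gradients; the blank is backward:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = x.pow(2).sum()   # scalar: 2^2 + 3^2 = 13
y.backward()         # no argument needed because y is a scalar
print(x.grad)        # dy/dx = 2x -> tensor([4., 6.])
```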
Fix the error in the code to correctly zero gradients before the backward pass.
optimizer.zero_grad()
loss = model(input).sum()
loss.[1]()
optimizer.step()
The backward() function computes gradients for the loss.
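A runnable sketch of the answer; the Linear model, the SGD optimizer, and the random input are illustrative stand-ins, since the exercise does not define them:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model and optimizer for illustration
model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
input = torch.randn(4, 3)

optimizer.zero_grad()        # clear any stale gradients first
loss = model(input).sum()
loss.backward()              # blank [1]: fills .grad on each parameter
optimizer.step()             # apply the update
```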
Fill both blanks to create a simple training loop that updates model parameters.
for data, target in dataloader:
    optimizer.[1]()
    output = model(data)
    loss = criterion(output, target)
    loss.[2]()
    optimizer.step()
Zero stale gradients with zero_grad() before each backward pass, then compute fresh gradients with backward() so optimizer.step() can apply them.
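The full loop with both blanks filled; the toy list of (data, target) batches, the Linear model, and the MSE loss are illustrative assumptions standing in for a real DataLoader and model:

```python
import torch
import torch.nn as nn

# Toy batches standing in for a real DataLoader (illustrative only)
dataloader = [(torch.randn(8, 2), torch.randn(8, 1)) for _ in range(3)]
model = nn.Linear(2, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for data, target in dataloader:
    optimizer.zero_grad()              # blank [1]: zero_grad
    output = model(data)
    loss = criterion(output, target)
    loss.backward()                    # blank [2]: backward
    optimizer.step()
```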
Fill all three blanks to define a tensor, compute a function, and get its gradient.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=[1])
y = x.[2](2).sum()
y.[3]()
Enable gradient tracking with True, use pow(2) to square, then call backward() to compute gradients.
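All three blanks filled in one runnable snippet:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)  # [1] = True
y = x.pow(2).sum()                                     # [2] = pow
y.backward()                                           # [3] = backward
print(x.grad)  # dy/dx = 2x -> tensor([2., 4., 6.])
```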