
Gradient access (.grad) in PyTorch - Cheat Sheet & Quick Revision

Recall & Review
beginner
What does the .grad attribute in PyTorch represent?
The .grad attribute holds the gradient computed for a tensor during backpropagation: the derivative of the scalar output (typically the loss) with respect to that tensor.
beginner
When is the .grad attribute populated in PyTorch?
The .grad attribute is filled after calling backward() on a tensor that is part of a computation graph with requires_grad=True.
beginner
How do you enable gradient tracking for a tensor in PyTorch?
Set requires_grad=True when creating the tensor, for example: torch.tensor([1.0, 2.0], requires_grad=True).
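The three cards above fit in one short snippet (a minimal sketch; the values are illustrative):

```python
import torch

# Enable gradient tracking at creation time.
x = torch.tensor([1.0, 2.0], requires_grad=True)

# Build a computation graph: y = sum(x**2), so dy/dx = 2*x.
y = (x ** 2).sum()

# backward() populates x.grad with dy/dx.
y.backward()
print(x.grad)  # tensor([2., 4.])
```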
beginner
What happens if you try to access .grad before calling backward()?
The .grad attribute will be None because gradients have not been computed yet.
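This is easy to verify directly (a minimal sketch with a toy tensor):

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
print(x.grad)  # None -- no backward pass has run yet

(2 * x).sum().backward()
print(x.grad)  # tensor([2.]) -- populated by backward()
```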
intermediate
How can you clear gradients stored in .grad before the next backward pass?
Use optimizer.zero_grad() to clear the gradients of every parameter the optimizer manages, or call tensor.grad.zero_() to reset a single tensor's gradient in place.
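Both reset methods in one sketch (toy values; note that in newer PyTorch versions zero_grad() defaults to set_to_none=True, which sets .grad back to None rather than to a zero tensor):

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

(3 * w).sum().backward()
print(w.grad)  # tensor([3.])

# Option 1: the optimizer clears every parameter it manages.
# set_to_none=False keeps a zero tensor instead of None.
opt.zero_grad(set_to_none=False)
print(w.grad)  # tensor([0.])

# Option 2: zero one tensor's gradient in place.
(3 * w).sum().backward()
w.grad.zero_()
print(w.grad)  # tensor([0.])
```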
What must be set to True for a tensor to track gradients in PyTorch?
A. requires_grad (correct)
B. track_grad
C. grad_enabled
D. enable_grad
When is the .grad attribute of a tensor updated?
A. After calling backward() (correct)
B. When the tensor is created
C. After calling forward()
D. When printing the tensor
What is the value of .grad before any backward pass?
A. Same as the tensor
B. Zero tensor
C. Random values
D. None (correct)
How do you reset gradients in PyTorch before a new training step?
A. Set requires_grad=False
B. Call optimizer.zero_grad() (correct)
C. Call tensor.backward()
D. Delete the tensor
Which of these tensors will have a .grad attribute after backward?
A. Tensor with requires_grad=False
B. Tensor created with torch.no_grad()
C. Tensor with requires_grad=True (correct)
D. Tensor converted to numpy
Explain how to access and use the .grad attribute in PyTorch during training.
Hint: think about the steps in a training loop involving gradients. Typically: zero the old gradients, run the forward pass, call backward() to fill .grad, then let the optimizer read .grad in step().
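The training loop that question describes can be sketched end to end (a toy example; the model, data, and learning rate here are illustrative, not prescribed by the material above):

```python
import torch

# Toy linear model: fit y = 2x with a single weight.
w = torch.tensor([0.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
xs = torch.tensor([1.0, 2.0, 3.0])
ys = 2.0 * xs

for _ in range(50):
    opt.zero_grad()                      # 1. clear old gradients
    loss = ((w * xs - ys) ** 2).mean()   # 2. forward pass
    loss.backward()                      # 3. compute gradients into w.grad
    # w.grad can be inspected here, e.g. for logging or clipping
    opt.step()                           # 4. update parameters using w.grad

print(w)  # approaches tensor([2.])
```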
Describe what happens if you forget to reset gradients before the next backward pass in PyTorch.
Hint: consider how gradients behave across multiple backward calls. By default .grad accumulates: each backward() adds to the stored value, so the next update is computed from the sum of stale and fresh gradients.
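The accumulation behavior is easy to demonstrate (a minimal sketch with a toy tensor):

```python
import torch

x = torch.tensor([1.0], requires_grad=True)

# Gradients ACCUMULATE across backward calls unless cleared.
(5 * x).sum().backward()
print(x.grad)  # tensor([5.])

(5 * x).sum().backward()
print(x.grad)  # tensor([10.]) -- added to, not replaced

x.grad.zero_()
(5 * x).sum().backward()
print(x.grad)  # tensor([5.]) -- correct again after resetting
```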