PyTorch · ML · ~20 mins

Gradient access (.grad) in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
What is the output of accessing .grad after backward()?
Consider the following PyTorch code snippet. What will be the value of x.grad after running it?
PyTorch
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
loss = y
loss.backward()
print(x.grad.item())
A. 6.0
B. 8.0
C. 12.0
D. None
💡 Hint
Recall the derivative of x^3 is 3*x^2.
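After attempting the question, the hint can be verified with a short runnable sketch (note: running it reveals the answer):

```python
import torch

# d/dx (x**3) = 3*x**2; at x = 2 this is 3 * 4 = 12
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
y.backward()
print(x.grad.item())  # 12.0
```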
Multiple Choice
intermediate
Which tensor will have .grad populated after backward()?
Given the following tensors, which one will have its .grad attribute populated after calling loss.backward()?
PyTorch
import torch
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(2.0)
z = x * y
loss = z
loss.backward()
A. x
B. y
C. z
D. loss
💡 Hint
Only tensors with requires_grad=True track gradients.
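After attempting the question, this sketch shows which .grad attributes actually end up populated (running it reveals the answer):

```python
import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(2.0)  # requires_grad defaults to False
z = x * y
z.backward()
print(x.grad)     # tensor(2.) -- dz/dx = y
print(y.grad)     # None -- y does not track gradients
print(z.is_leaf)  # False -- z is an intermediate; its .grad is not retained
```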
Hyperparameter
advanced
What happens if you call backward() twice without zeroing gradients?
In PyTorch, if you call loss.backward() twice on the same graph without zeroing gradients, what will happen to x.grad?
PyTorch
import torch
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
loss = y
loss.backward(retain_graph=True)  # keep the graph so a second backward pass is allowed
loss.backward()
print(x.grad.item())
A. The gradient will be zero
B. The gradient will be reset to the single-pass value (6.0)
C. A RuntimeError will be raised
D. The gradient will accumulate to twice the single-pass value (12.0)
💡 Hint
Gradients accumulate by default in PyTorch.
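For reference after answering, the accumulation behavior can be checked with the sketch below. One caveat: a second backward() pass only works if the first call keeps the graph alive with retain_graph=True; otherwise PyTorch raises a RuntimeError.

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward(retain_graph=True)  # keep graph buffers for a second pass
print(x.grad.item())  # 6.0 -- d/dx (x**2) = 2*x = 6 at x = 3
y.backward()
print(x.grad.item())  # 12.0 -- gradients accumulate rather than overwrite
```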
🔧 Debug
advanced
Why is x.grad never populated?
Look at this code. Calling loss.backward() raises a RuntimeError and x.grad is never populated. Why?
PyTorch
import torch
x = torch.tensor(4.0)
y = x ** 2
loss = y
loss.backward()
print(x.grad)
A. x does not have requires_grad=True
B. x.grad was manually set to None
C. loss.backward() was not called
D. loss is not a scalar
💡 Hint
Check if the tensor tracks gradients.
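After answering: the fix is to create x with requires_grad=True. A sketch of the corrected version (with the original code, loss.backward() itself raises a RuntimeError because nothing in the graph requires gradients):

```python
import torch

x = torch.tensor(4.0, requires_grad=True)  # the fix: track gradients on x
y = x ** 2
y.backward()
print(x.grad)  # tensor(8.) -- d/dx (x**2) = 2*x = 8 at x = 4
```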
🧠 Conceptual
expert
What is the shape and content of .grad for a tensor with multiple elements?
Given the code below, what will be the shape and values of x.grad after loss.backward()?
PyTorch
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2
loss = y.sum()
loss.backward()
print(x.grad)
A. Tensor(6.0), a scalar
B. Tensor([2.0, 4.0, 6.0]) with shape (3,)
C. Tensor([1.0, 2.0, 3.0]) with shape (3,)
D. None
💡 Hint
Derivative of x^2 is 2*x element-wise.
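After answering, the element-wise gradient can be confirmed with this sketch (running it reveals the answer):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()  # sum() reduces to a scalar, so backward() needs no argument
loss.backward()
print(x.grad)        # tensor([2., 4., 6.]) -- 2*x element-wise
print(x.grad.shape)  # torch.Size([3])
```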