Challenge - 5 Problems
No_Grad_Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00
Output of code using no_grad context manager
What is the output of this PyTorch code snippet?
PyTorch
import torch
x = torch.tensor([2.0], requires_grad=True)
with torch.no_grad():
    y = x * 3
print(y.requires_grad)
💡 Hint
Think about what no_grad does to the computation graph.
Inside the no_grad block, PyTorch does not track operations for gradients, so y.requires_grad is False.
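For comparison, here is a minimal sketch (not part of the quiz code) contrasting the same multiplication inside and outside the context manager:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

# Outside no_grad, operations on x are tracked.
y_tracked = x * 3
print(y_tracked.requires_grad)    # True

# Inside no_grad, the same operation is not tracked.
with torch.no_grad():
    y_untracked = x * 3
print(y_untracked.requires_grad)  # False
```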
🧠 Conceptual
Intermediate · 1:30
Purpose of no_grad context manager
What is the main purpose of using the no_grad context manager in PyTorch?
💡 Hint
Think about when you do not need to update model weights.
no_grad disables gradient tracking, which reduces memory use and speeds up computations during inference.
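One way to see the memory saving is via grad_fn: a tracked tensor keeps a reference to the graph node that produced it, while a tensor created under no_grad does not. A small sketch:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

y = x * 3                 # tracked: y holds a grad_fn linking back to x
with torch.no_grad():
    z = x * 3             # untracked: no graph node is allocated

print(y.grad_fn is not None)  # True
print(z.grad_fn is None)      # True
```

Because no graph is retained, intermediate activations can be freed immediately, which is why no_grad is the default wrapper for inference loops.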
❓ Metrics
Advanced · 2:00
Effect of no_grad on backward pass
Consider the following code. What happens when calling backward() on y inside the no_grad block?
PyTorch
import torch
x = torch.tensor([1.0], requires_grad=True)
with torch.no_grad():
    y = x * 2
    y.backward()
💡 Hint
Can you call backward on a tensor that does not require gradients?
Inside no_grad, y does not track gradients, so calling backward() raises RuntimeError.
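The error can be demonstrated directly. This sketch (separate from the quiz snippet) catches the exception rather than letting it propagate:

```python
import torch

x = torch.tensor([1.0], requires_grad=True)
caught = None
with torch.no_grad():
    y = x * 2          # y has no grad_fn: the graph was never recorded
    try:
        y.backward()   # nothing to backpropagate through
    except RuntimeError as err:
        caught = err

print(type(caught).__name__)  # RuntimeError
```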
🔧 Debug
Advanced · 2:30
Why does this code raise an error?
This code raises an error. What is the cause?
PyTorch
import torch
x = torch.tensor([3.0], requires_grad=True)
with torch.no_grad():
    y = x * 4
    z = y + 2
y.backward()
💡 Hint
Check whether y still requires gradients after the no_grad block.
y was computed inside no_grad, so it does not track gradients. Calling backward() on y raises RuntimeError.
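Note that everything derived inside the block is untracked, and the fix is to recompute outside the block. A small sketch illustrating both points:

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
with torch.no_grad():
    y = x * 4
    z = y + 2
# Both y and z were created without tracking.
print(y.requires_grad, z.requires_grad)  # False False

# Recomputing outside no_grad restores tracking, so backward() works.
y2 = x * 4
y2.backward()
print(x.grad)  # tensor([4.])
```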
❓ Model Choice
Expert · 3:00
Choosing when to use no_grad in model evaluation
You want to evaluate a trained PyTorch model on test data without updating weights. Which is the best practice?
💡 Hint
Think about saving memory and speeding up inference.
Using torch.no_grad() during evaluation disables gradient tracking, saving memory and computation.
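A typical evaluation loop combines model.eval() (which switches layers like dropout and batch norm to inference behavior) with torch.no_grad() (which skips graph construction). The model and test batches below are hypothetical stand-ins, not from the quiz:

```python
import torch
import torch.nn as nn

# Hypothetical trained model and test data, for illustration only.
model = nn.Linear(10, 1)
test_batches = [torch.randn(16, 10) for _ in range(3)]

model.eval()                 # inference behavior for dropout/batchnorm layers
total = 0.0
with torch.no_grad():        # no graph recorded: less memory, faster forward passes
    for batch in test_batches:
        out = model(batch)
        total += out.sum().item()

print(model.training)        # False
print(out.requires_grad)     # False
```

The two calls do different jobs, so best practice is to use both: model.eval() alone does not stop gradient tracking, and torch.no_grad() alone does not change layer behavior.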