PyTorch · ~20 mins

Why automatic differentiation enables training in PyTorch - Challenge Your Understanding

Challenge - 5 Problems
🎖️ Badge: Automatic Differentiation Master — get all challenges correct to earn it!
🧠 Conceptual · intermediate
Why is automatic differentiation important in training neural networks?

Imagine you want to teach a robot to recognize cats in photos. The robot uses a neural network that learns by adjusting numbers called weights. Why do we need automatic differentiation for this learning process?

A. It calculates the exact changes needed for weights by computing gradients automatically.
B. It randomly changes weights without any calculation to speed up training.
C. It stores all training data to use later for predictions.
D. It converts images into text so the robot can understand them.
💡 Hint

Think about how the robot knows how to improve itself after seeing mistakes.
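As a refresher before answering (not the answer key), here is a minimal sketch of autograd at work; the toy function y = x² is an assumption chosen purely for illustration:

```python
import torch

# One tracked value, standing in for a single network weight.
x = torch.tensor(3.0, requires_grad=True)

# A toy "loss": y = x ** 2, whose derivative dy/dx = 2x is easy to check by hand.
y = x ** 2

# backward() walks the recorded graph and fills x.grad with dy/dx at x = 3.0.
y.backward()
print(x.grad.item())  # 6.0
```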

Predict Output · intermediate
Output of PyTorch gradient calculation

What is the output of the following PyTorch code snippet?

PyTorch
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
z = y + 2 * x
z.backward()
print(x.grad.item())
A. 12.0
B. 14.0
C. 10.0
D. 8.0
💡 Hint

Recall the derivative of z = x^3 + 2x with respect to x.
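To rehearse the pattern without spoiling the question, here is an analogous sketch with a different polynomial (chosen only for illustration):

```python
import torch

x = torch.tensor(1.0, requires_grad=True)

# y = x**2 + 3x, so dy/dx = 2x + 3; at x = 1 that evaluates to 5.
y = x ** 2 + 3 * x
y.backward()
print(x.grad.item())  # 5.0
```

Apply the same hand-derivative check to the question's z = x^3 + 2x before picking an option.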

Model Choice · advanced
Choosing the right model for automatic differentiation demonstration

You want to demonstrate automatic differentiation with a simple model that has a clear gradient. Which model is best?

A. A linear model with one input and one output: y = wx + b
B. A random forest with multiple decision trees
C. A k-nearest neighbors model without parameters
D. A clustering model like k-means
💡 Hint

Think about which model has parameters that can be differentiated easily.
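For context on what "differentiated easily" looks like, this sketch (with assumed values) takes gradients of a two-parameter model:

```python
import torch

# A model with two trainable parameters: y = w * x + b.
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)
x = torch.tensor(3.0)  # input data: no gradient needed

y = w * x + b
y.backward()

# dy/dw = x and dy/db = 1, both easy to verify by hand.
print(w.grad.item(), b.grad.item())  # 3.0 1.0
```

Ask yourself which of the four options even has parameters for autograd to differentiate.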

Hyperparameter · advanced
Effect of learning rate on gradient-based training

During training with automatic differentiation, you adjust the learning rate. What happens if the learning rate is too high?

A. The gradients will become zero and stop updating weights.
B. The model will train slower but more accurately.
C. The model weights may change too much, causing training to become unstable.
D. The model will ignore the gradients and use random updates.
💡 Hint

Consider what happens if you take very large steps when trying to reach a target.
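The hint can be made concrete with a small experiment; the quadratic loss, starting point, and step count below are assumptions for illustration:

```python
import torch

def final_weight(lr, steps=5):
    # Minimize f(w) = w**2 by plain gradient descent, starting from w = 1.0.
    w = torch.tensor(1.0, requires_grad=True)
    for _ in range(steps):
        loss = w ** 2
        loss.backward()            # w.grad = 2 * w
        with torch.no_grad():
            w -= lr * w.grad       # each step multiplies w by (1 - 2 * lr)
        w.grad.zero_()
    return w.item()

print(final_weight(0.1))  # |w| shrinks toward the minimum at 0
print(final_weight(1.5))  # each step overshoots the minimum; |w| grows instead
```

Comparing the two printed values shows what "steps that are too large" do to a weight.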

🔧 Debug · expert
Why does this PyTorch code raise an error during backward pass?

Consider this PyTorch code:

import torch
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
z = y.detach() + 1
z.backward()

What error will occur and why?

A. ValueError, because backward() requires a scalar but y is not a scalar.
B. TypeError, because detach() returns a list instead of a tensor.
C. No error; the code runs and x.grad is populated correctly.
D. RuntimeError, because y.detach() breaks the computation graph, so z has no grad_fn.
💡 Hint

Think about what detach() does to the tensor in terms of gradient tracking.
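To explore the hint hands-on, this sketch (values assumed for illustration) inspects what detach() returns:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2       # tracked: autograd records a grad_fn for y
d = y.detach()   # same value as y, but severed from the computation graph

print(y.requires_grad, y.grad_fn is not None)  # True True
print(d.requires_grad, d.grad_fn is not None)  # False False
```

Whether backward() can still run through a detached tensor is exactly what the question asks, so try the original snippet yourself before answering.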