PyTorch · ~20 mins

Why the training loop is explicit in PyTorch - Challenge Your Understanding

Challenge - 5 Problems
🧠 Conceptual · intermediate · 2:00 time limit
Why does PyTorch require an explicit training loop?

PyTorch users write their own training loops instead of using a built-in one. Why is this explicit training loop important?

A. It allows users to customize every step of training: data loading, the forward pass, loss calculation, and backpropagation.
B. Because PyTorch only works with pre-trained models and does not support training from scratch.
C. To force users to write longer code and make training slower for better learning.
D. Because PyTorch does not support automatic differentiation, so users must manually compute gradients.
💡 Hint

Think about flexibility and control during model training.
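To make the flexibility in the correct answer concrete, here is a minimal sketch of an explicit PyTorch training loop (the data, hyperparameters, and `torch.manual_seed` call are illustrative, added only so the run is reproducible):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # illustrative: fix the random initial weights

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

inputs = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[1.0]])

losses = []
for epoch in range(5):
    optimizer.zero_grad()             # 1. clear old gradients
    output = model(inputs)            # 2. forward pass
    loss = criterion(output, target)  # 3. compute the loss
    loss.backward()                   # 4. backpropagate
    optimizer.step()                  # 5. update parameters
    losses.append(loss.item())
```

Because each step is a plain Python statement, any of them can be swapped or extended, e.g. gradient clipping before `optimizer.step()`, custom logging after the loss, or a non-standard batching scheme in place of the fixed `inputs`.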

Predict Output · intermediate · 2:00 time limit
Output of a simple PyTorch training loop snippet

What will be the printed output after running this PyTorch training loop snippet?

PyTorch
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

inputs = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[1.0]])

optimizer.zero_grad()
output = model(inputs)
loss = criterion(output, target)
loss.backward()
optimizer.step()

print(round(loss.item(), 4))
A. 0.0
B. 0.25
C. 0.5
D. 1.0
💡 Hint

Think about the initial random weights and the MSE loss formula.
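As the hint suggests, `nn.Linear` initializes its weights randomly, so the snippet's printed loss is only fixed once a seed is set. A small sketch making that explicit (the helper name `one_step_loss` is ours, not part of the question):

```python
import torch
import torch.nn as nn

def one_step_loss(seed):
    torch.manual_seed(seed)  # pin the random initial weights
    model = nn.Linear(2, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.MSELoss()
    inputs = torch.tensor([[1.0, 2.0]])
    target = torch.tensor([[1.0]])

    optimizer.zero_grad()
    loss = criterion(model(inputs), target)
    loss.backward()
    optimizer.step()
    return round(loss.item(), 4)

# Same seed, same loss; different seeds generally give different losses.
print(one_step_loss(0), one_step_loss(1))
```

The takeaway for the prediction: the value depends entirely on the random initialization, which is why the hint points you at the initial weights and the MSE formula rather than at the loop itself.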

Model Choice · advanced · 2:00 time limit
Choosing the right model update step in PyTorch training loop

In a PyTorch training loop, which step correctly updates the model parameters after computing gradients?

A. model.backward()
B. optimizer.step()
C. loss.backward()
D. optimizer.zero_grad()
💡 Hint

Which function applies the computed gradients to change model weights?
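One way to see the division of labor is to snapshot the weights around each call: `loss.backward()` only fills the `.grad` fields, while `optimizer.step()` is what actually changes the parameters. A hedged sketch (seed and data are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # illustrative seed for a reproducible run
model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

inputs = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[1.0]])

weight_before = model.weight.detach().clone()

loss = criterion(model(inputs), target)
loss.backward()                                  # fills .grad; weights untouched
after_backward = model.weight.detach().clone()

optimizer.step()                                 # applies gradients to the weights
after_step = model.weight.detach().clone()

print(torch.equal(weight_before, after_backward))  # backward left weights alone
print(torch.equal(weight_before, after_step))      # step moved them
```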

Hyperparameter · advanced · 2:00 time limit
Effect of learning rate in explicit PyTorch training loop

What happens if the learning rate in the optimizer is set too high in a PyTorch training loop?

A. The model parameters may update too much, causing training to diverge or become unstable.
B. The model will train faster and always reach the best accuracy immediately.
C. The gradients will not be computed, so the model won't learn anything.
D. The loss function will automatically adjust to compensate for the high learning rate.
💡 Hint

Think about how big steps affect learning stability.
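The instability can be demonstrated on a tiny one-sample problem: SGD on MSE converges for a small learning rate but overshoots and diverges once the step size is too large. The helper and the specific rates below are illustrative:

```python
import torch
import torch.nn as nn

def train_losses(lr, steps=5):
    torch.manual_seed(0)  # same initial weights for every run
    model = nn.Linear(2, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    inputs = torch.tensor([[1.0, 2.0]])
    target = torch.tensor([[1.0]])

    losses = []
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(inputs), target)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
    return losses

small = train_losses(0.01)  # small steps: the error shrinks each update
large = train_losses(1.0)   # oversized steps: the error overshoots and grows
print(small[-1] < small[0])
print(large[-1] > large[0])
```

With this single input, each SGD update scales the prediction error by a constant factor; a small rate keeps that factor below 1 in magnitude (convergence), while a large rate pushes it well past 1 (divergence), which is exactly answer A.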

🔧 Debug · expert · 3:00 time limit
Identifying the error in this PyTorch training loop snippet

What error will this PyTorch training loop snippet raise?

PyTorch
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

inputs = torch.tensor([[1.0, 2.0]])  # Only 2 features instead of 3
target = torch.tensor([[1.0]])

optimizer.zero_grad()
output = model(inputs)
loss = criterion(output, target)
loss.backward()
optimizer.step()
A. ValueError: target and output shapes do not match
B. TypeError: optimizer.step() missing required positional argument
C. No error, runs successfully
D. RuntimeError: size mismatch in linear layer input
💡 Hint

Check the input size vs model expected input size.
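Isolating the forward pass shows the failure mode: the matrix multiply inside `nn.Linear(3, 1)` cannot accept a 2-feature input, and PyTorch reports this as a `RuntimeError`. A sketch (the try/except wrapper is only for illustration; the real snippet would simply crash):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)                  # expects 3 input features
bad_inputs = torch.tensor([[1.0, 2.0]])  # only 2 features

try:
    model(bad_inputs)
    err_type = None
except RuntimeError as e:                # shape mismatch in the underlying matmul
    err_type = type(e).__name__

print(err_type)  # RuntimeError
```

Note the error is raised at `output = model(inputs)`, before the loss is ever computed, which rules out the shape-mismatch-between-target-and-output option.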