PyTorch · ~20 mins

Forward pass, loss, backward, step in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of a simple forward pass and loss calculation
What is the value of the loss after running this PyTorch code snippet?
PyTorch
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
inputs = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[1.0]])

output = model(inputs)
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss.item())
A. Raises a RuntimeError due to a shape mismatch
B. A positive float value representing the mean squared error loss
C. A tensor with shape (2, 1)
D. 0.0
💡 Hint
The loss is the mean squared error between the model output and the target.
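As a rough illustration of the hint, MSELoss is just the mean of squared differences between prediction and target. This sketch uses made-up values (not the quiz snippet) so the result is deterministic:

```python
import torch
import torch.nn as nn

# Toy values, assumed for illustration only.
pred = torch.tensor([[0.5]])
target = torch.tensor([[1.0]])

criterion = nn.MSELoss()
loss = criterion(pred, target)

# Equivalent manual computation: mean((pred - target) ** 2)
manual = ((pred - target) ** 2).mean()
print(loss.item(), manual.item())  # both 0.25
```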
Model Choice (intermediate)
Choosing the correct optimizer step sequence
Which sequence of PyTorch commands correctly performs a training step including forward pass, loss calculation, backward pass, and optimizer step?
A. optimizer.zero_grad(); output = model(inputs); loss = criterion(output, target); loss.backward(); optimizer.step()
B. output = model(inputs); loss = criterion(output, target); optimizer.step(); loss.backward(); optimizer.zero_grad()
C. loss.backward(); optimizer.zero_grad(); output = model(inputs); loss = criterion(output, target); optimizer.step()
D. optimizer.step(); optimizer.zero_grad(); output = model(inputs); loss = criterion(output, target); loss.backward()
💡 Hint
Remember to clear gradients before backward pass and update weights after backward.
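The ordering in the hint can be sketched as one minimal training step; the tiny model, data, and learning rate here are toy values chosen for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

inputs = torch.tensor([[1.0, 2.0]])
target = torch.tensor([[1.0]])

optimizer.zero_grad()              # clear stale gradients first
output = model(inputs)             # forward pass
loss = criterion(output, target)   # loss calculation
loss.backward()                    # backward pass fills .grad
optimizer.step()                   # weight update last

# With a small enough lr, the loss on the same batch should not increase.
new_loss = criterion(model(inputs), target)
print(loss.item(), new_loss.item())
```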
🔧 Debug (advanced)
Behavior of a repeated backward pass
What happens when this PyTorch code runs the backward pass twice?
PyTorch
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
inputs = torch.tensor([[1.0, 2.0, 3.0]])
target = torch.tensor([[1.0]])

output = model(inputs)
criterion = nn.MSELoss()
loss = criterion(output, target)
loss.backward(retain_graph=True)
loss.backward()
A. RuntimeError: Trying to backward through the graph a second time without retain_graph=True.
B. No error, code runs successfully.
C. TypeError: backward() got an unexpected keyword argument 'retain_graph'.
D. ValueError: Target and output shapes do not match.
💡 Hint
Calling backward twice requires special care with retain_graph.
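A minimal sketch of the hint, on a toy scalar rather than the quiz snippet: backward() frees the computation graph unless retain_graph=True was passed on the earlier call, and repeated calls accumulate into .grad:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = (x ** 2).sum()

y.backward(retain_graph=True)  # graph kept alive for another pass
y.backward()                   # second call is fine: graph was retained

# Gradients accumulate: d(x^2)/dx = 4 at x = 2, called twice -> 8
print(x.grad.item())  # 8.0
```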
Hyperparameter (advanced)
Effect of learning rate on optimizer step
If you increase the learning rate in the optimizer, what is the most likely effect on the training step?
A. The loss function automatically decreases without training.
B. The model weights update more slowly, improving training stability.
C. The model weights update more aggressively, which may cause training to diverge.
D. The backward pass is skipped to speed up training.
💡 Hint
Higher learning rates mean bigger steps in weight updates.
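The hint can be demonstrated on a toy problem (model, data, and learning rates here are made-up illustration values): a gentle SGD step shrinks the loss, while an oversized one overshoots and makes it grow:

```python
import torch
import torch.nn as nn

def one_step(lr):
    # Run a single training step and report loss before/after.
    torch.manual_seed(0)
    model = nn.Linear(2, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x = torch.tensor([[1.0, 2.0]])
    y = torch.tensor([[1.0]])
    crit = nn.MSELoss()
    opt.zero_grad()
    loss = crit(model(x), y)
    loss.backward()
    opt.step()
    return loss.item(), crit(model(x), y).item()

before, after_small = one_step(0.01)  # gentle step: loss shrinks
_, after_huge = one_step(5.0)         # oversized step: loss blows up
print(before, after_small, after_huge)
```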
🧠 Conceptual (expert)
Why zero gradients before backward pass?
Why is it necessary to call optimizer.zero_grad() before loss.backward() in a training loop?
A. Because zero_grad() initializes model weights to zero before each update.
B. Because zero_grad() runs the forward pass to reset outputs.
C. Because zero_grad() clears the optimizer state to avoid memory leaks.
D. Because gradients accumulate by default, so zeroing prevents mixing gradients from multiple batches.
💡 Hint
Think about how gradients behave across multiple backward calls.
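The hint can be verified directly on a toy scalar (illustration values, not the quiz code): .grad accumulates across backward calls until it is explicitly zeroed:

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

(w * 2.0).sum().backward()
g1 = w.grad.item()   # 2.0 after the first backward

(w * 2.0).sum().backward()
g2 = w.grad.item()   # 4.0: the new gradient was ADDED to the old one

w.grad.zero_()       # what optimizer.zero_grad() does for model params
(w * 2.0).sum().backward()
g3 = w.grad.item()   # 2.0 again after zeroing
print(g1, g2, g3)
```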