PyTorch · ~10 mins

Training loop structure in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to start the training loop for 5 epochs.

PyTorch
for epoch in range([1]):
    print(f"Epoch {epoch + 1}")
Options:
A. 10
B. 0
C. 5
D. 1
Common Mistakes
Using range(10) which runs too many epochs.
Using range(0) which runs zero times.
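As a quick sanity check (plain Python, no PyTorch needed), range(5) produces the indices 0 through 4, so the loop body runs exactly five times:

```python
# range(5) yields 0, 1, 2, 3, 4: five iterations in total,
# so the f-string prints "Epoch 1" through "Epoch 5".
for epoch in range(5):
    print(f"Epoch {epoch + 1}")
```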
Task 2: fill in the blank (medium)

Complete the code to zero the gradients before backpropagation.

PyTorch
optimizer.[1]()
Options:
A. update
B. step
C. backward
D. zero_grad
Common Mistakes
Calling optimizer.step() before zeroing gradients.
Using optimizer.backward() which does not exist.
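Why this blank matters: PyTorch accumulates gradients across backward calls rather than replacing them. A minimal sketch (a single scalar tensor standing in for a model parameter) shows what zeroing prevents:

```python
import torch

# One scalar standing in for a model parameter; d(x**2)/dx at x = 3 is 6.
x = torch.tensor(3.0, requires_grad=True)

(x ** 2).backward()
first = x.grad.item()        # 6.0

# A second backward without zeroing ADDS to the stored gradient.
(x ** 2).backward()
stale = x.grad.item()        # 12.0: accumulated, not replaced

# optimizer.zero_grad() does this for every registered parameter.
x.grad.zero_()
(x ** 2).backward()
fresh = x.grad.item()        # back to 6.0
```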
Task 3: fill in the blank (hard)

Complete the code to perform backpropagation on the loss.

PyTorch
loss.[1]()
Options:
A. backward
B. update
C. zero_grad
D. step
Common Mistakes
Calling loss.step() which does not exist.
Calling loss.zero_grad() which is incorrect.
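For intuition: loss.backward() is what populates .grad on every tensor the loss depends on; stepping and zeroing belong to the optimizer, not the loss. A tiny sketch with a hand-checkable derivative:

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
loss = 3 * w + 1            # d(loss)/dw = 3
loss.backward()             # backpropagation: fills in w.grad
grad = w.grad.item()        # 3.0
```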
Task 4: fill in the blank (hard)

Fill both blanks to update model parameters and print the loss value.

PyTorch
optimizer.[1]()
print(f"Loss: {loss.[2]()}")
Options:
A. step
B. item
C. backward
D. zero_grad
Common Mistakes
Calling loss.backward() instead of loss.item() in print.
Calling optimizer.zero_grad() instead of optimizer.step().
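Putting the two blanks together: optimizer.step() applies the update w := w - lr * w.grad, and loss.item() extracts the plain Python float for printing. A single-parameter sketch with values chosen so the arithmetic is easy to follow:

```python
import torch

w = torch.tensor(5.0, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = w ** 2               # 25.0
loss.backward()             # w.grad = 2 * 5.0 = 10.0
optimizer.step()            # w = 5.0 - 0.1 * 10.0 = 4.0

print(f"Loss: {loss.item()}")   # loss.item() is the float 25.0
updated = w.item()              # 4.0 after the update
```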
Task 5: fill in the blank (hard)

Fill all three blanks to complete the training loop: zero gradients, compute loss backward, and update parameters.

PyTorch
for epoch in range(3):
    optimizer.[1]()
    output = model(data)
    loss = loss_fn(output, target)
    loss.[2]()
    optimizer.[3]()
Options:
A. zero_grad
B. backward
C. step
D. eval
Common Mistakes
Calling optimizer.eval(), which does not exist (eval() is a method of the model, used to switch it into evaluation mode).
Skipping zero_grad causing gradient accumulation.
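The complete loop, with the three blanks filled (zero_grad, backward, step), can be exercised end to end on a toy regression. The model, data, and learning rate below are illustrative choices, not part of the task:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)                 # deterministic toy setup
model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

data = torch.tensor([[1.0], [2.0], [3.0]])
target = 2 * data                    # learn y = 2x

losses = []
for epoch in range(3):
    optimizer.zero_grad()            # [1] clear accumulated gradients
    output = model(data)
    loss = loss_fn(output, target)
    loss.backward()                  # [2] compute gradients
    optimizer.step()                 # [3] update parameters
    losses.append(loss.item())
```

Skipping zero_grad here would make each backward() add onto the previous epoch's gradients, so the updates would drift away from true gradient descent.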