PyTorch · ~10 mins

Epoch-based training in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to set the number of epochs for training.

PyTorch
num_epochs = [1]
A. 10
B. '10'
C. range(10)
D. None
Common Mistakes
Using a string instead of an integer for epochs.
Using a range object instead of an integer.
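For reference, the completed line (option A; the value 10 is simply the one offered in the options):

```python
# The epoch count must be a plain integer, not a string or a range object.
num_epochs = 10
```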
Task 2: Fill in the blank (medium)

Complete the code to start the training loop over epochs.

PyTorch
for epoch in [1](num_epochs):
    print(f"Epoch {epoch+1}")
A. len
B. list
C. enumerate
D. range
Common Mistakes
Using len, which returns an integer, not an iterable.
Using list on an integer; list needs an iterable.
Using enumerate on an integer; enumerate needs an iterable.
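The completed loop for reference (option D): range(num_epochs) yields the integers 0 through num_epochs - 1, so epoch + 1 prints 1-based epoch numbers. The value of num_epochs here is arbitrary, chosen for demonstration.

```python
num_epochs = 3  # arbitrary small value for demonstration

# range(num_epochs) is an iterable over 0 .. num_epochs - 1 (option D).
for epoch in range(num_epochs):
    print(f"Epoch {epoch + 1}")
```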
Task 3: Fill in the blank (hard)

Fix the error in the training loop to correctly zero the gradients.

PyTorch
for epoch in range(num_epochs):
    optimizer.[1]()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
A. zero_gradients
B. clear_grad
C. zero_grad
D. reset_grad
Common Mistakes
Calling non-existent methods such as zero_gradients().
Forgetting to clear gradients, which causes them to accumulate across iterations.
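A runnable sketch of the corrected loop (option C is zero_grad). The model, data shapes, seed, and learning rate are illustrative stand-ins for the model, inputs, labels, criterion, and optimizer named in the snippet:

```python
import torch
import torch.nn as nn

# Illustrative setup; model, data, and hyperparameters are arbitrary.
torch.manual_seed(0)
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))
num_epochs = 3

for epoch in range(num_epochs):
    optimizer.zero_grad()   # option C: clear gradients from the previous step
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()         # compute fresh gradients
    optimizer.step()        # update parameters
```

Without zero_grad(), backward() adds new gradients on top of the old ones, so each step would use a running sum of all past gradients.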
Task 4: Fill in the blanks (hard)

Fill both blanks to compute the loss and update the model parameters inside the epoch loop.

PyTorch
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = [1](outputs, labels)
    loss.[2]()
    optimizer.step()
A. criterion
B. backward
C. step
D. zero_grad
Common Mistakes
Calling step() on loss instead of optimizer.
Not calling backward() on loss.
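The completed loop (blank 1 is criterion, option A; blank 2 is backward, option B), again with an illustrative toy model and random data standing in for the names used in the snippet:

```python
import torch
import torch.nn as nn

# Illustrative setup; model, data, and hyperparameters are arbitrary.
torch.manual_seed(0)
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))
num_epochs = 3

losses = []
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)  # blank 1: criterion computes the loss
    loss.backward()                    # blank 2: backward computes gradients
    optimizer.step()                   # step() belongs to the optimizer, not the loss
    losses.append(loss.item())
```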
Task 5: Fill in the blanks (hard)

Fill all three blanks to print the epoch number, loss value, and accuracy after each epoch.

PyTorch
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    _, predicted = torch.max(outputs, [1])
    correct = (predicted == labels).sum().item()
    accuracy = correct / labels.size([2])
    print(f"Epoch [3]: Loss={loss.item():.4f}, Accuracy={accuracy:.2%}")
A. 1
B. 0
C. epoch + 1
D. 2
Common Mistakes
Passing the wrong dimension to torch.max, which raises an error or reduces over the wrong axis.
Using the wrong dimension for the batch size, which yields an incorrect accuracy.
Printing the zero-based epoch without adding 1, which confuses learners.
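The fully filled-in loop (blank 1: dimension 1, option A; blank 2: size(0), option B; blank 3: epoch + 1, option C), using the same illustrative toy setup as before:

```python
import torch
import torch.nn as nn

# Illustrative setup; model, data, and hyperparameters are arbitrary.
torch.manual_seed(0)
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))
num_epochs = 2

for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    # Blank 1: dim=1 takes the max over the class dimension of the
    # (batch, classes) output; torch.max returns (values, indices).
    _, predicted = torch.max(outputs, 1)
    correct = (predicted == labels).sum().item()
    # Blank 2: size(0) is the batch dimension, i.e. the number of samples.
    accuracy = correct / labels.size(0)
    # Blank 3: epoch + 1 reports 1-based epoch numbers.
    print(f"Epoch {epoch + 1}: Loss={loss.item():.4f}, Accuracy={accuracy:.2%}")
```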