PyTorch · ~10 mins

Fine-tuning strategy in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)

Complete the code to load a pre-trained model for fine-tuning.

PyTorch
import torch
from torchvision import models

model = models.resnet18(pretrained=[1])
A. 0
B. True
C. None
D. False
Common Mistakes
Setting pretrained to False loads a model with random weights, not suitable for fine-tuning.
Passing None or 0 as the pretrained argument is treated as False, so the model is initialized with random weights.
Task 2: fill in the blank (medium)

Complete the code to freeze all layers except the final fully connected layer for fine-tuning.

PyTorch
for param in model.parameters():
    param.[1] = False
A. detach
B. grad
C. train
D. requires_grad
Common Mistakes
Using the 'grad' or 'train' attributes, which do not control gradient computation.
Calling detach() on parameters instead of setting requires_grad.
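A runnable sketch of the full freeze-then-unfreeze pattern. `TinyNet` is a hypothetical stand-in for resnet18 (used here only to avoid downloading weights); what matters is that it exposes a final head named `fc`, mirroring torchvision's layout.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained backbone: a feature
# extractor plus a final fully connected head named `fc`.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 16)
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        return self.fc(torch.relu(self.features(x)))

model = TinyNet()

# Freeze everything, then unfreeze only the head.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only fc.weight and fc.bias remain trainable
```

Freezing first and selectively unfreezing afterwards is less error-prone than trying to skip the head inside the first loop.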
Task 3: fill in the blank (hard)

Fix the error in replacing the final layer to match 10 output classes.

PyTorch
import torch.nn as nn

model.fc = nn.Linear(model.fc.in_features, [1])
A. 10
B. 5
C. 100
D. 1
Common Mistakes
Setting output features to a wrong number causes shape mismatch errors.
Using 1 or 5 instead of the correct number of classes.
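A sketch of the head replacement in isolation. `Backbone` is a hypothetical stand-in whose `fc` has 512 input features, like resnet18's; the key point is reading `in_features` from the old layer so only the output size changes.

```python
import torch.nn as nn

# Hypothetical stand-in: a 512-feature head sized for
# ImageNet's 1000 classes, like resnet18's original fc.
class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 1000)

model = Backbone()

# Swap the head: reuse in_features, set out_features to the
# number of classes in the new task (10 here).
model.fc = nn.Linear(model.fc.in_features, 10)

print(model.fc.in_features, model.fc.out_features)  # 512 10
```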
Task 4: fill in the blank (hard)

Fill both blanks to create an optimizer that only updates the final layer parameters with a learning rate of 0.001.

PyTorch
import torch.optim as optim

optimizer = optim.SGD([1], lr=[2])
A. model.fc.parameters()
B. model.parameters()
C. 0.01
D. 0.001
Common Mistakes
Using all model parameters causes the whole model to train, not just the final layer.
Using too high a learning rate can make training unstable.
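A sketch showing that the optimizer only ever sees the parameters you hand it. The `TinyNet` stand-in is hypothetical (it just needs an `fc` head); passing `model.fc.parameters()` means `optimizer.step()` can only update the head, regardless of any `requires_grad` settings.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in with a backbone and an fc head.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 16)
        self.fc = nn.Linear(16, 10)

model = TinyNet()

# Only the head's parameters are registered with the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=0.001)

# The single param group holds exactly fc.weight and fc.bias.
print(len(optimizer.param_groups[0]["params"]))  # 2
```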
Task 5: fill in the blank (hard)

Fill all three blanks to write a training loop that computes loss, backpropagates, and updates parameters.

PyTorch
for inputs, labels in dataloader:
    optimizer.zero_grad()
    outputs = model([1])
    loss = criterion(outputs, [2])
    loss.[3]()  # backpropagation
    optimizer.step()
A. inputs
B. labels
C. backward
D. forward
Common Mistakes
Passing labels to the model instead of inputs.
Calling loss.forward() instead of loss.backward().
Forgetting to zero gradients before backpropagation.
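Putting the whole loop together, here is a self-contained sketch that runs end to end. The model, criterion, and dataloader are synthetic stand-ins (a single linear layer, random inputs, random integer labels for 10 classes), not the fine-tuned resnet18 from the earlier tasks.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Synthetic stand-ins so the loop is runnable: a tiny model,
# cross-entropy loss, and three batches of random data.
model = nn.Linear(8, 10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
dataloader = [(torch.randn(4, 8), torch.randint(0, 10, (4,)))
              for _ in range(3)]

for inputs, labels in dataloader:
    optimizer.zero_grad()              # clear gradients from the last step
    outputs = model(inputs)            # forward pass on the inputs
    loss = criterion(outputs, labels)  # compare predictions to labels
    loss.backward()                    # backpropagation
    optimizer.step()                   # update the trainable parameters

print(loss.item())  # a non-negative cross-entropy value
```

Without `zero_grad()`, gradients from previous batches accumulate into the new ones, which silently corrupts the updates.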