Practice - 5 Tasks
Answer the questions below
1. Fill in the blank
Easy: Complete the code to load a pre-trained model for fine-tuning.
Computer Vision

from torchvision import models
model = models.resnet18(pretrained=[1])
Common Mistakes
Using pretrained=False initializes random weights, which defeats the purpose of fine-tuning.
Passing None or 0 will cause errors or silently skip loading the pre-trained weights.
Explanation
Setting pretrained=True loads the model with weights trained on ImageNet, which is needed for fine-tuning.
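A minimal sketch of what loading pre-trained weights means, using a stand-in `nn.Linear` so no ResNet download is needed; the `ResNet18_Weights` enum mentioned in the comments assumes a recent torchvision release, where the `pretrained` flag is deprecated:

```python
import torch
import torch.nn as nn

# The quiz line, completed (downloads ImageNet weights on first use):
#   from torchvision import models
#   model = models.resnet18(pretrained=True)
# Newer torchvision releases deprecate `pretrained` in favor of a weights enum:
#   model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# "Pre-trained" means weights from an earlier training run are loaded into a
# freshly built model via its state dict; a tiny stand-in demonstrates this:
trained = nn.Linear(4, 2)   # pretend this was trained on a large dataset
fresh = nn.Linear(4, 2)     # randomly initialized replica of the architecture
fresh.load_state_dict(trained.state_dict())

# After loading, the fresh model starts from the learned weights.
assert torch.equal(fresh.weight, trained.weight)
```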
2. Fill in the blank
Medium: Complete the code to freeze all layers except the last fully connected layer.
Computer Vision

for param in model.parameters():
    param.[1] = False
Common Mistakes
Using grad or train as the attribute name will cause errors or have no effect.
detach() is a method that returns a new tensor; it is not an attribute and does not freeze parameters in place.
Explanation
Setting requires_grad=False freezes the parameters so they won't update during training.
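A runnable sketch of the freeze-then-unfreeze pattern on a small stand-in model (the quiz's ResNet is replaced here with a hypothetical `nn.Sequential` so the example is self-contained):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))

# Freeze every parameter, then unfreeze only the last layer (index 2 here).
for param in model.parameters():
    param.requires_grad = False
for param in model[2].parameters():
    param.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['2.weight', '2.bias']
```

Only the final layer's weight and bias remain trainable; the frozen layers will keep their pre-trained values during fine-tuning.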
3. Fill in the blank
Hard: Fix the error in replacing the last layer to match 10 output classes.
Computer Vision

import torch.nn as nn
model.fc = nn.Linear(model.fc.in_features, [1])
Common Mistakes
Using 100 or 1 produces the wrong number of outputs, causing shape mismatch errors when the loss is computed.
Using 0 creates a layer with no outputs and causes runtime errors.
Explanation
The last layer must output the number of classes, here 10 for fine-tuning on a 10-class dataset.
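A self-contained sketch of the head-replacement step. `TinyBackbone` is a hypothetical stand-in whose final layer lives in an `fc` attribute, mirroring torchvision's ResNet layout:

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for a pre-trained backbone with a 1000-class head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(32, 512)
        self.fc = nn.Linear(512, 1000)  # original ImageNet-sized head

    def forward(self, x):
        return self.fc(torch.relu(self.features(x)))

model = TinyBackbone()
# Replace the head so it outputs 10 classes, reusing the input width.
model.fc = nn.Linear(model.fc.in_features, 10)

out = model(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 10])
```

Reading `in_features` off the old layer means the replacement works regardless of the backbone's internal width.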
4. Fill in the blank
Hard: Fill both blanks to set the optimizer to update only trainable parameters with learning rate 0.001.
Computer Vision

import torch.optim as optim
optimizer = optim.SGD([1], lr=[2])
Common Mistakes
Passing all parameters registers frozen layers with the optimizer unnecessarily; frozen parameters never receive gradients, so filtering keeps the optimizer state minimal and the intent explicit.
A learning rate that is too high, such as 0.01, can overwrite the pre-trained weights and harm fine-tuning.
Explanation
We use filter to pass only parameters that require gradients to the optimizer, and set learning rate to 0.001 for fine-tuning.
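A runnable sketch of the filtered-optimizer step, again on a hypothetical small `nn.Sequential` with its first layer frozen:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
for param in model[0].parameters():
    param.requires_grad = False  # freeze the first layer

# The quiz line, completed: only trainable parameters reach the optimizer.
optimizer = optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=0.001
)

# Only the unfrozen layer's weight and bias are registered.
n_registered = sum(len(g["params"]) for g in optimizer.param_groups)
print(n_registered)  # 2
```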
5. Fill in the blank
Hard: Fill all three blanks to complete the training loop for one epoch with loss calculation and optimizer step.
Computer Vision

model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()
    outputs = model([1])
    loss = criterion(outputs, [2])
    loss.[3]()
    optimizer.step()
Common Mistakes
Passing labels to the model instead of inputs causes shape or type errors.
Calling forward() on loss is invalid; use backward().
Explanation
We pass inputs to the model, compare outputs with labels to compute loss, then call loss.backward() to compute gradients before optimizer.step().
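The completed loop, runnable end to end on a synthetic 10-class dataset (the random tensors, `nn.Linear` model, and batch size are illustrative choices, not from the quiz):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical synthetic data: 64 samples, 8 features, 10 classes.
torch.manual_seed(0)
all_inputs = torch.randn(64, 8)
all_labels = torch.randint(0, 10, (64,))
dataloader = DataLoader(TensorDataset(all_inputs, all_labels), batch_size=16)

model = nn.Linear(8, 10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# The quiz loop, completed: one epoch of training.
model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()              # clear gradients from the last step
    outputs = model(inputs)            # forward pass on the inputs
    loss = criterion(outputs, labels)  # compare predictions with labels
    loss.backward()                    # compute gradients
    optimizer.step()                   # update the trainable parameters

print(round(loss.item(), 4))  # finite scalar loss from the final batch
```

Note the order: `zero_grad()` before the forward pass, `backward()` before `step()`; swapping the last two means the optimizer updates with stale (or empty) gradients.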