Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to replace the classifier head of a pretrained model with a new linear layer.
PyTorch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.[1](512, 10)
Common Mistakes
Using Conv2d instead of Linear for the classifier head.
Not matching the input feature size of the new layer.
Explanation: The classifier head of ResNet18 is replaced by a linear layer with 512 input features and 10 output features.
2. Fill in the blank (medium)
Complete the code to freeze all layers except the new classifier head.
PyTorch
for param in model.parameters():
    param.[1] = False
for param in model.fc.parameters():
    param.requires_grad = True
Common Mistakes
Using incorrect attribute names like grad or requires_grad_.
Not freezing the pretrained layers before training.
Explanation: Setting requires_grad to False freezes the parameters so they won't update during training.
3. Fill in the blank (hard)
Fix the error in the code to correctly replace the classifier head with a new linear layer.
PyTorch
import torchvision
import torch.nn as nn

model = torchvision.models.resnet50(pretrained=True)
model.fc = nn.Linear([1], 5)
Common Mistakes
Using 512, which is the input feature size for ResNet18, not ResNet50.
Using arbitrary numbers without checking the model architecture.
Explanation: ResNet50's classifier head has 2048 input features, so the new linear layer must match this.
4. Fill in the blank (hard)
Fill both blanks to create a new classifier head with dropout and linear layers.
PyTorch
model.fc = nn.Sequential(
nn.Dropout(p=[1]),
nn.Linear(512, [2])
)
Common Mistakes
Using dropout probability greater than 1 or less than 0.
Mismatch between linear layer output size and number of classes.
Explanation: Dropout with p=0.5 is commonly used to reduce overfitting, and the linear layer outputs 10 classes.
5. Fill in the blank (hard)
Fill all three blanks to replace the classifier head and freeze the pretrained layers except the new head.
PyTorch
import torchvision
import torch.nn as nn

model = torchvision.models.resnet34(pretrained=True)
for param in model.parameters():
    param.[1] = False
model.fc = nn.Linear([2], [3])
for param in model.fc.parameters():
    param.requires_grad = True
Common Mistakes
Not freezing pretrained layers before training.
Using wrong input or output sizes for the linear layer.
Explanation: Freezing the pretrained layers requires setting requires_grad to False. ResNet34's fc layer has 512 input features, and the output is 7 classes.