PyTorch · ~20 mins

Replacing classifier head in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of replacing classifier head in a PyTorch model
What is the output shape of the model's final layer after replacing the classifier head with a new linear layer of 10 output features?
PyTorch
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 10)

input_tensor = torch.randn(4, 3, 224, 224)
output = model(input_tensor)
output_shape = output.shape
print(output_shape)
A. torch.Size([4, 10])
B. torch.Size([4, 1000])
C. torch.Size([1, 10])
D. torch.Size([4, 512])
💡 Hint
Remember the batch size and the number of output classes in the new classifier head.
Model Choice (intermediate)
Choosing the correct way to replace classifier head in PyTorch
Which option correctly replaces the classifier head of a pretrained VGG16 model to output 5 classes?
A. model.head = nn.Linear(4096, 5)
B. model.fc = nn.Linear(512, 5)
C. model.classifier = nn.Linear(4096, 5)
D. model.classifier[6] = nn.Linear(4096, 5)
💡 Hint
Check the attribute name and index of the classifier layer in VGG16.
Hyperparameter (advanced)
Effect of freezing layers when replacing classifier head
If you replace the classifier head of a pretrained ResNet50 and freeze all layers except the new head, which statement is true about training?
A. All model parameters will update during training.
B. Only the new classifier head's parameters will update during training.
C. No parameters will update because the model is frozen.
D. Only the first convolutional layer will update during training.
💡 Hint
Freezing layers means setting requires_grad to False for those parameters.
🔧 Debug (advanced)
Debugging error after replacing classifier head
After replacing the classifier head of a pretrained ResNet18 with nn.Linear(512, 20), the model raises a runtime error during training: "size mismatch, m1: [4 x 512], m2: [1000 x 20]". What is the cause?
A. The old classifier layer was not replaced properly; the model still uses the original 1000 output features.
B. The input tensor batch size is incorrect.
C. The new classifier layer has wrong input features; it should be 1000 instead of 512.
D. The loss function expects 1000 classes instead of 20.
💡 Hint
Check if the model's classifier attribute was correctly assigned.
🧠 Conceptual (expert)
Why replace classifier head instead of retraining entire model?
Why is it common practice to replace only the classifier head of a pretrained model when adapting it to a new task?
A. Because retraining the entire model is impossible with pretrained weights.
B. Because the classifier head contains all convolutional filters needed for feature extraction.
C. Because pretrained layers have learned useful features, and retraining only the head saves time and data.
D. Because replacing the head increases the model size and improves accuracy automatically.
💡 Hint
Think about transfer learning and feature reuse.