PyTorch · ML · ~20 mins

Feature extraction strategy in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
Output of feature extraction with frozen layers
Consider a pretrained ResNet18 model in PyTorch. You freeze all layers except the last fully connected layer and pass a batch of images through it. What will be the shape of the output tensor?
PyTorch
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained weights
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new output classes

inputs = torch.randn(8, 3, 224, 224)  # batch of 8 images
outputs = model(inputs)
print(outputs.shape)
A. torch.Size([1, 10])
B. torch.Size([8, 512])
C. torch.Size([8, 1000])
D. torch.Size([8, 10])
💡 Hint
Think about the batch size and the number of output classes after replacing the last layer.
Model Choice
intermediate
Best model choice for feature extraction on small dataset
You want to use feature extraction on a small image dataset with limited labels. Which pretrained model is generally best to start with for extracting features?
A. A small pretrained MobileNetV2 model
B. A pretrained VGG16 model
C. A large pretrained ResNet50 model
D. A pretrained Transformer-based model like ViT
💡 Hint
Consider model size and overfitting risk on small datasets.
Hyperparameter
advanced
Choosing learning rate for fine-tuning after feature extraction
After extracting features using a pretrained model and replacing the last layer, you want to fine-tune the model. Which learning rate setting is most appropriate to start with?
A. A very high learning rate like 0.1
B. A moderate learning rate like 0.01
C. A very low learning rate like 1e-5
D. No learning rate; freeze all layers
💡 Hint
Fine-tuning pretrained weights requires careful small updates.
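A common way to make "careful small updates" concrete is per-parameter-group learning rates: a tiny rate for the pretrained weights and a larger one for the freshly initialized head. A hedged sketch, using toy linear layers as stand-ins (the specific rates are illustrative, not prescriptive):

```python
import torch

# toy stand-ins for a pretrained backbone and a freshly initialized head
backbone = torch.nn.Linear(512, 512)
head = torch.nn.Linear(512, 10)

# small LR for pretrained weights, larger LR for the new head
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
print([g["lr"] for g in optimizer.param_groups])  # [1e-05, 0.001]
```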
Metrics
advanced
Evaluating feature extraction effectiveness
You extracted features from a pretrained model and trained a classifier on top. Which metric best shows if feature extraction helped improve classification?
A. Validation accuracy compared to training from scratch
B. Training loss of the classifier
C. Number of model parameters
D. Time taken to train the classifier
💡 Hint
Think about comparing performance with and without feature extraction.
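Validation accuracy here is simply the fraction of correct predictions on held-out data; the same computation is run once for the feature-extraction classifier and once for the from-scratch baseline, and the two numbers are compared. A minimal sketch with made-up tensors:

```python
import torch

# made-up validation logits and labels for illustration
val_logits = torch.tensor([[2.0, 0.1], [0.2, 1.5], [3.0, 0.5], [0.1, 0.9]])
val_labels = torch.tensor([0, 1, 0, 0])

preds = val_logits.argmax(dim=1)              # predicted class per example
accuracy = (preds == val_labels).float().mean().item()
print(accuracy)  # 0.75
```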
🔧 Debug
expert
Debugging feature extraction output mismatch
You extract features from a pretrained CNN by removing the last layer, but the extracted features have shape (8, 512, 1, 1) instead of the expected (8, 512). What is the likely cause?
PyTorch
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained weights
# Attempt to remove last layer
model = torch.nn.Sequential(*list(model.children())[:-1])

inputs = torch.randn(8, 3, 224, 224)
features = model(inputs)
print(features.shape)
A. The last layer was not removed properly; the output is from the final FC layer
B. The model output is flattened incorrectly, so the shape is wrong
C. The input batch size is wrong, causing an unexpected output shape
D. The model expects a different input image size
💡 Hint
Check if the output tensor is still 4D and needs flattening.
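As the hint suggests, the (8, 512, 1, 1) tensor is the un-flattened average-pool output (`nn.Sequential` does not include the `torch.flatten` call that ResNet's `forward` performs before the FC layer). A minimal sketch of the fix:

```python
import torch

features = torch.randn(8, 512, 1, 1)  # shape produced by the truncated model
flat = torch.flatten(features, start_dim=1)  # keep the batch dim, merge the rest
print(flat.shape)  # torch.Size([8, 512])
```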