Challenge - 5 Problems
Feature Extraction Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate
Output of feature extraction with frozen layers
Consider a pretrained ResNet18 model in PyTorch. You freeze all layers except the last fully connected layer and pass a batch of images through it. What will be the shape of the output tensor?
PyTorch
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new output classes
inputs = torch.randn(8, 3, 224, 224)  # batch of 8 images
outputs = model(inputs)
print(outputs.shape)
💡 Hint
Think about the batch size and the number of output classes after replacing the last layer.
✗ Incorrect
The model outputs a tensor with shape (batch_size, number_of_classes). Since batch size is 8 and the last layer outputs 10 classes, the shape is (8, 10).
❓ Model Choice
Intermediate
Best model choice for feature extraction on small dataset
You want to use feature extraction on a small image dataset with limited labels. Which pretrained model is generally best to start with for extracting features?
💡 Hint
Consider model size and overfitting risk on small datasets.
✗ Incorrect
MobileNetV2 is lightweight and less likely to overfit on small datasets, making it a good choice for feature extraction in such cases.
❓ Hyperparameter
Advanced
Choosing learning rate for fine-tuning after feature extraction
After extracting features using a pretrained model and replacing the last layer, you want to fine-tune the model. Which learning rate setting is most appropriate to start with?
💡 Hint
Fine-tuning pretrained weights requires careful small updates.
✗ Incorrect
A very low learning rate like 1e-5 helps fine-tune pretrained weights gently without destroying learned features.
❓ Metrics
Advanced
Evaluating feature extraction effectiveness
You extracted features from a pretrained model and trained a classifier on top. Which metric best shows if feature extraction helped improve classification?
💡 Hint
Think about comparing performance with and without feature extraction.
✗ Incorrect
Validation accuracy compared to training from scratch shows if feature extraction improved generalization.
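The comparison boils down to computing validation accuracy for both setups (features extracted vs. trained from scratch). A minimal helper, shown with toy tensors:

```python
import torch

def validation_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of correct top-1 predictions."""
    preds = logits.argmax(dim=1)
    return (preds == labels).float().mean().item()

# Toy example: the model gets 3 of 4 samples right
logits = torch.tensor([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.4, 0.6]])
labels = torch.tensor([1, 1, 0, 0])
print(validation_accuracy(logits, labels))  # 0.75
```

Running this on the same held-out set for both pipelines makes the effect of feature extraction directly comparable.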
🔧 Debug
Expert
Debugging feature extraction output mismatch
You extract features from a pretrained CNN by removing the last layer. But your extracted features have shape (8, 512, 1, 1) instead of expected (8, 512). What is the likely cause?
PyTorch
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
# Attempt to remove last layer
model = torch.nn.Sequential(*list(model.children())[:-1])
inputs = torch.randn(8, 3, 224, 224)
features = model(inputs)
print(features.shape)
💡 Hint
Check if the output tensor is still 4D and needs flattening.
✗ Incorrect
Truncating the model with `Sequential` stops after the adaptive average-pooling layer, which outputs a 4D tensor of shape (batch, channels, 1, 1). The flatten step that normally happens inside `ResNet.forward` is lost, so the features appear as (8, 512, 1, 1); they must be flattened explicitly to get (8, 512).
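The fix is a single flatten over all dimensions after the batch axis, sketched here with a dummy tensor standing in for the truncated model's output:

```python
import torch

# Stand-in for the truncated ResNet's output: (batch, channels, 1, 1) from avgpool
features = torch.randn(8, 512, 1, 1)

# Flatten everything after the batch dimension
flat = torch.flatten(features, start_dim=1)
print(flat.shape)  # torch.Size([8, 512])
```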