Computer Vision · ~20 mins

Fine-tuning approach in Computer Vision - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Fine-tuning Mastery: get all challenges correct to earn this badge!
🧠 Conceptual · intermediate
Understanding Fine-tuning Layers

In a fine-tuning approach for a convolutional neural network, which layers are typically retrained to adapt the model to a new task?

A. No layers are retrained; the model is used as-is without any changes.
B. Only the first convolutional layers are retrained, freezing the rest.
C. Only the final classification layers are retrained, while earlier layers remain frozen.
D. All layers are retrained from scratch with new random weights.
💡 Hint

Think about which parts of the model capture general features versus task-specific features.
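To make the freeze-and-retrain pattern concrete, here is a minimal sketch using a toy model (this model is illustrative, not the one from the exercise): the early, general-purpose layers are frozen and only the final classifier stays trainable.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained network (assumed for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3),            # general feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),    # task-specific classifier head
)

# Freeze everything except the final classification layer.
for param in model[:3].parameters():
    param.requires_grad = False     # pretrained weights stay fixed

trainable = [p for p in model.parameters() if p.requires_grad]
```

Only the head's weight and bias remain trainable; a forward pass on a batch of 32x32 RGB images still flows through the frozen layers unchanged.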

Predict Output · intermediate
Output Shape After Fine-tuning

Given a pretrained CNN model with an output layer of size 1000 classes, you replace the output layer with a new layer of size 10 classes for fine-tuning. What will be the output shape of the model for a batch of 32 images?

import torch
import torch.nn as nn

class PretrainedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())  # 32x32 -> 30x30
        self.classifier = nn.Linear(16 * 30 * 30, 1000)  # original 1000-class head

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)  # flatten all dimensions except the batch
        x = self.classifier(x)
        return x

model = PretrainedModel()
model.classifier = nn.Linear(16 * 30 * 30, 10)  # Replace output layer
input_tensor = torch.randn(32, 3, 32, 32)  # batch of 32 RGB 32x32 images
output = model(input_tensor)
output_shape = output.shape
A. (1, 10)
B. (32, 10)
C. (32, 1000)
D. (10, 32)
💡 Hint

Remember the batch size is the first dimension in PyTorch tensors.

Hyperparameter · advanced
Choosing Learning Rate for Fine-tuning

When fine-tuning a pretrained model, which learning rate strategy is generally recommended?

A. Use a much smaller learning rate than training from scratch, to avoid destroying pretrained weights.
B. Use a very large learning rate to quickly adapt the model to the new data.
C. Use the same learning rate as training from scratch, for consistency.
D. Use a zero learning rate to keep the weights fixed.
💡 Hint

Think about how pretrained weights should be adjusted carefully.
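One common recipe worth knowing here is discriminative learning rates: a small learning rate for the pretrained body and a larger one for the freshly initialized head. The sketch below is illustrative (the modules and learning-rate values are assumptions, not prescriptions from the exercise), using PyTorch's per-parameter-group options.

```python
import torch
from torch import nn, optim

backbone = nn.Linear(128, 64)   # stand-in for pretrained layers
head = nn.Linear(64, 5)         # newly added classifier head

# Per-group learning rates: gentle updates for pretrained weights,
# faster adaptation for the randomly initialized head.
optimizer = optim.SGD([
    {"params": backbone.parameters(), "lr": 1e-4},
    {"params": head.parameters(), "lr": 1e-2},
])
```

Each parameter group keeps its own learning rate, so the pretrained weights drift only slightly while the new head trains quickly.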

Metrics · advanced
Evaluating Fine-tuned Model Performance

After fine-tuning a model on a new dataset, which metric would best indicate if the model has successfully adapted without overfitting?

A. Validation accuracy remains high and close to training accuracy.
B. Training accuracy is very high but validation accuracy is very low.
C. Training loss increases while validation loss decreases.
D. Validation accuracy is zero.
💡 Hint

Consider what it means when training and validation metrics are similar.
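A simple way to operationalize this comparison is to track the gap between training and validation accuracy. The numbers below are made up for illustration:

```python
# Compare training and validation accuracy: a small gap with high
# validation accuracy suggests successful adaptation; a large gap
# suggests overfitting.
def generalization_gap(train_acc: float, val_acc: float) -> float:
    """Gap between train and validation accuracy (smaller is better)."""
    return train_acc - val_acc

healthy = generalization_gap(0.92, 0.89)  # small gap: model generalizes
overfit = generalization_gap(0.99, 0.55)  # large gap: likely overfitting
```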

🔧 Debug · expert
Identifying Fine-tuning Bug in Code

Consider this PyTorch code snippet for fine-tuning a pretrained model. What error will it raise?

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop omitted

# Attempt to update only trainable parameters
optimizer.zero_grad()
loss = torch.tensor(1.0, requires_grad=True)
loss.backward()
optimizer.step()
A. AttributeError, because model.fc is not replaced correctly.
B. RuntimeError, because the optimizer tries to update parameters with requires_grad=False.
C. TypeError, due to incorrect loss tensor creation.
D. No error; the optimizer updates only trainable parameters.
💡 Hint

Think about how PyTorch optimizers handle parameters with requires_grad=False.
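Regardless of what the snippet above does at runtime, the conventional pattern is to hand the optimizer only the parameters that actually require gradients. Here is a minimal sketch using a small stand-in module instead of torchvision's resnet18 (the Backbone class is an assumption for illustration):

```python
import torch
from torch import nn, optim

class Backbone(nn.Module):
    """Tiny stand-in for a pretrained network (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 4)   # pretend-pretrained layer
        self.fc = nn.Linear(4, 1000)      # original classification head

    def forward(self, x):
        return self.fc(self.features(x))

model = Backbone()
for param in model.parameters():
    param.requires_grad = False           # freeze everything

model.fc = nn.Linear(model.fc.in_features, 5)  # new head is trainable

# Pass only the trainable parameters to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable, lr=0.01)
```

With this filtering, the optimizer never sees the frozen parameters at all, which makes the intent explicit and avoids relying on how the optimizer treats parameters whose gradients are never populated.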