PyTorch · ML · ~20 mins

Fine-tuning strategy in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding Fine-tuning Layers

When fine-tuning a pretrained neural network, which layers are typically updated to adapt the model to a new task?

A. Only the first few layers are updated, and the rest remain frozen.
B. All layers are frozen and no weights are updated during fine-tuning.
C. Only the final classification layers are updated, while earlier layers remain frozen.
D. Random layers are updated without any specific strategy.
💡 Hint

Think about which parts of the model capture general features versus task-specific features.
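The hinted pattern can be sketched in a few lines. The tiny two-part model below is hypothetical, standing in for a real pretrained backbone and a freshly initialized classification head:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained feature extractor plus a new head.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
head = nn.Linear(16, 4)  # fresh layer sized for the new task's 4 classes
model = nn.Sequential(backbone, head)

# Freeze the backbone so its general-purpose features stay intact.
for param in backbone.parameters():
    param.requires_grad = False

# After a backward pass, only the head accumulates gradients.
loss = model(torch.randn(8, 16)).sum()
loss.backward()
print(all(p.grad is None for p in backbone.parameters()))  # True
print(head.weight.grad is not None)                        # True
```

Passing only the still-trainable parameters to the optimizer (e.g. via a `requires_grad` filter) is the usual companion step.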

Hyperparameter · intermediate
Choosing Learning Rate for Fine-tuning

When fine-tuning a pretrained model, which learning rate strategy is generally recommended?

A. Use a much smaller learning rate than training from scratch to avoid destroying pretrained weights.
B. Use a very large learning rate to quickly adapt the model.
C. Use the same learning rate as training from scratch.
D. Use a learning rate that increases over time.
💡 Hint

Consider how sensitive pretrained weights are to big changes.
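A related pattern worth knowing is giving pretrained and new layers different learning rates via optimizer parameter groups. The model and learning-rate values below are illustrative, not prescriptive:

```python
import torch
import torch.nn as nn

# Hypothetical model: model[0] plays the pretrained body, model[2] the new head.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Common fine-tuning pattern: a small learning rate for pretrained weights,
# a larger one for the freshly initialized head.
optimizer = torch.optim.SGD([
    {"params": model[0].parameters(), "lr": 1e-5},
    {"params": model[2].parameters(), "lr": 1e-3},
])
print([group["lr"] for group in optimizer.param_groups])  # [1e-05, 0.001]
```

Because every group specifies its own `lr`, no global learning rate is needed; each group's rate can also be scheduled independently.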

Predict Output · advanced
Output of Fine-tuning Code Snippet

What will be the output of the following PyTorch code snippet that freezes all layers except the last linear layer?

PyTorch
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(10, 20),
            nn.ReLU(),
            nn.Linear(20, 10)
        )
        self.classifier = nn.Linear(10, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x)

model = SimpleModel()

# Freeze all layers except classifier
for param in model.features.parameters():
    param.requires_grad = False

trainable_params = [p for p in model.parameters() if p.requires_grad]
print(len(trainable_params))
A. 6
B. 0
C. 4
D. 2
💡 Hint

Count the number of parameters in the classifier layer that require gradients.

Metrics · advanced
Interpreting Fine-tuning Training Metrics

During fine-tuning, you observe the training loss decreases steadily but validation accuracy plateaus early. What is the most likely explanation?

A. The model is overfitting the training data and not generalizing well.
B. The learning rate is too low causing slow learning.
C. The model is underfitting and needs more layers unfrozen.
D. The dataset is too small to train any model.
💡 Hint

Think about what it means when training improves but validation does not.
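A standard response to the symptom described here is to track the validation metric and stop once it stops improving. The per-epoch accuracies below are made-up numbers, purely for illustration:

```python
# Hypothetical per-epoch validation accuracies showing an early plateau.
val_history = [0.70, 0.72, 0.72, 0.71, 0.72, 0.71]

best, patience, bad_epochs, stop_epoch = float("-inf"), 3, 0, None
for epoch, acc in enumerate(val_history):
    if acc > best:
        best, bad_epochs = acc, 0  # improvement: reset the counter
    else:
        bad_epochs += 1            # plateau: count stale epochs
    if bad_epochs >= patience:
        stop_epoch = epoch         # stop before overfitting worsens
        break

print(best, stop_epoch)  # 0.72 4
```

In practice the saved "best" checkpoint, rather than the final-epoch weights, is what gets deployed.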

🔧 Debug · expert
Debugging Fine-tuning Freezing Issue

You try to freeze layers in a pretrained model but notice all parameters are still updating during training. Which code snippet correctly freezes the feature extractor layers in PyTorch?

A.
for param in model.features:
    param.requires_grad = False

B.
for param in model.features.parameters():
    param.requires_grad = False

C.
model.features.requires_grad = False

D.
model.features.freeze()
💡 Hint

Remember how to access parameters in a PyTorch module.
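The distinction the hint points at can be checked directly: iterating a container module yields its child modules, while `.parameters()` yields the tensors that actually carry `requires_grad`. A minimal sketch:

```python
import torch.nn as nn

features = nn.Sequential(nn.Linear(4, 8), nn.ReLU())

# Iterating the Sequential itself yields submodules, not parameter tensors.
print(type(next(iter(features))).__name__)  # Linear

# .parameters() walks the tree and yields the tensors themselves,
# which is what requires_grad must be set on.
print(len(list(features.parameters())))     # 2  (Linear weight and bias)
```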