Recall & Review
beginner
What is fine-tuning in machine learning?
Fine-tuning is the process of taking a pre-trained model and continuing to train it on a new, often smaller, dataset to adapt it to a specific task.
beginner
Why do we freeze some layers during fine-tuning?
We freeze layers to keep their learned features unchanged, which helps prevent overfitting and reduces training time by only updating certain parts of the model.
intermediate
In PyTorch, how do you freeze layers of a model?
You set each parameter's requires_grad attribute to False:

for param in model.parameters():
    param.requires_grad = False
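In practice you usually freeze everything and then re-enable gradients for just the layers you want to train. A minimal sketch, using a small stand-in nn.Sequential model (substitute your own pre-trained network):

```python
import torch.nn as nn

# Small stand-in model (hypothetical; replace with your pre-trained network).
model = nn.Sequential(
    nn.Linear(128, 64),  # pretend "backbone" layer
    nn.ReLU(),
    nn.Linear(64, 10),   # pretend task-specific "head"
)

# Freeze every parameter in the model.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the final layer so it can adapt to the new task.
for param in model[-1].parameters():
    param.requires_grad = True

# Only the last layer's weight and bias remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)
```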
intermediate
What is a common strategy to fine-tune a pre-trained model?
First, freeze most layers and train only the last layers. Then, optionally unfreeze some earlier layers and train with a smaller learning rate.
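The two-phase strategy above can be sketched as follows; the model and learning rates are illustrative, not prescriptive:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained model; replace with your own.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))

# Phase 1: freeze everything except the last (task-specific) layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

# Optimize only the trainable head parameters.
head_optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# ... train the head for a few epochs ...

# Phase 2: unfreeze earlier layers and continue with a smaller learning rate,
# so the pre-trained features are only gently adjusted.
for param in model.parameters():
    param.requires_grad = True
full_optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```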
beginner
How does using a smaller learning rate help during fine-tuning?
A smaller learning rate helps make small adjustments to the pre-trained weights, avoiding large changes that could ruin the learned features.
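One common way to apply this idea is per-parameter-group learning rates: PyTorch optimizers accept a list of parameter groups, each with its own lr. A hedged sketch (layer indices and rates are made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical model: index 0 plays the pre-trained backbone, index 2 the new head.
model = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))

# Give the pre-trained layers a much smaller learning rate than the new head,
# so their learned features are nudged rather than overwritten.
optimizer = torch.optim.SGD([
    {"params": model[0].parameters(), "lr": 1e-5},  # gentle updates to backbone
    {"params": model[2].parameters(), "lr": 1e-3},  # faster learning for the head
])
```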
What does freezing layers in a model mean?
Freezing layers means their weights do not get updated during training.
Why start fine-tuning by training only the last layers?
Last layers capture task-specific features, so training them first adapts the model quickly.
In PyTorch, which attribute controls if a parameter is trainable?
The requires_grad attribute controls whether gradients are computed for a parameter and therefore whether it gets updated.
What is a benefit of using a pre-trained model for fine-tuning?
Pre-trained models have learned useful features, so fine-tuning needs less data and time.
What happens if you use a large learning rate during fine-tuning?
A large learning rate can cause big changes that ruin the pre-trained knowledge.
Explain the step-by-step process of fine-tuning a pre-trained model in PyTorch.
Think about which layers to freeze and how to adjust learning rates.
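A minimal end-to-end sketch of the recipe, using a tiny stand-in network and random data (all names and sizes here are illustrative):

```python
import torch
import torch.nn as nn

# 1. Start from a "pre-trained" model (here just a small random network).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))

# 2. Freeze the backbone so its learned features stay fixed.
for param in model[0].parameters():
    param.requires_grad = False

# 3. Replace the head with a fresh layer sized for the new task (say, 2 classes).
model[2] = nn.Linear(16, 2)

# 4. Optimize only the trainable parameters, with a small learning rate.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()

# 5. Short training loop on (random) task data.
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32,))
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Because the backbone's parameters have requires_grad set to False, they receive no gradients and are untouched by the optimizer; only the new head is trained.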
Describe why fine-tuning is useful compared to training a model from scratch.
Consider the benefits of transfer learning.