PyTorch · ML · ~5 mins

Fine-tuning strategy in PyTorch - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is fine-tuning in machine learning?
Fine-tuning is the process of taking a pre-trained model and training it further on a new, often smaller, dataset to adapt it to a specific task.
beginner
Why do we freeze some layers during fine-tuning?
We freeze layers to keep their learned features unchanged, which helps prevent overfitting and reduces training time by only updating certain parts of the model.
intermediate
In PyTorch, how do you freeze layers of a model?
You set each parameter's requires_grad attribute to False:
for param in model.parameters():
    param.requires_grad = False
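A minimal sketch of freeze-then-unfreeze in practice. The model here is a toy Sequential standing in for a real pre-trained network; the pattern (freeze all, then re-enable the head) is the same for any nn.Module.

```python
import torch.nn as nn

# Toy stand-in for a pre-trained model: backbone layers plus a task head.
model = nn.Sequential(
    nn.Linear(16, 32),  # stands in for the pre-trained backbone
    nn.ReLU(),
    nn.Linear(32, 4),   # stands in for the task-specific head
)

# Freeze everything first...
for param in model.parameters():
    param.requires_grad = False

# ...then re-enable gradients only for the head (the last layer here).
for param in model[-1].parameters():
    param.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head's weight and bias remain trainable
```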
intermediate
What is a common strategy to fine-tune a pre-trained model?
First, freeze most layers and train only the last layers. Then, optionally unfreeze some earlier layers and train with a smaller learning rate.
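The two-phase strategy on this card can be sketched as follows; the model is a toy stand-in and the epoch counts are omitted, but the freeze/unfreeze and learning-rate choices are the point.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained network: backbone layers plus a task head.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Phase 1: freeze the backbone and train only the head.
for param in model[:-1].parameters():
    param.requires_grad = False
head_optimizer = torch.optim.Adam(model[-1].parameters(), lr=1e-3)
# ... run a few epochs of training here ...

# Phase 2: unfreeze everything and continue with a much smaller learning
# rate, so the pre-trained weights are only gently adjusted.
for param in model.parameters():
    param.requires_grad = True
full_optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# ... continue training briefly ...
```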
beginner
How does using a smaller learning rate help during fine-tuning?
A smaller learning rate helps make small adjustments to the pre-trained weights, avoiding large changes that could ruin the learned features.
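One common way to apply this idea is per-parameter-group learning rates: the pre-trained backbone gets a much smaller rate than the freshly added head. The model and rate values below are illustrative, not prescriptive.

```python
import torch
import torch.nn as nn

# Toy model: backbone plus head.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# The backbone takes tiny steps so its learned features are only nudged,
# while the new head is free to learn faster.
optimizer = torch.optim.Adam([
    {"params": model[:-1].parameters(), "lr": 1e-5},  # backbone
    {"params": model[-1].parameters(), "lr": 1e-3},   # head
])
print([group["lr"] for group in optimizer.param_groups])  # → [1e-05, 0.001]
```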
What does freezing layers in a model mean?
A) Stopping updates to those layers during training
B) Removing those layers from the model
C) Adding more neurons to those layers
D) Changing the activation function of those layers
Why start fine-tuning by training only the last layers?
A) Because last layers have fewer parameters
B) Because last layers do not affect output
C) Because last layers are always frozen
D) Because last layers are usually task-specific
In PyTorch, which attribute controls if a parameter is trainable?
A) grad_enabled
B) trainable
C) requires_grad
D) update_flag
What is a benefit of using a pre-trained model for fine-tuning?
A) It guarantees perfect accuracy
B) It reduces training time and data needed
C) It removes the need for validation
D) It makes the model smaller
What happens if you use a large learning rate during fine-tuning?
A) The model might forget learned features
B) The model trains faster without issues
C) The model becomes smaller
D) The model ignores new data
Explain the step-by-step process of fine-tuning a pre-trained model in PyTorch.
Think about which layers to freeze and how to adjust learning rates.
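The steps hinted at above can be sketched end to end. Everything here is a toy stand-in (random model, synthetic data, illustrative learning rates); in practice the backbone would come from a real pre-trained checkpoint such as a torchvision model.

```python
import torch
import torch.nn as nn

# 1. Load a pre-trained model. A random Sequential stands in here; in
#    practice you might use torchvision.models.resnet18 with pretrained
#    weights.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))

# 2. Replace the head so the output size matches the new task (3 classes).
model[-1] = nn.Linear(32, 3)

# 3. Freeze the backbone so only the new head trains at first.
for param in model[:-1].parameters():
    param.requires_grad = False

# 4. Optimize only the trainable parameters, with a small learning rate.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
loss_fn = nn.CrossEntropyLoss()

# 5. Train briefly on the new data (synthetic here).
x = torch.randn(16, 10)
y = torch.randint(0, 3, (16,))
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# 6. Optionally unfreeze earlier layers and keep training with an even
#    smaller learning rate, as described in the strategy card above.
```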
Describe why fine-tuning is useful compared to training a model from scratch.
Consider the benefits of transfer learning.