Recall & Review
beginner
What does it mean to 'freeze layers' in a neural network?
Freezing layers means stopping their weights from changing during training. This keeps the learned features fixed while training other parts of the model.
beginner
Why do we freeze layers when using a pre-trained model?
We freeze layers to keep the useful features learned from large datasets. This saves time and avoids losing good knowledge while training on new data.
beginner
How do you freeze layers in PyTorch?
Set each parameter's requires_grad attribute to False. For example:

for param in model.layer.parameters():
    param.requires_grad = False
intermediate
What happens if you forget to freeze layers when fine-tuning?
The pre-trained weights may change too much, losing useful features. This can cause slower training or worse results.
intermediate
Can you unfreeze layers after freezing them? Why would you do this?
Yes, you can set requires_grad back to True. This is useful to fine-tune the whole model after training only some layers first.
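A minimal sketch of this freeze-then-unfreeze workflow, using a hypothetical two-layer model for illustration:

```python
import torch.nn as nn

# Hypothetical two-layer model standing in for a real network.
model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 2))

# Freeze the first layer.
for param in model[0].parameters():
    param.requires_grad = False

# ... train only the second layer for a few epochs ...

# Unfreeze the first layer to fine-tune the whole model.
for param in model[0].parameters():
    param.requires_grad = True

print(all(p.requires_grad for p in model.parameters()))  # True
```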
What does setting param.requires_grad = False do in PyTorch?
Setting requires_grad to False tells PyTorch not to compute gradients for that parameter, so it won't update during training.
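This can be verified directly: after a backward pass, a frozen parameter's .grad stays None while trainable parameters receive gradients. A small sketch with two toy layers (names are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
frozen = nn.Linear(3, 3)     # this layer will be frozen
trainable = nn.Linear(3, 1)  # this layer keeps training

for param in frozen.parameters():
    param.requires_grad = False

x = torch.randn(4, 3)
loss = trainable(frozen(x)).sum()
loss.backward()

print(frozen.weight.grad)                 # None: no gradient computed
print(trainable.weight.grad is not None)  # True
```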
Why freeze layers in transfer learning?
Freezing layers keeps the useful features learned before and speeds up training on new tasks.
Which layers are usually frozen first in a convolutional neural network?
Early layers learn basic features like edges and textures, so they are often frozen to keep these general features.
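One way to freeze only the earliest layers is to iterate over the model's children and stop after the first few. A sketch with a toy CNN standing in for a larger network:

```python
import torch.nn as nn

# Toy CNN: the first convolutions play the role of the generic
# edge/texture detectors described above.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3),   # early layer
    nn.Conv2d(16, 32, 3),  # early layer
    nn.Conv2d(32, 64, 3),  # later, more task-specific layer
    nn.Flatten(),
)

# Freeze the first two (earliest) layers only.
for layer in list(model.children())[:2]:
    for param in layer.parameters():
        param.requires_grad = False

frozen = sum(1 for p in model.parameters() if not p.requires_grad)
print(frozen)  # 4: weight and bias of each of the two frozen convs
```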
What is a common next step after freezing layers and training the new layers?
After training new layers, unfreezing some layers allows fine-tuning the whole model for better accuracy.
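The two-phase schedule can be sketched as below; the second phase often uses per-parameter-group learning rates so the pretrained part moves more slowly (the layer names and rates here are illustrative assumptions):

```python
import torch
import torch.nn as nn

backbone = nn.Linear(8, 4)  # stands in for the pretrained layers
head = nn.Linear(4, 2)      # stands in for the newly added layers

# Phase 1: backbone frozen, train only the head.
for param in backbone.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Phase 2: unfreeze the backbone and fine-tune everything,
# with a smaller learning rate for the pretrained part.
for param in backbone.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
```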
If you freeze all layers in a model, what will happen during training?
Freezing all layers means no parameters update, so the model cannot learn from new data.
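This is easy to demonstrate: with every parameter frozen, no gradients reach the model, so an optimizer has nothing to update. A minimal sketch (the input is given requires_grad=True only so backward() has a graph to traverse):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(3, 1)
for param in model.parameters():
    param.requires_grad = False

x = torch.randn(4, 3, requires_grad=True)  # keeps a graph so backward runs
loss = model(x).sum()
loss.backward()

print(model.weight.grad)  # None: frozen parameters get no gradient
# Filtering for trainable parameters yields an empty list:
print(list(filter(lambda p: p.requires_grad, model.parameters())))  # []
```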
Explain in your own words what freezing layers means and why it is useful in transfer learning.
Think about how you keep some parts fixed while changing others.
Describe how you would freeze and then unfreeze layers in a PyTorch model during training.
Focus on the requires_grad attribute and training steps.