
Freezing layers in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Problem 1: Predict Output (intermediate)
What is the output of this PyTorch code snippet?

Consider the following PyTorch code that freezes some layers of a model. What will be the value of requires_grad for each parameter after running this code?

PyTorch
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

model = SimpleModel()

# Freeze fc1 layer
for param in model.fc1.parameters():
    param.requires_grad = False

requires_grad_list = [param.requires_grad for param in model.parameters()]
print(requires_grad_list)
A. [True, False, True, False]
B. [True, True, False, False]
C. [False, False, True, True]
D. [False, True, False, True]
💡 Hint

Remember that each Linear layer has two parameters: weights and biases.
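To see where the four booleans come from, it helps to list a model's parameters directly. The sketch below uses a toy two-layer model (not the challenge's SimpleModel, though it has the same layer shapes) and shows that each nn.Linear contributes exactly two parameters, a weight and a bias, yielded in module order:

```python
import torch.nn as nn

# Toy model with the same layer shapes as SimpleModel above.
model = nn.Sequential(nn.Linear(10, 5), nn.Linear(5, 2))

# Each Linear layer contributes a weight and a bias, in module order.
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.requires_grad)
```

So model.parameters() yields four tensors here: first layer's weight and bias, then the second layer's.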

Problem 2: Model Choice (intermediate)
Which model freezing approach correctly freezes only the convolutional layers?

You want to freeze all convolutional layers in a PyTorch model but keep other layers trainable. Which code snippet correctly achieves this?

A.
for name, param in model.named_parameters():
    if 'conv' in name:
        param.requires_grad = False

B.
for layer in model.children():
    if 'conv' in str(layer):
        layer.requires_grad = False

C.
for param in model.parameters():
    if isinstance(param, nn.Conv2d):
        param.requires_grad = False

D.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        for param in module.parameters():
            param.requires_grad = False
💡 Hint

Parameters themselves are tensors, not layers. You need to access modules to check their type.
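The hint can be checked directly. In this minimal sketch (a toy model, not any particular answer above), an isinstance check against nn.Conv2d never matches a parameter, because parameters are tensors; iterating over modules is what finds the layer:

```python
import torch.nn as nn

# Toy model with one conv layer and one linear layer.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Linear(8, 2))

# Parameters are tensors, so this type check never matches...
param_hits = [isinstance(p, nn.Conv2d) for p in model.parameters()]
# ...while iterating over modules does find the Conv2d layer.
module_hits = [isinstance(m, nn.Conv2d) for m in model.modules()]
print(param_hits, module_hits)
```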

Problem 3: Hyperparameter (advanced)
What is the effect of freezing layers on optimizer hyperparameters?

You freeze some layers in your PyTorch model by setting requires_grad=False for their parameters. What should you do with the optimizer to avoid updating frozen parameters?

A. Pass only parameters with requires_grad=True to the optimizer.
B. Pass all model parameters to the optimizer; it will automatically skip frozen ones.
C. Set the learning rate to zero for frozen parameters in the optimizer.
D. Freeze layers after optimizer creation; no changes needed.
💡 Hint

The optimizer updates every parameter it receives, so frozen parameters should not be included.
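The common filtering pattern looks roughly like this (a sketch with a hypothetical two-layer model, frozen first layer, and plain SGD):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.Linear(5, 2))

# Freeze the first layer.
for p in model[0].parameters():
    p.requires_grad = False

# Hand the optimizer only the still-trainable parameters.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)

# Count what the optimizer will actually update: only the second layer.
n_opt = sum(p.numel() for g in optimizer.param_groups for p in g["params"])
print(n_opt)
```

Here the optimizer holds just the second layer's weight and bias (5*2 + 2 = 12 values), regardless of how many parameters the full model has.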

Problem 4: 🔧 Debug (advanced)
Why does training still update frozen layers?

You froze some layers by setting requires_grad=False, but after training, those layers' weights changed. What is the most likely cause?

A. The loss function does not support frozen layers, causing updates.
B. You forgot to filter parameters passed to the optimizer; the optimizer still updates frozen layers.
C. You used torch.no_grad() during training, which disables freezing.
D. The model layers were not set to eval() mode, so gradients were computed anyway.
💡 Hint

Setting requires_grad=False disables gradient computation, but the optimizer still updates any parameter it was given.
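One way this failure shows up in practice (a minimal sketch, assuming plain SGD): built-in optimizers skip only parameters whose .grad is None, so a stale .grad left over from before the freeze keeps getting applied:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One normal training step populates .grad on every parameter.
model(torch.randn(8, 4)).sum().backward()
opt.step()

# Freeze the layer -- but the stale .grad tensors are still attached.
for p in model.parameters():
    p.requires_grad = False

before = model.weight.clone()
opt.step()  # re-applies the stale gradients to the "frozen" weights
print(torch.equal(before, model.weight))
```

Filtering the parameter list at optimizer construction (or zeroing the gradients after freezing) avoids this.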

Problem 5: 🧠 Conceptual (expert)
What is the impact of freezing layers on transfer learning performance?

When using transfer learning, freezing early layers of a pretrained model is common. What is the main reason for freezing these layers?

A. Early layers capture general features; freezing them prevents overfitting and speeds up training.
B. Early layers are specific to the original task; freezing them improves adaptation to new data.
C. Freezing early layers increases model capacity by adding more trainable parameters.
D. Freezing early layers reduces model size by removing those layers from computation.
💡 Hint

Think about what early layers learn in deep networks and why retraining them might be unnecessary.
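In code, the standard transfer-learning recipe looks roughly like this sketch, where a hypothetical small backbone stands in for a pretrained feature extractor and only a fresh head is trained:

```python
import torch.nn as nn

# Hypothetical "pretrained" backbone plus a freshly initialized head.
backbone = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),  # early layers: general features
    nn.Linear(64, 32), nn.ReLU(),
)
head = nn.Linear(32, 10)  # task-specific classifier for the new dataset
model = nn.Sequential(backbone, head)

# Freeze the general-feature layers; only the head stays trainable.
for p in backbone.parameters():
    p.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable} of {total} parameters")
```

Only a small fraction of the parameters remains trainable, which is what makes fine-tuning cheaper and less prone to overfitting on a small target dataset.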