Complete the code to load a pre-trained model in PyTorch.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=[1])
Setting pretrained=True loads the model with weights trained on ImageNet, so the network starts from useful features instead of a random initialization, speeding up development.
Complete the code to freeze all layers of the pre-trained model.
for param in model.[1]():
    param.requires_grad = False
The parameters() method returns an iterator over all of the model's parameters, so each one's requires_grad flag can be set to False and the optimizer will skip them.
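The freezing loop works the same way for any nn.Module; a minimal sketch, using a small stand-in model rather than ResNet-18 so it runs instantly:

```python
import torch.nn as nn

# Hypothetical stand-in model; any nn.Module behaves identically
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# Freeze every parameter so no gradients are computed for them
for param in model.parameters():
    param.requires_grad = False

print(all(not p.requires_grad for p in model.parameters()))  # True
```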
Fix the error in replacing the final layer for transfer learning.
import torch.nn as nn

num_classes = 10
model.fc = nn.[1](model.fc.in_features, num_classes)
The final fully connected layer should be replaced with a new nn.Linear whose in_features matches the incoming feature size (model.fc.in_features, 512 for ResNet-18) and whose out_features equals the number of target classes.
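The head replacement can be sketched end to end with a hypothetical TinyNet standing in for the pretrained backbone (same idea: a module with an attribute named fc):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical backbone with a final classifier named `fc`."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(16, 512)
        self.fc = nn.Linear(512, 1000)  # original 1000-class head

    def forward(self, x):
        return self.fc(torch.relu(self.features(x)))

model = TinyNet()
num_classes = 10
# Swap in a fresh head; in_features must match the old layer's input size
model.fc = nn.Linear(model.fc.in_features, num_classes)

outputs = model(torch.randn(2, 16))
print(outputs.shape)  # torch.Size([2, 10])
```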
Fill both blanks to set the model to evaluation mode and disable gradient calculation.
model.[1]()
with torch.[2]():
    outputs = model(inputs)
Use eval() to switch layers such as dropout and batch normalization to inference behavior, and no_grad() to disable gradient tracking during inference.
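A runnable sketch of both blanks filled in, using a small hypothetical model with a dropout layer (whose behavior actually changes between train and eval mode):

```python
import torch
import torch.nn as nn

# Hypothetical model; Dropout is included because eval() changes its behavior
model = nn.Sequential(nn.Linear(4, 2), nn.Dropout(p=0.5))
inputs = torch.randn(3, 4)

model.eval()             # dropout becomes a no-op, batch-norm stops updating
with torch.no_grad():    # no autograd graph is built for these ops
    outputs = model(inputs)

print(outputs.requires_grad)  # False
```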
Fill all three blanks to create a dictionary of layer names and their requires_grad status.
grad_status = {name: param.[1]
               for name, param in model.[2]()
               if not name.[3]('fc')}
This comprehension collects the requires_grad status of every parameter whose name does not start with 'fc' (the final layer).
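With all three blanks filled, the comprehension looks like the sketch below, again using a hypothetical TinyNet so the parameter names ('features.*', 'fc.*') are easy to inspect:

```python
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical model with a `fc` head and a `features` body."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 4)
        self.fc = nn.Linear(4, 2)

model = TinyNet()

# requires_grad per named parameter, excluding the final 'fc' layer
grad_status = {name: param.requires_grad
               for name, param in model.named_parameters()
               if not name.startswith('fc')}

print(sorted(grad_status))  # ['features.bias', 'features.weight']
```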