Complete the code to load a pre-trained model for fine-tuning.
import torch
from torchvision import models

model = models.resnet18(pretrained=[1])
Setting pretrained=True loads the model with pre-trained weights, which is essential for fine-tuning.
Complete the code to freeze all layers except the final fully connected layer for fine-tuning.
for param in model.parameters():
    param.[1] = False
Setting requires_grad = False freezes the parameters so they are not updated during training.
Complete the code to replace the final layer so it matches 10 output classes.
import torch.nn as nn

model.fc = nn.Linear(model.fc.in_features, [1])
The final layer's output features must match the number of classes, which is 10 here.
Fill both blanks to create an optimizer that only updates the final layer parameters with a learning rate of 0.001.
import torch.optim as optim

optimizer = optim.SGD([1], lr=[2])
Only the final layer's parameters should be updated, and the learning rate is set to 0.001 for fine-tuning.
Fill all three blanks to write a training loop that computes loss, backpropagates, and updates parameters.
for inputs, labels in dataloader:
    optimizer.zero_grad()
    outputs = model([1])
    loss = criterion(outputs, [2])
    loss.[3]()  # backpropagation
    optimizer.step()
A common mistake is writing loss.forward() instead of loss.backward(). The model takes inputs to produce outputs, the loss compares the outputs to the labels, and calling loss.backward() computes the gradients that optimizer.step() then applies.