Complete the code to save the model's state dictionary.
torch.save(model[1], 'model.pth')
Saving the state_dict() stores the model's learned parameters, which is essential for checkpointing.
Complete the code to load the saved state dictionary into the model.
model.load_state_dict(torch.load('[1]'))
The file model.pth is where the model's state dictionary was saved, so it must be loaded from there.
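The save/load pair above can be sketched end to end. This is a minimal round trip assuming PyTorch is installed; the layer sizes and the file name are illustrative, and the restored model must be built with the same architecture before loading.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # tiny illustrative model
torch.save(model.state_dict(), 'model.pth')  # save learned parameters only

restored = nn.Linear(4, 2)  # must match the original architecture
restored.load_state_dict(torch.load('model.pth'))
```

Saving only the state_dict() (rather than the whole model object) keeps the checkpoint decoupled from the class definition's file layout.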
Fix the error in saving the optimizer state for checkpointing.
torch.save(optimizer[1], 'optimizer.pth')
The optimizer's state_dict() contains all the information needed to resume training exactly.
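To see why the optimizer state matters, one training step below is enough to populate SGD's momentum buffers, which would be lost if only the model were checkpointed. A minimal sketch, assuming PyTorch; the model, learning rate, and file name are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One training step so the optimizer accumulates momentum state.
loss = model(torch.randn(8, 3)).sum()
loss.backward()
optimizer.step()

# Save the state_dict(), not the optimizer object itself.
torch.save(optimizer.state_dict(), 'optimizer.pth')

new_opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
new_opt.load_state_dict(torch.load('optimizer.pth'))
```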
Fill both blanks to save both model and optimizer states in one checkpoint dictionary.
checkpoint = {'model': model[1], 'optimizer': optimizer[2], 'epoch': epoch}
torch.save(checkpoint, 'checkpoint.pth')
Both model and optimizer states are saved using their state_dict() methods to preserve training progress.
Fill all three blanks to load checkpoint and resume training with model, optimizer, and epoch.
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['[1]'])
optimizer.load_state_dict(checkpoint['[2]'])
epoch = checkpoint['[3]']
The checkpoint dictionary stores keys 'model', 'optimizer', and 'epoch' to restore training state.
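The full save-and-resume cycle can be sketched as follows. A minimal example assuming PyTorch; the model, optimizer choice, epoch value, and file name are illustrative. The dictionary keys used when saving must match the keys used when loading.

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epoch = 5  # hypothetical epoch counter at save time

# Save model, optimizer, and epoch under one dictionary.
checkpoint = {
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'epoch': epoch,
}
torch.save(checkpoint, 'checkpoint.pth')

# Resume: rebuild the objects, then restore state under the same keys.
ckpt = torch.load('checkpoint.pth')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
start_epoch = ckpt['epoch']
```

In a real training script, the loop would continue from `start_epoch` rather than from zero.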