Recall & Review
beginner
What is a checkpoint in PyTorch training?
A checkpoint is a saved snapshot of the model's parameters and optimizer state during training. It allows you to pause and resume training later without losing progress.
beginner
Why should you save the optimizer state along with the model checkpoint?
Saving the optimizer state preserves information like learning rate, momentum, and other internal variables. This helps training resume exactly where it left off, ensuring consistent updates.
intermediate
How do you save a checkpoint with both model and optimizer states in PyTorch?
Use torch.save() with a dictionary containing 'model_state_dict' and 'optimizer_state_dict'. For example: torch.save({'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, PATH)
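The save pattern above can be sketched end to end. The model architecture, optimizer settings, and file name below are illustrative assumptions, not part of the question:

```python
import torch
import torch.nn as nn

# A small example model and optimizer (illustrative choices).
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Bundle both state dicts into one checkpoint dictionary and save it.
checkpoint = {
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pt')
```

Any picklable values can be added to the same dictionary, which is why a single `torch.save()` call covers both states.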
intermediate
How do you load a checkpoint with optimizer state in PyTorch?
Load the checkpoint dictionary with torch.load(), then call model.load_state_dict() and optimizer.load_state_dict() with the saved states. This restores both model weights and optimizer parameters.
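A minimal round-trip sketch of that load step. It assumes the checkpoint was written with the dictionary keys named above; the model and optimizer definitions are illustrative and must match the ones used at save time:

```python
import torch
import torch.nn as nn

# Save a checkpoint first so the example is self-contained.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
torch.save({'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()}, 'checkpoint.pt')

# To resume: recreate the same architecture and optimizer, then restore.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

checkpoint = torch.load('checkpoint.pt')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])

model.train()  # switch back to training mode before resuming
```

Note that `load_state_dict()` fills in an already-constructed model and optimizer; the checkpoint does not recreate the objects themselves.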
beginner
What happens if you save only the model state but not the optimizer state?
If you save only the model state, training can resume from the saved weights, but optimizer settings such as momentum buffers and learning rate schedules start from scratch. This can make the continued training slower or less stable.
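To avoid that reset, a common convention (a sketch, not the only layout) is to bundle extra bookkeeping such as the current epoch and last loss alongside both state dicts, so a resumed run can pick up its schedule where it stopped. The epoch and loss values here are illustrative:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Storing epoch and loss with both state dicts lets a resumed run
# continue its schedule instead of restarting from epoch 0.
torch.save({
    'epoch': 5,        # illustrative value
    'loss': 0.42,      # illustrative value
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pt')
```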
What does the optimizer state include when saved in a checkpoint?
The optimizer state includes learning rate, momentum, and other internal variables needed to continue training smoothly.
Which PyTorch function is used to save a checkpoint?
torch.save() is used to save checkpoint files including model and optimizer states.
How do you restore the optimizer state from a checkpoint?
You restore optimizer state by calling optimizer.load_state_dict() with the saved optimizer state dictionary.
What is the risk of not saving the optimizer state when checkpointing?
Without optimizer state, training resumes but optimizer settings reset, which can slow or destabilize training.
Which dictionary keys are commonly used to save model and optimizer states in PyTorch checkpoints?
The standard keys are 'model_state_dict' for model weights and 'optimizer_state_dict' for optimizer parameters.
Explain how to save and load a checkpoint in PyTorch that includes both the model and optimizer states.
Think about saving and restoring both model weights and optimizer parameters.
Why is it important to save the optimizer state when checkpointing during training?
Consider what happens if optimizer state is lost.