Experiment - Checkpoint with optimizer state
Problem: You are training a neural network with PyTorch. You want to save your model's progress so you can resume training later without losing optimizer state.
Current Metrics: Training accuracy: 85%, Validation accuracy: 82%, Loss: 0.45
Issue: Currently, you save only the model weights. When resuming, the optimizer state (e.g. Adam's running moment estimates and step counts) is lost, so the optimizer effectively restarts cold, causing slower convergence and unstable training.
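A minimal sketch of the fix: bundle the model's and optimizer's `state_dict()` into one checkpoint with `torch.save`, then restore both on resume. The model architecture, file name, epoch number, and hyperparameters below are illustrative assumptions, not part of the original setup.

```python
# Hypothetical sketch: checkpointing model AND optimizer state together.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step so the optimizer accumulates internal state
# (Adam's first/second moment estimates and step counters).
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

# Save everything needed to resume in a single checkpoint dict.
torch.save({
    "epoch": 10,  # illustrative epoch counter
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "loss": loss.item(),
}, "checkpoint.pt")

# Resume: rebuild the objects first, then load BOTH state dicts.
model2 = nn.Linear(4, 2)
optimizer2 = torch.optim.Adam(model2.parameters(), lr=1e-3)
ckpt = torch.load("checkpoint.pt")
model2.load_state_dict(ckpt["model_state_dict"])
optimizer2.load_state_dict(ckpt["optimizer_state_dict"])
start_epoch = ckpt["epoch"] + 1
```

Because `optimizer2` now carries the saved moment estimates and step counts, training continues with the same effective learning-rate adaptation instead of warming up from scratch.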