
Checkpoint with optimizer state in PyTorch - Model Metrics & Evaluation

Which metric matters for this concept and WHY

When saving a checkpoint with optimizer state, the key metric to watch is training loss. The optimizer state (momentum buffers, learning-rate schedule position, and similar bookkeeping) is what lets the model continue learning smoothly from where it left off. If the loss after loading the checkpoint matches the loss recorded before saving, the optimizer state was restored correctly and training can continue effectively.

Confusion matrix or equivalent visualization (ASCII)

Checkpointing itself does not have a confusion matrix. Instead, we check if the model parameters and optimizer state are restored correctly by comparing training metrics before and after loading.

    Before saving checkpoint:
    Epoch 5 - Loss: 0.45

    After loading checkpoint:
    Epoch 5 - Loss: 0.45
    

If loss values match closely, it means the checkpoint with optimizer state was saved and loaded properly.
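The check above can be sketched in code: save both state dicts, reload them into fresh objects, and confirm the restored weights and momentum buffers match. This is a minimal sketch; the tiny `nn.Linear` model, the `ckpt_path` location, and the hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
import os
import tempfile

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One training step so the optimizer accumulates momentum buffers.
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

# Save model AND optimizer state together (illustrative temp path).
ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save({
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "loss": loss.item(),
}, ckpt_path)

# Resume: rebuild fresh objects, then load both state dicts into them.
model2 = nn.Linear(4, 1)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.1, momentum=0.9)
ckpt = torch.load(ckpt_path)
model2.load_state_dict(ckpt["model_state_dict"])
optimizer2.load_state_dict(ckpt["optimizer_state_dict"])

# Restored weights and momentum buffers should match the originals exactly.
buf1 = optimizer.state_dict()["state"][0]["momentum_buffer"]
buf2 = optimizer2.state_dict()["state"][0]["momentum_buffer"]
print(torch.allclose(model2.weight, model.weight), torch.allclose(buf1, buf2))
# → True True
```

If the weights match but the momentum buffers do not, training will still resume, but the first steps after loading will behave like a cold start.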

Precision vs Recall (or equivalent tradeoff) with concrete examples

Checkpointing with optimizer state is about continuity in training, not classification metrics like precision or recall. The analogous tradeoff here is between saving frequently enough to avoid losing progress and saving so often that the checkpoint I/O slows training down.

  • Saving too rarely risks losing many training steps if interrupted.
  • Saving too often wastes time and storage.

Good practice is to save checkpoints at meaningful intervals, including optimizer state, so training can resume exactly where it stopped.
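The interval policy above can be reduced to a tiny helper. This is a plain-Python sketch; the function name `should_checkpoint` and the step counts are assumptions for illustration, not a PyTorch API.

```python
def should_checkpoint(step, every_n_steps=1000, is_last_step=False):
    """Return True on checkpoint steps: every N steps, and at the end of training."""
    return is_last_step or (step > 0 and step % every_n_steps == 0)

# Example: with every_n_steps=1000, steps 1000, 2000, 3000 trigger a save.
print([s for s in range(0, 3001, 500) if should_checkpoint(s, 1000)])
# → [1000, 2000, 3000]
```

In a real loop you would call this once per step and, when it returns True, save the model and optimizer state dicts together.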

What "good" vs "bad" metric values look like for this use case

Good checkpointing means:

  • After loading, training loss continues smoothly without jumps.
  • Optimizer state is restored, so learning rate schedules and momentum continue correctly.
  • Model accuracy or other metrics improve as expected after resuming.

Bad checkpointing means:

  • Loss suddenly increases or training stalls after loading.
  • Optimizer state missing causes learning rate resets or momentum loss.
  • Training metrics degrade or behave erratically after resuming.
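One symptom from the "bad" list is easy to demonstrate: if only the model weights are saved, the optimizer recreated at resume time starts with empty state, so momentum restarts from zero. A minimal sketch, where the toy `nn.Linear` model and the deliberately incomplete `saved` dict are assumptions for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss = model(torch.randn(4, 2)).sum()
loss.backward()
opt.step()  # momentum buffers now exist for weight and bias

# The bug: the checkpoint stores weights only, not optimizer state.
saved = {"model": model.state_dict()}

model2 = nn.Linear(2, 1)
model2.load_state_dict(saved["model"])
opt2 = torch.optim.SGD(model2.parameters(), lr=0.1, momentum=0.9)

# The original optimizer has per-parameter state; the fresh one has none.
print(len(opt.state_dict()["state"]), len(opt2.state_dict()["state"]))
# → 2 0
```

The weights are identical, yet the next update differs because the restored run has momentum and the broken one does not, which is exactly the loss jump described above.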

Metrics pitfalls (accuracy paradox, data leakage, overfitting indicators)

Common pitfalls when checkpointing with optimizer state include:

  • Not saving optimizer state: Leads to loss of momentum and learning rate info, causing slower or unstable training after resume.
  • Partial checkpointing: Saving only model weights but not optimizer state can cause unexpected metric jumps.
  • Data leakage: Picking the "best" checkpoint using the same data you later evaluate on biases the reported metrics optimistically.
  • Overfitting: Resuming training without proper early stopping can cause overfitting, visible as validation metrics worsening while training loss keeps improving.

Your model has 98% accuracy but 12% recall on fraud. Is it good?

No, this is not good for fraud detection. Even if accuracy is high, a recall of 12% means the model misses 88% of fraud cases. For fraud detection, high recall is critical to catch as many frauds as possible. This shows why looking at multiple metrics is important.
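A toy calculation makes the paradox concrete. The dataset below is invented, with counts chosen so the numbers work out to roughly the 98%/12% figures above: negatives dominate, so a model that misses most frauds still scores high accuracy.

```python
# Invented data: 50 fraudulent and 2150 legitimate transactions.
# The model catches only 6 of the 50 frauds and flags nothing else.
y_true = [1] * 50 + [0] * 2150
y_pred = [1] * 6 + [0] * 44 + [0] * 2150

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

accuracy = correct / len(y_true)   # dominated by the 2150 easy negatives
recall = tp / (tp + fn)            # fraction of actual frauds caught

print(f"accuracy={accuracy:.1%} recall={recall:.0%}")
# → accuracy=98.0% recall=12%
```

Accuracy is high only because legitimate transactions are easy and plentiful; recall exposes that 44 of 50 frauds slip through.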

Key Result
Saving and restoring optimizer state ensures training loss continues smoothly after resuming, confirming checkpoint correctness.