Gradient clipping is a technique that keeps training stable by limiting how large the model's gradient updates can be. The key metrics to watch are the training loss and the gradient norm. If gradients grow too large, the loss can jump around or become NaN (not a number). Clipping keeps gradients in a safe range so the loss decreases smoothly, and monitoring the gradient norm before and after clipping shows whether clipping is actually doing its job.
Gradient clipping in PyTorch - Model Metrics & Evaluation
Gradient clipping does not directly relate to classification metrics like confusion matrix. Instead, we visualize gradient norms and loss values.
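A minimal sketch of one training step with clipping and norm monitoring in PyTorch. The model, data, and the `max_norm=5.0` threshold are illustrative assumptions; the useful detail is that `clip_grad_norm_` returns the total norm computed before clipping, so it doubles as a monitoring hook.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative toy model and data (assumptions, not from the original text).
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
x, y = torch.randn(32, 10), torch.randn(32, 1)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()

# clip_grad_norm_ rescales gradients in place and returns the
# total norm as it was BEFORE clipping.
norm_before = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)

# Recompute the norm over all parameters to see the value AFTER clipping.
norm_after = torch.linalg.vector_norm(
    torch.cat([p.grad.flatten() for p in model.parameters()])
)

optimizer.step()
print(f"norm before: {float(norm_before):.3f}, after: {float(norm_after):.3f}")
```

Logging both values per step (or per epoch) is what produces tables like the one below.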
| Epoch | Gradient Norm Before Clipping | Gradient Norm After Clipping | Training Loss |
|-------|-------------------------------|------------------------------|---------------|
| 1     | 15.2                          | 5.0                          | 2.3           |
| 2     | 12.7                          | 5.0                          | 1.8           |
| 3     | 20.5                          | 5.0                          | 1.2           |
| 4     | 4.8                           | 4.8                          | 0.9           |
| 5     | 3.2                           | 3.2                          | 0.7           |
This shows clipping capping exploding gradients at the threshold (5.0) while the loss decreases steadily; once the norms fall below the threshold (epochs 4 and 5), clipping has no effect.
Think of gradient clipping like setting a speed limit for a car. Without a limit, the car (model updates) might speed dangerously (exploding gradients), causing crashes (training failure). But if the limit is too low, the car moves too slowly (tiny updates), and training takes forever or gets stuck.
So, the tradeoff is between too much clipping (slow learning) and too little clipping (unstable training). Finding the right clipping value balances fast learning and stable updates.
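The tradeoff can be shown with the math behind norm-based clipping, which rescales the gradient by min(1, max_norm / ||g||). The numbers below are illustrative (the 15.2 is taken from the table above); `clipped_norm` is a hypothetical helper, not a PyTorch API.

```python
def clipped_norm(grad_norm: float, max_norm: float) -> float:
    """Resulting gradient norm after norm-based clipping:
    g <- g * min(1, max_norm / ||g||)."""
    scale = min(1.0, max_norm / grad_norm)
    return grad_norm * scale

raw_norm = 15.2  # gradient norm before clipping, from the table above

print(clipped_norm(raw_norm, 5.0))    # reasonable limit: capped at the threshold
print(clipped_norm(raw_norm, 0.01))   # too aggressive: updates become tiny
print(clipped_norm(raw_norm, 100.0))  # too loose: no clipping happens at all
```

A threshold well below typical gradient norms shrinks every update (slow learning); one well above them never fires (no protection). A common starting point is a value near the typical healthy gradient norm observed early in training.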
- Good: Gradient norms before clipping sometimes exceed the threshold, but after clipping they stay below it. Training loss decreases smoothly without sudden jumps or NaNs.
- Bad: Gradient norms explode to very large values, causing loss to jump or become NaN. Or clipping is too aggressive, gradients are always very small, and loss decreases very slowly or plateaus.
- Ignoring gradient norms: Not monitoring gradient norms can hide exploding gradients until training has already failed.
- Clipping too early or too late: Adding clipping only after training has become unstable wastes time; applying it too aggressively from the start slows learning.
- Using the wrong clipping method: Clipping by value and clipping by norm have different effects; norm clipping is usually preferred because it preserves the gradient's direction.
- Misattributing loss spikes: Sudden loss jumps may come from other bugs (bad data, learning-rate schedule, numerical issues in the loss), not just exploding gradients.
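The value-vs-norm distinction is easiest to see on a concrete gradient. A small sketch contrasting PyTorch's two utilities on the same gradient vector (the values `[3.0, 4.0]` are an illustrative assumption):

```python
import torch

# Two parameters with identical hand-set gradients of norm 5.0.
p1 = torch.nn.Parameter(torch.zeros(2))
p1.grad = torch.tensor([3.0, 4.0])
p2 = torch.nn.Parameter(torch.zeros(2))
p2.grad = torch.tensor([3.0, 4.0])

# Norm clipping rescales the whole vector, preserving its direction.
torch.nn.utils.clip_grad_norm_([p1], max_norm=1.0)
print(p1.grad)  # approximately [0.6, 0.8]: same direction, norm 1

# Value clipping clamps each component independently,
# which can change the gradient's direction.
torch.nn.utils.clip_grad_value_([p2], clip_value=1.0)
print(p2.grad)  # [1.0, 1.0]: direction has changed
```

This is why norm clipping is usually the safer default: the update still points where backpropagation said to go, just with a shorter step.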
Your model's training loss jumps to NaN after a few steps. Gradient norms before clipping are very large (e.g., 100), but after clipping they are capped at 5. Is your gradient clipping working well? What should you do?
Answer: Clipping is capping gradients at 5, but the loss still becomes NaN, so clipping alone is not enough here. You might need to lower the clipping threshold, reduce the learning rate, or look for other bugs (e.g., bad input data or a numerically unstable loss). Gradient clipping helps, but it does not fix all training issues.
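The reasoning in the answer can be sketched as a defensive check inside the training loop. `diagnose` and its thresholds are hypothetical names chosen for illustration, not a standard PyTorch API; the point is that clipping the norm does not validate the loss itself.

```python
import math

def diagnose(loss: float, norm_before: float, max_norm: float) -> str:
    """Inspect one training step; `loss` is the scalar loss value and
    `norm_before` is the gradient norm before clipping."""
    if math.isnan(loss) or math.isinf(loss):
        # Clipping cannot rescue a run whose loss is already NaN/inf.
        return "abort: lower the LR or clip threshold; check data and loss math"
    if norm_before > 10 * max_norm:
        # Raw norms far above the threshold suggest clipping is
        # masking an underlying instability rather than fixing it.
        return "warn: raw gradient norm far above threshold"
    return "ok"

print(diagnose(float("nan"), norm_before=100.0, max_norm=5.0))
print(diagnose(0.7, norm_before=100.0, max_norm=5.0))
print(diagnose(0.7, norm_before=4.0, max_norm=5.0))
```

The scenario in the question would trigger the first branch: the NaN loss, not the clipped norm, is the signal to act on.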