Model Pipeline - StepLR and MultiStepLR
This pipeline shows how the learning rate schedulers StepLR and MultiStepLR decay the learning rate on a fixed schedule during training, which typically speeds early convergence and stabilizes fine-tuning later on.
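The decay rules behind the two schedulers can be sketched in a few lines of plain Python. The scheduler names match PyTorch's, but the concrete values here (base lr 0.1, gamma=0.5, step_size=5, milestones=[3, 7]) are illustrative assumptions taken from the table below; epochs are 0-indexed as in PyTorch:

```python
def steplr(base_lr, epoch, step_size, gamma):
    # StepLR: multiply the lr by gamma once every step_size epochs
    return base_lr * gamma ** (epoch // step_size)

def multisteplr(base_lr, epoch, milestones, gamma):
    # MultiStepLR: multiply the lr by gamma once per milestone already passed
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed

# Illustrative schedule: base lr 0.1, halved at the points named in the table
for epoch in range(10):
    print(epoch,
          round(steplr(0.1, epoch, step_size=5, gamma=0.5), 4),
          round(multisteplr(0.1, epoch, milestones=[3, 7], gamma=0.5), 4))
```

Note that whether a drop shows up "at epoch 5" or "at epoch 6" in a table depends only on whether epochs are counted from 0 or 1; the multiplicative rule is the same.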
```
Loss
1.0 |*
0.9 |  *
0.8 |    *
0.7 |      *
0.6 |        *
0.5 |          *
0.4 |            *
0.3 |              *
    +----------------
      1 2 3 4 5 6 7 8 9 10  Epochs
```

| Epoch | Loss ↓ | Accuracy ↑ | Observation |
|---|---|---|---|
| 1 | 0.85 | 0.60 | Initial training with learning rate 0.1 |
| 2 | 0.70 | 0.68 | Loss decreased, accuracy improved |
| 3 | 0.60 | 0.72 | Learning rate unchanged for StepLR, decreased for MultiStepLR |
| 4 | 0.55 | 0.75 | Model continues to improve |
| 5 | 0.50 | 0.78 | StepLR reduces learning rate by gamma=0.5 here |
| 6 | 0.45 | 0.80 | Lower learning rate helps fine-tune weights |
| 7 | 0.42 | 0.82 | MultiStepLR reduces learning rate at this milestone |
| 8 | 0.40 | 0.83 | Training stabilizes with smaller learning rate |
| 9 | 0.38 | 0.84 | Model converges further |
| 10 | 0.36 | 0.85 | Final epoch with lowest learning rate |
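In PyTorch, the schedule above would be wired into a training loop roughly as follows. This is a minimal sketch, assuming the same illustrative values as the table (base lr 0.1, gamma=0.5, StepLR step at epoch 5, MultiStepLR milestones at epochs 3 and 7); the model and loop body are placeholders:

```python
import torch

# A throwaway model so the optimizer has parameters to manage
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# StepLR: halve the lr every 5 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
# For MultiStepLR, swap in:
# scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[3, 7], gamma=0.5)

for epoch in range(10):
    # ... forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()  # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())
```

Call `scheduler.step()` once per epoch, after `optimizer.step()`; stepping the scheduler inside the batch loop would apply the decay far too often.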