PyTorch · How-To · Beginner · 4 min read

How to Use ReduceLROnPlateau in PyTorch for Learning Rate Scheduling

In PyTorch, use torch.optim.lr_scheduler.ReduceLROnPlateau to reduce the learning rate when a monitored metric, like validation loss, stops improving. Call scheduler.step(metric_value) after each epoch with the metric to adjust the learning rate automatically.

Syntax

The ReduceLROnPlateau scheduler is initialized with an optimizer and parameters that control when and by how much the learning rate is reduced.

  • optimizer: The optimizer whose learning rate you want to adjust.
  • mode: 'min' or 'max', depending on whether the metric should be minimized or maximized.
  • factor: The factor by which the learning rate will be reduced (new_lr = lr * factor).
  • patience: Number of epochs with no improvement after which learning rate will be reduced.
  • threshold: Minimum change in the monitored metric to qualify as improvement.
  • verbose: If True, prints a message when the learning rate is reduced (deprecated in recent PyTorch releases).

After each epoch, call scheduler.step(metric_value) with the current metric value.

python
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer,
    mode='min',
    factor=0.1,
    patience=10,
    threshold=1e-4,
    verbose=True
)

# After each epoch:
scheduler.step(validation_loss)
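
If training may be interrupted and resumed, the scheduler's internal counters (best metric seen, bad-epoch count) should be checkpointed alongside the optimizer via state_dict. A minimal sketch, using a hypothetical single-parameter setup for illustration:

```python
import torch
import torch.optim as optim

# Hypothetical single-parameter "model" just to give the optimizer something to track
param = torch.nn.Parameter(torch.zeros(1))
optimizer = optim.SGD([param], lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=2)

scheduler.step(0.5)  # record one metric value; becomes the best seen so far

# Save the scheduler state together with the optimizer state
state = {
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}

# Later: restore into a freshly constructed scheduler
new_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=2)
new_scheduler.load_state_dict(state['scheduler'])
print(new_scheduler.best)  # the best metric seen so far, 0.5 here
```

Without this, a restarted run would forget how many non-improving epochs it had already seen and could delay the next reduction.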

Example

This example shows training a simple model on dummy data and using ReduceLROnPlateau to reduce the learning rate when validation loss stops improving.

python
import torch
import torch.nn as nn
import torch.optim as optim

# Simple model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)
    def forward(self, x):
        return self.linear(x)

model = SimpleModel()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# verbose=True is deprecated in recent PyTorch releases; print the LR manually instead
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=2)

# Dummy data
inputs = torch.randn(20, 10)
targets = torch.randn(20, 1)
criterion = nn.MSELoss()

val_losses = [0.5, 0.4, 0.35, 0.35, 0.36, 0.37, 0.34, 0.33, 0.33, 0.32]

for epoch in range(10):
    model.train()
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()

    val_loss = val_losses[epoch]
    print(f"Epoch {epoch+1}, Validation Loss: {val_loss:.4f}, Learning Rate: {optimizer.param_groups[0]['lr']:.5f}")
    scheduler.step(val_loss)
Output
Epoch 1, Validation Loss: 0.5000, Learning Rate: 0.10000
Epoch 2, Validation Loss: 0.4000, Learning Rate: 0.10000
Epoch 3, Validation Loss: 0.3500, Learning Rate: 0.10000
Epoch 4, Validation Loss: 0.3500, Learning Rate: 0.10000
Epoch 5, Validation Loss: 0.3600, Learning Rate: 0.10000
Epoch 6, Validation Loss: 0.3700, Learning Rate: 0.10000
Epoch 7, Validation Loss: 0.3400, Learning Rate: 0.05000
Epoch 8, Validation Loss: 0.3300, Learning Rate: 0.05000
Epoch 9, Validation Loss: 0.3300, Learning Rate: 0.05000
Epoch 10, Validation Loss: 0.3200, Learning Rate: 0.05000

With patience=2, the third consecutive non-improving epoch (epoch 6) triggers the reduction; because the print runs before scheduler.step, the halved rate first appears at epoch 7.
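
In real training, the validation loss comes from held-out data rather than a canned list. A sketch of that loop with the same kind of dummy setup (the train/validation split here is made up for illustration):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

model = nn.Linear(10, 1)  # stand-in for the SimpleModel above
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=2)
criterion = nn.MSELoss()

# Hypothetical train/validation split of dummy data
train_x, train_y = torch.randn(64, 10), torch.randn(64, 1)
val_x, val_y = torch.randn(16, 10), torch.randn(16, 1)

for epoch in range(5):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(train_x), train_y)
    loss.backward()
    optimizer.step()

    # Evaluate on held-out data with gradients disabled
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(val_x), val_y).item()
    scheduler.step(val_loss)  # pass the *validation* loss, not the training loss
```

The model.eval() / torch.no_grad() pair keeps the validation pass from affecting gradients or dropout/batch-norm statistics, so the monitored metric reflects generalization rather than training noise.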

Common Pitfalls

  • Not calling scheduler.step(metric) with the metric value after each epoch; calling it without arguments raises a TypeError because the metric is required.
  • Using the wrong mode ('min' vs 'max') for the metric can prevent learning rate reduction.
  • Setting patience too low or too high can cause premature or delayed learning rate changes.
  • Forgetting to pass the correct metric (e.g., validation loss) instead of training loss or accuracy.
python
wrong_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
# Wrong: calling step without the metric value
# wrong_scheduler.step()  # raises TypeError: metric argument is required

# Right: pass the monitored metric
# wrong_scheduler.step(validation_loss)
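
For a metric that should increase, such as validation accuracy, use mode='max' so the scheduler treats higher values as improvement. A small sketch with made-up accuracy numbers and a placeholder parameter:

```python
import torch
import torch.optim as optim

param = torch.nn.Parameter(torch.zeros(1))  # placeholder parameter for illustration
optimizer = optim.SGD([param], lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.5, patience=1)

# Made-up accuracies that plateau after the second epoch
for acc in [0.70, 0.80, 0.80, 0.79, 0.79]:
    scheduler.step(acc)
    print(optimizer.param_groups[0]['lr'])  # halves to 0.05 once accuracy stalls
```

Had mode been left at 'min', the scheduler would treat the rising accuracies as "no improvement" and cut the learning rate while the model was still getting better.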

Quick Reference

Parameter   Description                                                  Default
optimizer   Optimizer whose learning rate will be adjusted               Required
mode        'min' for metrics to minimize, 'max' for metrics to maximize 'min'
factor      Factor to multiply the learning rate by                      0.1
patience    Epochs with no improvement before reducing the LR            10
threshold   Minimum change to qualify as improvement                     1e-4
verbose     Print a message when the LR changes (deprecated)             False
cooldown    Epochs to wait after an LR change before resuming counting   0
min_lr      Lower bound on the learning rate                             0
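
The cooldown and min_lr entries above can be combined to keep repeated reductions from collapsing the learning rate to zero. A sketch with an intentionally aggressive patience=0 and an assumed floor of 1e-3:

```python
import torch
import torch.optim as optim

param = torch.nn.Parameter(torch.zeros(1))  # placeholder parameter for illustration
optimizer = optim.SGD([param], lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer,
    mode='min',
    factor=0.1,
    patience=0,      # reduce on the first non-improving epoch
    cooldown=1,      # then skip one epoch before counting bad epochs again
    min_lr=1e-3,     # never reduce below this learning rate
)

# A metric stuck at the same value: every epoch counts as "no improvement"
for _ in range(20):
    scheduler.step(1.0)

print(optimizer.param_groups[0]['lr'])  # clamped at min_lr
```

Even after 20 plateaued epochs the learning rate bottoms out at 1e-3 instead of shrinking indefinitely, and cooldown spaces the reductions apart so one bad stretch does not trigger several cuts in a row.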

Key Takeaways

  • Use ReduceLROnPlateau to lower the learning rate when a monitored metric stops improving.
  • Always call scheduler.step(metric_value) with the current metric after each epoch.
  • Set mode='min' for metrics like loss and mode='max' for metrics like accuracy.
  • Adjust patience and factor to control how quickly and by how much the learning rate changes.
  • verbose=True prints a message when the learning rate is reduced, though it is deprecated in recent PyTorch releases.