
ReduceLROnPlateau in PyTorch

Introduction
ReduceLROnPlateau helps your model train better by lowering the learning rate when progress stalls. It is useful:
When your model's training loss stops improving for several epochs.
When validation accuracy stays the same across several checks.
When you want to avoid wasting time with a learning rate that is too high.
When you want the learning rate adjusted automatically, without manual tuning.
Syntax
PyTorch
torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=False, threshold=1e-4, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-8)
mode='min' means the scheduler waits for the monitored metric to decrease (e.g., loss); use mode='max' for metrics that should increase (e.g., accuracy).
factor is the multiplier applied to the learning rate on each reduction (e.g., factor=0.1 divides the learning rate by 10).
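To make the factor semantics concrete, here is a minimal sketch (the toy model and metric values are made up for illustration). With patience=0, every non-improving check triggers a reduction, and each reduction multiplies the current learning rate by factor.
PyTorch

```python
import torch

# Toy setup purely to demonstrate how `factor` rescales the LR
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=1.0)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode='min', factor=0.1, patience=0)

# Feed a metric that never improves after the first check,
# so each later check counts as a "bad" epoch and triggers a reduction
for metric in [1.0, 1.0, 1.0]:
    sched.step(metric)

# After two reductions: 1.0 -> 0.1 -> 0.01 (up to float rounding)
print(opt.param_groups[0]['lr'])
```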
Examples
Halves the learning rate when the loss fails to improve for more than 5 consecutive checks.
PyTorch
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=5)
Divides the learning rate by 10 when the accuracy fails to improve for more than 3 consecutive checks (mode='max' because higher accuracy is better).
PyTorch
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.1, patience=3)
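A short sketch of the mode='max' variant in action, using made-up validation accuracies: once the accuracy has failed to improve for more than patience=3 checks, the learning rate is multiplied by factor=0.1.
PyTorch

```python
import torch

# Toy model and optimizer; the accuracies below are simulated
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode='max', factor=0.1, patience=3)

# Improvement stops after the second check; the plateau lasts
# 4 checks (> patience=3), so exactly one reduction fires
accuracies = [0.70, 0.75, 0.75, 0.75, 0.75, 0.75]
for acc in accuracies:
    sched.step(acc)

print(opt.param_groups[0]['lr'])  # reduced from 0.1 to roughly 0.01
```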
Sample Model
This example shows how ReduceLROnPlateau lowers the learning rate once the loss has failed to improve for more than patience=2 consecutive epochs.
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim

# Simple model
model = nn.Linear(10, 1)

# Optimizer
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Scheduler to reduce LR when loss plateaus
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=2, verbose=True)

# Dummy data
inputs = torch.randn(5, 10)
targets = torch.randn(5, 1)

criterion = nn.MSELoss()

# Simulated loss values: the plateau at 0.4 lasts 3 non-improving
# epochs (> patience=2), which triggers one LR reduction
losses = [0.5, 0.4, 0.4, 0.4, 0.4, 0.3, 0.3]

for epoch, loss_val in enumerate(losses, 1):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()

    # Step the scheduler with the simulated loss_val to demonstrate the
    # reduction; in real training you would pass the validation loss
    scheduler.step(loss_val)
    print(f"Epoch {epoch}, Loss: {loss_val}, Learning Rate: {optimizer.param_groups[0]['lr']}")
Important Notes
Remember to call scheduler.step(metric) after each evaluation, passing the metric you want to monitor (typically validation loss).
verbose=True prints a message whenever the learning rate changes (note that this argument is deprecated in recent PyTorch releases).
patience is how many non-improving checks to tolerate; the reduction happens once that count is exceeded.
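Putting these notes together, the usual pattern is to call scheduler.step(val_loss) once per epoch, right after validation. The sketch below uses random data purely for illustration; the shapes and hyperparameters are assumptions, not requirements.
PyTorch

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=2)
criterion = nn.MSELoss()

# Random toy data standing in for real train/validation splits
x_train, y_train = torch.randn(32, 10), torch.randn(32, 1)
x_val, y_val = torch.randn(8, 10), torch.randn(8, 1)

for epoch in range(5):
    # One training step per "epoch" for brevity
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    # Validate, then step the scheduler on the validation loss
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(x_val), y_val)
    scheduler.step(val_loss)  # monitor validation loss, not training loss
    print(f"epoch {epoch + 1}: lr={optimizer.param_groups[0]['lr']}")
```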
Summary
ReduceLROnPlateau lowers the learning rate when progress stalls.
It helps models train better by adjusting the learning rate automatically.
Use it by passing the monitored metric to scheduler.step() after each epoch.