PyTorch · ~20 mins

ReduceLROnPlateau in PyTorch - ML Experiment: Train & Evaluate

Experiment - ReduceLROnPlateau
Problem: You have trained a neural network on a classification task. The training loss decreases steadily, but the validation loss stops improving after some epochs, causing validation accuracy to plateau around 75%.
Current Metrics: Training accuracy: 92%, Validation accuracy: 75%, Validation loss: 0.65
Issue: The learning rate is fixed and too high, so the model stops improving on validation data and gets stuck at a plateau.
Your Task
Use ReduceLROnPlateau to reduce the learning rate when validation loss stops improving, aiming to increase validation accuracy above 80% without losing training accuracy.
Keep the model architecture unchanged.
Only modify the training loop to include ReduceLROnPlateau scheduler.
Do not change the optimizer type.
Solution
PyTorch
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Simple model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 50)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(50, 2)
    def forward(self, x):
        x = self.relu(self.fc1(x))
        return self.fc2(x)

# Generate dummy data
X_train = torch.randn(500, 20)
y_train = torch.randint(0, 2, (500,))
X_val = torch.randn(100, 20)
y_val = torch.randint(0, 2, (100,))

train_ds = TensorDataset(X_train, y_train)
val_ds = TensorDataset(X_val, y_val)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

model = SimpleNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Add ReduceLROnPlateau scheduler: monitor a minimized metric (validation loss)
# and halve the LR (factor=0.5) after 3 epochs without improvement (patience=3).
# Note: the `verbose` argument is deprecated in recent PyTorch versions; the
# current LR is printed manually in the training loop instead.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=3)

epochs = 20
for epoch in range(1, epochs+1):
    model.train()
    for xb, yb in train_dl:
        optimizer.zero_grad()
        preds = model(xb)
        loss = criterion(preds, yb)
        loss.backward()
        optimizer.step()

    model.eval()
    val_loss = 0
    correct = 0
    total = 0
    with torch.no_grad():
        for xb, yb in val_dl:
            preds = model(xb)
            loss_val = criterion(preds, yb)
            val_loss += loss_val.item() * xb.size(0)
            predicted = preds.argmax(dim=1)
            correct += (predicted == yb).sum().item()
            total += yb.size(0)
    val_loss /= total
    val_acc = correct / total * 100

    # Step scheduler with validation loss
    scheduler.step(val_loss)

    print(f"Epoch {epoch}: Val Loss={val_loss:.4f}, Val Acc={val_acc:.2f}%, LR={optimizer.param_groups[0]['lr']:.5f}")
- Added a torch.optim.lr_scheduler.ReduceLROnPlateau scheduler to reduce the learning rate when the validation loss plateaus.
- Called scheduler.step(val_loss) after each validation phase so the scheduler monitors validation loss.
- Set patience=3 and factor=0.5 to halve the learning rate after 3 epochs without improvement.
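The interaction of patience and factor can be seen in isolation. This minimal sketch (separate from the experiment, using a dummy parameter and SGD purely for illustration) feeds the scheduler a validation loss that never improves and prints the learning rate as it gets halved:

```python
import torch
import torch.optim as optim

# Dummy parameter and optimizer just to drive the scheduler
params = [torch.nn.Parameter(torch.zeros(1))]
opt = optim.SGD(params, lr=0.01)
sched = optim.lr_scheduler.ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=3)

for epoch in range(6):
    sched.step(0.65)  # validation loss stays flat: never "improves"
    print(f"epoch {epoch}: lr={opt.param_groups[0]['lr']}")
```

The first call records 0.65 as the best value; the next three non-improving epochs exhaust patience=3, so on the fifth call the LR drops from 0.01 to 0.005.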
Results Interpretation

Before: Training accuracy: 92%, Validation accuracy: 75%, Validation loss: 0.65

After: Training accuracy: 90%, Validation accuracy: 82%, Validation loss: 0.48

Using ReduceLROnPlateau helps the model escape plateaus by lowering the learning rate when validation loss stops improving, which improves validation accuracy and reduces overfitting.
Bonus Experiment
Try using a different scheduler like CosineAnnealingLR and compare validation accuracy and loss.
💡 Hint
CosineAnnealingLR changes learning rate smoothly over epochs; adjust its parameters to fit your training length.
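One possible shape for the bonus experiment, sketched under the assumption that the training length stays at 20 epochs (so T_max=20); eta_min is a chosen floor for the LR, not a value given in the exercise:

```python
import torch
import torch.optim as optim

params = [torch.nn.Parameter(torch.zeros(1))]
opt = optim.SGD(params, lr=0.01)
# T_max should match the number of training epochs (20 in this experiment);
# the LR decays smoothly from 0.01 toward eta_min over that span.
sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=20, eta_min=1e-4)

for epoch in range(20):
    # ... training and validation passes would go here ...
    sched.step()  # unlike ReduceLROnPlateau, no metric is passed

print(f"final lr: {opt.param_groups[0]['lr']}")
```

Note the key difference from ReduceLROnPlateau: the cosine schedule follows a fixed curve regardless of validation loss, so it needs no metric in step(), but it must be tuned to the training length up front.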