
ReduceLROnPlateau in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Understanding ReduceLROnPlateau behavior

What does the ReduceLROnPlateau scheduler do in PyTorch?

A. It increases the learning rate after every epoch regardless of performance.
B. It resets the model weights to initial values when loss plateaus.
C. It reduces the learning rate when a monitored metric stops improving.
D. It stops training automatically when the validation loss stops decreasing.
💡 Hint

Think about what happens when the model's performance metric does not improve.
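The scheduler's behavior is easy to see in a minimal, self-contained sketch (the single dummy parameter and the flat loss values here are illustrative, not a real training loop):

```python
import torch
import torch.optim as optim

# A single dummy parameter stands in for a real model.
params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = optim.SGD(params, lr=0.1)

# patience=0: reduce as soon as one epoch passes without improvement.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=0
)

for loss in [1.0, 1.0, 1.0]:  # the monitored metric never improves
    scheduler.step(loss)
    print(optimizer.param_groups[0]['lr'])
# The first call only records the baseline; each later
# non-improving epoch halves the LR: 0.1 -> 0.05 -> 0.025.
```

Note that, unlike most schedulers, `scheduler.step()` is called with the monitored metric as an argument.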

Predict Output
intermediate
Output of learning rate after plateau

Given the following PyTorch code snippet, what will be the learning rate after 3 epochs if the validation loss does not improve?

PyTorch
import torch
import torch.optim as optim

model_params = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
optimizer = optim.SGD(model_params, lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=2)

val_losses = [0.5, 0.5, 0.5]

for epoch, loss in enumerate(val_losses):
    scheduler.step(loss)
    print(f"Epoch {epoch+1} LR: {optimizer.param_groups[0]['lr']}")
A.
Epoch 1 LR: 0.1
Epoch 2 LR: 0.01
Epoch 3 LR: 0.001
B.
Epoch 1 LR: 0.1
Epoch 2 LR: 0.1
Epoch 3 LR: 0.010000000000000002
C.
Epoch 1 LR: 0.1
Epoch 2 LR: 0.1
Epoch 3 LR: 0.1
D.
Epoch 1 LR: 0.1
Epoch 2 LR: 0.05
Epoch 3 LR: 0.025
💡 Hint

Count carefully: the first call to step() only establishes the baseline best value, and the LR is reduced only once the number of non-improving epochs exceeds the patience of 2.
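The timing becomes visible if the loop is extended by one epoch; this sketch is identical to the snippet above except for the extra loss value:

```python
import torch
import torch.optim as optim

model_params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = optim.SGD(model_params, lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=2
)

# Epoch 1 sets the baseline (best=0.5); epochs 2-3 are the two
# "patient" epochs; epoch 4 is the first to exceed the patience.
for epoch, loss in enumerate([0.5, 0.5, 0.5, 0.5]):
    scheduler.step(loss)
    print(f"Epoch {epoch+1} LR: {optimizer.param_groups[0]['lr']}")
# Epochs 1-3 print 0.1; only epoch 4 prints the reduced LR
# (0.1 * 0.1, subject to float rounding).
```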

Model Choice
advanced
Choosing correct scheduler for validation accuracy

You want to reduce the learning rate when validation accuracy stops improving. Which mode should you use in ReduceLROnPlateau?

A. 'max', because accuracy should increase to improve.
B. 'min', because accuracy should decrease to improve.
C. 'min', because accuracy is a loss metric.
D. 'max', because accuracy should decrease to improve.
💡 Hint

Think about whether accuracy is better when higher or lower.
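A quick sketch shows how mode='max' treats a stalled metric (the flat accuracy values are illustrative):

```python
import torch
import torch.optim as optim

params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = optim.SGD(params, lr=0.1)

# mode='max': a "good" epoch is one where the metric increases,
# so a plateauing accuracy counts as no improvement.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='max', factor=0.5, patience=0
)

for acc in [0.80, 0.80, 0.80]:  # accuracy stops improving
    scheduler.step(acc)
    print(optimizer.param_groups[0]['lr'])
# After the baseline epoch, each stalled epoch halves the LR.
```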

Hyperparameter
advanced
Effect of patience parameter in ReduceLROnPlateau

What is the effect of setting a high patience value in ReduceLROnPlateau?

A. The learning rate reduces immediately after one bad epoch.
B. The learning rate stays fixed and never changes.
C. The learning rate increases after many epochs without improvement.
D. The learning rate reduces only after many epochs without improvement.
💡 Hint

Patience controls how long the scheduler waits before reducing LR.
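Comparing two patience settings on the same flat loss curve makes the effect concrete (the helper function and loss values are illustrative):

```python
import torch
import torch.optim as optim

def lr_trace(patience, losses):
    """Return the LR after each epoch for a given patience setting."""
    params = [torch.nn.Parameter(torch.randn(2, 2))]
    optimizer = optim.SGD(params, lr=0.1)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='min', factor=0.5, patience=patience
    )
    trace = []
    for loss in losses:
        scheduler.step(loss)
        trace.append(optimizer.param_groups[0]['lr'])
    return trace

flat = [0.5] * 6  # validation loss never improves
print(lr_trace(patience=1, losses=flat))  # reduces early and repeatedly
print(lr_trace(patience=4, losses=flat))  # waits much longer to reduce
```

With patience=1 the LR halves twice within six flat epochs; with patience=4 it halves only once, on the last epoch.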

🔧 Debug
expert
Why does ReduceLROnPlateau not reduce LR as expected?

Consider this code snippet:

import torch
import torch.optim as optim

model_params = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
optimizer = optim.Adam(model_params, lr=0.01)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=1)

val_losses = [0.3, 0.3, 0.3, 0.3]

for epoch, loss in enumerate(val_losses):
    scheduler.step(loss)
    print(f"Epoch {epoch+1} LR: {optimizer.param_groups[0]['lr']}")

You expect the learning rate to drop at epoch 2 (since patience=1), but it first drops at epoch 3. Why?

A. Because the first call to scheduler.step() only establishes the baseline (best=0.3); the LR is reduced once the number of non-improving epochs exceeds the patience, which first happens on the third call.
B. Because the optimizer is Adam, which does not support learning rate changes.
C. Because the scheduler's default cooldown delays every reduction by one epoch.
D. Because patience=1 means the LR is reduced only after three consecutive epochs without improvement.
💡 Hint

Trace the scheduler's bad-epoch count by hand: the first step records the baseline, and a reduction fires only when that count exceeds the patience.
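The mechanics can be watched directly by inspecting the scheduler's num_bad_epochs counter. Note this counter is an internal implementation detail rather than documented public API, so treat this as a debugging sketch:

```python
import torch
import torch.optim as optim

params = [torch.nn.Parameter(torch.randn(2, 2))]
optimizer = optim.Adam(params, lr=0.01)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=1
)

# num_bad_epochs is an internal counter, inspected here only for debugging.
for epoch, loss in enumerate([0.3, 0.3, 0.3, 0.3]):
    scheduler.step(loss)
    print(f"Epoch {epoch+1}: bad_epochs={scheduler.num_bad_epochs}, "
          f"LR={optimizer.param_groups[0]['lr']}")
# Epoch 1 records the baseline; at epoch 3 the count exceeds
# patience=1, so the LR halves (0.01 -> 0.005) and the counter
# resets to 0.
```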