Complete the code to create a ReduceLROnPlateau scheduler that monitors validation loss.
scheduler = torch.optim.lr_scheduler.[1](optimizer, mode='min')
The ReduceLROnPlateau scheduler reduces the learning rate when a metric has stopped improving. Here, it monitors validation loss, so mode='min' is used.
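As a minimal runnable sketch, assuming an SGD optimizer over a placeholder linear model (the model, layer sizes, and initial learning rate are illustrative):

```python
import torch

# Placeholder model and optimizer; any torch.optim optimizer works here.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# mode='min': a decreasing monitored metric (validation loss) counts as
# improvement, so the learning rate is only reduced when the loss stops
# going down.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min')

print(optimizer.param_groups[0]['lr'])  # still 0.1 until the scheduler fires
```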
Complete the code to call the scheduler step function with the validation loss value.
scheduler.[1](val_loss)
The step() method of ReduceLROnPlateau is called with the current validation loss so the scheduler can decide whether the learning rate should be reduced.
Fix the error in the scheduler initialization by filling the blank with the correct patience value.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=[1])
The patience parameter expects an integer number of epochs to wait before reducing the learning rate. Here it should be the integer 5, not a string or other invalid value.
Fill both blanks to create a scheduler that reduces learning rate by a factor of 0.1 after 3 epochs without improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=[1], patience=[2])
The factor controls how much the learning rate is reduced (0.1 means the new rate is 10% of the old one). The patience is the number of epochs to wait without improvement (3 here) before reducing.
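A runnable sketch of this behavior, using a constant loss value of 1.0 as a stand-in for a stalled validation loss (the model and optimizer are placeholders):

```python
import torch

model = torch.nn.Linear(4, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=3)

# A constant loss never improves on the best value seen so far, so the
# scheduler's bad-epoch counter grows each step; once it exceeds
# patience, the learning rate is multiplied by factor (0.1 -> 0.01).
for epoch in range(6):
    scheduler.step(1.0)
    print(f"epoch {epoch}: lr = {optimizer.param_groups[0]['lr']:.4f}")
```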
Fill all three blanks to create a training loop that updates the scheduler with validation loss and prints the learning rate.
for epoch in range(num_epochs):
    train()
    val_loss = validate()
    scheduler.[1](val_loss)
    lr = optimizer.param_groups[0]['[2]']
    print(f"Epoch {epoch+1}, Learning Rate: {lr:.6f}")
    # The scheduler step method is called with the validation loss, and the learning rate key is '[3]'.
The scheduler's step() method is called with the validation loss. The learning rate is accessed via the key 'lr' in the optimizer's parameter groups.
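Putting the pieces together, a minimal end-to-end sketch; the train() and validate() calls from the exercise are replaced by a dummy improving loss so the snippet runs standalone:

```python
import torch

model = torch.nn.Linear(4, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min')

num_epochs = 3
for epoch in range(num_epochs):
    # Stand-in for real train() and validate() passes; this dummy loss
    # improves every epoch, so the learning rate is left unchanged.
    val_loss = 1.0 / (epoch + 1)
    scheduler.step(val_loss)
    lr = optimizer.param_groups[0]['lr']
    print(f"Epoch {epoch + 1}, Learning Rate: {lr:.6f}")
```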