Complete the code to create a learning rate scheduler that decreases the learning rate by a factor of 0.1 every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=[1], gamma=0.1)
The StepLR scheduler reduces the learning rate every step_size epochs. Here, setting step_size=10 means the learning rate drops every 10 epochs.
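StepLR's decay is equivalent to multiplying the base learning rate by gamma once per step_size epochs, i.e. lr = base_lr * gamma ** (epoch // step_size). A minimal pure-Python sketch of that formula (the base rate 0.01 is an assumed starting value):

```python
def step_lr(base_lr, epoch, step_size=10, gamma=0.1):
    # Learning rate StepLR would report at a given epoch:
    # one gamma multiplication per completed step_size window.
    return base_lr * gamma ** (epoch // step_size)

step_lr(0.01, 0)    # base rate, no decay yet
step_lr(0.01, 9)    # still in the first 10-epoch window
step_lr(0.01, 10)   # first drop: multiplied by 0.1 once
step_lr(0.01, 25)   # two windows completed: multiplied by 0.1 twice
```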
Complete the code to initialize a cosine annealing learning rate scheduler with 50 epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=[1])
The T_max parameter sets the number of epochs for one cosine annealing cycle. Here, it should be 50 to match the total epochs: a smaller T_max would cause multiple cosine cycles within training, and T_max takes an epoch count, not a learning rate value.
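In the simple one-cycle case, cosine annealing follows the closed form eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2. A pure-Python sketch of that curve (base_lr=0.1 and eta_min=0.0 are assumed values):

```python
import math

def cosine_lr(base_lr, epoch, T_max=50, eta_min=0.0):
    # Closed-form cosine annealing value at a given epoch:
    # starts at base_lr, reaches eta_min at epoch T_max.
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max)) / 2

cosine_lr(0.1, 0)    # start of the cycle: full base rate
cosine_lr(0.1, 25)   # halfway: roughly half the base rate
cosine_lr(0.1, 50)   # end of the cycle: annealed down to eta_min
```

Setting T_max below the total epoch count would make the rate rise again mid-training, which is why it should match the 50 training epochs here.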
Fix the error in the code to correctly update the learning rate scheduler after each epoch.
for epoch in range(num_epochs):
    train()
    validate()
    [1]
After each epoch, you must call scheduler.step() to update the learning rate according to the scheduler's policy. Common mistakes are calling optimizer.step() in its place, or nonexistent methods like update() or reset().
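The corrected loop calls scheduler.step() once per epoch, after training and validation. A runnable sketch of that call order, using a simplified stand-in class (TinyStepLR below is an illustration, not PyTorch's implementation):

```python
class TinyStepLR:
    """Simplified stand-in for torch.optim.lr_scheduler.StepLR."""
    def __init__(self, lr, step_size=10, gamma=0.1):
        self.lr, self.step_size, self.gamma = lr, step_size, gamma
        self.epoch = 0

    def step(self):
        # Advance one epoch; decay the rate at every step_size boundary.
        self.epoch += 1
        if self.epoch % self.step_size == 0:
            self.lr *= self.gamma

def train(): pass      # placeholder training pass
def validate(): pass   # placeholder validation pass

scheduler = TinyStepLR(lr=0.01, step_size=10, gamma=0.1)
num_epochs = 20
for epoch in range(num_epochs):
    train()
    validate()
    scheduler.step()   # update the learning rate once per epoch

# After 20 epochs the rate has been decayed twice (at epochs 10 and 20).
```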
Fill both blanks to create a learning rate scheduler that reduces the learning rate by 10% every 5 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=[1], gamma=[2])
The step_size is 5 to reduce every 5 epochs, and gamma is 0.9 to reduce the learning rate by 10% (multiply by 0.9). A common mistake is gamma=0.1, which reduces the learning rate by 90%, not 10%.
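The gamma distinction is easy to check numerically: multiplying by 0.9 removes 10% of the rate, while multiplying by 0.1 removes 90%. A quick comparison, assuming a base rate of 0.01:

```python
def step_lr(base_lr, epoch, step_size, gamma):
    # Rate StepLR would report at a given epoch.
    return base_lr * gamma ** (epoch // step_size)

base_lr = 0.01

# gamma=0.9: a 10% reduction after the first 5 epochs (~0.009)
lr_correct = step_lr(base_lr, 5, step_size=5, gamma=0.9)

# gamma=0.1: a 90% reduction after the first 5 epochs (~0.001)
lr_wrong = step_lr(base_lr, 5, step_size=5, gamma=0.1)
```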
Fill all three blanks to create a dictionary comprehension that maps each epoch to its learning rate from the lrs list (the scheduler's history), only for epochs where the learning rate is greater than 0.001 and the epoch number is less than 10.
lr_history = {epoch: lr for epoch, lr in enumerate(lrs) if lr [1] [2] and epoch [3] 10}
This comprehension keeps epochs where the learning rate is greater than 0.001 and the epoch number is less than 10.
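With the blanks filled as > 0.001 and < 10, the comprehension can be checked against a small hand-made lrs list (the values below are illustrative, not from a real run):

```python
lrs = [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005]  # example scheduler history

# Keep epochs whose rate is strictly above 0.001, within the first 10 epochs.
lr_history = {epoch: lr for epoch, lr in enumerate(lrs)
              if lr > 0.001 and epoch < 10}

print(lr_history)  # {0: 0.1, 1: 0.05, 2: 0.01, 3: 0.005}
```

Note that 0.001 itself is excluded: the condition is strictly greater than, so epoch 4 does not appear in the result.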