PyTorch · ~10 mins

Why learning rate strategy affects convergence in PyTorch - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to set the learning rate for the optimizer.

PyTorch
optimizer = torch.optim.SGD(model.parameters(), lr=[1])
A. loss
B. model
C. 0.01
D. epoch
Common Mistakes
Using a variable name instead of a number for learning rate.
Setting learning rate too high or zero.
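A minimal sketch of the completed line, using a tiny `nn.Linear` as a stand-in model (any `nn.Module` behaves the same). The optimizer keeps the learning rate in its `param_groups`, which is a handy way to verify what you set:

```python
import torch
import torch.nn as nn

# Stand-in model for illustration; the real model can be anything.
model = nn.Linear(4, 2)

# The blank takes a numeric value such as 0.01, not a variable like loss or epoch.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The optimizer stores the learning rate per parameter group.
print(optimizer.param_groups[0]["lr"])  # 0.01
```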
2. Fill in the blank (medium)

Complete the code to apply a learning rate scheduler that reduces the learning rate every 10 epochs.

PyTorch
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=[1], gamma=0.1)
A. 10
B. 0.1
C. 20
D. 5
Common Mistakes
Confusing gamma with step_size.
Using a float instead of an integer for step_size.
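To see why `step_size` must be the integer 10 here, a short sketch that records the learning rate over 20 epochs (the model and optimizer are illustrative stand-ins). `step_size` counts epochs between decays; `gamma` is the multiplicative factor applied at each decay:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# step_size=10: decay every 10 epochs; gamma=0.1: multiply the lr by 0.1.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

lrs = []
for epoch in range(20):
    lrs.append(scheduler.get_last_lr()[0])
    optimizer.step()       # in a real loop this follows loss.backward()
    scheduler.step()

# Epochs 0-9 use lr 0.01; from epoch 10 the lr is reduced to ~0.001.
print(lrs[0], lrs[9], lrs[10])
```

Swapping the two arguments (`step_size=0.1, gamma=10`) would both crash on the float step size and grow the learning rate instead of shrinking it.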
3. Fill in the blank (hard)

Fix the error in the training loop to update the learning rate scheduler correctly.

PyTorch
for epoch in range(num_epochs):
    train(model, optimizer)
    loss = validate(model)
    [1]  # update learning rate scheduler
A. model.train()
B. optimizer.step()
C. loss.backward()
D. scheduler.step()
Common Mistakes
Calling optimizer.step() instead of scheduler.step().
Forgetting to call scheduler.step() at all.
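A runnable sketch of the corrected loop. The `train` and `validate` helpers below are hypothetical stubs invented so the snippet is self-contained; the point is that `scheduler.step()` runs once per epoch, after the optimizer has updated the weights:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the helpers named in the question.
def train(model, optimizer):
    optimizer.zero_grad()
    loss = model(torch.ones(1, 4)).sum()
    loss.backward()
    optimizer.step()       # optimizer.step() updates weights inside training

def validate(model):
    with torch.no_grad():
        return model(torch.ones(1, 4)).sum().item()

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

num_epochs = 3
for epoch in range(num_epochs):
    train(model, optimizer)
    loss = validate(model)
    scheduler.step()       # the fix: advance the scheduler once per epoch

# After 3 epochs with gamma=0.5, the lr has been halved three times.
print(scheduler.get_last_lr()[0])
```

Calling `optimizer.step()` here instead would update the weights a second time and leave the learning rate frozen at its initial value.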
4. Fill in the blank (hard)

Fill both blanks to create a dictionary comprehension that maps epoch numbers to learning rates using scheduler.get_last_lr().

PyTorch
lr_dict = {epoch: [1] for epoch in range(num_epochs) if epoch [2] 5}
A. scheduler.get_last_lr()[0]
B. >
C. <
D. epoch
Common Mistakes
Using epoch instead of scheduler.get_last_lr()[0] for learning rate.
Using wrong comparison operator in the condition.
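A sketch of the comprehension with the blanks filled (shown here with `<` as the filter; the intended operator depends on the exercise's answer key). The scheduler setup is illustrative. `get_last_lr()` returns a list with one entry per parameter group, so `[0]` selects the first group's learning rate:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

num_epochs = 10
# Value: the current lr via get_last_lr()[0]; condition: filter on the epoch.
lr_dict = {epoch: scheduler.get_last_lr()[0] for epoch in range(num_epochs) if epoch < 5}
print(sorted(lr_dict))  # [0, 1, 2, 3, 4]
```

Note that this comprehension records the scheduler's *current* learning rate for each listed epoch; in a real training loop you would call `scheduler.step()` between epochs so the recorded values actually change.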
5. Fill in the blank (hard)

Fill all three blanks to complete a training loop that steps the optimizer, steps the scheduler, and prints the current learning rate.

PyTorch
for epoch in range(num_epochs):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.[1]()
    scheduler.[2]()
    print(f"Epoch {epoch}: lr = {scheduler.get_last_lr()[[3]]}")
A. step
C. 0
D. zero_grad
Common Mistakes
Calling zero_grad() instead of step() on optimizer.
Using wrong index for get_last_lr().
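The completed loop, made runnable with hypothetical `data`, `target`, and model definitions (the question leaves these unspecified). `zero_grad()` already appears at the top of the loop, so blank [1] must be `step`; the scheduler advances after the optimizer; and `get_last_lr()` returns a list, so blank [3] is the index `0`:

```python
import torch
import torch.nn as nn

# Hypothetical inputs so the loop runs end to end.
data = torch.randn(8, 4)
target = torch.randn(8, 2)

model = nn.Linear(4, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

num_epochs = 2
for epoch in range(num_epochs):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()       # blank [1]: apply the gradient update
    scheduler.step()       # blank [2]: then advance the schedule
    print(f"Epoch {epoch}: lr = {scheduler.get_last_lr()[0]}")  # blank [3]: index 0
```

Answering `zero_grad` for blank [1] would discard the gradients just computed by `loss.backward()`, so the model would never learn.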