PyTorch · ~10 mins

Mixed precision training (AMP) in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to enable automatic mixed precision during the forward pass.

PyTorch
with torch.cuda.amp.[1]():
    output = model(input)
A. autocast
B. no_grad
C. grad_enabled
D. inference_mode
Common Mistakes
Using no_grad disables gradient calculation but does not enable mixed precision.
Using grad_enabled or inference_mode does not enable AMP.
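A minimal sketch of what autocast does. The model, shapes, and data below are made up for illustration; `torch.autocast` is the device-agnostic form of `torch.cuda.amp.autocast`, which lets the same code run on a CPU-only machine (where autocast uses bfloat16 instead of float16).

```python
import torch

# Pick whichever device is available; autocast works on both.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 4).to(device)
x = torch.randn(2, 8, device=device)

with torch.autocast(device_type=device):
    output = model(x)  # the matmul runs in a lower-precision dtype

# Activations are half precision; parameters stay float32.
print(output.dtype)        # float16 on CUDA, bfloat16 on CPU
print(model.weight.dtype)  # torch.float32
```

Note that only the *outputs* inside the context are cast; the model's parameters remain float32, which is the point of *mixed* precision.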
2. Fill in the blank (medium)

Complete the code to create a GradScaler for mixed precision training.

PyTorch
scaler = torch.cuda.amp.[1]()
A. autocast
B. GradScaler
C. GradManager
D. Scaler
Common Mistakes
Using autocast instead of GradScaler.
Using non-existent classes like GradManager or Scaler.
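A short sketch of creating and using a `GradScaler`. The `enabled=` flag is a standard trick (not part of the task's snippet) that turns the scaler into a transparent no-op on CPU-only machines, so the same code runs everywhere; the loss value here is made up.

```python
import torch

# GradScaler multiplies the loss by a large factor so small float16
# gradients don't underflow to zero during backward.
use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

loss = torch.tensor(2.0, device=device, requires_grad=True)
scaled = scaler.scale(loss)  # loss * scale factor (or unchanged if disabled)
```

When disabled, `scale()` returns the loss unchanged and `step()`/`update()` degrade to plain `optimizer.step()` and a no-op, which is why the same training loop can serve both precisions.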
3. Fill in the blank (hard)

Complete the code to run the backward pass on the scaled loss.

PyTorch
scaler.scale(loss).[1]()
A. backward
B. step
C. update
D. zero_grad
Common Mistakes
Calling step() or update() on the scaled loss instead of backward().
Calling zero_grad() on the scaled loss.
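A runnable sketch of the scaled backward pass. The model, shapes, and loss below are illustrative; the `enabled=` flag (an addition, not part of the task) lets the snippet fall back to plain float32 on CPU.

```python
import torch

use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"
model = torch.nn.Linear(4, 1).to(device)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

out = model(torch.randn(3, 4, device=device))
loss = out.pow(2).mean()

# backward() is called on the *scaled* loss, so the gradients flowing
# back are scaled too and stay representable in float16.
scaler.scale(loss).backward()
print(model.weight.grad is not None)  # gradients were populated
```

`step()` and `update()` belong to the scaler, not to the loss tensor, which is why only `backward` fits this blank.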
4. Fill in the blank (hard)

Fill both blanks to correctly update the optimizer and scaler after backward pass.

PyTorch
scaler.[1](optimizer)
scaler.[2]()
A. step
B. update
C. zero_grad
D. scale
Common Mistakes
Calling zero_grad() instead of step() or update().
Swapping the order of step() and update().
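A sketch of the `step`/`update` pair in context. The model, optimizer, learning rate, and data are made up; the `enabled=` flag is an addition so the snippet also runs without a GPU.

```python
import torch

use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"
model = torch.nn.Linear(4, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

before = model.weight.detach().clone()
loss = model(torch.randn(8, 4, device=device)).pow(2).mean()
scaler.scale(loss).backward()

scaler.step(optimizer)  # unscales gradients, checks for inf/NaN,
                        # then calls optimizer.step() if they are finite
scaler.update()         # adjusts the scale factor for the next iteration
```

The order matters: `update()` recalibrates the scale based on what `step()` just observed, so calling it first would adjust the scale before the inf/NaN check has happened.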
5. Fill in the blank (hard)

Fill all three blanks to complete the training step using AMP correctly.

PyTorch
optimizer.[1]()
with torch.cuda.amp.[2]():
    output = model(input)
    loss = loss_fn(output, target)
scaler.scale(loss).[3]()
scaler.step(optimizer)
scaler.update()
A. zero_grad
B. autocast
C. backward
D. step
Common Mistakes
Forgetting to zero gradients before backward pass.
Not using autocast context for forward pass.
Calling step on scaled loss instead of backward.
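Putting it all together, here is a runnable sketch of the full AMP step from this task, looped a few times. The model, learning rate, and synthetic data are made up; `torch.autocast(device_type=...)` and the `enabled=` flag are additions so the identical code degrades to plain float32 on a CPU-only machine.

```python
import torch

use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"
model = torch.nn.Linear(4, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

input = torch.randn(16, 4, device=device)
target = torch.randn(16, 1, device=device)

losses = []
for _ in range(20):
    optimizer.zero_grad()                  # 1. clear stale gradients
    with torch.autocast(device_type=device, enabled=use_amp):
        output = model(input)              # 2. forward in mixed precision
        loss = loss_fn(output, target)
    scaler.scale(loss).backward()          # 3. backward on the scaled loss
    scaler.step(optimizer)                 # 4. unscale + optimizer step
    scaler.update()                        # 5. recalibrate the scale
    losses.append(loss.item())
```

Note that `backward()`, `step()`, and `update()` happen *outside* the autocast context: autocast is only needed for the forward pass, and the optimizer update should run in full precision.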