Complete the code to enable automatic mixed precision during the forward pass.
with torch.cuda.amp.[1]():
    output = model(input)
The torch.cuda.amp.autocast() context manager enables automatic mixed precision for the operations inside it. no_grad disables gradient calculation but does not enable mixed precision, and set_grad_enabled and inference_mode likewise control gradient tracking, not AMP.
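A minimal sketch of the answer in context, using an illustrative toy model and input (the names `model` and `input` are assumptions); `enabled=torch.cuda.is_available()` is added so the snippet also runs on CPU-only machines, where autocast would otherwise be pointless:

```python
import torch
import torch.nn as nn

# Illustrative toy model and batch; any model/input would do.
model = nn.Linear(8, 4)
input = torch.randn(2, 8)

use_cuda = torch.cuda.is_available()
if use_cuda:
    model = model.cuda()
    input = input.cuda()

# Blank [1] = autocast: eligible ops run in float16 on CUDA.
# On a CPU-only machine enabled=False makes this a transparent no-op.
with torch.cuda.amp.autocast(enabled=use_cuda):
    output = model(input)
```

On a GPU the matmul output comes back as float16; with autocast disabled it stays float32.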
Complete the code to create a GradScaler for mixed precision training.
scaler = torch.cuda.amp.[1]()
The GradScaler helps scale gradients to avoid underflow during mixed precision training. autocast is the forward-pass context manager, not a scaler, and GradManager and Scaler are not PyTorch classes.
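A sketch of creating the scaler; the `enabled=` argument is an addition here so the object is a harmless no-op when CUDA is unavailable:

```python
import torch

use_cuda = torch.cuda.is_available()
# Blank [1] = GradScaler. With enabled=False (no GPU), scale(),
# step(), and update() all pass through without scaling anything.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)
```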
Fix the error in the code to properly scale the loss before backward pass.
scaler.scale(loss).[1]()
The backward() method computes gradients of the scaled loss. Calling step() or update() on the scaled loss is wrong (those are GradScaler methods, not tensor methods), as is zero_grad(), which belongs to the optimizer.
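A minimal end-to-end sketch of the scaled backward pass, with an assumed toy linear model and MSE loss (illustrative choices, not part of the question), hedged to work without a GPU:

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = nn.Linear(4, 1).to(device)
x = torch.randn(3, 4, device=device)
target = torch.randn(3, 1, device=device)

scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.mse_loss(model(x), target)

# scale() multiplies the loss by the current scale factor so small
# fp16 gradients don't underflow; blank [1] = backward() propagates it.
scaler.scale(loss).backward()
```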
Fill both blanks to correctly update the optimizer and scaler after backward pass.
scaler.[1](optimizer) scaler.[2]()
The correct answers are step() and update(), not zero_grad(). After the backward pass on the scaled loss, scaler.step(optimizer) unscales the gradients and updates the optimizer's parameters (skipping the step if any gradient is inf or NaN), and scaler.update() adjusts the scale factor for the next iteration.
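A short sketch of the step/update pair in isolation, again with an assumed toy model and the CPU-safe `enabled=` hedge; the before/after comparison just confirms the optimizer actually moved the weights:

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = nn.Linear(4, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(3, 4, device=device)
target = torch.randn(3, 1, device=device)
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()

before = model.weight.detach().clone()
scaler.step(optimizer)   # blank [1]: unscale grads, apply optimizer step
scaler.update()          # blank [2]: adjust scale for the next iteration
after = model.weight.detach().clone()
```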
Fill all three blanks to complete the training step using AMP correctly.
optimizer.[1]()
with torch.cuda.amp.[2]():
    output = model(input)
    loss = loss_fn(output, target)
scaler.scale(loss).[3]()
scaler.step(optimizer)
scaler.update()
Before the forward pass, optimizer.zero_grad() clears old gradients. The autocast() context enables mixed precision during the forward pass. Then scaler.scale(loss).backward() computes scaled gradients, scaler.step(optimizer) applies the parameter update, and scaler.update() adjusts the scale factor.
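The full pattern above can be sketched as a runnable training loop. The model, loss, optimizer, and toy data below are illustrative assumptions; `enabled=torch.cuda.is_available()` is added so the loop also runs (without mixed precision) on CPU-only machines:

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = nn.Linear(8, 1).to(device)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

losses = []
for _ in range(5):
    input = torch.randn(16, 8, device=device)
    target = input.sum(dim=1, keepdim=True)  # toy learnable target

    optimizer.zero_grad()                             # [1] clear old grads
    with torch.cuda.amp.autocast(enabled=use_cuda):   # [2] AMP forward
        output = model(input)
        loss = loss_fn(output, target)
    scaler.scale(loss).backward()                     # [3] scaled backward
    scaler.step(optimizer)
    scaler.update()
    losses.append(loss.item())
```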