PyTorch · ~10 mins

no_grad context manager in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to disable gradient calculation during inference.

PyTorch
with torch.[1]():
    output = model(input_tensor)
A. no_grad
B. grad_enabled
C. enable_grad
D. set_grad_enabled
Common Mistakes
Using 'enable_grad' instead of 'no_grad' will enable gradients, not disable them.
Forgetting to use a context manager and calculating gradients unnecessarily.
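The completed snippet can be checked end-to-end. A minimal sketch, assuming a tiny placeholder nn.Linear model and random input (neither is part of the task itself):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)           # placeholder model for illustration
input_tensor = torch.randn(1, 4)

with torch.no_grad():             # answer A: disables gradient recording
    output = model(input_tensor)

# Nothing inside the context is added to the autograd graph
print(output.requires_grad)       # False
```

Because no graph is recorded, inference under no_grad uses less memory and runs slightly faster than a tracked forward pass.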
Task 2: Fill in the blank (medium)

Complete the code to prevent gradient tracking for the tensor operation.

PyTorch
with torch.no_grad():
    result = tensor.[1](2)
A. detach
B. backward
C. requires_grad_
D. pow
Common Mistakes
Using 'backward', which computes gradients rather than performing a tensor operation.
Using 'detach', which returns a tensor detached from the graph but does not raise values to a power.
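A quick sketch of the completed answer (pow): even when the input tensor itself requires gradients, an operation performed under no_grad produces a result with no gradient history. The example values here are illustrative only:

```python
import torch

tensor = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    result = tensor.pow(2)   # answer D: an ordinary tensor operation

# The result carries no gradient history, despite the tracked input
print(result.requires_grad)  # False
print(result.tolist())       # [1.0, 4.0, 9.0]
```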
Task 3: Fill in the blank (hard)

Fix the error in the code to correctly disable gradient calculation.

PyTorch
with torch.no_grad[1]:
    prediction = model(data)
A. []
B. {}
C. ()
D. .
Common Mistakes
Forgetting the parentheses: 'with torch.no_grad:' parses but fails at runtime, because no_grad is a class that must be instantiated.
Using square brackets or braces instead of parentheses.
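The failure mode can be demonstrated directly. torch.no_grad is a class, and the with statement needs an instance of it, which is why the parentheses (answer C) are required. The computation below stands in for the task's model call:

```python
import torch

# torch.no_grad is a class; `with` needs an instance, so () is required.
try:
    with torch.no_grad:          # missing () -> fails at runtime
        pass
except (AttributeError, TypeError):
    print("missing parentheses rejected")

with torch.no_grad():            # correct: instantiate the context manager
    prediction = torch.ones(3) * 2   # placeholder for model(data)

print(prediction.requires_grad)  # False
```

The exception type varies by Python version (AttributeError on older interpreters, TypeError on newer ones), which is why both are caught above.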
Task 4: Fill in the blank (hard)

Fill both blanks to create a dictionary of squared values without tracking gradients.

PyTorch
with torch.[1]():
    squares = {x: x[2]2 for x in range(1, 6)}
A. no_grad
B. **
C. *
D. enable_grad
Common Mistakes
Using '*' instead of '**' for squaring.
Using 'enable_grad', which enables gradients instead of disabling them.
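The completed comprehension, as a runnable sketch. Note that plain integer arithmetic is never tracked by autograd anyway; the no_grad context (answer A) matters only once tensor operations appear inside it:

```python
import torch

with torch.no_grad():                            # answer A
    squares = {x: x ** 2 for x in range(1, 6)}   # answer B: ** squares, * would multiply

print(squares)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}
```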
Task 5: Fill in the blank (hard)

Fill all three blanks to compute model output without gradients and convert it to a numpy array.

PyTorch
with torch.[1]():
    output = model(input_tensor)
    numpy_output = output.[2]().cpu().[3]()
A. no_grad
B. detach
C. numpy
D. requires_grad_
Common Mistakes
Forgetting to detach a tensor that tracks gradients before calling numpy() raises a RuntimeError.
Using 'requires_grad_', which toggles gradient tracking in place rather than detaching the tensor.
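The full no_grad-to-NumPy pipeline, as a sketch with a placeholder nn.Linear model standing in for the task's model:

```python
import torch
import torch.nn as nn
import numpy as np

model = nn.Linear(3, 2)           # placeholder model for illustration
input_tensor = torch.randn(1, 3)

with torch.no_grad():                               # answer A
    output = model(input_tensor)
    numpy_output = output.detach().cpu().numpy()    # answers B, then C

print(type(numpy_output))         # <class 'numpy.ndarray'>
```

Inside no_grad the output already has requires_grad set to False, so detach() is redundant here but harmless; the detach().cpu().numpy() chain is the defensive pattern that also works on gradient-tracking GPU tensors, where calling numpy() directly would fail.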