PyTorch · ~10 mins

Detaching from computation graph in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to detach the tensor from the computation graph.

PyTorch
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2
z = y.sum()
detached = y.[1]()
A. requires_grad_
B. clone
C. detach
D. backward
Common Mistakes
Using clone() copies the tensor, but the copy remains part of the computation graph.
Using backward() is for computing gradients, not detaching.
Using requires_grad_() changes gradient tracking but does not detach.
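For reference, here is the completed snippet with the intended answer (detach) filled in; it can be run to verify the behavior:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2
z = y.sum()

# detach() returns a new tensor that shares y's data but is
# cut off from the computation graph: no gradient flows through it.
detached = y.detach()

print(detached.requires_grad)  # False
print(y.requires_grad)         # True: the original tensor is unchanged
```

Note that detach() does not copy data; the detached tensor shares storage with the original.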
Task 2: Fill in the blank (medium)

Complete the code to convert a detached tensor back to a NumPy array.

PyTorch
import torch
x = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)
y = x * 3
detached = y.[1]()
numpy_array = detached.numpy()
A. detach
B. clone
C. cpu
D. numpy
Common Mistakes
Trying to convert a tensor that requires grad directly to NumPy raises a RuntimeError.
Using clone() instead of detach() does not stop gradient tracking.
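A runnable version of this task with detach filled in as the intended answer:

```python
import torch

x = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)
y = x * 3

# Detach first: numpy() refuses tensors that require grad.
detached = y.detach()
numpy_array = detached.numpy()

print(numpy_array)  # [12. 15. 18.]
```

The resulting NumPy array shares memory with the detached tensor, so in-place edits to one are visible in the other.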
Task 3: Fill in the blank (hard)

Fix the error in the code by detaching the tensor before converting to NumPy.

PyTorch
import torch
x = torch.tensor([7.0, 8.0, 9.0], requires_grad=True)
y = x * 4
numpy_array = y.[1]().numpy()
A. detach
B. clone
C. cpu
D. requires_grad_
Common Mistakes
Calling numpy() directly on a tensor with requires_grad=True raises a RuntimeError.
Using clone() does not detach the tensor.
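A sketch that reproduces the error this task is about and then applies the fix (detach in the blank):

```python
import torch

x = torch.tensor([7.0, 8.0, 9.0], requires_grad=True)
y = x * 4

# Converting directly fails because y still requires grad.
try:
    y.numpy()
except RuntimeError as e:
    print(e)  # explains that detach() must be called first

# Fix: detach before converting.
numpy_array = y.detach().numpy()
print(numpy_array)  # [28. 32. 36.]
```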
Task 4: Fill in the blank (hard)

Fill both blanks to detach a tensor and move it to CPU before converting to NumPy.

PyTorch
import torch
x = torch.tensor([10.0, 11.0, 12.0], requires_grad=True)
y = x * 5
result = y.[1]().[2]().numpy()
A. detach
B. cpu
C. clone
D. numpy
Common Mistakes
Placing numpy() in an earlier blank raises a RuntimeError, because numpy() must come after the tensor has been detached.
Using clone() instead of detach() does not stop gradient tracking.
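The completed chain for this task (detach, then cpu). On a tensor that already lives on the CPU, cpu() is a no-op, but the full chain is the safe pattern when the tensor may be on a GPU:

```python
import torch

x = torch.tensor([10.0, 11.0, 12.0], requires_grad=True)
y = x * 5

# detach() leaves the computation graph, cpu() moves the data to
# host memory (a no-op here), and numpy() converts. numpy() only
# works on CPU tensors that do not require grad, so both calls
# must come before it.
result = y.detach().cpu().numpy()

print(result)  # [50. 55. 60.]
```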
Task 5: Fill in the blank (hard)

Fill all three blanks to detach a tensor, move it to CPU, and convert it to a NumPy array.

PyTorch
import torch
x = torch.tensor([13.0, 14.0, 15.0], requires_grad=True)
y = x * 6
numpy_array = y.[1]().[2]().[3]()
A. detach
B. cpu
C. numpy
D. clone
Common Mistakes
Calling numpy() before detach() raises a RuntimeError.
Using clone() instead of detach() does not detach the tensor.
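The full three-call chain, plus a check (not part of the task itself) that detaching leaves the original graph intact so gradients still flow through y:

```python
import torch

x = torch.tensor([13.0, 14.0, 15.0], requires_grad=True)
y = x * 6

# Full pattern: detach from the graph, ensure CPU, convert to NumPy.
numpy_array = y.detach().cpu().numpy()
print(numpy_array)  # [78. 84. 90.]

# detach() does not disturb the original graph: backward() on y
# still populates x.grad with d(sum(6x))/dx = 6 per element.
y.sum().backward()
print(x.grad)  # tensor([6., 6., 6.])
```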