Complete the code to detach the tensor from the computation graph.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2
z = y.sum()
detached = y.[1]()
Using detach() creates a new tensor that shares the same underlying data but is removed from the computation graph, so gradients won't flow back through it.
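A quick illustrative sketch of this behavior (values match the exercise above):

```python
import torch

# Build a small graph with gradient tracking enabled.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2

# detach() returns a new tensor that shares storage with y
# but is cut off from the autograd graph.
detached = y.detach()

# The detached tensor no longer tracks gradients.
print(detached.requires_grad)  # False
print(detached.grad_fn)        # None
```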
Complete the code to convert a detached tensor back to a NumPy array.
import torch

x = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)
y = x * 3
detached = y.[1]()
numpy_array = detached.numpy()
First, detach the tensor to stop gradient tracking, then move it to CPU if needed, and finally convert to a NumPy array. Here the tensor already lives on the CPU, so only detach() is needed before numpy().
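One detail worth knowing about this conversion: numpy() does not copy data, so the resulting array shares memory with the detached tensor. A small sketch:

```python
import torch

x = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)
y = x * 3
numpy_array = y.detach().numpy()

# The array shares storage with the tensor: an in-place
# edit on one side is visible on the other.
numpy_array[0] = 99.0
print(y[0].item())  # 99.0
```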
Fix the error in the code by detaching the tensor before converting to NumPy.
import torch

x = torch.tensor([7.0, 8.0, 9.0], requires_grad=True)
y = x * 4
numpy_array = y.[1]().numpy()
Calling numpy() directly on a tensor that requires grad raises a RuntimeError; calling detach() first removes the tensor from the computation graph, allowing safe conversion to NumPy.
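A sketch of the error and its fix, using the same values as the exercise:

```python
import torch

x = torch.tensor([7.0, 8.0, 9.0], requires_grad=True)
y = x * 4

# Converting a graph-attached tensor fails at runtime.
try:
    y.numpy()
except RuntimeError as err:
    print("numpy() failed:", err)

# Detaching first makes the conversion safe.
numpy_array = y.detach().numpy()
print(numpy_array)  # [28. 32. 36.]
```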
Fill both blanks to detach a tensor and move it to CPU before converting to NumPy.
import torch

x = torch.tensor([10.0, 11.0, 12.0], requires_grad=True)
y = x * 5
result = y.[1]().[2]().numpy()
First detach the tensor to stop gradient tracking, then move it to CPU, since numpy() only works on CPU tensors.
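A sketch of this two-step chain; note that cpu() is a no-op when the tensor already lives on the CPU, so the same code runs with or without a GPU:

```python
import torch

# Place the tensor on the GPU when one is available,
# otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.tensor([10.0, 11.0, 12.0], requires_grad=True, device=device)
y = x * 5

# detach() drops gradient tracking; cpu() guarantees the
# tensor is on the CPU before numpy() converts it.
result = y.detach().cpu().numpy()
print(result)  # [50. 55. 60.]
```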
Fill all three blanks to detach a tensor, move it to CPU, and convert it to a NumPy array.
import torch

x = torch.tensor([13.0, 14.0, 15.0], requires_grad=True)
y = x * 6
numpy_array = y.[1]().[2]().[3]()
Detach the tensor to stop gradient tracking, move it to CPU, then convert it to a NumPy array.
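This three-step chain is common enough that projects often wrap it in a small helper; to_numpy below is a hypothetical name for illustration, not a PyTorch API:

```python
import numpy as np
import torch

def to_numpy(t: torch.Tensor) -> np.ndarray:
    """Detach from the graph, move to CPU, convert to NumPy.

    Hypothetical convenience helper, not part of PyTorch itself.
    """
    return t.detach().cpu().numpy()

x = torch.tensor([13.0, 14.0, 15.0], requires_grad=True)
numpy_array = to_numpy(x * 6)
print(numpy_array)  # [78. 84. 90.]
```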