Complete the code to create a tensor with gradient tracking enabled.
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=[1])
Setting requires_grad=True tells PyTorch to track operations on the tensor for automatic differentiation.
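For reference, here is a minimal runnable sketch with the blank filled in as the explanation states (the doubling operation is only an illustrative example, not part of the exercise):

```python
import torch

# Create a tensor with gradient tracking enabled.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Any operation on x is now recorded for automatic differentiation.
y = (x * 2).sum()
y.backward()
print(x.grad)  # d(y)/d(x[i]) = 2 for each element
```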
Complete the code to perform a dynamic computation where the number of operations depends on input size.
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
result = 0
for i in range(x.size(0)):
    result += x[i] * [1]
Using i inside the loop makes the computation graph dynamic: the loop runs x.size(0) times, so the graph changes with the input size.
Fix the error in the code to correctly compute gradients with a dynamic graph.
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
result = 0
for i in range(x.size(0)):
    result += x[i] * i
result.[1]()
Calling backward() on the scalar result computes gradients for every tensor in the graph with requires_grad=True.
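Filling in the blank as the explanation states, the complete dynamic-graph example runs like this:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
result = 0
for i in range(x.size(0)):
    result += x[i] * i  # d(result)/d(x[i]) = i

# backward() computes gradients for tensors with requires_grad=True.
result.backward()
print(x.grad)  # tensor([0., 1., 2.])
```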
Fill both blanks to create a dynamic computation graph that sums squares of input elements greater than 2.
import torch
x = torch.tensor([1.0, 3.0, 4.0], requires_grad=True)
result = 0
for val in x:
    if val [1] 2:
        result += val [2] 2
result.backward()
The condition val > 2 selects elements greater than 2, and val ** 2 squares them.
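With both blanks filled in per the explanation, a runnable version looks like this; note that iterating over x yields 0-dimensional tensors that still track gradients:

```python
import torch

x = torch.tensor([1.0, 3.0, 4.0], requires_grad=True)
result = 0
for val in x:
    if val > 2:            # select elements greater than 2
        result += val ** 2  # square them; d(val**2)/d(val) = 2*val

result.backward()
print(x.grad)  # tensor([0., 6., 8.]) -- zero where the branch was skipped
```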
Fill all three blanks to build a dynamic graph that creates a dictionary of squares for even numbers in a list.
numbers = [1, 2, 3, 4, 5]
squares = {[1]: [2] for [3] in numbers if [3] % 2 == 0}
We iterate over the list as num, keep only the even numbers, and use num as the key and num ** 2 as the value.
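Filling in all three blanks as the explanation states, the dictionary comprehension reads:

```python
numbers = [1, 2, 3, 4, 5]

# Keep only even numbers; map each to its square.
squares = {num: num ** 2 for num in numbers if num % 2 == 0}
print(squares)  # {2: 4, 4: 16}
```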