The requires_grad flag tells PyTorch whether to record operations on a tensor so that gradients can be computed for it during backpropagation. Those gradients are what optimizers use to adjust values during training.
requires_grad flag in PyTorch
tensor = torch.tensor(data, requires_grad=True) # or requires_grad=False
Set requires_grad=True if you want PyTorch to track operations on this tensor.
Set requires_grad=False (the default) to skip tracking and save memory.
A tensor x that PyTorch will track for gradients:

import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

A tensor y that will not track gradients:

y = torch.tensor([4.0, 5.0, 6.0], requires_grad=False)

Update x in place to stop tracking gradients:

x.requires_grad_(False)

The following program creates a tensor x that tracks gradients, computes a simple function, and then calculates the gradients of z with respect to x. The gradients show how much z changes when x changes.
import torch

# Create a tensor with requires_grad=True
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Define a simple function y = x^2 + 3x
y = x**2 + 3*x

# Compute the sum to get a scalar output
z = y.sum()

# Compute gradients
z.backward()

# Print gradients of x: dz/dx = 2x + 3
print(x.grad)  # tensor([7., 9.])
Gradients are only computed for tensors with requires_grad=True.
You can temporarily disable gradient tracking with a with torch.no_grad(): block, which is common during inference.
You can change requires_grad after tensor creation with tensor.requires_grad_(True) or tensor.requires_grad_(False); the trailing underscore marks the in-place version.
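The three points above can be sketched in one short program; a minimal example using only the core torch API:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([2.0])  # requires_grad defaults to False

(x * y).sum().backward()
print(x.grad)  # tensor([2.]) since d(x*y)/dx = y
print(y.grad)  # None: gradients exist only for tracked tensors

with torch.no_grad():
    z = x * 2  # no graph is recorded inside this block
print(z.requires_grad)  # False

y.requires_grad_(True)  # toggle tracking in place after creation
print(y.requires_grad)  # True
```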
requires_grad tells PyTorch to track operations for learning.
Set it to True for tensors you want to update during training.
Set it to False to save memory or freeze parts of a model.
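Freezing part of a model is just this flag applied to its parameters. A minimal sketch with two nn.Linear layers (the layer names and sizes here are illustrative, not from the original):

```python
import torch
import torch.nn as nn

# Illustrative model: a frozen feature layer and a trainable head
features = nn.Linear(8, 4)
head = nn.Linear(4, 2)

# Freeze the feature layer so training updates only the head
for p in features.parameters():
    p.requires_grad_(False)

# Give the optimizer only the parameters that still require gradients
trainable = [p for p in head.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)

print(any(p.requires_grad for p in features.parameters()))  # False
print(all(p.requires_grad for p in head.parameters()))      # True
```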