
Gradient access (.grad) in PyTorch

Introduction

We use .grad to read the gradient of a computed value with respect to a tensor — how much a small change in that tensor would change the result. These gradients are what drive weight updates during learning.

Typical situations where you'll read .grad:

When you want to check how the model is learning after one step.
When you want to debug whether gradients are being calculated correctly.
When you want to manually update model weights using gradients.
When you want to see which parts of the model affect the output most.
When you want to implement a custom training loop.
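As a minimal sketch of that last use case — a hand-rolled training loop that reads .grad directly instead of using an optimizer (the toy data and learning rate here are made up for illustration):

```python
import torch

# Toy data: learn w in y = w * x, where the true w is 2.0
x = torch.tensor([1.0, 2.0, 3.0])
y_true = torch.tensor([2.0, 4.0, 6.0])

w = torch.tensor(0.0, requires_grad=True)
lr = 0.1

for _ in range(50):
    y_pred = w * x
    loss = ((y_pred - y_true) ** 2).mean()
    loss.backward()            # fills w.grad with dloss/dw
    with torch.no_grad():      # update the weight without tracking the update itself
        w -= lr * w.grad
    w.grad.zero_()             # clear the gradient for the next step

print(w.item())                # converges close to 2.0
```

The torch.no_grad() block matters: without it, the weight update itself would be recorded in the autograd graph.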
Syntax
PyTorch
tensor.grad

tensor must have requires_grad=True to track gradients.

.grad holds the gradient after calling backward().
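A quick check of that lifecycle — .grad is None before any backward pass and holds the gradient afterwards (the function y = 10x here is just an illustration):

```python
import torch

x = torch.tensor(4.0, requires_grad=True)
print(x.grad)      # None — no backward pass has run yet

y = x * 10
y.backward()       # populates x.grad
print(x.grad)      # tensor(10.) — dy/dx = 10
```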

Examples
This example calculates the gradient of y = x² at x = 2. Since dy/dx = 2x, the gradient is 4.
PyTorch
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2

y.backward()
print(x.grad)
Here, the gradient of a sum with respect to each element is 1, so the output is a tensor of ones.
PyTorch
import torch
w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
z = w.sum()
z.backward()
print(w.grad)
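Comparing the entries of .grad also shows which inputs influence the output most — larger gradient magnitude means a bigger effect. An illustrative example (not from the original), using out = Σ wᵢ² so that each gradient entry is 2wᵢ:

```python
import torch

w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
out = (w ** 2).sum()   # d(out)/dw_i = 2 * w_i
out.backward()
print(w.grad)          # tensor([2., 4., 6.]) — w[2] affects the output most
```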
Sample Model

This program calculates the gradient of y = 3x³ at x = 3. The gradient dy/dx = 9x² is the slope of y at that point, so the result is 81.

PyTorch
import torch

# Create a tensor with gradient tracking
x = torch.tensor(3.0, requires_grad=True)

# Define a simple function y = 3x^3
y = 3 * x ** 3

# Compute gradients
y.backward()

# Print the gradient of x
grad = x.grad
print(f"Gradient of y with respect to x: {grad}")
Output
Gradient of y with respect to x: 81.0
Important Notes

If .grad is None, it means backward() was not called, requires_grad was False, or the tensor is not a leaf of the graph (gradients of intermediate tensors are not retained by default).

Gradients accumulate by default, so clear them (e.g. with tensor.grad.zero_() or optimizer.zero_grad()) before each new backward pass.
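A short demonstration of that accumulation behavior — two backward passes on the same function add their gradients together:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)

(x ** 2).backward()
print(x.grad)      # tensor(4.)

(x ** 2).backward()
print(x.grad)      # tensor(8.) — the second gradient was added, not replaced

x.grad.zero_()     # reset in place before the next pass
print(x.grad)      # tensor(0.)
```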

Summary

.grad shows how much a tensor affects the output.

You get gradients after calling backward().

Gradients help update model weights to learn better.