
no_grad context manager in PyTorch

Introduction

The no_grad context manager tells PyTorch not to record operations for gradient computation. This saves memory and speeds up computation when you only want to make predictions, not train.

When to Use

When you want to evaluate a model without updating its weights.
When you want to make predictions on new data quickly.
When you want to save memory during inference.
When you want to measure model accuracy without training.
When you want to disable gradient tracking temporarily.
Syntax
PyTorch
with torch.no_grad():
    # code here runs without tracking gradients

Use with torch.no_grad(): to wrap code where gradients are not needed.

This is often used during model evaluation or inference.
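For instance, an evaluation pass typically wraps the whole scoring step in no_grad. A minimal sketch, using a hypothetical linear model and random data in place of a real checkpoint and dataset:

```python
import torch
import torch.nn as nn

# Hypothetical model and batch, for illustration only
model = nn.Linear(4, 3)
model.eval()  # switch layers like dropout/batchnorm to eval behavior

inputs = torch.randn(8, 4)
labels = torch.randint(0, 3, (8,))

with torch.no_grad():  # no computation graph is built while scoring
    logits = model(inputs)
    predicted = logits.argmax(dim=1)
    accuracy = (predicted == labels).float().mean().item()

print(f"Accuracy: {accuracy:.2f}")
```

Note that model.eval() and no_grad are independent: eval() changes layer behavior, while no_grad only disables graph building, so evaluation code usually uses both.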

Examples
Run model prediction without tracking gradients.
PyTorch
with torch.no_grad():
    output = model(input_tensor)
Use no_grad when looping over data for inference.
PyTorch
with torch.no_grad():
    for data in dataloader:
        predictions = model(data)
Compare normal forward pass with one inside no_grad.
PyTorch
output = model(input_tensor)  # gradients tracked

with torch.no_grad():
    output_no_grad = model(input_tensor)  # no gradients tracked
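The difference between the two passes can be inspected directly through each output's grad_fn attribute, which holds the recorded graph node. A small sketch with a hypothetical one-layer model:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)  # hypothetical small model
x = torch.randn(1, 2)

out = model(x)
print(out.grad_fn)     # a graph node was recorded, e.g. <AddmmBackward0 ...>

with torch.no_grad():
    out_ng = model(x)
print(out_ng.grad_fn)  # None: no graph node was recorded
```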
Sample Model

This code shows how to run a model forward pass normally and inside no_grad. It prints outputs and whether gradients are tracked.

PyTorch
import torch
import torch.nn as nn

# Simple model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)
    def forward(self, x):
        return self.linear(x)

model = SimpleModel()

# Input tensor
input_tensor = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Forward pass with gradient tracking
output = model(input_tensor)
print(f"Output with grad tracking: {output}")

# Forward pass without gradient tracking
with torch.no_grad():
    output_no_grad = model(input_tensor)
print(f"Output without grad tracking: {output_no_grad}")

# Check if gradients are tracked
print(f"Requires grad (normal): {output.requires_grad}")
print(f"Requires grad (no_grad): {output_no_grad.requires_grad}")
Important Notes

Inside no_grad, PyTorch does not build the computation graph.

This reduces memory use and speeds up inference.

Do not use no_grad around the training forward pass: backward() needs the computation graph that no_grad suppresses.
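The last note can be seen directly: calling backward() on a tensor produced under no_grad raises a RuntimeError, because no graph exists to backpropagate through. A small sketch:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

with torch.no_grad():
    y = x * 3  # no graph is built; y.requires_grad is False

try:
    y.backward()  # fails: y has no grad_fn to backpropagate through
except RuntimeError as e:
    print(f"Cannot backprop: {e}")
```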

Summary

no_grad disables gradient tracking temporarily.

Use it during model evaluation or inference to save memory and time.

Wrap code with with torch.no_grad(): to apply it.
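Besides the with-statement form shown throughout, torch.no_grad also works as a decorator, disabling gradient tracking for every call to the decorated function. A sketch, with a hypothetical predict helper:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)  # hypothetical stand-in model

@torch.no_grad()
def predict(x):
    # every call runs with gradient tracking disabled
    return model(x)

out = predict(torch.randn(4, 2))
print(out.requires_grad)  # False
```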