PyTorch · ~20 mins

no_grad context manager in PyTorch - ML Experiment: Train & Evaluate

Experiment - no_grad context manager
Problem: You have a PyTorch model that you want to use to make predictions on new data without updating the model weights. Currently, your code computes gradients during prediction, which wastes memory and slows down inference.
Current Metrics: During prediction, memory usage is high and inference is slow because gradients are computed unnecessarily.
Issue:The model is computing gradients during inference, causing inefficient memory use and slower prediction speed.
Your Task
Use the no_grad context manager to disable gradient calculation during model inference, reducing memory usage and speeding up prediction.
Do not change the model architecture or training code.
Only modify the inference code to include no_grad context.
Solution
import torch
import torch.nn as nn

# Simple model definition
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)

# Create model and dummy input
model = SimpleModel()
model.eval()  # Set model to evaluation mode
input_tensor = torch.randn(5, 10)

# Inference without no_grad (inefficient)
output_with_grad = model(input_tensor)
print('Output with grad:', output_with_grad)

# Inference with no_grad (efficient)
with torch.no_grad():
    output_no_grad = model(input_tensor)
print('Output with no_grad:', output_no_grad)

# Check if outputs are the same
print('Outputs equal:', torch.allclose(output_with_grad, output_no_grad))
Added 'with torch.no_grad():' context around the model inference code.
Kept model in evaluation mode to disable dropout and batchnorm updates.
Verified outputs before and after to ensure predictions are identical.
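Beyond comparing the output values, you can confirm that gradient tracking is actually off by inspecting the output tensor's requires_grad and grad_fn attributes. A minimal check (using a bare nn.Linear as a stand-in for SimpleModel's single layer) also shows that eval() alone does not disable gradient tracking:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in for SimpleModel's single linear layer
model.eval()              # eval mode alone does NOT stop gradient tracking
x = torch.randn(5, 10)

out_tracked = model(x)
print(out_tracked.requires_grad)   # True: autograd recorded the linear op
print(out_tracked.grad_fn)         # a backward node, e.g. AddmmBackward0

with torch.no_grad():
    out_untracked = model(x)
print(out_untracked.requires_grad) # False: no graph was built
print(out_untracked.grad_fn)       # None
```

The grad_fn being None is the direct sign that no computation graph was stored, which is where the memory savings come from.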
Results Interpretation

Before: Model inference computed gradients, causing higher memory use and slower speed.

After: Using no_grad disables gradient tracking, reducing memory and speeding up inference without changing predictions.

The no_grad context manager is essential for efficient model inference in PyTorch. It stops PyTorch from tracking operations for gradients, saving memory and computation when you only need predictions.
Bonus Experiment
Try using the no_grad context manager during evaluation on a larger dataset and measure the difference in inference time and memory usage.
💡 Hint
Use torch.utils.data.DataLoader to load batches and time the inference loop with and without no_grad.
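One way to set up that comparison is a simple timing loop over a DataLoader. The dataset size, batch size, and use of time.perf_counter here are illustrative choices for the bonus experiment, not part of the original exercise:

```python
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 1)
model.eval()

# Dummy dataset: 10,000 samples with 10 features each (arbitrary size)
data = TensorDataset(torch.randn(10_000, 10))
loader = DataLoader(data, batch_size=256)

def run_inference(use_no_grad: bool) -> float:
    """Run one pass over the loader, return elapsed seconds."""
    start = time.perf_counter()
    if use_no_grad:
        with torch.no_grad():
            for (batch,) in loader:
                model(batch)
    else:
        for (batch,) in loader:
            model(batch)
    return time.perf_counter() - start

print(f"with grad:    {run_inference(False):.4f}s")
print(f"with no_grad: {run_inference(True):.4f}s")
```

On a tiny model the timing gap may be small; the difference grows with model depth and batch size, since each tracked operation adds graph bookkeeping.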