How to Get Model Parameters in PyTorch: Syntax and Examples
In PyTorch, you can get model parameters using the model.parameters() method, which returns an iterator over all parameters (weights and biases). To get parameter names along with values, use model.named_parameters().

Syntax
The main ways to get model parameters in PyTorch are:
- model.parameters(): Returns an iterator over all parameter tensors.
- model.named_parameters(): Returns an iterator over (name, parameter) pairs.
These methods help you access weights and biases inside your model.
```python
parameters = model.parameters()
named_parameters = model.named_parameters()
```
Example
This example shows how to define a simple neural network and print its parameters using both parameters() and named_parameters().
```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 3)
        self.fc2 = nn.Linear(3, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

model = SimpleNet()

# Get parameters only
print('Parameters:')
for param in model.parameters():
    print(param.shape)

# Get named parameters
print('\nNamed Parameters:')
for name, param in model.named_parameters():
    print(f'{name}: {param.shape}')
```
Output
Parameters:
torch.Size([3, 4])
torch.Size([3])
torch.Size([2, 3])
torch.Size([2])
Named Parameters:
fc1.weight: torch.Size([3, 4])
fc1.bias: torch.Size([3])
fc2.weight: torch.Size([2, 3])
fc2.bias: torch.Size([2])
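Because parameters() is a plain iterator of tensors, a common companion pattern is counting a model's parameters with numel(). A minimal sketch, using an nn.Sequential stand-in with the same layer sizes as SimpleNet above:

```python
import torch.nn as nn

# Stand-in model with the same layer sizes as SimpleNet
model = nn.Sequential(nn.Linear(4, 3), nn.Linear(3, 2))

# numel() returns the number of elements in each parameter tensor
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'Total parameters: {total}')        # (4*3 + 3) + (3*2 + 2) = 23
print(f'Trainable parameters: {trainable}')
```

The trainable count only differs from the total when some parameters have requires_grad set to False (for example, frozen layers).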
Common Pitfalls
Common mistakes when getting model parameters include:
- Trying to access parameters before model initialization.
- Confusing parameters() (values only) with named_parameters() (names and values).
- Modifying parameters without using with torch.no_grad(), which can break gradient tracking.
```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)

# Wrong: parameters() yields tensors only, so there are no names
for param in model.parameters():
    print(param)  # No names here

# Right: use named_parameters() to get names
for name, param in model.named_parameters():
    print(name, param.shape)
```
Output
Parameter containing:
tensor([[ 0.1234, -0.5678]], requires_grad=True)
Parameter containing:
tensor([0.4321], requires_grad=True)
weight torch.Size([1, 2])
bias torch.Size([1])
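The third pitfall, modifying parameters outside of torch.no_grad(), also deserves a short sketch. In-place arithmetic on a leaf tensor that requires gradients raises a RuntimeError, while the same update inside a no_grad() block is fine:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)

# Wrong: in-place update on a leaf tensor that requires grad raises RuntimeError
# for param in model.parameters():
#     param += 0.1

# Right: wrap in-place updates in torch.no_grad() so autograd ignores them
with torch.no_grad():
    for param in model.parameters():
        param.zero_()  # in-place reset, not tracked by autograd

print(model.weight)  # all zeros
print(model.bias)    # all zeros
```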
Quick Reference
| Method | Description |
|---|---|
| model.parameters() | Iterator over all parameter tensors (weights, biases) |
| model.named_parameters() | Iterator over (name, parameter) pairs |
| param.data | Access raw tensor data of a parameter |
| param.requires_grad | Check if parameter will be updated during training |
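The requires_grad attribute from the table is what layer freezing relies on: setting it to False excludes a parameter from gradient updates. A minimal sketch using a hypothetical two-layer nn.Sequential, whose parameter names are '0.weight', '0.bias', '1.weight', '1.bias':

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.Linear(3, 2))

# Freeze the first layer by matching its name prefix
for name, param in model.named_parameters():
    if name.startswith('0.'):
        param.requires_grad = False

frozen = [name for name, p in model.named_parameters() if not p.requires_grad]
print(frozen)  # ['0.weight', '0.bias']
```

Frozen parameters still appear in parameters() and named_parameters(); they are simply skipped when gradients are computed.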
Key Takeaways
- Use model.parameters() to get all parameter tensors in PyTorch models.
- Use model.named_parameters() to get parameter names along with tensors.
- Always ensure the model is initialized before accessing parameters.
- Avoid modifying parameters directly without disabling gradient tracking.
- Parameter shapes help you understand layer sizes and connections.