How to Use MobileNet in PyTorch: Simple Guide with Example
To use MobileNet in PyTorch, import it from torchvision.models, load a pretrained model with `mobilenet_v2(pretrained=True)`, and pass input tensors through it to get predictions. You can fine-tune the model or use it directly for image classification tasks.

Syntax
The basic syntax to load MobileNet in PyTorch is:
- `from torchvision.models import mobilenet_v2`: imports the MobileNet V2 model.
- `model = mobilenet_v2(pretrained=True)`: loads the pretrained MobileNet V2 model.
- `model.eval()`: sets the model to evaluation mode for inference.
- `output = model(input_tensor)`: runs the input through the model to get predictions.
```python
from torchvision.models import mobilenet_v2
import torch

# Load pretrained MobileNet V2 model
model = mobilenet_v2(pretrained=True)

# Set model to evaluation mode
model.eval()

# Example input tensor with batch size 1, 3 color channels, 224x224 image
input_tensor = torch.randn(1, 3, 224, 224)

# Get model output
output = model(input_tensor)
```
Example
This example shows how to load MobileNet V2 pretrained on ImageNet, prepare a dummy input, run inference, and get the predicted class index.
```python
from torchvision.models import mobilenet_v2
import torch

# Load pretrained MobileNet V2
model = mobilenet_v2(pretrained=True)
model.eval()

# Create a dummy input tensor (batch size 1, 3 channels, 224x224 pixels)
input_tensor = torch.randn(1, 3, 224, 224)

# Run the model to get output logits
with torch.no_grad():
    output = model(input_tensor)

# Get predicted class index
predicted_class = torch.argmax(output, dim=1).item()
print(f"Predicted class index: {predicted_class}")
```
Output
Predicted class index: 485

Since the dummy input is random noise, the exact index will vary from run to run.
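Beyond the single top class, it is common to convert the raw logits into probabilities and inspect the top-k predictions. A minimal sketch, using random logits as a stand-in for the model output above (so no download is needed):

```python
import torch

# Stand-in for model output: 1 sample, 1000 ImageNet class logits
output = torch.randn(1, 1000)

# Convert logits to probabilities
probs = torch.softmax(output, dim=1)

# Take the five most likely classes
top5_probs, top5_indices = torch.topk(probs, k=5, dim=1)

print(top5_indices[0].tolist())  # five class indices, most likely first
print(top5_probs[0].tolist())    # corresponding probabilities
```

With a real pretrained model and a properly preprocessed image, these indices map to ImageNet class labels.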
Common Pitfalls
Common mistakes when using MobileNet in PyTorch include:
- Not setting the model to `eval()` mode during inference, which can cause inconsistent results (dropout and batch normalization behave differently in training mode).
- Passing input tensors with the wrong shape or without the normalization the model expects.
- Forgetting to disable gradient computation with `torch.no_grad()` during inference, which wastes memory.
- Using `pretrained=True` without internet access, which fails unless the weights are already cached locally.
Always preprocess images to 224x224 size and normalize with ImageNet mean and std for best results.
```python
from torchvision.models import mobilenet_v2
import torch

# Wrong: model left in train mode during inference
model = mobilenet_v2(pretrained=True)
input_tensor = torch.randn(1, 3, 224, 224)

# Wrong: no torch.no_grad() during inference
output = model(input_tensor)  # wastes memory

# Right way:
model.eval()
with torch.no_grad():
    output = model(input_tensor)
```
Quick Reference
Here is a quick summary of how to use MobileNet in PyTorch:
| Step | Description |
|---|---|
| Import model | from torchvision.models import mobilenet_v2 |
| Load pretrained | model = mobilenet_v2(pretrained=True) |
| Set eval mode | model.eval() before inference |
| Prepare input | Tensor shape (1, 3, 224, 224), normalized |
| Run inference | with torch.no_grad(): output = model(input) |
| Get prediction | predicted_class = torch.argmax(output, dim=1).item() |
Key Takeaways
- Always load MobileNet with `pretrained=True` for ready-to-use weights (newer torchvision versions use `weights=MobileNet_V2_Weights.DEFAULT` instead, as the `pretrained` argument is deprecated).
- Set `model.eval()` and use `torch.no_grad()` during inference to save memory and get correct results.
- Input tensors must be 4D with shape (batch_size, 3, 224, 224) and properly normalized.
- Use `torch.argmax` on the model output to get the predicted class index.
- Common errors include forgetting eval mode and incorrect input shapes.