
Mobile deployment (PyTorch Mobile)

Introduction
Mobile deployment lets you run AI models directly on phones or tablets without an internet connection. This reduces latency and lets apps keep working offline.
You want your app to recognize images or sounds on a phone without sending data online.
You need quick AI responses in a mobile game or tool.
You want to save data and battery by running AI locally on the device.
You want to keep user data private by not sending it to servers.
You want your app to work even when there is no internet connection.
Syntax
PyTorch
import torch

# Load your trained PyTorch model (the model's class definition must be
# importable; recent PyTorch versions need weights_only=False to unpickle
# a full model object rather than just a state dict)
model = torch.load('model.pth', weights_only=False)
model.eval()  # switch to inference mode

# Convert to TorchScript for mobile
scripted_model = torch.jit.script(model)

# Save the scripted model for mobile use
scripted_model.save('model_mobile.pt')
Use torch.jit.script or torch.jit.trace to convert your model to TorchScript format.
The saved .pt file can be loaded in mobile apps using PyTorch Mobile libraries.
Examples
Converts the model using scripting, which works well for models with control flow.
PyTorch
scripted_model = torch.jit.script(model)
Converts the model using tracing with an example input, good for simple feed-forward models.
PyTorch
example_input = torch.randn(1, 3, 224, 224)
scripted_model = torch.jit.trace(model, example_input)
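The difference between the two matters when the model branches on its input: tracing records only the branch taken for the example input, while scripting compiles both branches. A toy model (hypothetical, for illustration) makes this concrete:

```python
import torch
import torch.nn as nn

# Toy model with data-dependent control flow
class Gated(nn.Module):
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x + 1

model = Gated()

# Scripting compiles the if/else, so both branches survive
scripted = torch.jit.script(model)

# Tracing with a positive input records only the x * 2 branch
traced = torch.jit.trace(model, torch.ones(3))

neg = -torch.ones(3)
print(scripted(neg))  # takes the x + 1 branch, as the original model would
print(traced(neg))    # silently applies x * 2, the branch baked in at trace time
```

This is why scripting is the safer default for models with control flow, and tracing is best reserved for straight-line feed-forward models.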
Saves the TorchScript model to a file that can be loaded on mobile devices.
PyTorch
scripted_model.save('model_mobile.pt')
Sample Model
This code defines a simple model, converts it to TorchScript, saves it for mobile, reloads it, and runs a test prediction.
PyTorch
import torch
import torch.nn as nn

# Define a simple model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 2)

    def forward(self, x):
        return self.linear(x)

# Create a model instance (actual training is skipped for simplicity)
model = SimpleModel()
model.eval()

# Convert to TorchScript
scripted_model = torch.jit.script(model)

# Save for mobile
scripted_model.save('simple_model_mobile.pt')

# Load back the scripted model
loaded_model = torch.jit.load('simple_model_mobile.pt')

# Test prediction
input_tensor = torch.randn(1, 10)
output = loaded_model(input_tensor)
print('Model output:', output)
Important Notes
TorchScript models run without a Python interpreter, which is what makes them suitable for mobile runtimes.
Make sure to test the scripted model after conversion to catch any issues.
PyTorch Mobile provides Android (Java/Kotlin) and iOS (Objective-C/Swift) libraries for loading these models in your app.
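Testing after conversion can be as simple as comparing the scripted model's output against the original on the same input. A minimal sketch, again using a stand-in Linear model:

```python
import torch
import torch.nn as nn

# Stand-in for your own trained model
model = nn.Sequential(nn.Linear(10, 2))
model.eval()

scripted = torch.jit.script(model)

# Run both models on the same random input and compare
x = torch.randn(4, 10)
with torch.no_grad():
    original_out = model(x)
    scripted_out = scripted(x)

assert torch.allclose(original_out, scripted_out), "Conversion changed model output"
print('Scripted model matches the original.')
```

`torch.allclose` tolerates tiny floating-point differences, which is usually the right check here; exact equality can fail even when the conversion is correct.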
Summary
Mobile deployment runs AI models directly on phones for speed and privacy.
Convert PyTorch models to TorchScript format using scripting or tracing.
Save the TorchScript model as a .pt file to use in mobile apps.