
TorchScript export in PyTorch

Introduction
TorchScript export lets you save your PyTorch model in a form that runs efficiently and does not depend on Python. This helps when you want to use the model in apps or share it easily.
You want to run your PyTorch model on a device without Python installed.
You need to speed up model loading and inference in production.
You want to share your model with others who don't use Python.
You want to deploy your model in mobile or embedded systems.
You want to save a model with its computation graph for later use.
Syntax
PyTorch
import torch

# Convert a PyTorch model to TorchScript
scripted_model = torch.jit.script(your_model)

# Or trace the model with example input
scripted_model = torch.jit.trace(your_model, example_input)

# Save the scripted model
scripted_model.save('model_scripted.pt')
Use torch.jit.script when your model has control flow like if or loops.
Use torch.jit.trace when your model is mostly straight-line code and you have example inputs.
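The difference matters when the model branches on its input. A minimal sketch (the `Gate` module here is a made-up toy example) showing that tracing records only the branch taken for the example input, while scripting compiles both branches:

```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    """Toy model whose output depends on the sign of the input sum."""
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        else:
            return x * -1

model = Gate()
pos = torch.ones(3)   # sum > 0: takes the first branch
neg = -torch.ones(3)  # sum < 0: takes the second branch

# Tracing records only the operations run on the example input (pos),
# so the traced model always applies `x * 2`, even when x.sum() <= 0.
traced = torch.jit.trace(model, pos)
print(traced(neg))    # tensor([-2., -2., -2.]) -- wrong branch baked in

# Scripting compiles the whole forward(), including the if/else.
scripted = torch.jit.script(model)
print(scripted(neg))  # tensor([1., 1., 1.]) -- correct branch
```

PyTorch emits a `TracerWarning` when tracing such a model, which is a good hint that you should script it instead.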
Examples
This converts the model using scripting, which analyzes the code including conditions and loops.
PyTorch
scripted_model = torch.jit.script(model)
This converts the model by tracing its operations with example input data.
PyTorch
scripted_model = torch.jit.trace(model, example_input)
This saves the TorchScript model to a file for later loading or deployment.
PyTorch
scripted_model.save('model_scripted.pt')
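A saved TorchScript file can be reloaded with `torch.jit.load`, and unlike a plain `state_dict` it does not require the original model class definition. A small sketch (using a throwaway `nn.Linear` model for illustration):

```python
import torch
import torch.nn as nn

# Script and save a simple model.
scripted = torch.jit.script(nn.Linear(3, 1))
scripted.save('model_scripted.pt')

# Load it back; map_location lets you load a GPU-saved model onto CPU.
loaded = torch.jit.load('model_scripted.pt', map_location='cpu')
out = loaded(torch.randn(2, 3))
print(out.shape)  # torch.Size([2, 1])
```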
Sample Model
This example shows how to define a simple model with a condition, convert it to TorchScript using scripting, save it, load it back, and run inference.
PyTorch
import torch
import torch.nn as nn

# Define a simple model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)
    def forward(self, x):
        if x.sum() > 0:
            return self.linear(x)
        else:
            return self.linear(-x)

model = SimpleModel()

# Create example input
example_input = torch.randn(1, 3)

# Convert model to TorchScript using scripting (because of if condition)
scripted_model = torch.jit.script(model)

# Save the scripted model
scripted_model.save('simple_model_scripted.pt')

# Load the model back
loaded_model = torch.jit.load('simple_model_scripted.pt')

# Run inference
output = loaded_model(example_input)
print(f'Input: {example_input}')
print(f'Output: {output}')
Important Notes
TorchScript models can run faster because the JIT can optimize them, and they do not need the Python interpreter at runtime.
Scripting supports models with Python control flow, while tracing only records the operations run on the example input, so it is simpler but less flexible.
Always test the scripted or traced model to make sure it behaves like the original.
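One way to follow that advice is to compare the exported model against the eager one on the same input. A minimal sketch (the small `nn.Sequential` model is just an example):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()  # fix dropout/batchnorm behavior before exporting

x = torch.randn(5, 4)
scripted = torch.jit.script(model)

# Outputs should match the eager model to within floating-point tolerance.
with torch.no_grad():
    assert torch.allclose(model(x), scripted(x), atol=1e-6)
print("scripted model matches eager model")
```

Running the same check on a few representative inputs (including edge cases that hit every branch) catches most export mistakes early.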
Summary
TorchScript export saves PyTorch models for fast, standalone use.
Use scripting for models with conditions and tracing for simple models.
Saved TorchScript models can be loaded and run without Python.