
TorchScript export in PyTorch - Model Metrics & Evaluation

Which metric matters for TorchScript export and WHY

TorchScript export serializes a PyTorch model so it can run efficiently and independently of the Python runtime. The key metric to check after export is model output consistency: the exported model should produce the same predictions as the original PyTorch model. We verify this by comparing outputs on identical input data.

Additionally, inference speed is important because TorchScript aims to make models faster and easier to deploy.
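As a minimal sketch of such a consistency check (the small two-layer model here is a hypothetical stand-in, not a specific architecture), trace the model and compare outputs on the same input:

```python
import torch
import torch.nn as nn

# Hypothetical model used only to illustrate the check.
model = nn.Sequential(nn.Linear(4, 3), nn.Softmax(dim=1)).eval()
example_input = torch.randn(1, 4)

# Export by tracing, then compare outputs on identical input.
exported = torch.jit.trace(model, example_input)

with torch.no_grad():
    original_out = model(example_input)
    exported_out = exported(example_input)

# The exported model should reproduce the original up to float noise.
assert torch.allclose(original_out, exported_out, atol=1e-5)
```

`torch.allclose` with a small absolute tolerance is a convenient pass/fail criterion; the `atol=1e-5` threshold is a common choice, not a fixed rule.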

Confusion matrix or equivalent visualization

Since TorchScript export is about model saving and running, we don't use a confusion matrix here. Instead, we compare outputs before and after export.

Original model output (rounded): [0.8, 0.1, 0.1]
Exported model output (rounded): [0.8, 0.1, 0.1]

Difference (L2 norm): 0.0001 (well below display precision; negligible)

This shows the exported model predicts almost exactly the same as the original.
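The distance above can be computed directly with `torch.norm`; the tensors below are illustrative stand-ins with a tiny artificial perturbation, not outputs of a real export:

```python
import torch

original = torch.tensor([0.8000, 0.1000, 0.1000])
exported = torch.tensor([0.8001, 0.0999, 0.1000])  # tiny artificial drift

# Euclidean (L2) distance between the two output vectors.
diff = torch.norm(original - exported).item()
print(f"{diff:.6f}")  # on the order of 1e-4, i.e. negligible
```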

Tradeoff: Output consistency vs export flexibility

When exporting with TorchScript, you can choose tracing or scripting. Tracing is fast but records only the code path exercised by the example input, so dynamic control flow can be missed and outputs may silently differ. Scripting compiles the control flow itself and handles dynamic code correctly, but can be more complex to get working.

If you want perfect output match, scripting is safer. If you want faster export and your model is simple, tracing might be enough.
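The difference is easy to demonstrate with a hypothetical model whose forward pass branches on the input data:

```python
import torch
import torch.nn as nn

class DynamicModel(nn.Module):
    # Hypothetical model with a data-dependent branch.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x + 1

model = DynamicModel()
neg = -torch.ones(3)

# Tracing records only the branch taken by the example input (x * 2 here),
# so it silently applies the wrong branch to negative inputs.
traced = torch.jit.trace(model, torch.ones(3))

# Scripting compiles the control flow itself, so both branches survive.
scripted = torch.jit.script(model)

print(traced(neg))    # tensor([-2., -2., -2.])  -- wrong branch
print(scripted(neg))  # tensor([0., 0., 0.])     -- matches model(neg)
```

PyTorch even emits a TracerWarning here, which is itself a useful signal that tracing is the wrong tool for this model.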

What "good" vs "bad" output consistency looks like

Good: The exported model outputs differ from the original by less than 1e-5 on test inputs. Predictions and probabilities match closely.

Bad: Large differences in outputs or different predicted classes. This means the export failed to capture model logic correctly.

Common pitfalls when exporting with TorchScript
  • Using tracing on models with dynamic control flow, which can silently produce wrong outputs.
  • Not testing exported model outputs on varied inputs.
  • Ignoring differences in floating point precision or device (CPU vs GPU).
  • Overlooking that some PyTorch operations are not supported in TorchScript.
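A simple guard against most of these pitfalls (a sketch; the model, input shapes, and tolerance are assumptions) is to compare outputs on several varied random inputs rather than only the input used for tracing:

```python
import torch
import torch.nn as nn

# Hypothetical model for illustration.
model = nn.Linear(4, 3).eval()
exported = torch.jit.trace(model, torch.randn(1, 4))

# Check agreement on a spread of random inputs, not only the trace input.
torch.manual_seed(0)
for _ in range(10):
    x = torch.randn(8, 4)
    with torch.no_grad():
        assert torch.allclose(model(x), exported(x), atol=1e-5), \
            "exported model diverges from the original"
print("all consistency checks passed")
```

Running the same loop on the deployment device (CPU vs GPU) also catches precision differences that only appear after export.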
Self-check question

Your original PyTorch model and TorchScript exported model give 98% accuracy on test data, but the exported model's outputs differ significantly on some inputs. Is this export good for production?

Answer: No. Even if accuracy looks high, large output differences mean the exported model may behave unpredictably on new data. You should fix export issues to ensure consistent predictions.

Key Result
Output consistency between the original and exported model is the key signal that a TorchScript export can be trusted.