
Model packaging (.mar files) in PyTorch - ML Experiment: Train & Evaluate

Experiment - Model packaging (.mar files)
Problem: You have trained a PyTorch image classification model, but it is only saved as a .pt file. You want to package it into a .mar file for easy deployment with TorchServe.
Current Metrics: Model accuracy on test data: 85%. Model saved as model.pt.
Issue: The model is not packaged for deployment. TorchServe requires a .mar file, which bundles the model weights and the handler code.
Your Task
Package the existing PyTorch model into a .mar file that TorchServe can use for deployment.
You must use torch-model-archiver tool.
You cannot retrain the model.
You must include a custom handler for preprocessing and postprocessing.
Solution
PyTorch
# Assume model.pt is already saved; this script only generates the custom
# handler that torch-model-archiver will bundle alongside it.
# handler.py content:
handler_code = '''
from ts.torch_handler.base_handler import BaseHandler
import torch
from torchvision import transforms
from PIL import Image
import io

class CustomHandler(BaseHandler):
    def preprocess(self, data):
        image = data[0].get("data") or data[0].get("body")
        image = Image.open(io.BytesIO(image)).convert("RGB")
        transform = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
        ])
        return transform(image).unsqueeze(0)

    def inference(self, input_tensor):
        with torch.no_grad():
            # self.device is set by BaseHandler.initialize()
            output = self.model(input_tensor.to(self.device))
        return output

    def postprocess(self, inference_output):
        probabilities = torch.nn.functional.softmax(inference_output[0], dim=0)
        # TorchServe expects one response entry per request in the batch
        return [probabilities.tolist()]
'''

with open('handler.py', 'w') as f:
    f.write(handler_code)

# Command to create the .mar file (run in a shell):
# torch-model-archiver --model-name my_model --version 1.0 --serialized-file model.pt --handler handler.py --export-path model_store --force
# Note: if model.pt is an eager-mode checkpoint (a state_dict) rather than a
# TorchScript archive, torch-model-archiver also needs --model-file pointing
# at the file with the model's class definition.

# After running the command above, my_model.mar will appear in the model_store folder.

# To serve the packaged model with TorchServe (example):
# torchserve --start --model-store model_store --models my_model=my_model.mar
Created a custom handler.py file with preprocess, inference, and postprocess methods.
Used torch-model-archiver CLI to package model.pt and handler.py into my_model.mar.
Prepared the .mar file for deployment with TorchServe.
Results Interpretation

Before: Model saved only as model.pt file, not deployable directly.

After: Model packaged as my_model.mar file including custom handler, ready for TorchServe deployment.

Packaging a PyTorch model into a .mar file bundles the model and processing code, making deployment easier and standardized with TorchServe.
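Under the hood, a .mar file is an ordinary zip archive, which makes its contents easy to inspect. The sketch below builds a tiny stand-in archive (the manifest fields are illustrative, not the exact schema torch-model-archiver writes) and lists what is bundled:

```python
import json
import zipfile

# Illustrative stand-in for my_model.mar: a zip holding the serialized
# model, the handler, and a MAR-INF/MANIFEST.json metadata file.
manifest = {"model": {"modelName": "my_model", "modelVersion": "1.0",
                      "serializedFile": "model.pt", "handler": "handler.py"}}
with zipfile.ZipFile("my_model.mar", "w") as zf:
    zf.writestr("model.pt", b"")                    # placeholder weights
    zf.writestr("handler.py", "")                   # placeholder handler
    zf.writestr("MAR-INF/MANIFEST.json", json.dumps(manifest))

with zipfile.ZipFile("my_model.mar") as zf:
    print(zf.namelist())
```

The same `zipfile.ZipFile(...).namelist()` call on a real archiver-produced .mar is a quick way to confirm the model and handler actually made it into the package.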
Bonus Experiment
Try packaging the model with a default handler instead of a custom one and compare deployment ease.
💡 Hint
Use the built-in image_classifier handler provided by TorchServe and see if it fits your preprocessing needs.
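For the bonus experiment, the built-in image_classifier handler needs an index_to_name.json to map class indices to labels, passed via --extra-files. A minimal sketch (the class labels here are made-up examples for a two-class model):

```python
import json

# Hypothetical label map for the default image_classifier handler.
index_to_name = {"0": "cat", "1": "dog"}
with open("index_to_name.json", "w") as f:
    json.dump(index_to_name, f)

# Packaging with the default handler instead of handler.py (run in a shell):
# torch-model-archiver --model-name my_model --version 1.0 \
#     --serialized-file model.pt --handler image_classifier \
#     --extra-files index_to_name.json --export-path model_store --force
```

If the default handler's 224x224 ImageNet-style preprocessing matches your model, this removes the need to maintain handler.py at all.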