A .mar file is used in PyTorch model deployment. What does it mainly contain?
Think about what you need to run a model in production.
A .mar file bundles the model's serialized weights (e.g. model.pt), the handler/inference code, and a MANIFEST with metadata so TorchServe can load and serve it easily.
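Under the hood, a .mar is a zip archive. The sketch below builds a minimal mar-like bundle by hand to show the three pieces (weights, code, manifest); the file names mimic what torch-model-archiver produces, but this archive is assembled manually for illustration, not by the real tool.

```python
import io
import json
import zipfile

# Build a minimal .mar-like zip in memory (illustrative only; a real
# archive is produced by torch-model-archiver, not by hand like this).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as mar:
    mar.writestr("model.pt", b"<serialized weights>")           # weights
    mar.writestr("handler.py", "# pre/post-processing code\n")  # code
    mar.writestr("MANIFEST.json", json.dumps({                  # config/metadata
        "model": {
            "modelName": "mymodel",
            "modelVersion": "1.0",
            "serializedFile": "model.pt",
            "handler": "handler.py",
        }
    }))

with zipfile.ZipFile(buf) as mar:
    names = sorted(mar.namelist())
print(names)  # ['MANIFEST.json', 'handler.py', 'model.pt']
```

Listing the archive shows the manifest, the handler code, and the weights side by side, which is exactly the bundle TorchServe unpacks at load time.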
Consider the command below to create a .mar file:
torch-model-archiver --model-name mymodel --version 1.0 --serialized-file model.pt --handler image_classifier --export-path model_store
What will be the name of the generated .mar file in model_store?
Check whether torch-model-archiver puts the version in the file name or inside the archive.
The generated file is mymodel.mar: torch-model-archiver names the archive {model-name}.mar and records the version in the MANIFEST.json inside the archive, not in the file name.
You want to add custom preprocessing code to your PyTorch model serving. Which file should you include in the .mar package?
Custom preprocessing is done in the handler code.
The custom handler script, passed via --handler when archiving, defines how input data is preprocessed before inference (and how outputs are postprocessed).
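A handler is structured as below. This is a hedged plain-Python sketch so it stays self-contained: the method names follow TorchServe's convention (initialize / preprocess / inference / postprocess), but a real handler would subclass ts.torch_handler.base_handler.BaseHandler and load an actual model, and the JSON "input" field here is an invented example.

```python
import json

class MyHandler:
    """Sketch of a custom TorchServe-style handler (plain Python;
    a real one subclasses ts.torch_handler.base_handler.BaseHandler)."""

    def initialize(self, context):
        # In a real handler: load the model from the context's model dir.
        self.model = lambda xs: [x * 2 for x in xs]  # stand-in "model"

    def preprocess(self, data):
        # Custom preprocessing: decode each request body from JSON and
        # pull out the (hypothetical) "input" field before inference.
        return [json.loads(row["body"])["input"] for row in data]

    def inference(self, inputs):
        return self.model(inputs)

    def postprocess(self, outputs):
        return [{"result": y} for y in outputs]

# Usage sketch: TorchServe hands the handler a batch of request dicts.
handler = MyHandler()
handler.initialize(context=None)
batch = [{"body": json.dumps({"input": 3})}]
out = handler.postprocess(handler.inference(handler.preprocess(batch)))
print(out)  # [{'result': 6}]
```

The key point for this question is that preprocess runs on every request before inference, so custom input handling belongs in this file, which gets packaged into the .mar.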
You want to configure TorchServe to process 16 inputs at once. Which configuration parameter should you set?
Batch size for serving is a runtime config, not a training setting.
Set the batch size parameter: batchSize in the per-model section of TorchServe's config.properties, or batch_size when registering the model through the management API.
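As a sketch (the model name and surrounding values are illustrative), a per-model entry in config.properties can set batchSize like this:

```properties
# config.properties (illustrative values)
models={\
  "mymodel": {\
    "1.0": {\
        "marName": "mymodel.mar",\
        "minWorkers": 1,\
        "maxWorkers": 2,\
        "batchSize": 16,\
        "maxBatchDelay": 100\
    }\
  }\
}
```

Alternatively, batch size can be set at registration time through the management API (default port 8081), e.g. `curl -X POST "http://localhost:8081/models?url=mymodel.mar&batch_size=16&max_batch_delay=100"`.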
You created a .mar file with this command:
torch-model-archiver --model-name faultymodel --version 1.0 --serialized-file model.pt --handler custom_handler.py --export-path model_store
When starting TorchServe, it fails to load the model with an error about missing handler. What is the most likely cause?
Check what kinds of values --handler accepts and whether the handler file was actually found at archive time.
The --handler option accepts either a built-in handler name (such as image_classifier) or the path to a custom handler .py file. The most likely cause is that custom_handler.py was not found at the given path when the archive was built, or that the module does not expose the entry point TorchServe expects (a handle(data, context) function or a BaseHandler subclass), so the packaged model has no usable handler.