PyTorch · ~20 mins

Model packaging (.mar files) in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual (intermediate)
What is the main purpose of a .mar file in PyTorch model deployment?

A .mar file is used in PyTorch model deployment. What does it mainly contain?

A. It packages the model's weights, code, and configuration for serving.
B. It stores only the raw training data used for the model.
C. It is a log file recording model training progress.
D. It contains only the model's architecture without weights.
💡 Hint

Think about what you need to run a model in production.

Predict Output (intermediate)
What is the output of this TorchServe packaging command?

Consider the command below to create a .mar file:

torch-model-archiver --model-name mymodel --version 1.0 --serialized-file model.pt --handler image_classifier --export-path model_store

What will be the name of the generated .mar file in model_store?

A. mymodel.mar
B. mymodel-1.0.mar
C. model.pt.mar
D. image_classifier.mar
💡 Hint

By default, the archive is named after --model-name; the --version value is recorded in the MANIFEST.json inside the archive, not in the filename.
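The naming rule can be sketched in a few lines of Python (the values are taken from the command above; this mirrors torch-model-archiver's default behavior, it does not call the tool):

```python
# torch-model-archiver names the output archive after --model-name;
# --version is written into the MANIFEST.json inside the archive.
model_name = "mymodel"
version = "1.0"  # embedded in the archive's manifest, not the filename
mar_file = f"{model_name}.mar"
print(mar_file)
```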

Model Choice (advanced)
Which file must be included in the .mar archive for custom preprocessing?

You want to add custom preprocessing code to your PyTorch model serving. Which file should you include in the .mar package?

A. A custom handler Python script.
B. The raw training dataset file.
C. A JSON file with training hyperparameters.
D. The model's .pt weights file only.
💡 Hint

Custom preprocessing is done in the handler code.
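A custom handler can be a small Python module packaged into the .mar file. A minimal sketch of a module-level entry point follows (the request-body keys match TorchServe's request format; the echo "prediction" stands in for a real model call):

```python
# custom_handler.py - minimal sketch of a module-level TorchServe handler.
# TorchServe calls handle(data, context) with a batch of requests.

def preprocess(data):
    # Pull the raw payload out of each request in the batch; TorchServe
    # delivers it under "data" or "body" depending on the client.
    return [item.get("data") or item.get("body") for item in data]

def handle(data, context):
    if data is None:
        return None
    inputs = preprocess(data)
    # A real handler would run inference here; this placeholder echoes input.
    return [{"prediction": x} for x in inputs]
```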

Hyperparameter (advanced)
Which option correctly sets the batch size for TorchServe during model serving?

You want to configure TorchServe to process 16 inputs at once. Which configuration parameter should you set?

A. Set batch_size=16 in the training script before packaging.
B. Set batch_size=16 in the torch-model-archiver command.
C. Set batch_size=16 in the model's config.properties file.
D. Set batch_size=16 in the model's .pt file.
💡 Hint

Batch size for serving is a runtime config, not training.
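For reference, a per-model batch size can be set in config.properties roughly like this (a sketch; the model name and version follow the earlier questions, and note the key is spelled batchSize in TorchServe's config):

```properties
# config.properties sketch - registers "mymodel" with server-side batching.
models={\
  "mymodel": {\
    "1.0": {\
        "batchSize": 16,\
        "maxBatchDelay": 100\
    }\
  }\
}
```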

🔧 Debug (expert)
Why does this .mar file fail to load in TorchServe?

You created a .mar file with this command:

torch-model-archiver --model-name faultymodel --version 1.0 --serialized-file model.pt --handler custom_handler.py --export-path model_store

When starting TorchServe, it fails to load the model with an error about missing handler. What is the most likely cause?

A. The model.pt file is corrupted and cannot be loaded.
B. The export-path directory does not exist.
C. The version number must be omitted for custom handlers.
D. The handler argument should be the handler name, not the file path.
💡 Hint

Check how the handler argument is specified in torch-model-archiver.