Recall & Review
beginner
What is TorchServe?
TorchServe is an open-source serving framework for PyTorch that deploys trained models behind an HTTP API so they can return predictions in real time.
beginner
Name two main files needed to serve a PyTorch model with TorchServe.
You need a serialized model file (.pt or .pth) and a handler, either one of TorchServe's built-in handlers or a custom Python file, that tells TorchServe how to preprocess inputs and format output predictions.
intermediate
What command starts the TorchServe server after model registration?
The command is `torchserve --start --model-store model_store --models my_model=your_model.mar`.
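Once the server is running, you can verify it and send a test request. A minimal sketch, assuming the default inference port 8080 and a registered model named `my_model` with a sample input file `sample.json` (both names are placeholders):

```shell
# Check that the server is up via TorchServe's health endpoint
curl http://localhost:8080/ping

# Send a sample input to the registered model for a prediction
curl http://localhost:8080/predictions/my_model -T sample.json
```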
beginner
Why do you create a .mar file in TorchServe setup?
A .mar file packages your model and handler so TorchServe can load and serve it efficiently.
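The .mar file is produced with the `torch-model-archiver` CLI. A minimal sketch, assuming a saved `model.pt` and a custom `handler.py` (file names are placeholders):

```shell
# Package the model weights and handler into my_model.mar
# --serialized-file: the saved .pt/.pth weights
# --handler: a built-in handler name or a custom Python file
torch-model-archiver \
  --model-name my_model \
  --version 1.0 \
  --serialized-file model.pt \
  --handler handler.py \
  --export-path model_store
```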
beginner
What is the purpose of the model-store directory in TorchServe?
It stores all the packaged model archive (.mar) files that TorchServe can load to serve models.
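Besides loading models at startup, archives already in the model store can be registered with a running server through the management API. A sketch assuming the default management port 8081 and an archive named `my_model.mar` (placeholder):

```shell
# Create the model store and move the archive into it
mkdir -p model_store
mv my_model.mar model_store/

# Register the archive with a running server via the management API
curl -X POST "http://localhost:8081/models?url=my_model.mar"

# List registered models to confirm
curl http://localhost:8081/models
```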
Which file format is used to package models for TorchServe?
TorchServe uses .mar files to package models and handlers for deployment.
What command is used to create a .mar file?
The torch-model-archiver command packages the model and handler into a .mar file.
Which directory holds the .mar files for TorchServe to load?
The directory passed to TorchServe via --model-store (conventionally named model_store) is where it looks for .mar files.
What does the model handler file do?
The handler file tells TorchServe how to handle input data and format output predictions.
Which command starts the TorchServe server?
The torchserve --start command launches the TorchServe server; it is typically combined with --model-store and --models so models are loaded at startup.
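The start command is usually given the model store and the models to load, and the server is shut down with --stop. Archive and model names here are placeholders:

```shell
# Start the server and load one model from the store
torchserve --start --model-store model_store --models my_model=my_model.mar

# Stop the running server when done
torchserve --stop
```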
Explain the steps to set up TorchServe for a PyTorch model.
Think about packaging, storing, and starting the server.
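The setup steps can be sketched end to end. This assumes a saved `model.pt`, a custom `handler.py`, and a sample input `sample.json` (all placeholders):

```shell
# 1. Package: bundle weights + handler into a .mar archive
torch-model-archiver --model-name my_model --version 1.0 \
  --serialized-file model.pt --handler handler.py --export-path model_store

# 2. Store: the archive now lives in the model_store directory

# 3. Start: launch TorchServe and register the model
torchserve --start --model-store model_store --models my_model=my_model.mar

# 4. Serve: query the inference API
curl http://localhost:8080/predictions/my_model -T sample.json
```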
Describe the role of the model handler in TorchServe setup.
It acts like a translator between data and model.