PyTorch · ~20 mins

TorchServe setup in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding TorchServe Model Archiving

What is the primary purpose of the torch-model-archiver tool in TorchServe?

A. To train a PyTorch model using distributed GPUs
B. To visualize the model architecture graphically
C. To convert a PyTorch model into TensorFlow format
D. To package a trained PyTorch model and its dependencies into a .mar file for serving
💡 Hint

Think about how TorchServe loads models for inference.
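The hint points at how TorchServe loads models: it only serves .mar archives, which torch-model-archiver produces. A minimal packaging sketch follows; the file and model names (mymodel, model.py, model.pth) are placeholders, not part of the quiz:

```shell
# Package a trained model into model_store/mymodel.mar.
# Assumes torch-model-archiver is installed (pip install torch-model-archiver).
torch-model-archiver \
  --model-name mymodel \
  --version 1.0 \
  --model-file model.py \
  --serialized-file model.pth \
  --handler image_classifier \
  --export-path model_store
```

The resulting .mar bundles the weights, model definition, and handler so TorchServe can load it without access to your training environment.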

Predict Output · intermediate
Predicting Output of TorchServe Registration Command

What will be the output message when running this command to register a model with TorchServe?

torchserve --start --model-store model_store --models mymodel=mymodel.mar
A. TorchServe started successfully and model 'mymodel' is registered and ready for inference.
B. Error: Model store directory 'model_store' not found.
C. SyntaxError: invalid command line option.
D. TorchServe started but model 'mymodel' failed to load due to missing handler.
💡 Hint

Assume the model store and .mar file exist and are correct.
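Assuming the start command in the question succeeds, one way to confirm the registration yourself is to probe TorchServe's default endpoints (port 8080 for inference and 8081 for management are TorchServe defaults):

```shell
# Requires a running TorchServe instance on localhost.
curl http://localhost:8080/ping      # liveness check
curl http://localhost:8081/models    # management API: lists registered models
```

If 'mymodel' appears in the management API's model list, the registration implied by the start command worked.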

Model Choice · advanced
Choosing the Correct Handler for Custom Model

You have a PyTorch model that takes two inputs and returns a dictionary of outputs. Which handler type should you use in TorchServe to serve this model?

A. Use the default image_classifier handler
B. Use the text_classifier handler
C. Create a custom handler by subclassing BaseHandler
D. Use the object_detector handler
💡 Hint

Default handlers expect specific input/output formats.
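A sketch of what such a custom handler could look like, assuming the ts package that ships with TorchServe. The request field names input_a and input_b are made up for illustration; the model is assumed to return a dict of tensors:

```python
# Hypothetical custom handler for a two-input model returning a dict.
import torch
from ts.torch_handler.base_handler import BaseHandler


class TwoInputHandler(BaseHandler):
    def preprocess(self, data):
        # Each request row carries two inputs; field names are assumptions.
        rows = [row.get("data") or row.get("body") for row in data]
        a = torch.stack([torch.as_tensor(r["input_a"]) for r in rows])
        b = torch.stack([torch.as_tensor(r["input_b"]) for r in rows])
        return a, b

    def inference(self, inputs):
        a, b = inputs
        with torch.no_grad():
            # self.model is loaded by BaseHandler.initialize()
            return self.model(a, b)  # assumed to return a dict of tensors

    def postprocess(self, outputs):
        # One JSON-serializable dict per request in the batch.
        batch = next(iter(outputs.values())).shape[0]
        return [
            {k: v[i].tolist() for k, v in outputs.items()}
            for i in range(batch)
        ]
```

The handler file is then passed to torch-model-archiver via --handler, replacing a built-in handler name.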

Hyperparameter · advanced
Configuring Batch Size in TorchServe

Which configuration file and parameter should you modify to change the batch size for inference requests in TorchServe?

A. Modify model-config.yaml and set batchSize under the model entry
B. Modify config.properties and set batch_size
C. Modify handler.py to manually batch inputs
D. Modify torchserve.conf and set max_batch_size
💡 Hint

Batch size is usually set per model in a YAML config.
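For reference, a per-model model-config.yaml fragment matching the hint; the values shown are illustrative, not prescribed (TorchServe's keys use camelCase):

```yaml
# model-config.yaml, packaged into the .mar via
# torch-model-archiver ... --config-file model-config.yaml
minWorkers: 1
maxWorkers: 2
batchSize: 8        # upper bound on requests batched per inference call
maxBatchDelay: 100  # max milliseconds to wait while filling a batch
```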

Metrics · expert
Interpreting TorchServe Metrics Output

After running TorchServe with metrics enabled, you see this output snippet:

{"model_name": "mymodel", "inference_count": 1000, "average_latency": 25.3}

What does the average_latency value represent?

A. The average time in milliseconds taken to load the model at startup
B. The average time in milliseconds taken to process each inference request
C. The average time in seconds between two consecutive inference requests
D. The average time in microseconds taken to serialize the model
💡 Hint

Latency usually measures processing time per request.
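Since per-request latency in milliseconds multiplied by request count gives total processing time, the metrics snippet can be sanity-checked with pure stdlib Python:

```python
import json

# The metrics snippet from the question.
snippet = '{"model_name": "mymodel", "inference_count": 1000, "average_latency": 25.3}'
metrics = json.loads(snippet)

# average_latency (ms per request) * request count = total inference time.
total_ms = metrics["inference_count"] * metrics["average_latency"]
print(f'{metrics["model_name"]}: {total_ms / 1000:.1f} s spent on inference')
# → mymodel: 25.3 s spent on inference
```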