Challenge - 5 Problems
ONNX Runtime Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00
ONNX Runtime: Model output shape
Given the following PyTorch model exported to ONNX and loaded with ONNX Runtime, what is the shape of the output tensor after inference?
PyTorch
```python
import torch
import onnxruntime as ort
import numpy as np

class SimpleModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 3)

    def forward(self, x):
        return self.linear(x)

model = SimpleModel()

# Export to ONNX
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, 'simple.onnx',
                  input_names=['input'], output_names=['output'])

# Load with ONNX Runtime
session = ort.InferenceSession('simple.onnx')
input_name = session.get_inputs()[0].name
input_data = np.random.randn(1, 4).astype(np.float32)
outputs = session.run(None, {input_name: input_data})
output = outputs[0]
output.shape
```
Attempts: 2 left
💡 Hint
Remember that the linear layer maps 4 input features to 3 output features, and the batch size is 1.
✗ Incorrect
The model input has shape (1, 4). The linear layer outputs (1, 3) because it maps 4 features to 3 outputs for each batch item.
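The shape arithmetic can be checked without ONNX at all. Below is a minimal NumPy sketch of what `nn.Linear(4, 3)` computes (`x @ W.T + b`, with weight shape `(out_features, in_features)`); the values are random and purely illustrative.

```python
import numpy as np

# nn.Linear(4, 3) computes y = x @ W.T + b, where W has shape (3, 4).
x = np.random.randn(1, 4).astype(np.float32)  # batch of 1, 4 features
W = np.random.randn(3, 4).astype(np.float32)  # (out_features, in_features)
b = np.random.randn(3).astype(np.float32)

y = x @ W.T + b
print(y.shape)  # (1, 3): the batch dim is preserved, features go 4 -> 3
```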
❓ Model Choice
Intermediate · 1:30
Choosing ONNX Runtime for inference
Which of the following is the main advantage of using ONNX Runtime for model inference compared to running PyTorch models directly?
Attempts: 2 left
💡 Hint
Think about hardware compatibility and performance.
✗ Incorrect
ONNX Runtime is designed to run models efficiently on various hardware like CPU, GPU, and specialized accelerators, improving portability and speed.
❓ Hyperparameter
Advanced · 1:30
Batch size effect on ONNX Runtime inference
If you increase the batch size of input data when running inference with ONNX Runtime, which of the following is true?
Attempts: 2 left
💡 Hint
Think about how processing more samples at once affects speed.
✗ Incorrect
Larger batch sizes mean more data processed at once, increasing latency per batch but often improving overall throughput due to parallelism.
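The latency/throughput trade-off is easy to see with an assumed cost model. The numbers below are illustrative only (a fixed per-run overhead plus a per-sample cost), not measurements of any real model.

```python
# Illustrative cost model: fixed launch overhead + per-sample compute.
def latency_ms(batch_size):
    return 1.0 + 0.1 * batch_size  # hypothetical numbers

for bs in (1, 8, 32):
    lat = latency_ms(bs)
    throughput = bs / (lat / 1000.0)  # samples per second
    print(f"batch={bs:3d}  latency={lat:.1f} ms  throughput={throughput:,.0f}/s")
```

Each run gets slower as the batch grows, yet samples per second rises, because the fixed overhead is amortized over more samples.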
🔧 Debug
Advanced · 1:30
ONNX Runtime inference error diagnosis
You run ONNX Runtime inference with input data shape (1, 5) but the model expects input shape (1, 4). What error will ONNX Runtime most likely raise?
Attempts: 2 left
💡 Hint
Consider what happens if input shape does not match model expectation.
✗ Incorrect
ONNX Runtime checks input shapes and raises an InvalidArgument error if the input shape does not match the model's expected shape.
❓ Metrics
Expert · 2:00
Evaluating ONNX Runtime inference speed
You measure ONNX Runtime inference time for 1000 samples with batch size 10 and get 2 seconds total. What is the approximate throughput in samples per second?
Attempts: 2 left
💡 Hint
Throughput = total samples / total time.
✗ Incorrect
Throughput = 1000 samples / 2 seconds = 500 samples per second.
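The same arithmetic, spelled out. Note that batch size does not enter the throughput formula; it only determines how many runs the 1000 samples took and the latency of each run.

```python
total_samples = 1000
total_time_s = 2.0
batch_size = 10

throughput = total_samples / total_time_s
print(throughput)  # 500.0 samples per second

# Batch size affects per-run latency, not the throughput calculation:
# 1000 samples at batch size 10 means 100 runs of ~20 ms each.
runs = total_samples // batch_size
per_run_ms = total_time_s / runs * 1000
print(per_run_ms)  # 20.0 ms per batch
```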