ONNX Runtime lets you run trained machine learning models quickly on many devices, and makes it simple to use models exported from different training tools.
ONNX Runtime in Computer Vision
Introduction
You want to run a trained model on your computer without retraining.
You have a model from another tool and want to use it in your app.
You want faster predictions on images or text using a pre-trained model.
You want to run the same model on different devices like PC or phone.
You want to test how well a model works on new data quickly.
Syntax
```python
import onnxruntime as ort

# Load the model
session = ort.InferenceSession('model.onnx')

# Prepare input data as a dictionary
inputs = {'input_name': input_array}

# Run the model to get outputs
outputs = session.run(None, inputs)
```
Replace 'model.onnx' with your ONNX model file path.
Input names must match the model's expected input names.
Examples
Load a handwritten digit model and run it on random image data.
```python
import onnxruntime as ort
import numpy as np

session = ort.InferenceSession('mnist.onnx')
input_name = session.get_inputs()[0].name
input_data = np.random.rand(1, 1, 28, 28).astype('float32')
outputs = session.run(None, {input_name: input_data})
```
Run the model and print the first output array.
```python
import onnxruntime as ort

session = ort.InferenceSession('model.onnx')
input_name = session.get_inputs()[0].name

# your_numpy_array is a placeholder for a NumPy array matching
# the model's expected input shape and dtype
inputs = {input_name: your_numpy_array}
outputs = session.run(None, inputs)
print(outputs[0])
```
Sample Model
This code loads an ONNX model that adds 1 to each input number. It runs the model on three numbers and prints the results.
```python
import onnxruntime as ort
import numpy as np

# Load a simple ONNX model (for example, a model that adds 1 to its input)
session = ort.InferenceSession('add_one.onnx')

# Get the input name from the model
input_name = session.get_inputs()[0].name

# Create input data: a batch of 3 numbers
input_data = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)

# Run the model
outputs = session.run(None, {input_name: input_data})

# Print the input and output
print('Input:', input_data)
print('Output:', outputs[0])
```
Important Notes
ONNX Runtime supports many hardware accelerations for faster results.
Always check the input and output names with session.get_inputs() and session.get_outputs().
Input data must match the model's expected shape and data type.
Summary
ONNX Runtime lets you run machine learning models easily on many devices.
You load a model, prepare inputs, and get outputs with simple Python code.
It works well for quick testing and deploying models without extra training.