GPU vs CPU inference tradeoffs in MLOps - Performance Comparison
When serving machine learning models, the choice between GPU and CPU affects how quickly predictions are produced.
We want to understand how the time to get results scales with input size on each device.
Analyze the time complexity of this inference code snippet.
```python
results = []
for batch in data_loader:          # data_loader yields batches of size b
    inputs = batch.to(device)      # device is 'cpu' or 'cuda'
    outputs = model(inputs)        # run inference
    results.append(outputs.cpu())  # move outputs back to CPU memory
# total data size is n items
```
This code runs inference on batches of data either on CPU or GPU and collects results.
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: Loop over batches to run model inference.
- How many times: Approximately n/b times, where n is total data size and b is batch size.
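The batch count can be computed directly. A minimal sketch in plain Python (the values for `n` and `b` are illustrative, not taken from the snippet):

```python
import math

def num_batches(n: int, b: int) -> int:
    """Number of batches a loader yields for n items with batch size b."""
    return math.ceil(n / b)

# For n = 1000 items and batch size b = 32, the loop body runs
# ceil(1000 / 32) = 32 times (the last batch is partial).
print(num_batches(1000, 32))  # → 32
```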
As input size n grows, the number of batches grows roughly proportionally, so total inference time grows too.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | ~10/b batches, fast inference |
| 100 | ~100/b batches, moderate inference time |
| 1000 | ~1000/b batches, longer inference time |
Pattern observation: Total time grows roughly linearly with input size n.
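A toy timing model makes the pattern concrete. Assuming a fixed per-batch overhead `c0` and a per-item cost `c1` (hypothetical constants, not measurements):

```python
import math

def inference_time(n: int, b: int, c0: float = 1.0, c1: float = 0.1) -> float:
    """Modelled total time: each of the ceil(n/b) batches pays a fixed
    overhead c0, and every one of the n items costs c1 to process."""
    return math.ceil(n / b) * c0 + n * c1

# Doubling n roughly doubles the modelled time, consistent with O(n).
t1 = inference_time(1000, 32)
t2 = inference_time(2000, 32)
print(t2 / t1)  # close to 2
```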
Time Complexity: O(n)
This means inference time grows in direct proportion to how much data you process.
[X] Wrong: "GPU inference always runs in constant time regardless of input size."
[OK] Correct: GPU speeds up parallel work but still processes all data, so time grows with input size.
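One way to see why is to model the GPU as processing up to `p` items of a batch in parallel (the parallel width `p` and the per-wave `kernel_time` are hypothetical). Each batch finishes quickly, but the number of batches still grows with n:

```python
import math

def gpu_batch_time(b: int, p: int = 1024, kernel_time: float = 1.0) -> float:
    """Modelled time for one batch on a GPU with p parallel lanes:
    the b items run in waves of p, each wave taking kernel_time."""
    return math.ceil(b / p) * kernel_time

def gpu_total_time(n: int, b: int) -> float:
    """Total modelled time: ceil(n/b) batches, each taking gpu_batch_time."""
    return math.ceil(n / b) * gpu_batch_time(b)

# Each batch is fast, but 10x the data means ~10x the batches,
# so total time grows linearly with n rather than staying constant.
print(gpu_total_time(1_000, 256), gpu_total_time(10_000, 256))  # → 4.0 40.0
```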
Understanding how inference time scales helps you explain tradeoffs in real projects and shows you grasp performance basics.
"What if we increase batch size b significantly? How would the time complexity change or stay the same?"