Indexing with ellipsis in NumPy - Time & Space Complexity
We want to understand how fast NumPy is when we use an ellipsis (`...`) for indexing.
Specifically, how does the running time change when we select parts of large arrays with an ellipsis?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.random.rand(1000, 1000, 10)
result = arr[..., 5]
```
This code selects the element at index 5 (the 6th element) along the last axis of a 3D array, using an ellipsis to stand in for the leading dimensions.
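One way to confirm that this operation produces a view rather than a copy is to inspect the result directly (a quick sketch, using the same array shape as the snippet above):

```python
import numpy as np

arr = np.random.rand(1000, 1000, 10)
result = arr[..., 5]

# The result is a view: it shares memory with the original array.
print(result.shape)                   # (1000, 1000)
print(result.base is arr)             # True: no data was copied
print(np.shares_memory(result, arr))  # True
```

Because `result.base` points back at `arr`, NumPy only had to compute a new shape, strides, and starting offset.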
Look for loops or repeated steps inside the operation.
- Primary operation: Constructing a strided view by adjusting strides, shape, and base pointer.
- How many times: Constant time, O(1), independent of array size.
When the array size grows, the execution time remains constant.
| Input Size (n x n x 10) | Approx. Operations |
|---|---|
| 10 x 10 x 10 | O(1) |
| 100 x 100 x 10 | O(1) |
| 1000 x 1000 x 10 | O(1) |
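The constant-time pattern in the table can be checked empirically with a rough timing sketch (illustrative only, not a rigorous benchmark; absolute times vary by machine):

```python
import timeit

import numpy as np

# Time how long it takes to build the view for growing arrays.
# If indexing were proportional to the output size, the per-op
# time would grow ~10,000x from the first row to the last.
for n in (10, 100, 1000):
    arr = np.random.rand(n, n, 10)
    t = timeit.timeit(lambda: arr[..., 5], number=10_000)
    print(f"{n:4d} x {n:4d} x 10: {t * 1e6 / 10_000:.3f} us per indexing op")
```

In practice the per-operation time stays roughly flat across all three sizes, consistent with O(1) view creation.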
Pattern observation: The operations are constant regardless of array dimensions, as NumPy slicing creates a view without copying data.
Time Complexity: O(1)
This means the time is constant and independent of the number of elements in the array dimensions, because it creates a lightweight view.
[X] Wrong: "Using ellipsis takes O(n^2) time proportional to the output size."
[OK] Correct: The ellipsis expands to basic slicing (`arr[:, :, 5]`), which NumPy implements in constant time as a view, without accessing or copying the underlying elements.
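That expansion is easy to verify (a small sketch; any three-axis array works, the shape here is arbitrary):

```python
import numpy as np

arr = np.random.rand(4, 3, 10)

# The ellipsis is shorthand for "full slices over all remaining axes":
a = arr[..., 5]
b = arr[:, :, 5]

print(np.array_equal(a, b))             # True: identical selections
print(a.base is arr and b.base is arr)  # True: both are views
```

Both spellings go through the same basic-indexing machinery, so they have identical cost.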
Understanding how indexing affects performance helps you write efficient data code and explain your choices clearly.
What if we changed the code to select a slice like arr[..., 2:7]? How would the time complexity change?
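One way to explore that question empirically is to check whether the slice also returns a view (a sketch; if it does, the same constant-time reasoning applies):

```python
import numpy as np

arr = np.random.rand(1000, 1000, 10)
sliced = arr[..., 2:7]

# A slice along the last axis is still basic indexing, so it also
# produces a strided view rather than a copy of the selected data.
print(sliced.shape)        # (1000, 1000, 5)
print(sliced.base is arr)  # True: no elements were copied
```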