Combining fancy and slice indexing in NumPy - Time & Space Complexity
We want to understand how runtime changes when fancy indexing and slice indexing are combined on NumPy arrays: how does the work grow as we select more elements using both methods together?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000)
indices = [2, 5, 7, 10]
result = arr[indices[1:3]]
```
This code first slices the Python list `indices` to get `[5, 7]`, then uses that list for fancy indexing into the NumPy array, so slicing and fancy indexing are combined in one expression.
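The two steps can be pulled apart to see what each one does (the printed values below are for this exact snippet):

```python
import numpy as np

arr = np.arange(1000)
indices = [2, 5, 7, 10]

# Step 1: slice the Python list -- a cheap copy of two entries.
selected = indices[1:3]
print(selected)   # [5, 7]

# Step 2: fancy indexing with that list copies the chosen elements
# into a new array (advanced indexing returns a copy, not a view).
result = arr[selected]
print(result)     # [5 7]
```

Because the array holds `0..999`, the value at each index equals the index itself, which makes the output easy to check.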
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Accessing elements at selected indices from the array.
- How many times: once per index in the slice (here 2 times, for indices 5 and 7).
As the number of indices selected grows, the work grows proportionally.
| Indices Selected (k) | Approx. Operations |
|---|---|
| 10 | 10 element accesses |
| 100 | 100 element accesses |
| 1000 | 1000 element accesses |
Pattern observation: The time grows linearly with the number of indices selected.
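A rough way to check this empirically is to time the same k-element selection against arrays of very different sizes; the elapsed time tracks k, not the array length (actual timings are machine-dependent, so treat this as an illustrative sketch):

```python
import time
import numpy as np

k = 100_000
idx = np.arange(k)  # the k indices we will select

for n in (1_000_000, 10_000_000):
    arr = np.arange(n)
    t0 = time.perf_counter()
    result = arr[idx]          # fancy indexing: copies exactly k elements
    elapsed = time.perf_counter() - t0
    # result.size is always k, regardless of the array size n
    print(f"n={n:>10,}  k={result.size:,}  time={elapsed * 1e3:.2f} ms")
```

Both iterations select the same 100,000 elements, so the two timings should be close even though the second array is ten times larger.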
Time Complexity: O(k), where k is the number of indices selected.
This means the time grows directly with how many elements you pick using the combined indexing, not with the size of the underlying array.
[X] Wrong: "Combining fancy and slice indexing makes the operation take time proportional to the whole array size."
[OK] Correct: NumPy only accesses the elements you ask for, so the time depends on how many indices you select, not on the full array size.
Understanding how indexing time grows helps you write efficient data selection code and explain your choices clearly in real projects or interviews.
"What if we replaced the slice with a full list of indices? How would the time complexity change?"
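One way to explore that question, reusing the snippet's own names: passing the full list simply raises k from 2 to 4, so the complexity stays O(k) and is still independent of the array length.

```python
import numpy as np

arr = np.arange(1000)
indices = [2, 5, 7, 10]

# Full fancy indexing: k = 4 element accesses instead of 2.
result = arr[indices]
print(result)  # [ 2  5  7 10]
```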