Negative indexing in NumPy - Time & Space Complexity
We want to understand how fast operations run when using negative indexing in NumPy arrays.
Specifically, how does the time to access an element change as the array grows?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000)   # array containing 0..999
value = arr[-1]         # last element (999)
value2 = arr[-10]       # tenth element from the end (990)
value3 = arr[-100]      # hundredth element from the end (900)
```
This code accesses elements from the end of a NumPy array using negative indices.
Identify the loops, recursion, or array traversals that repeat:
- Primary operation: Accessing an element by index in the array.
- How many times: Three times, each direct access to a single element.
Accessing an element by a negative index works just like positive indexing: NumPy converts the negative index into an offset from the start of the array (`len(arr) + index`) and jumps directly to that memory location. No elements are traversed along the way.
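The equivalence described above can be checked directly: each negative index maps to the same element as its normalized positive counterpart.

```python
import numpy as np

arr = np.arange(1000)

# A negative index -k is normalized to len(arr) - k, and the element's
# address is then computed directly from that offset -- no counting
# backwards through the array.
assert arr[-1] == arr[arr.size - 1]      # 999
assert arr[-10] == arr[arr.size - 10]    # 990
assert arr[-100] == arr[arr.size - 100]  # 900
```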
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 3 |
| 100 | 3 |
| 1000 | 3 |
Pattern observation: The number of operations stays the same no matter how big the array is.
Time Complexity: O(1)
This means accessing an element by negative index takes constant time regardless of how large the array is.
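To see the constant-time behavior empirically, here is a rough timing sketch. The absolute times depend on your machine; the point to observe is that they do not grow with n:

```python
import time
import numpy as np

timings = {}
for n in (10**3, 10**5, 10**7):
    arr = np.arange(n)
    start = time.perf_counter()
    for _ in range(100_000):
        _ = arr[-1]  # single negative-index access: O(1)
    timings[n] = time.perf_counter() - start

for n, t in timings.items():
    print(f"n = {n:>10,}: {t:.4f} s for 100,000 accesses")
```

Each array size should report a similar total time, since the per-access cost is independent of n.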
[X] Wrong: "Negative indexing makes the program slower because it has to count from the end."
[OK] Correct: NumPy calculates the position directly from the index, so it never counts through elements one by one.
Knowing that negative indexing is fast helps you write clean and efficient code without worrying about speed.
"What if we accessed a slice using negative indexes instead of single elements? How would the time complexity change?"
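As a starting point for that question: basic slicing in NumPy returns a view rather than a copy, so constructing a slice is also O(1) no matter how many elements it covers (materializing a copy of the slice, by contrast, costs O(k) in the slice length k). A minimal check:

```python
import numpy as np

arr = np.arange(1000)

# Slice the last 100 elements using negative indices.
tail = arr[-100:]

# Basic slicing returns a view: no data is copied, so creating
# the slice takes constant time regardless of its length.
assert np.shares_memory(arr, tail)
assert tail[0] == arr[-100]   # 900
assert tail[-1] == arr[-1]    # 999

# Explicitly copying the slice allocates new memory -- this step
# scales with the number of elements copied.
tail_copy = tail.copy()
assert not np.shares_memory(arr, tail_copy)
```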