Array Indexing and Slicing in Python Data Analysis - Time & Space Complexity
We want to understand how fast array indexing and slicing run as the data size grows: how does the time to fetch one element, or a slice of elements, change when the array gets bigger?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.arange(1000)
value = arr[500]        # Indexing: fetch a single element
sub_arr = arr[100:200]  # Slicing: returns a view, not a copy
```
This code gets one element by index and then gets a slice (view) of the array.
Identify the repeated work: loops, recursion, or array traversals.
- Primary operation: Accessing one element by index and creating a slice view.
- How many times: Both indexing and slicing are single constant-time operations (no loops).
Getting one element by index takes the same time no matter the array size.
Getting a slice also takes constant time, regardless of the array size or the slice length, because NumPy basic slicing creates a view rather than copying data.
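The view claim can be checked directly. A minimal sketch using NumPy's `np.shares_memory` and the `.base` attribute:

```python
import numpy as np

arr = np.arange(1000)

# Basic slicing returns a view: it shares memory with the original array.
sub_arr = arr[100:200]
print(np.shares_memory(arr, sub_arr))  # True: no data was copied

# The view's .base attribute points back at the original array.
print(sub_arr.base is arr)  # True

# Writing through the view mutates the original, confirming shared storage.
sub_arr[0] = -1
print(arr[100])  # -1
```

Because only metadata (a pointer, shape, and strides) is created, the cost of taking the slice does not depend on how many elements it spans.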
| Input Size (n) | Approx. Operations for Indexing | Approx. Operations for Slicing |
|---|---|---|
| 10 | 1 | 1 |
| 100 | 1 | 1 |
| 1000 | 1 | 1 |
Pattern observation: Both indexing and slicing times stay constant regardless of array or slice size.
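The constant pattern in the table can be observed empirically with `timeit`. This is a rough sketch; absolute numbers vary by machine, but the times should stay roughly flat as `n` grows:

```python
import timeit
import numpy as np

for n in (10, 100, 1000, 1_000_000):
    arr = np.arange(n)
    # Time 100k repetitions of a single-element access and of a slice.
    t_index = timeit.timeit(lambda: arr[n // 2], number=100_000)
    t_slice = timeit.timeit(lambda: arr[1:n // 2], number=100_000)
    print(f"n={n:>9}: index {t_index:.4f}s  slice {t_slice:.4f}s")
```

If slicing copied data, the slice timings would grow with `n`; instead they remain near the indexing timings even for a million elements.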
Time Complexity: O(1) for indexing, O(1) for slicing
Both indexing and slicing are constant time no matter the array size; NumPy slicing creates a view without copying data.
[X] Wrong: "Slicing an array copies data and takes O(k) time where k is slice length."
[OK] Correct: NumPy basic slicing creates a view sharing memory with the original, so it's O(1) like indexing.
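For contrast, here is a sketch of the case the "wrong" claim actually describes: fancy (integer-array) indexing and explicit `.copy()` calls do copy data, so they cost O(k) in the number of selected elements:

```python
import numpy as np

arr = np.arange(1000)

# Basic slice: a view, O(1), shares memory with the original.
view = arr[100:200]
print(np.shares_memory(arr, view))        # True

# Fancy indexing with an integer array: a copy, O(k) in the index length.
idx = np.arange(100, 200)
fancy = arr[idx]
print(np.shares_memory(arr, fancy))       # False

# An explicit copy of a slice is likewise O(k).
slice_copy = arr[100:200].copy()
print(np.shares_memory(arr, slice_copy))  # False
```

Keeping view-producing basic slicing separate from copy-producing operations is what makes the O(1) answer correct.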
Knowing how indexing and slicing scale helps you write efficient data-handling code and explain data-access costs clearly.
"What if we slice the entire array instead of a small part? How would the time complexity change? (Hint: still O(1))"