Slicing rows and columns in NumPy - Time & Space Complexity
We want to understand how the time to slice parts of a NumPy array changes as the array grows.
Specifically, how does selecting rows or columns affect the work done?
Analyze the time complexity of the following code snippet.
import numpy as np
arr = np.random.rand(1000, 1000)
# Slice first 10 rows and all columns
slice_rows = arr[:10, :]
# Slice all rows and first 10 columns
slice_cols = arr[:, :10]
This code creates a large 2D array, then slices a small subset of its rows and a small subset of its columns.
Identify any loops, recursion, or array traversals that repeat:
- Primary operation: Adjusting array metadata (shape, strides, base pointer).
- How many times: Constant time; no traversal or copying of elements.
Slicing creates a view (not a copy), so time is independent of array size or slice size.
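We can verify this directly: a basic slice reports the original array as its .base, and its strides still point into the original buffer. A minimal sketch:

```python
import numpy as np

arr = np.random.rand(1000, 1000)

slice_rows = arr[:10, :]   # first 10 rows
slice_cols = arr[:, :10]   # first 10 columns

# Both slices are views: their .base attribute is the original array,
# which means no element data was copied.
print(slice_rows.base is arr)  # True
print(slice_cols.base is arr)  # True

# Only metadata changed. The column slice keeps the original row stride:
# consecutive rows are still 1000 * 8 = 8000 bytes apart in memory.
print(arr.strides)         # (8000, 8)
print(slice_cols.strides)  # (8000, 8)
```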
| Input Size (n x n) | Slice Time |
|---|---|
| 10 x 10 | O(1) |
| 100 x 100 | O(1) |
| 1000 x 1000 | O(1) |
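The table can be checked empirically. The sketch below (numbers are machine-dependent; it is an illustration, not a rigorous benchmark) times creating a row slice for several array sizes, and contrasts it with copying the same slice, which does traverse elements:

```python
import time
import numpy as np

def best_of(fn, runs=1000):
    """Return the fastest observed wall-clock time for fn(), in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

for n in (100, 1000, 2000):
    arr = np.random.rand(n, n)
    view_t = best_of(lambda: arr[:10, :])         # view: metadata only
    copy_t = best_of(lambda: arr[:10, :].copy())  # copy: touches 10 * n elements
    print(f"n={n:4d}  view={view_t:.2e}s  copy={copy_t:.2e}s")
```

View-creation time stays roughly flat as n grows, while the copy time grows with the number of elements copied.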
Pattern observation: Constant time regardless of array or slice dimensions.
Time Complexity: O(1)
Slicing returns a view sharing the same data; only metadata is updated, so both the time and the extra space are O(1).
[X] Wrong: "Slicing copies elements, so time grows with slice size (e.g., O(n))."
[OK] Correct: NumPy slicing creates a view without copying data; the view shares the original array's buffer, so no elements are traversed.
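Because a slice shares memory with the original array, writing through the slice changes the original, while an explicit .copy() produces an independent array. A minimal sketch:

```python
import numpy as np

arr = np.zeros((4, 4))

view = arr[:2, :]   # view: shares arr's buffer
view[0, 0] = 99.0   # writes through to arr
print(arr[0, 0])    # 99.0

independent = arr[:2, :].copy()  # copy: owns its own buffer
independent[0, 0] = -1.0
print(arr[0, 0])    # still 99.0 -- the original is untouched
```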
Knowing that slicing is O(1) helps when optimizing data pipelines; call .copy() explicitly only when you need an independent array.
"What if we slice a square block of size k x k (k = n/2)? How would the time complexity change?"
Answer: Still O(1); a k x k block is still a basic slice, so only a view is created, regardless of k.
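We can check the k x k case directly: a square block selected with basic slicing is still a view, so its creation cost does not depend on k. Note the contrast with fancy (integer-array) indexing, which does copy and is O(k^2):

```python
import numpy as np

n = 1000
k = n // 2
arr = np.random.rand(n, n)

# A k x k block via basic slicing: still a view, created in O(1).
block = arr[:k, :k]
print(block.shape)        # (500, 500)
print(block.base is arr)  # True: no elements were copied

# Fancy indexing selects the same block but copies every element.
rows = np.arange(k)
fancy = arr[np.ix_(rows, rows)]
print(fancy.base is arr)  # False: this is a new array
```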