
Slicing rows and columns in NumPy - Time & Space Complexity

Time Complexity: Slicing rows and columns
O(1)
Understanding Time Complexity

We want to understand how the time to slice part of a NumPy array changes as the array grows.

Specifically, how much work does selecting rows or columns actually do?

Scenario Under Consideration

Analyze the time complexity of the following code snippet.

import numpy as np

arr = np.random.rand(1000, 1000)

# Slice first 10 rows and all columns
slice_rows = arr[:10, :]

# Slice all rows and first 10 columns
slice_cols = arr[:, :10]

This code creates a large 2D array and then takes two slices: the first 10 rows (all columns), and the first 10 columns (all rows).

Identify Repeating Operations

Identify the loops, recursion, or array traversals that repeat.

  • Primary operation: Adjusting array metadata (shape, strides, base pointer).
  • How many times: Constant time; no traversal or copying of elements.
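The metadata-only nature of a slice can be inspected directly. A minimal sketch (the variable names `arr` and `view` are illustrative):

```python
import numpy as np

arr = np.random.rand(1000, 1000)
view = arr[:10, :]

# The slice is a view: its .base attribute points back to the original array.
print(view.base is arr)             # True
# Only the shape metadata differs; the underlying buffer is shared.
print(view.shape)                   # (10, 1000)
print(view.strides == arr.strides)  # True: same row/column step sizes in bytes
```

No element is touched when the view is created; only `shape`, `strides`, and the base pointer are set.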
How Execution Grows With Input

Slicing creates a view (not a copy), so time is independent of array size or slice size.

Input Size (n x n)    Slice Time
10 x 10               O(1)
100 x 100             O(1)
1000 x 1000           O(1)

Pattern observation: Constant time regardless of array or slice dimensions.
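One way to check this pattern empirically is `np.shares_memory`, which confirms that no elements are copied at any of the sizes in the table above (a sketch, assuming square arrays as in the table):

```python
import numpy as np

for n in (10, 100, 1000):
    arr = np.random.rand(n, n)
    # Row and column slices both share the original buffer: no per-element work.
    assert np.shares_memory(arr, arr[:n // 2, :])
    assert np.shares_memory(arr, arr[:, :n // 2])
print("all slices are views")
```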

Final Time Complexity

Time Complexity: O(1)

Slicing returns a view sharing the same data; only metadata is updated in constant time.

Common Mistake

[X] Wrong: "Slicing copies elements, so time grows with slice size (e.g., O(n))."

[OK] Correct: NumPy basic slicing creates a view without copying data; the view shares the original buffer, and a copy is made only when you request one explicitly (e.g., with .copy()).
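The view/copy distinction is directly observable: writing through a slice changes the original array, while an explicit `.copy()` does not (a minimal sketch):

```python
import numpy as np

arr = np.zeros((3, 3))

view = arr[:1, :]         # view: shares arr's buffer
view[0, 0] = 7.0
print(arr[0, 0])          # 7.0 -- the original changed

snap = arr[:1, :].copy()  # independent copy: O(k) for k elements copied
snap[0, 1] = 9.0
print(arr[0, 1])          # 0.0 -- the original is untouched
```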

Interview Connect

Knowing slicing is O(1) helps you optimize data pipelines; call .copy() explicitly only when you need an independent array.

Self-Check

"What if we slice a square block of size k x k (k = n/2)? How would the time complexity change?"

Answer: Still O(1); view creation is always constant time.
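This can be verified for the k x k case as well: even a half-size block is a view, not a copy (a sketch, assuming k = n/2):

```python
import numpy as np

n = 1000
k = n // 2
arr = np.random.rand(n, n)

block = arr[:k, :k]  # k x k block: still just metadata, no element copy
assert np.shares_memory(arr, block)
assert block.base is arr
print("k x k slice is a constant-time view")
```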