np.broadcast_to() for explicit broadcasting in NumPy - Time & Space Complexity
We want to understand how the time needed to run np.broadcast_to() changes as the broadcast shape grows.
Specifically, how does the operation scale when we expand an array to a larger shape?
Analyze the time complexity of the following code snippet.
```python
import numpy as np

arr = np.array([1, 2, 3])
broadcasted = np.broadcast_to(arr, (4, 3))
```
This code takes a 1D array of length 3 and broadcasts it to a 2D array with 4 rows and 3 columns.
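A quick way to see that only metadata changes is to inspect the view's shape and strides (a sketch; exact stride byte counts depend on the platform's default integer dtype, so we derive them from `arr.strides` instead of hard-coding them):

```python
import numpy as np

arr = np.array([1, 2, 3])
broadcasted = np.broadcast_to(arr, (4, 3))

print(broadcasted.shape)  # (4, 3)

# The new leading axis has stride 0: every "row" of the view
# re-reads the same 3 underlying elements, so nothing was copied.
print(broadcasted.strides == (0, arr.strides[0]))  # True
```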
Look for repeated steps that take time.
- Primary operation: Computing the shape and strides for the new readonly view.
- How many times: Once; the cost is independent of the output shape's size.
As the output size grows, the operation time does not grow.
| Output Size (n) | Approx. Operations |
|---|---|
| 10 | Constant (~1) |
| 100 | Constant (~1) |
| 1000 | Constant (~1) |
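The constant-time pattern in the table can be sanity-checked by broadcasting to a huge shape: the call returns immediately because no element storage is allocated (a sketch; the shape `(10**8, 3)` is an arbitrary large choice):

```python
import numpy as np

arr = np.array([1, 2, 3])

# ~3 * 10**8 logical elements, yet no new buffer is allocated:
# the view still points at arr's original 3 elements.
big = np.broadcast_to(arr, (10**8, 3))

print(big.shape)                   # (100000000, 3)
print(np.shares_memory(arr, big))  # True -- same underlying data
```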
Pattern observation: The time is constant, independent of the total number of elements in the broadcasted shape.
Time Complexity: O(1)
This means the time to broadcast is constant regardless of the size of the output array. The extra space is likewise O(1): only the view's shape and stride metadata are stored, never the broadcast elements themselves.
[X] Wrong: "Broadcasting copies/materializes data, so time grows with output size."
[OK] Correct: Broadcasting creates a lazy readonly view; only shape and stride metadata are set up, in constant time, with no data copying.
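A small sketch confirms both halves of the correct answer: the view shares memory with the source, and it is readonly, so accidental writes fail loudly:

```python
import numpy as np

arr = np.array([1, 2, 3])
view = np.broadcast_to(arr, (4, 3))

# The view shares arr's buffer -- nothing was copied.
print(np.shares_memory(arr, view))  # True

# Writing through the view is disallowed; NumPy raises ValueError.
try:
    view[0, 0] = 99
except ValueError as e:
    print("readonly:", e)

# Mutating the source is visible through every broadcast "row",
# since all rows alias the same 3 elements.
arr[0] = 42
print(view[0, 0], view[3, 0])  # 42 42
```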
Understanding how broadcasting scales helps you reason about performance when working with large arrays in data science tasks.
What if we broadcasted to a shape with one very large dimension and one very small dimension? How would the time complexity change?