np.clip() for bounding values in NumPy - Time & Space Complexity
We want to understand how the time taken by np.clip() changes as the size of the input array grows.
Specifically, how does the work increase when we have more numbers to bound?
Analyze the time complexity of the following code snippet.
import numpy as np

arr = np.random.randn(1000)                # 1000 samples from a standard normal distribution
bounded = np.clip(arr, a_min=-1, a_max=1)  # bound each value to the range [-1, 1]
This code creates an array of 1000 random numbers and then limits each value to be between -1 and 1.
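Under the hood, clipping is equivalent to composing the element-wise maximum and minimum, which the sketch below (an illustration, not the original snippet) verifies:

```python
import numpy as np

arr = np.random.randn(1000)
bounded = np.clip(arr, a_min=-1, a_max=1)

# np.clip(arr, lo, hi) behaves like taking the element-wise max with lo,
# then the element-wise min with hi -- each an O(n) pass over the array.
also_bounded = np.minimum(np.maximum(arr, -1), 1)
assert np.array_equal(bounded, also_bounded)
```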
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: Checking and bounding each element in the array.
- How many times: Once for every element in the input array.
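The per-element work can be made explicit with a plain-Python version (a sketch for illustration only; NumPy actually performs the loop in compiled C, which is much faster but does the same amount of work):

```python
import numpy as np

def clip_loop(values, lo, hi):
    """Bound each value to [lo, hi] with one explicit check per element."""
    out = []
    for v in values:        # runs exactly len(values) times -> O(n)
        if v < lo:
            out.append(lo)  # below the lower bound: replace with lo
        elif v > hi:
            out.append(hi)  # above the upper bound: replace with hi
        else:
            out.append(v)   # already in range: keep as-is
    return np.array(out)

arr = np.random.randn(1000)
assert np.array_equal(clip_loop(arr, -1, 1), np.clip(arr, -1, 1))
```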
As the array size grows, the number of elements to check and bound grows the same way.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks and bounds |
| 100 | About 100 checks and bounds |
| 1000 | About 1000 checks and bounds |
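The table can be confirmed with an instrumented clip that counts its check-and-bound operations (`clip_counting` is a hypothetical helper written for this demonstration, not a NumPy function):

```python
import numpy as np

def clip_counting(values, lo, hi):
    """Clip each value to [lo, hi] and count how many elements were checked."""
    checks = 0
    out = np.empty_like(values, dtype=float)
    for i, v in enumerate(values):
        checks += 1  # one check-and-bound per element
        out[i] = lo if v < lo else hi if v > hi else v
    return out, checks

for n in (10, 100, 1000):
    arr = np.random.randn(n)
    _, checks = clip_counting(arr, -1, 1)
    assert checks == n  # operations grow in lockstep with the input size
```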
Pattern observation: The work grows directly in proportion to the number of elements.
Time Complexity: O(n)
This means the time taken grows linearly with the number of elements to bound.
[X] Wrong: "np.clip() runs in constant time no matter the array size because it's a single function call."
[OK] Correct: Even though it is a single function call, np.clip() still checks and bounds every element internally (in optimized C code), so the time grows with the number of elements.
Knowing how functions like np.clip() scale helps you explain performance clearly and choose the right tools for big data.
"What if we used np.clip() on a 2D array instead of 1D? How would the time complexity change?"