np.random.rand() and random arrays in NumPy - Time & Space Complexity
We want to understand how the time to create random arrays grows as the array size increases.
How does the work done by np.random.rand() change when we ask for bigger arrays?
Analyze the time complexity of the following code snippet.
import numpy as np
# Create a random array of size n
n = 1000
random_array = np.random.rand(n)
This code creates a one-dimensional array of length n filled with random numbers drawn uniformly from the half-open interval [0, 1), i.e. 0 is possible but 1 is not.
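To confirm those properties, a quick sketch (the exact values differ on every run, but the shape and the [0, 1) bounds always hold):

```python
import numpy as np

n = 1000
random_array = np.random.rand(n)

# One-dimensional array of length n
print(random_array.shape)  # (1000,)

# Every value lies in the half-open interval [0, 1)
print(random_array.min() >= 0.0 and random_array.max() < 1.0)  # True
```

Newer NumPy code typically uses `np.random.default_rng().random(n)` instead of the legacy `np.random.rand`; it has the same O(n) scaling behavior.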
Identify the loops, recursion, or array traversals that repeat work.
- Primary operation: Generating each random number independently.
- How many times: Exactly n times, once for each element in the array.
As the array size n grows, the number of random numbers generated grows linearly.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | 10 random number generations |
| 100 | 100 random number generations |
| 1000 | 1000 random number generations |
Pattern observation: Doubling the size doubles the work needed.
Time Complexity: O(n)
This means the time to create the array grows directly in proportion to the number of elements requested.
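A rough timing sketch can make the linear growth visible. Absolute times depend on your machine, so treat the printed numbers as illustrative; the point is that doubling n roughly doubles the elapsed time:

```python
import time
import numpy as np

# Time array creation at doubling sizes; elapsed time should
# roughly double at each step, consistent with O(n).
for n in (1_000_000, 2_000_000, 4_000_000):
    start = time.perf_counter()
    np.random.rand(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9,}: {elapsed:.4f} s")
```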
[X] Wrong: "Generating a random array is constant time because it's just one function call."
[OK] Correct: The function call creates many random numbers, one for each element, so the time grows with the array size.
Understanding how random data generation scales helps you reason about data preparation steps in real projects and interviews.
"What if we generate a 2D array with shape (n, n) using np.random.rand(n, n)? How would the time complexity change?"