Random variable generation in SciPy - Time & Space Complexity
When we generate random numbers with SciPy, we want to know how the time to produce them changes as we request more. The question: how does the running time grow as we generate more random values?
Analyze the time complexity of the following code snippet.
```python
from scipy.stats import norm

# Generate n random values from a standard normal distribution
def generate_random_values(n):
    samples = norm.rvs(size=n)
    return samples

values = generate_random_values(1000)
```
This code generates n random numbers from a normal distribution using SciPy.
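As a quick sanity check (a minimal usage sketch, assuming SciPy is installed), the function returns exactly as many values as requested:

```python
from scipy.stats import norm

samples = norm.rvs(size=5)
print(samples.shape)  # (5,)
```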
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: generating each random number internally, one at a time.
- How many times: exactly n times, once for each requested random value.
As we request more random numbers, the time grows roughly in direct proportion.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 random number generations |
| 100 | About 100 random number generations |
| 1000 | About 1000 random number generations |
Pattern observation: Doubling the number of values roughly doubles the work done.
Time Complexity: O(n)
This means the time to generate random numbers grows linearly with how many numbers you want.
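The linear growth can be checked empirically with a quick timing sketch (assuming SciPy is installed; the exact timings below depend on your machine and are only illustrative):

```python
import time
from scipy.stats import norm

# Time norm.rvs for increasing n; each step is 10x larger,
# so the elapsed time should grow by roughly 10x as well.
for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    norm.rvs(size=n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9}: {elapsed:.4f} s")
```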
[X] Wrong: "Generating 1000 random numbers takes the same time as generating 10 because computers are fast."
[OK] Correct: Even though computers are fast, each random number requires work, so more numbers mean more time.
Understanding how time grows with input size helps you explain performance clearly and shows you know how algorithms scale in real tasks.
"What if we generate random numbers in batches of 100 instead of all at once? How would the time complexity change?"