Random data generation in Python - Time & Space Complexity
When we generate random data, we want to know how long it takes as we request more of it. The question to ask: how does the running time grow as we create more random items?
Analyze the time complexity of the following code snippet.
```python
import random

def generate_random_list(n):
    result = []
    for _ in range(n):
        result.append(random.randint(1, 100))
    return result
```
This code creates a list of n random numbers between 1 and 100.
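To see what the snippet produces, here is a quick self-contained run (the actual values vary from run to run, since they are random):

```python
import random

def generate_random_list(n):
    result = []
    for _ in range(n):
        result.append(random.randint(1, 100))
    return result

sample = generate_random_list(5)
print(sample)  # five random integers, each between 1 and 100
assert len(sample) == 5
assert all(1 <= x <= 100 for x in sample)
```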
Identify the loops, recursion, or array traversals that repeat.
- Primary operation: the loop that appends a random number, which runs n times.
- How many times: exactly once for each number requested, so n times in total.
Each time we ask for one more random number, the work grows by one step.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 random picks and adds |
| 100 | About 100 random picks and adds |
| 1000 | About 1000 random picks and adds |
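The pattern in the table can be checked empirically with a rough timing sketch. Exact timings depend on your machine, but the elapsed times should grow roughly in proportion to n:

```python
import random
import time

def generate_random_list(n):
    result = []
    for _ in range(n):
        result.append(random.randint(1, 100))
    return result

# Time the function at increasing input sizes.
for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    generate_random_list(n)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9}: {elapsed:.4f} s")
```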
Pattern observation: The work grows directly with the number of items requested.
Time Complexity: O(n)
This means the time to generate random data grows linearly: asking for twice as many numbers takes roughly twice as long.
[X] Wrong: "Generating random numbers is instant and does not depend on how many we want."
[OK] Correct: Each random number takes a small amount of time, so more numbers mean more total time.
Understanding how time grows with data size helps you write efficient code and explain your thinking clearly in interviews.
"What if we generated random numbers inside a nested loop? How would the time complexity change?"