Sieve of Eratosthenes - Find All Primes in Python (DSA): Time & Space Complexity
We want to understand how the time needed to find all prime numbers grows as the input number gets bigger.
Specifically, how does the work change when we increase the limit up to which we find primes?
Analyze the time complexity of the following code snippet.
```python
def sieve_of_eratosthenes(n):
    # Assumes n >= 1 so indices 0 and 1 exist.
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    p = 2
    while p * p <= n:  # only need to sieve up to sqrt(n)
        if primes[p]:
            # Mark multiples of p, starting at p*p; smaller multiples
            # were already marked by smaller primes.
            for i in range(p * p, n + 1, p):
                primes[i] = False
        p += 1
    return [x for x in range(n + 1) if primes[x]]
```
This code finds all prime numbers up to n by marking multiples of each prime as not prime.
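As a quick sanity check, calling the function for a small limit returns exactly the primes we expect (the function is repeated here so the snippet runs on its own):

```python
def sieve_of_eratosthenes(n):
    # Same sieve as above: mark multiples of each prime as not prime.
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    p = 2
    while p * p <= n:
        if primes[p]:
            for i in range(p * p, n + 1, p):
                primes[i] = False
        p += 1
    return [x for x in range(n + 1) if primes[x]]

print(sieve_of_eratosthenes(30))
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```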
Identify the repeated work: the loops and array traversals (there is no recursion here).
- Primary operation: Marking multiples of each prime number as not prime inside the inner for-loop.
- How many times: The outer while-loop runs roughly up to the square root of n, and the inner loop marks multiples starting from p squared up to n.
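These counts can be measured directly with an instrumented copy of the sieve (the `count_marks` name and `count` variable are illustrative additions, not part of the original snippet):

```python
def count_marks(n):
    """Count how many times primes[i] = False executes in the sieve."""
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    count = 0
    p = 2
    while p * p <= n:
        if primes[p]:
            for i in range(p * p, n + 1, p):
                primes[i] = False
                count += 1  # one marking operation
        p += 1
    return count

for n in (10, 100, 1000):
    print(n, count_marks(n))
# → 10 5, 100 104, 1000 1411 (one pair per line)
```

For n = 10 the inner loop fires only 5 times (multiples of 2 from 4 to 10, plus 9 for p = 3), which is why the sieve feels so cheap for small inputs.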
As n grows, the number of operations grows a bit faster than n but much slower than n squared.
| Input Size (n) | Approx. Operations (marking + array setup + final scan) |
|---|---|
| 10 | About 25 (only 5 marking steps) |
| 100 | About 300 (about 100 marking steps) |
| 1000 | About 3,400 (about 1,400 marking steps) |
Pattern observation: The operations grow roughly proportional to n times the logarithm of the logarithm of n, which is slower than n log n but faster than linear.
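This pattern can be checked numerically: the number of marking operations divided by n · log(log n) stays roughly constant as n grows (the `marks` counter below is illustrative instrumentation, not part of the original snippet):

```python
import math

def marks(n):
    """Count the primes[i] = False assignments performed by the sieve."""
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    count = 0
    p = 2
    while p * p <= n:
        if primes[p]:
            for i in range(p * p, n + 1, p):
                primes[i] = False
                count += 1
        p += 1
    return count

for n in (100, 1000, 10000, 100000):
    ratio = marks(n) / (n * math.log(math.log(n)))
    print(f"n={n:>6}  marks={marks(n):>7}  marks / (n log log n) = {ratio:.2f}")
```

If the growth were truly linear the ratio would shrink toward zero, and if it were n log n the ratio would grow without bound; a near-constant ratio is the signature of O(n log log n).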
Time Complexity: O(n log log n)
Space Complexity: O(n), for the boolean `primes` array of length n + 1.
This means the time needed grows a bit faster than n but much slower than n squared, making the sieve efficient even for large n.
[X] Wrong: "The algorithm checks every number against all smaller numbers, so it must be O(n²)."
[OK] Correct: The algorithm only marks multiples of primes, skipping many numbers quickly, so it does much less work than checking all pairs.
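The gap is easy to quantify. Checking every number from 2 to n against every smaller candidate would cost roughly n²/2 comparisons, while the sieve performs far fewer markings (the `sieve_marks` counter is an illustrative instrumentation):

```python
def sieve_marks(n):
    """Count primes[i] = False assignments performed by the sieve."""
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    count = 0
    p = 2
    while p * p <= n:
        if primes[p]:
            for i in range(p * p, n + 1, p):
                primes[i] = False
                count += 1
        p += 1
    return count

n = 1000
pair_checks = (n - 1) * (n - 2) // 2  # every x in 2..n vs every smaller d in 2..x-1
print(pair_checks, "vs", sieve_marks(n))
# → 498501 vs 1411
```

At n = 1000 the sieve's ~1,400 markings are over 300 times fewer than the ~500,000 all-pairs checks the misconception assumes.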
Understanding this algorithm's time complexity shows you can analyze efficient number-based algorithms, a useful skill for many coding challenges.
"What if we started marking multiples from 2*p instead of p*p? How would the time complexity change?"
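One way to explore this question is to count markings in a variant sieve that starts at 2*p instead of p*p (the `from_square` flag and `count_marks` name are illustrative). Correctness is unchanged either way, because multiples below p*p were already marked by smaller primes; the sketch below suggests the extra work only worsens the constant factor, so the complexity stays O(n log log n):

```python
def count_marks(n, from_square=True):
    """Count primes[i] = False assignments; optionally start marking at 2*p."""
    primes = [True] * (n + 1)
    primes[0], primes[1] = False, False
    count = 0
    p = 2
    while p * p <= n:
        if primes[p]:
            start = p * p if from_square else 2 * p  # the only difference
            for i in range(start, n + 1, p):
                primes[i] = False
                count += 1
        p += 1
    return count

for n in (100, 1000, 10000):
    print(n, count_marks(n, from_square=True), count_marks(n, from_square=False))
```

For n = 100, starting at 2*p performs 113 markings instead of 104: a modest constant-factor penalty, not a change in growth rate.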