Sieve of Eratosthenes (Find All Primes) in C: Time & Space Complexity
We want to understand how the time needed to find all prime numbers grows as the input number gets bigger.
How does the number of steps change when we increase the limit for primes?
Analyze the time complexity of the following code snippet.
```c
#include <stdbool.h>

// Mark every prime up to n using the Sieve of Eratosthenes.
void sieve(int n) {
    bool prime[n + 1];  // prime[i] == true means "i is (still considered) prime"
    for (int i = 0; i <= n; i++)
        prime[i] = true;
    prime[0] = prime[1] = false;
    for (int p = 2; p * p <= n; p++) {
        if (prime[p]) {
            // Cross out every multiple of p, starting at p*p.
            for (int i = p * p; i <= n; i += p)
                prime[i] = false;
        }
    }
}
```
This code finds all primes up to n by crossing out the multiples of each prime; every number that remains marked true at the end is prime.
Identify the loops, recursion, or array traversals that do repeated work.
- Primary operation: The inner loop that marks multiples of each prime as not prime.
- How many times: For each prime p, the inner loop runs roughly n/p times, starting at p*p and stepping by p up to n.
As n grows, the number of operations grows a bit faster than n but much less than n squared.
| Input Size (n) | Approx. Operations (crossing out + initialization) |
|---|---|
| 10 | 5 + 11 ≈ 16 steps |
| 100 | 104 + 101 ≈ 205 steps |
| 1000 | 1,411 + 1,001 ≈ 2,412 steps |
Pattern observation: The operations grow roughly proportional to n times the logarithm of the logarithm of n, which is slower than n log n but faster than n.
Time Complexity: O(n log log n)
This means the time grows a bit faster than n but much slower than n squared, making the sieve efficient even for large n.
Space Complexity: O(n), for the boolean array of n + 1 flags.
[X] Wrong: "The inner loop runs n times for each number, so the time is O(n²)."
[OK] Correct: The inner loop runs fewer times for larger primes because it starts at p squared and skips multiples, so total work is much less than n squared.
Understanding this time complexity shows you can analyze nested loops with decreasing work, a useful skill for many algorithm problems.
"What if we started marking multiples from 2*p instead of p*p? How would the time complexity change?"
