Local optimization (peephole) in Compiler Design - Time & Space Complexity
Local optimization, also called peephole optimization, rewrites small, localized sequences of instructions into faster equivalents.
We want to know how the time spent optimizing grows as the size of the code grows.
Analyze the time complexity of the following peephole optimization code snippet.
```python
# Slide a fixed-size window across the instruction list.
for i in range(len(instructions) - window_size + 1):
    window = instructions[i:i+window_size]
    if can_optimize(window):              # pattern match on the window
        optimized = optimize(window)      # produce a faster replacement
        instructions[i:i+window_size] = optimized
```
This code looks at small groups of instructions and tries to replace them with faster ones.
- Primary operation: Sliding over the instruction list with a fixed-size window.
- How many times: Once per window position, i.e. `len(instructions) - window_size + 1` times, which is about n for a fixed window size.
As the number of instructions grows, the optimizer checks more windows, but each window has a small, fixed size, so each check takes constant time.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 checks |
| 100 | About 100 checks |
| 1000 | About 1000 checks |
Pattern observation: The number of checks grows directly with the number of instructions.
Time Complexity: O(n)
This means optimization time grows linearly with the length of the code, assuming `can_optimize` and `optimize` each run in constant time for a fixed window size.
[X] Wrong: "Peephole optimization takes constant time no matter how big the code is."
[OK] Correct: The optimizer must check each part of the code, so more code means more checks.
Understanding how local optimization scales helps you explain compiler efficiency clearly and confidently.
"What if the window size grew with the input size? How would the time complexity change?"
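One way to explore this question empirically (my own illustration, using the assumption that the window grows as `w = n // 2`): examining a window of size w costs O(w) for the slice and pattern check, so the total work is about `(n - w + 1) * w`, which for `w = n / 2` is roughly n²/4, i.e. O(n²).

```python
def checks(n, window_size):
    # Number of window positions examined for n instructions.
    return max(0, n - window_size + 1)

# Fixed window: the check count grows linearly with n.
print([checks(n, 3) for n in (10, 100, 1000)])  # [8, 98, 998]

def work(n):
    # Assumed scenario: window size grows with the input (w = n // 2),
    # and each check touches all w elements of its window.
    w = n // 2
    return checks(n, w) * w

# Total element-touches now grow roughly like n^2 / 4.
print([work(n) for n in (10, 100, 1000)])  # [30, 2550, 250500]
```

Under these assumptions, the linear O(n) bound no longer holds once the window size scales with the input.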