Left factoring in Compiler Design - Time & Space Complexity
Left factoring is a grammar transformation used in compiler design to rewrite productions so that a top-down parser can choose between alternatives using a single lookahead token.
We want to understand how the time to apply left factoring grows as the grammar size increases.
Consider the time complexity of the following left-factoring process:
```
for each non-terminal N in grammar:
    find common prefixes among N's productions
    if a common prefix is found:
        create a new non-terminal N'
        rewrite the productions, factoring out the prefix
```
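The pseudocode above can be sketched concretely. This is a minimal one-pass illustration, not a production implementation: the grammar representation (a dict mapping each non-terminal to a list of productions, each a tuple of symbols) and the names `left_factor` and `common_prefix` are assumptions for this example.

```python
from itertools import combinations

def common_prefix(p, q):
    """Length of the longest common prefix of two productions."""
    n = 0
    for a, b in zip(p, q):
        if a != b:
            break
        n += 1
    return n

def left_factor(grammar):
    """One pass of left factoring.

    For each non-terminal, compare all pairs of its productions
    (the quadratic step), pick the longest shared prefix, and
    factor it out into a fresh non-terminal.
    """
    result = {}
    for nt, prods in grammar.items():
        # Compare every pair of productions to find the longest prefix.
        best_prefix = ()
        for p, q in combinations(prods, 2):
            k = common_prefix(p, q)
            if k > len(best_prefix):
                best_prefix = p[:k]
        if not best_prefix:
            result[nt] = list(prods)
            continue
        # Simplification: assumes nt + "'" is not already a non-terminal.
        new_nt = nt + "'"
        factored, rest = [], []
        for p in prods:
            if p[:len(best_prefix)] == best_prefix:
                tail = p[len(best_prefix):]
                # An empty tail becomes an epsilon production.
                factored.append(tail if tail else ("ε",))
            else:
                rest.append(p)
        result[nt] = rest + [best_prefix + (new_nt,)]
        result[new_nt] = factored
    return result
```

On the textbook example A → aB | aC, this yields A → aA' with A' → B | C, which is the standard left-factored form.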
This process finds common prefixes among a non-terminal's productions and rewrites them to remove the nondeterminism that would force a top-down parser to guess between alternatives. To analyze its cost, look for the repeated comparisons and rewrites it performs on the grammar.
- Primary operation: Comparing productions to find common prefixes.
- How many times: For each non-terminal, all pairs of its productions are compared.
With n productions, that is up to n(n-1)/2 pairwise comparisons, so the work grows quickly as productions are added.
| Input Size (n productions) | Pairwise Comparisons (n(n-1)/2) |
|---|---|
| 10 | 45 |
| 100 | 4,950 |
| 1000 | 499,500 |
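The table values follow directly from counting unordered pairs; a quick check (the helper name `pair_comparisons` is just for this example):

```python
def pair_comparisons(n):
    """Number of unordered pairs among n productions: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pair_comparisons(n))  # 45, 4950, 499500
```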
Pattern observation: The number of comparisons grows roughly with the square of the number of productions.
Time Complexity: O(n^2)
This means the time to left factor grows quadratically: doubling the number of productions roughly quadruples the work.
[X] Wrong: "Left factoring takes linear time because it just scans each production once."
[OK] Correct: It compares pairs of productions to find common prefixes, which requires far more checks than a single scan.
Understanding how left factoring scales helps you explain parser design choices clearly and shows your grasp of grammar processing.
"What if the grammar had a fixed maximum number of productions per non-terminal? How would that affect the time complexity?"
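One way to explore that closing question: if every non-terminal has at most k productions, each contributes at most k(k-1)/2 comparisons, a constant, so the total work grows linearly with the number of non-terminals. A quick sketch (the name `total_comparisons` is illustrative):

```python
def total_comparisons(prods_per_nonterminal):
    """Sum of pairwise comparisons over all non-terminals,
    given the production count of each one."""
    return sum(k * (k - 1) // 2 for k in prods_per_nonterminal)

# 1000 non-terminals, each capped at 4 productions:
# each contributes at most 4*3/2 = 6 comparisons,
# so the total scales linearly with the number of non-terminals.
print(total_comparisons([4] * 1000))  # 6000
```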