Context-free grammars in Compiler Design - Time & Space Complexity
We want to understand how the time to parse a string against a context-free grammar grows as the input gets longer.
Specifically, how does recognizing or generating strings from the grammar scale with input length?
Analyze the time complexity of a simple top-down parser for a context-free grammar.
```
function parse(symbol, input, pos) {
  if (symbol is a terminal) {
    // report the position after the match, or no positions on failure
    return input[pos] == symbol ? {pos + 1} : {}
  }
  results = {}
  for each production of symbol {
    // thread every position reached so far through the production's parts
    positions = {pos}
    for each part of production {
      positions = union of parse(part, input, p) for each p in positions
    }
    results = results ∪ positions
  }
  return results
}
```
This code tries to match input strings by recursively expanding grammar symbols.
Look for repeated parsing calls and expansions.
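To make the pseudocode concrete, here is a minimal runnable recognizer in Python. The grammar encoding (a dict mapping each nonterminal to a list of productions, each a tuple of symbols) and the names `parse` and `accepts` are illustrative choices, and the sketch assumes the grammar has no left recursion (a left-recursive rule would recurse forever):

```python
def parse(symbol, text, pos, grammar):
    """Return the set of positions reachable after matching symbol at pos."""
    if symbol not in grammar:                    # terminal: match one character
        return {pos + 1} if text[pos:pos + 1] == symbol else set()
    ends = set()
    for production in grammar[symbol]:           # try every production
        positions = {pos}
        for part in production:                  # thread positions through each part
            positions = {e for p in positions
                           for e in parse(part, text, p, grammar)}
        ends |= positions
    return ends

def accepts(grammar, start, text):
    """Accept if the start symbol can consume the entire input."""
    return len(text) in parse(start, text, 0, grammar)

# Example grammar: S -> a S b | epsilon, the language { a^n b^n }
grammar = {"S": [("a", "S", "b"), ()]}
```

Returning a set of end positions, rather than a single true/false, is what lets one production's parts compose: each part continues from every position the previous part could reach.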
- Primary operation: Recursive calls to parse each grammar symbol and its productions.
- How many times: Potentially many times for each substring of input, depending on grammar rules.
As input length grows, the parser tries more ways to match parts of the input with grammar rules.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | Hundreds to thousands of recursive checks |
| 100 | Thousands to millions of checks |
| 1000 | Millions to billions of checks |
Pattern observation: in the worst case the number of operations roughly doubles with each additional input symbol, the signature of exponential growth.
Time Complexity: O(2^n) in the worst case for a naive backtracking parser
This means the time can grow exponentially with input length, making parsing slow for large inputs without optimization.
[X] Wrong: "Parsing a context-free grammar always takes linear time because it just reads the input once."
[OK] Correct: Recursive parsing can try many combinations of rules and substrings, causing exponential growth in work.
Understanding how parsing time grows helps you explain why some grammars are harder to parse and why efficient algorithms matter.
"What if we added memoization to remember previous parse results? How would the time complexity change?"