Why is left factoring applied to a grammar in compiler design?
Think about what problem left factoring solves in top-down parsers.
Left factoring rewrites productions that share a common prefix so that the choice between them is deferred until the distinguishing input is seen. This removes the nondeterminism that prevents a predictive top-down parser from selecting a production with a single token of lookahead; strictly speaking, it resolves this prediction conflict rather than ambiguity in general.
Given the grammar: S -> if E then S else S | if E then S | other
Which is the correct left-factored form?
Look for the common prefix in the productions and factor it out.
The productions 'if E then S else S' and 'if E then S' share the common prefix 'if E then S'. Left factoring extracts this prefix and introduces a new non-terminal S' to handle the optional 'else S': S -> if E then S S' | other, with S' -> else S | ε.
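To see why the factored form suits predictive parsing, here is a minimal recursive-descent sketch of it in Python. It is an illustration, not a full parser: tokens are pre-lexed strings, and the condition E is abstracted as the single token 'e'.

```python
# Factored grammar:
#   S  -> if E then S S' | other
#   S' -> else S | ε
# One token of lookahead (peek) is enough to pick each production.

def parse(tokens):
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def eat(tok):
        if peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
        pos[0] += 1

    def parse_S():
        if peek() == "if":          # lookahead 'if' selects the factored branch
            eat("if"); eat("e"); eat("then")
            parse_S()
            parse_S_prime()
        else:
            eat("other")

    def parse_S_prime():
        if peek() == "else":        # 'else S' alternative
            eat("else")
            parse_S()
        # otherwise take the ε-production: consume nothing

    parse_S()
    if pos[0] != len(tokens):
        raise SyntaxError("trailing input")
    return True
```

Note that parse_S_prime never has to guess: it takes the 'else' branch exactly when the next token is 'else', which is the deferred decision that left factoring made possible.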
Consider the grammar: A -> a B | a C | d
Which problem does this grammar present for a predictive parser?
Check if productions start with the same symbol.
The productions 'a B' and 'a C' share the prefix 'a', so on seeing the token 'a' a predictive parser cannot decide which production to apply. Left factoring resolves the conflict: A -> a A' | d, with A' -> B | C.
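The transformation itself can be sketched as a small Python function. This is a simplification under stated assumptions: productions are tuples of symbols, ε is the empty tuple, and only a single shared leading symbol is factored (a full algorithm factors the longest common prefix and iterates).

```python
from collections import defaultdict

def left_factor(nonterminal, productions):
    """Factor out a shared first symbol among a non-terminal's productions.

    productions: list of symbol tuples, e.g. [('a', 'B'), ('a', 'C'), ('d',)].
    Returns a dict mapping each (possibly new) non-terminal to its productions.
    Simplified sketch: factors one leading symbol and assumes at most one
    conflicting group, so a single fresh name like A' suffices.
    """
    groups = defaultdict(list)
    for prod in productions:
        groups[prod[0]].append(prod[1:])   # group alternatives by first symbol

    result = {nonterminal: []}
    for first, tails in groups.items():
        if len(tails) == 1:
            # No conflict for this first symbol: keep the production as-is.
            result[nonterminal].append((first,) + tails[0])
        else:
            # Conflict: emit A -> first A' and move the tails to A'.
            fresh = nonterminal + "'"
            result[nonterminal].append((first, fresh))
            result[fresh] = tails          # an empty tuple here means ε
    return result
```

Applied to A -> a B | a C | d, it produces A -> a A' | d and A' -> B | C, matching the explanation above.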
Which statement correctly distinguishes left factoring from left recursion removal?
Consider what each technique targets in grammar transformations.
Left factoring targets productions with common prefixes so a predictive parser can choose among them, while left recursion removal targets productions where a non-terminal appears as the leftmost symbol of its own right-hand side, which would send a top-down parser into infinite recursion.
After applying left factoring to a grammar, what is the effect on the FIRST sets of the affected non-terminals?
Think about whether left factoring changes the terminals that can appear first.
Left factoring restructures productions by factoring out common prefixes; it neither adds nor removes any terminal that can begin a derivation, so the FIRST set of each original non-terminal is unchanged. The newly introduced non-terminals (such as S') do acquire FIRST sets of their own.
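That claim can be checked mechanically with a small fixed-point FIRST computation. The encoding is my own: a grammar maps non-terminals to lists of symbol tuples, any symbol not in the map is a terminal, and ε is written as the empty string in the result sets.

```python
def first_sets(grammar):
    """Compute FIRST sets by iterating to a fixed point.

    grammar: {non-terminal: [tuple of symbols, ...]}, ε = empty tuple.
    Returns {non-terminal: set of terminals, with "" marking ε}.
    """
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                before = len(first[nt])
                nullable_prefix = True
                for sym in prod:
                    if sym in grammar:                 # non-terminal
                        first[nt] |= first[sym] - {""}
                        if "" not in first[sym]:
                            nullable_prefix = False
                            break
                    else:                              # terminal
                        first[nt].add(sym)
                        nullable_prefix = False
                        break
                if nullable_prefix:
                    first[nt].add("")                  # production derives ε
                if len(first[nt]) != before:
                    changed = True
    return first

# The if/then/else grammar before and after left factoring
# ('if', 'E', 'then', 'else', 'other' are all terminals here).
original = {"S": [("if", "E", "then", "S", "else", "S"),
                  ("if", "E", "then", "S"),
                  ("other",)]}
factored = {"S": [("if", "E", "then", "S", "S'"), ("other",)],
            "S'": [("else", "S"), ()]}
```

Both versions give FIRST(S) = {if, other}, while the factored grammar additionally has FIRST(S') = {else, ε}, consistent with the explanation above.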