
5. The Bison Parser Algorithm

As Bison reads tokens, it pushes them onto a stack along with their semantic values. The stack is called the parser stack. Pushing a token is traditionally called shifting.

For example, suppose the infix calculator has read ‘1 + 5 *’, with a ‘3’ to come. The stack will have four elements, one for each token that was shifted.

But the stack does not always have an element for each token read. When the last n tokens and groupings shifted match the components of a grammar rule, they can be combined according to that rule. This is called reduction. Those tokens and groupings are replaced on the stack by a single grouping whose symbol is the result (left hand side) of that rule. Running the rule’s action is part of the process of reduction, because this is what computes the semantic value of the resulting grouping.

For example, if the infix calculator’s parser stack contains this:

 
1 + 5 * 3

and the next input token is a newline character, then the last three elements can be reduced to 15 via the rule:

 
expr: expr '*' expr;

Then the stack contains just these three elements:

 
1 + 15

At this point, another reduction can be made, resulting in the single value 16. Then the newline token can be shifted.
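
For concreteness, here is a minimal grammar in the spirit of that calculator (this particular rule set is a sketch, not the grammar developed elsewhere in this manual), with a comment tracing the stack for the input ‘1 + 5 * 3’ followed by a newline:

%{
#include <stdio.h>
int yylex (void);
void yyerror (char const *);
%}
%token NUM
%left '+'
%left '*'
%%
line:
  expr '\n'        { printf ("%d\n", $1); }
;
expr:
  NUM
| expr '+' expr    { $$ = $1 + $3; }   /* reduces "1 + 15" to 16 */
| expr '*' expr    { $$ = $1 * $3; }   /* reduces "5 * 3" to 15  */
;
/* With the newline as the lookahead token, the stack evolves like this:
     1 + 5 * 3      the shifted tokens (each NUM is reduced to an expr)
     1 + 15         after reducing by  expr: expr '*' expr
     16             after reducing by  expr: expr '+' expr
   and then the newline can be shifted.  */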

The parser tries, by shifts and reductions, to reduce the entire input down to a single grouping whose symbol is the grammar’s start-symbol (see section Languages and Context-Free Grammars).

This kind of parser is known in the literature as a bottom-up parser.



5.1 Lookahead Tokens

The Bison parser does not always reduce immediately as soon as the last n tokens and groupings match a rule. This is because such a simple strategy is inadequate to handle most languages. Instead, when a reduction is possible, the parser sometimes “looks ahead” at the next token in order to decide what to do.

When a token is read, it is not immediately shifted; first it becomes the lookahead token, which is not on the stack. Now the parser can perform one or more reductions of tokens and groupings on the stack, while the lookahead token remains off to the side. When no more reductions should take place, the lookahead token is shifted onto the stack. This does not mean that all possible reductions have been done; depending on the token type of the lookahead token, some rules may choose to delay their application.

Here is a simple case where lookahead is needed. These three rules define expressions which contain binary addition operators and postfix unary factorial operators (‘!’), and allow parentheses for grouping.

 
expr:
  term '+' expr
| term
;
term:
  '(' expr ')'
| term '!'
| "number"
;

Suppose that the tokens ‘1 + 2’ have been read and shifted; what should be done? If the following token is ‘)’, then the first three tokens must be reduced to form an expr. This is the only valid course, because shifting the ‘)’ would produce a sequence of symbols term ')', and no rule allows this.

If the following token is ‘!’, then it must be shifted immediately so that ‘2 !’ can be reduced to make a term. If instead the parser were to reduce before shifting, ‘1 + 2’ would become an expr. It would then be impossible to shift the ‘!’ because doing so would produce on the stack the sequence of symbols expr '!'. No rule allows that sequence.

The lookahead token is stored in the variable yychar. Its semantic value and location, if any, are stored in the variables yylval and yylloc. See section Special Features for Use in Actions.
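
As a hedged illustration (this fragment is a sketch, not an example from this manual, and it assumes <stdio.h> is included in the prologue), an action can inspect yychar to see whether a lookahead has already been read at the moment the reduction runs:

expr:
  term '+' expr
    {
      /* yychar is YYEMPTY if no lookahead has been read yet,
         YYEOF at end of input, and otherwise the lookahead's
         token code.  */
      if (yychar == YYEMPTY)
        fprintf (stderr, "reduced without needing a lookahead\n");
      else
        fprintf (stderr, "reduced with lookahead %d pending\n", yychar);
    }
| term
;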



5.2 Shift/Reduce Conflicts

Suppose we are parsing a language which has if-then and if-then-else statements, with a pair of rules like this:

 
if_stmt:
  "if" expr "then" stmt
| "if" expr "then" stmt "else" stmt
;

Here "if", "then" and "else" are terminal symbols for specific keyword tokens.

When the "else" token is read and becomes the lookahead token, the contents of the stack (assuming the input is valid) are just right for reduction by the first rule. But it is also legitimate to shift the "else", because that would lead to eventual reduction by the second rule.

This situation, where either a shift or a reduction would be valid, is called a shift/reduce conflict. Bison is designed to resolve these conflicts by choosing to shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let’s contrast it with the other alternative.

Since the parser prefers to shift the "else", the result is to attach the else-clause to the innermost if-statement, making these two inputs equivalent:

 
if x then if y then win; else lose;

if x then do; if y then win; else lose; end;

But if the parser chose to reduce when possible rather than shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:

 
if x then if y then win; else lose;

if x then do; if y then win; end; else lose;

The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what Bison accomplishes by choosing to shift rather than reduce. (It would ideally be cleaner to write an unambiguous grammar, but that is very hard to do in this case.) This particular ambiguity was first encountered in the specifications of Algol 60 and is called the “dangling else” ambiguity.

To avoid warnings from Bison about predictable, legitimate shift/reduce conflicts, you can use the %expect n declaration. There will be no warning as long as the number of shift/reduce conflicts is exactly n, and Bison will report an error if there is a different number. See section Suppressing Conflict Warnings. However, we don’t recommend the use of %expect (except ‘%expect 0’!), as an equal number of conflicts does not mean that they are the same. When possible, you should rather use precedence directives to fix the conflicts explicitly (see section Using Precedence For Non Operators).

The definition of if_stmt above is solely to blame for the conflict, but the conflict does not actually appear without additional rules. Here is a complete Bison grammar file that actually manifests the conflict:

 
%%
stmt:
  expr
| if_stmt
;
if_stmt:
  "if" expr "then" stmt
| "if" expr "then" stmt "else" stmt
;
expr:
  "identifier"
;
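
For illustration only (the manual advises against relying on a nonzero %expect), here is the same grammar with its single known conflict declared, so that Bison complains only if the count changes:

%expect 1   /* the dangling-else shift/reduce conflict */
%%
stmt:
  expr
| if_stmt
;
if_stmt:
  "if" expr "then" stmt
| "if" expr "then" stmt "else" stmt
;
expr:
  "identifier"
;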


5.3 Operator Precedence

Another situation where shift/reduce conflicts appear is in arithmetic expressions. Here shifting is not always the preferred resolution; the Bison declarations for operator precedence allow you to specify when to shift and when to reduce.



5.3.1 When Precedence is Needed

Consider the following ambiguous grammar fragment (ambiguous because the input ‘1 - 2 * 3’ can be parsed in two different ways):

 
expr:
  expr '-' expr
| expr '*' expr
| expr '<' expr
| '(' expr ')'
…
;

Suppose the parser has seen the tokens ‘1’, ‘-’ and ‘2’; should it reduce them via the rule for the subtraction operator? It depends on the next token. Of course, if the next token is ‘)’, we must reduce; shifting is invalid because no single rule can reduce the token sequence ‘- 2 )’ or anything starting with that. But if the next token is ‘*’ or ‘<’, we have a choice: either shifting or reduction would allow the parse to complete, but with different results.

To decide which one Bison should do, we must consider the results. If the next operator token op is shifted, then it must be reduced first in order to permit another opportunity to reduce the difference. The result is (in effect) ‘1 - (2 op 3)’. On the other hand, if the subtraction is reduced before shifting op, the result is ‘(1 - 2) op 3’. Clearly, then, the choice of shift or reduce should depend on the relative precedence of the operators ‘-’ and op: ‘*’ should be shifted first, but not ‘<’.

What about input such as ‘1 - 2 - 5’; should this be ‘(1 - 2) - 5’ or should it be ‘1 - (2 - 5)’? For most operators we prefer the former, which is called left association. The latter alternative, right association, is desirable for assignment operators. The choice of left or right association is a matter of whether the parser chooses to shift or reduce when the stack contains ‘1 - 2’ and the lookahead token is ‘-’: shifting makes right-associativity.



5.3.2 Specifying Operator Precedence

Bison allows you to specify these choices with the operator precedence declarations %left and %right. Each such declaration contains a list of tokens, which are operators whose precedence and associativity are being declared. The %left declaration makes all those operators left-associative and the %right declaration makes them right-associative. A third alternative is %nonassoc, which declares that it is a syntax error to find the same operator twice “in a row”. The last alternative, %precedence, allows you to define only precedence and no associativity at all; as a result, any associativity-related conflict that remains is reported at compile time. To summarize: %nonassoc creates run-time errors (using the operator in an associative way is a syntax error), whereas %precedence creates compile-time errors (an operator may turn out to be involved in an associativity-related conflict, contrary to what the grammar author expected, and Bison then reports that conflict).

The relative precedence of different operators is controlled by the order in which they are declared. The first precedence/associativity declaration in the file declares the operators whose precedence is lowest, the next such declaration declares the operators whose precedence is a little higher, and so on.
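
For instance (a sketch; the particular operators are assumptions), an assignment operator is commonly declared right-associative and, being declared first, receives the lowest precedence:

%right '='        /* lowest precedence, right-associative  */
%left '+' '-'
%left '*' '/'     /* highest precedence, left-associative  */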



5.3.3 Specifying Precedence Only

Since POSIX Yacc defines only %left, %right, and %nonassoc, all of which define both precedence and associativity, little attention is paid to the fact that precedence cannot be defined without also defining associativity. Yet sometimes, when trying to solve a conflict, precedence alone suffices. In such a case, using %left, %right, or %nonassoc might silently mask future (associativity-related) conflicts that ought to be reported.

The dangling else ambiguity (see section Shift/Reduce Conflicts) can be solved explicitly. This shift/reduce conflict occurs in the following situation, where the period denotes the current parsing state:

 
if e1 then if e2 then s1 . else s2

The conflict involves the reduction of the rule ‘IF expr THEN stmt’, whose precedence is by default that of its last token (THEN), and the shifting of the token ELSE. For the usual disambiguation (attaching the else to the closest if), shifting must be preferred, i.e., the precedence of ELSE must be higher than that of THEN. But neither token is expected to be involved in an associativity-related conflict, so it suffices to specify only their precedence, as follows.

 
%precedence THEN
%precedence ELSE

Unary minus is another typical example where associativity is usually over-specified; see Infix Notation Calculator: calc. The %left directive is traditionally used to declare the precedence of NEG, which is more than needed, since it also defines an associativity. While this is harmless in the traditional example, who knows how NEG might be used in future evolutions of the grammar…
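
Here is a sketch of the suggested alternative, using declarations in the style of the calculator example but with NEG declared via %precedence instead of %left, so that only its precedence, not an associativity, is defined:

%left '-' '+'
%left '*' '/'
%precedence NEG   /* negation: precedence only, no associativity */
%right '^'        /* exponentiation */
%%
exp:
  …
| '-' exp  %prec NEG  { $$ = -$2; }
;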



5.3.4 Precedence Examples

In our example, we would want the following declarations:

 
%left '<'
%left '-'
%left '*'

In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-':

 
%left '<' '>' '=' "!=" "<=" ">="
%left '+' '-'
%left '*' '/'


5.3.5 How Precedence Works

The first effect of the precedence declarations is to assign precedence levels to the terminal symbols declared. The second effect is to assign precedence levels to certain rules: each rule gets its precedence from the last terminal symbol mentioned in the components. (You can also specify explicitly the precedence of a rule. See section Context-Dependent Precedence.)

Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the lookahead token. If the token’s precedence is higher, the choice is to shift. If the rule’s precedence is higher, the choice is to reduce. If they have equal precedence, the choice is made based on the associativity of that precedence level. The verbose output file made by ‘-v’ (see section Invoking Bison) says how each conflict was resolved.

Not all rules and not all tokens have precedence. If either the rule or the lookahead token has no precedence, then the default is to shift.



5.3.6 Using Precedence For Non Operators

Proper use of precedence and associativity directives can help fix shift/reduce conflicts that do not involve arithmetic-like operators. For instance, the “dangling else” problem (see section Shift/Reduce Conflicts) can be solved elegantly in two different ways.

In the present case, the conflict is between the token "else", which asks to be shifted, and the rule ‘if_stmt: "if" expr "then" stmt’, which asks for a reduction. By default, the precedence of a rule is that of its last token, here "then", so the conflict will be solved appropriately by giving "else" a precedence higher than that of "then", for instance as follows:

 
%precedence "then"
%precedence "else"

Alternatively, you may give both tokens the same precedence, in which case associativity is used to solve the conflict. To preserve the shift action, use right associativity:

 
%right "then" "else"

Neither solution is perfect, however. Since Bison does not provide, so far, “scoped” precedence, both solutions force you to declare the precedence of these keywords with respect to the other operators in your grammar. Therefore, instead of being warned about new conflicts you ought to know about (e.g., a shift/reduce conflict due to ‘if test then 1 else 2 + 3’ being ambiguous: ‘if test then 1 else (2 + 3)’ or ‘(if test then 1 else 2) + 3’?), such conflicts will already have been “fixed” silently.



5.4 Context-Dependent Precedence

Often the precedence of an operator depends on the context. This sounds outlandish at first, but it is really very common. For example, a minus sign typically has a very high precedence as a unary operator, and a somewhat lower precedence (lower than multiplication) as a binary operator.

The Bison precedence declarations can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules.

The %prec modifier declares the precedence of a particular rule by specifying a terminal symbol whose precedence should be used for that rule. It’s not necessary for that symbol to appear otherwise in the rule. The modifier’s syntax is:

 
%prec terminal-symbol

and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence).

Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence:

 
…
%left '+' '-'
%left '*'
%left UMINUS

Now the precedence of UMINUS can be used in specific rules:

 
exp:
  …
| exp '-' exp
  …
| '-' exp %prec UMINUS


5.5 Parser States

The function yyparse is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next.

Each time a lookahead token is read, the current parser state together with the type of lookahead token are looked up in a table. This table entry can say, “Shift the lookahead token.” In this case, it also specifies the new parser state, which is pushed onto the top of the parser stack. Or it can say, “Reduce using rule number n.” This means that a certain number of tokens or groupings are taken off the top of the stack, and replaced by one grouping. In other words, that number of states are popped from the stack, and one new state is pushed.

There is one other alternative: the table can say that the lookahead token is erroneous in the current state. This causes error processing to begin (see section Error Recovery).



5.6 Reduce/Reduce Conflicts

A reduce/reduce conflict occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar.

For example, here is an erroneous attempt to define a sequence of zero or more word groupings.

 
sequence:
  %empty         { printf ("empty sequence\n"); }
| maybeword
| sequence word  { printf ("added word %s\n", $2); }
;
maybeword:
  %empty    { printf ("empty maybeword\n"); }
| word      { printf ("single word %s\n", $1); }
;

The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.

There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule.

You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule’s action; the other runs the first rule’s action and the third rule’s action. In this example, the output of the program changes.

Bison resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:

 
sequence:
  %empty         { printf ("empty sequence\n"); }
| sequence word  { printf ("added word %s\n", $2); }
;

Here is another common error that yields a reduce/reduce conflict:

 
sequence:
  %empty
| sequence words
| sequence redirects
;
words:
  %empty
| words word
;
redirects:
  %empty
| redirects redirect
;

The intention here is to define a sequence which can contain either word or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways!

Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.

Here are two ways to correct these rules. First, to make it a single level of sequence:

 
sequence:
  %empty
| sequence word
| sequence redirect
;

Second, to prevent either a words or a redirects from being empty:

 
sequence:
  %empty
| sequence words
| sequence redirects
;
words:
  word
| words word
;
redirects:
  redirect
| redirects redirect
;

Yet this proposal introduces another kind of ambiguity! The input ‘word word’ can be parsed as a single words composed of two ‘word’s, or as two one-word words (and likewise for redirect/redirects). However this ambiguity is now a shift/reduce conflict, and therefore it can now be addressed with precedence directives.

To simplify the matter, we will proceed with word and redirect being tokens: "word" and "redirect".

To prefer the longest words, the conflict between the token "word" and the rule ‘sequence: sequence words’ must be resolved as a shift. To this end, we use the same techniques as presented above; see Using Precedence For Non Operators. One solution relies on precedence: use %prec to give a lower precedence to the rule:

 
%precedence "sequence"
%precedence "word"
%%
sequence:
  %empty
| sequence words      %prec "sequence"
| sequence redirects  %prec "sequence"
;
words:
  "word"
| words "word"
;

Another solution relies on associativity: provide both the token and the rule with the same precedence, but make them right-associative:

 
%right "word" "redirect"
%%
sequence:
  %empty
| sequence words      %prec "word"
| sequence redirects  %prec "redirect"
;


5.7 Mysterious Conflicts

Sometimes reduce/reduce conflicts can occur that don’t look warranted. Here is an example:

 
%%
def: param_spec return_spec ',';
param_spec:
  type
| name_list ':' type
;
return_spec:
  type
| name ':' type
;
type: "id";

name: "id";
name_list:
  name
| name ',' name_list
;

It would seem that this grammar can be parsed with only a single token of lookahead: when a param_spec is being read, an "id" is a name if a comma or colon follows, or a type if another "id" follows. In other words, this grammar is LR(1).

However, for historical reasons, Bison cannot by default handle all LR(1) grammars. In this grammar, two contexts, that after an "id" at the beginning of a param_spec and likewise at the beginning of a return_spec, are similar enough that Bison assumes they are the same. They appear similar because the same set of rules would be active—the rule for reducing to a name and that for reducing to a type. Bison is unable to determine at that stage of processing that the rules would require different lookahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1).

For many practical grammars (specifically those that fall into the non-LR(1) class), the limitations of LALR(1) result in difficulties beyond just mysterious reduce/reduce conflicts. The best way to fix all these problems is to select a different parser table construction algorithm. Either IELR(1) or canonical LR(1) would suffice, but the former is more efficient and easier to debug during development. See section LR Table Construction, for details. (Bison’s IELR(1) and canonical LR(1) implementations are experimental. More user feedback will help to stabilize them.)

If you instead wish to work around LALR(1)’s limitations, you can often fix a mysterious conflict by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to return_spec as follows makes the problem go away:

 
…
return_spec:
  type
| name ':' type
| "id" "bogus"       /* This rule is never used.  */
;

This corrects the problem because it introduces the possibility of an additional active rule in the context after the "id" at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token "bogus" is never generated by yylex, the added rule cannot alter the way actual input is parsed.

In this particular example, there is another way to solve the problem: rewrite the rule for return_spec to use "id" directly instead of via name. This also causes the two confusing contexts to have different sets of active rules, because the one for return_spec activates the altered rule for return_spec rather than the one for name.

 
param_spec:
  type
| name_list ':' type
;
return_spec:
  type
| "id" ':' type
;

For a more detailed exposition of LALR(1) parsers and parser generators, see section DeRemer 1982.



5.8 Tuning LR

The default behavior of Bison’s LR-based parsers is chosen mostly for historical reasons, but that behavior is often not robust. For example, in the previous section, we discussed the mysterious conflicts that can be produced by LALR(1), Bison’s default parser table construction algorithm. Another example is Bison’s %define parse.error verbose directive, which instructs the generated parser to produce verbose syntax error messages, which can sometimes contain incorrect information.

In this section, we explore several modern features of Bison that allow you to tune fundamental aspects of the generated LR-based parsers. Some of these features easily eliminate shortcomings like those mentioned above. Others can be helpful purely for understanding your parser.

Most of the features discussed in this section are still experimental. More user feedback will help to stabilize them.



5.8.1 LR Table Construction

For historical reasons, Bison constructs LALR(1) parser tables by default. However, LALR does not possess the full language-recognition power of LR. As a result, the behavior of parsers employing LALR parser tables is often mysterious. We presented a simple example of this effect in Mysterious Conflicts.

As we also demonstrated in that example, the traditional approach to eliminating such mysterious behavior is to restructure the grammar. Unfortunately, doing so correctly is often difficult. Moreover, merely discovering that LALR causes mysterious behavior in your parser can be difficult as well.

Fortunately, Bison provides an easy way to eliminate the possibility of such mysterious behavior altogether. You simply need to activate a more powerful parser table construction algorithm by using the %define lr.type directive.

Directive: %define lr.type type

Specify the type of parser tables within the LR(1) family. The accepted values for type are ‘lalr’ (the default), ‘ielr’, and ‘canonical-lr’.

(This feature is experimental. More user feedback will help to stabilize it.)

For example, to activate IELR, you might add the following directive to your grammar file:

 
%define lr.type ielr

For the example in Mysterious Conflicts, the mysterious conflict is then eliminated, so there is no need to invest time in comprehending the conflict or restructuring the grammar to fix it. If, during future development, the grammar evolves such that all mysterious behavior would have disappeared using just LALR, you need not fear that continuing to use IELR will result in unnecessarily large parser tables. That is, IELR generates LALR tables when LALR (using a deterministic parsing algorithm) is sufficient to support the full language-recognition power of LR. Thus, by enabling IELR at the start of grammar development, you can safely and completely eliminate the need to consider LALR’s shortcomings.

While IELR is almost always preferable, there are circumstances where LALR or the canonical LR parser tables described by Knuth (see section Knuth 1965) can be useful. Briefly: LALR is the fastest to compute and yields the smallest tables, but its parsers can exhibit the mysterious behavior described above; IELR provides the full language-recognition power of canonical LR for deterministic parsers while producing tables that are usually nearly as small as LALR’s; canonical LR tables are by far the largest, and their main advantage is that, even without LAC, a syntax error is detected as soon as the erroneous token is encountered.

For a more detailed exposition of the mysterious behavior in LALR parsers and the benefits of IELR, see section Denny 2008 March, and Denny 2010 November.



5.8.2 Default Reductions

After parser table construction, Bison identifies the reduction with the largest lookahead set in each parser state. To reduce the size of the parser state, traditional Bison behavior is to remove that lookahead set and to assign that reduction to be the default parser action. Such a reduction is known as a default reduction.

Default reductions affect more than the size of the parser tables. They also affect the behavior of the parser: because a default reduction is performed without consulting the lookahead token, it can delay the invocation of yylex, and therefore the detection of a syntax error, until the lookahead is actually needed to select a parser action.

For canonical LR, the only default reduction that Bison enables by default is the accept action, which appears only in the accepting state, which has no other action and is thus a defaulted state. However, the default accept action does not delay any yylex invocation or syntax error detection because the accept action ends the parse.

For LALR and IELR, Bison enables default reductions in nearly all states by default. There are only two exceptions. First, states that have a shift action on the error token do not have default reductions because delayed syntax error detection could then prevent the error token from ever being shifted in that state. However, parser state merging can cause the same effect anyway, and LAC fixes it in both cases, so future versions of Bison might drop this exception when LAC is activated. Second, GLR parsers do not record the default reduction as the action on a lookahead token for which there is a conflict. The correct action in this case is to split the parse instead.

To adjust which states have default reductions enabled, use the %define lr.default-reduction directive.

Directive: %define lr.default-reduction where

Specify the kind of states that are permitted to contain default reductions. The accepted values of where are ‘most’ (the default for LALR and IELR), ‘consistent’, and ‘accepting’ (the default for canonical LR).

(The ability to specify where default reductions are permitted is experimental. More user feedback will help to stabilize it.)
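
For example (a sketch), to permit default reductions only in consistent states:

%define lr.default-reduction consistent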



5.8.3 LAC

Canonical LR, IELR, and LALR can suffer from a couple of problems upon encountering a syntax error. First, the parser might perform additional parser stack reductions before discovering the syntax error. Such reductions can perform user semantic actions that are unexpected because they are based on an invalid token, and they cause error recovery to begin in a different syntactic context than the one in which the invalid token was encountered. Second, when verbose error messages are enabled (see section The Error Reporting Function yyerror), the expected token list in the syntax error message can both contain invalid tokens and omit valid tokens.

The culprits for the above problems are %nonassoc, default reductions in inconsistent states (see section Default Reductions), and parser state merging. Because IELR and LALR merge parser states, they suffer the most. Canonical LR can suffer only if %nonassoc is used or if default reductions are enabled for inconsistent states.

LAC (Lookahead Correction) is a new mechanism within the parsing algorithm that solves these problems for canonical LR, IELR, and LALR without sacrificing %nonassoc, default reductions, or state merging. You can enable LAC with the %define parse.lac directive.

Directive: %define parse.lac value

Enable LAC to improve syntax error handling. The accepted values are ‘none’ (the default) and ‘full’.

(This feature is experimental. More user feedback will help to stabilize it. Moreover, it is currently only available for deterministic parsers in C.)
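
For example, to enable LAC together with verbose syntax error messages:

%define parse.lac full
%define parse.error verbose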

Conceptually, the LAC mechanism is straightforward. Whenever the parser fetches a new token from the scanner so that it can determine the next parser action, it immediately suspends normal parsing and performs an exploratory parse using a temporary copy of the normal parser state stack. During this exploratory parse, the parser does not perform user semantic actions. If the exploratory parse reaches a shift action, normal parsing then resumes on the normal parser stacks. If the exploratory parse reaches an error instead, the parser reports a syntax error. If verbose syntax error messages are enabled, the parser must then discover the list of expected tokens, so it performs a separate exploratory parse for each token in the grammar.

There is one subtlety about the use of LAC. That is, when in a consistent parser state with a default reduction, the parser will not attempt to fetch a token from the scanner because no lookahead is needed to determine the next parser action. Thus, whether default reductions are enabled in consistent states (see section Default Reductions) affects how soon the parser detects a syntax error: immediately when it reaches an erroneous token or when it eventually needs that token as a lookahead to determine the next parser action. The latter behavior is probably more intuitive, so Bison currently provides no way to achieve the former behavior while default reductions are enabled in consistent states.

Thus, when LAC is in use, for some fixed decision of whether to enable default reductions in consistent states, canonical LR and IELR behave almost exactly the same for both syntactically acceptable and syntactically unacceptable input. While LALR still does not support the full language-recognition power of canonical LR and IELR, LAC at least enables LALR’s syntax error handling to correctly reflect LALR’s language-recognition power.

There are a few caveats to consider when using LAC:

While the LAC algorithm shares techniques that have been recognized in the parser community for years, for the publication that introduces LAC, see section Denny 2010 May.



5.8.4 Unreachable States

If there exists no sequence of transitions from the parser’s start state to some state s, then Bison considers s to be an unreachable state. A state can become unreachable during conflict resolution if Bison disables a shift action leading to it from a predecessor state.

By default, Bison removes unreachable states from the parser after conflict resolution because they are useless in the generated parser. However, keeping unreachable states is sometimes useful when trying to understand the relationship between the parser and the grammar.

Directive: %define lr.keep-unreachable-state value

Request that Bison allow unreachable states to remain in the parser tables. value must be a Boolean. The default is false.
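
For example (a sketch):

%define lr.keep-unreachable-state true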

There are a few caveats to consider:



5.9 Generalized LR (GLR) Parsing

Bison produces deterministic parsers that choose uniquely when to reduce and which reduction to apply based on a summary of the preceding input and on one extra token of lookahead. As a result, normal Bison handles a proper subset of the family of context-free languages. Ambiguous grammars, since they have strings with more than one possible sequence of reductions, cannot have deterministic parsers in this sense. The same is true of languages that require more than one symbol of lookahead, since the parser lacks the information necessary to make a decision at the point it must be made in a shift-reduce parser. Finally, as previously mentioned (see section Mysterious Conflicts), there are languages where Bison’s default choice of how to summarize the input seen so far loses necessary information.

When you use the ‘%glr-parser’ declaration in your grammar file, Bison generates a parser that uses a different algorithm, called Generalized LR (or GLR). A Bison GLR parser uses the same basic algorithm for parsing as an ordinary Bison parser, but behaves differently in cases where there is a shift-reduce conflict that has not been resolved by precedence rules (see section Operator Precedence) or a reduce-reduce conflict. When a GLR parser encounters such a situation, it effectively splits into several parsers, one for each possible shift or reduction. These parsers then proceed as usual, consuming tokens in lock-step. Some of the stacks may encounter other conflicts and split further, with the result that, instead of a sequence of states, a Bison GLR parsing stack is in effect a tree of states.

In effect, each stack represents a guess as to what the proper parse is. Additional input may indicate that a guess was wrong, in which case the appropriate stack silently disappears. Otherwise, the semantic actions generated in each stack are saved, rather than being executed immediately. When a stack disappears, its saved semantic actions never get executed. When a reduction causes two stacks to become equivalent, their sets of semantic actions are both saved with the state that results from the reduction. We say that two stacks are equivalent when they both represent the same sequence of states, and each pair of corresponding states represents a grammar symbol that produces the same segment of the input token stream.

Whenever the parser makes a transition from having multiple states to having one, it reverts to the normal deterministic parsing algorithm, after resolving and executing the saved-up actions. At this transition, some of the states on the stack will have semantic values that are sets (actually multisets) of possible actions. The parser tries to pick one of the actions by first finding one whose rule has the highest dynamic precedence, as set by the ‘%dprec’ declaration. Otherwise, if the alternative actions are not ordered by precedence, but the same merging function is declared for both rules by the ‘%merge’ declaration, Bison resolves and evaluates both and then calls the merge function on the result. Otherwise, it reports an ambiguity.
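
As a hedged sketch (the rules below are deliberately contrived toy rules, not an example from this manual), dynamic precedence is declared per rule with %dprec; when two surviving stacks reduce these alternatives over the same input, the higher %dprec wins:

%glr-parser
%%
stmt:
  decl  %dprec 2   /* when both parses survive, prefer the declaration */
| expr  %dprec 1
;
decl: "id" "id" ';';   /* toy rule: e.g. "T x ;" read as a declaration   */
expr: "id" "id" ';';   /* toy rule: the same tokens read as an expression */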

It is possible to use a data structure for the GLR parsing tree that permits the processing of any LR(1) grammar in linear time (in the size of the input), any unambiguous (not necessarily LR(1)) grammar in quadratic worst-case time, and any general (possibly ambiguous) context-free grammar in cubic worst-case time. However, Bison currently uses a simpler data structure that requires time proportional to the length of the input times the maximum number of stacks required for any prefix of the input. Thus, really ambiguous or nondeterministic grammars can require exponential time and space to process. Such badly behaving examples, however, are not generally of practical interest. Usually, nondeterminism in a grammar is local—the parser is “in doubt” only for a few tokens at a time. Therefore, the current data structure should generally be adequate. On LR(1) portions of a grammar, in particular, it is only slightly slower than with the deterministic LR(1) Bison parser.

For a more detailed exposition of GLR parsers, see section Scott 2000.



5.10 Memory Management, and How to Avoid Memory Exhaustion

The Bison parser stack can run out of memory if too many tokens are shifted and not reduced. When this happens, the parser function yyparse calls yyerror and then returns 2.

Because Bison parsers have growing stacks, hitting the upper limit usually results from using a right recursion instead of a left recursion; see Recursive Rules.

By defining the macro YYMAXDEPTH, you can control how deep the parser stack can become before memory is exhausted. Define the macro with a value that is an integer. This value is the maximum number of tokens that can be shifted (and not reduced) before overflow.

The stack space allowed is not necessarily allocated. If you specify a large value for YYMAXDEPTH, the parser normally allocates a small stack at first, and then makes it bigger by stages as needed. This increasing allocation happens automatically and silently. Therefore, you do not need to make YYMAXDEPTH painfully small merely to save space for ordinary inputs that do not need much stack.

However, do not allow YYMAXDEPTH to be a value so large that arithmetic overflow could occur when calculating the size of the stack space. Also, do not allow YYMAXDEPTH to be less than YYINITDEPTH.

The default value of YYMAXDEPTH, if you do not define it, is 10000.

You can control how much stack is allocated initially by defining the macro YYINITDEPTH to a positive integer. For the deterministic parser in C, this value must be a compile-time constant unless you are assuming C99 or some other target language or compiler that allows variable-length arrays. The default is 200.
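
For example (a sketch), both macros can be defined in the grammar’s prologue (or on the compiler command line):

%{
#define YYINITDEPTH 500     /* initial stack allocation           */
#define YYMAXDEPTH  60000   /* give up once the stack needs more  */
%}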

Do not allow YYINITDEPTH to be greater than YYMAXDEPTH.

You can generate a deterministic parser containing C++ user code from the default (C) skeleton, as well as from the C++ skeleton (see section C++ Parsers). However, if you do use the default skeleton and want to allow the parsing stack to grow, be careful not to use semantic types or location types that require non-trivial copy constructors. The C skeleton bypasses these constructors when copying data to new, larger stacks.


