4.7.4 Constructing LALR Parsing Tables

We now introduce our last parser construction method, the LALR (lookahead-LR) technique. This method is often used in practice, because the tables obtained by it are considerably smaller than the canonical LR tables, yet most common syntactic constructs of programming languages can be expressed conveniently by an LALR grammar. The same is almost true for SLR grammars, but there are a few constructs that cannot be conveniently handled by SLR techniques (see Example 4.48, for example).

For a comparison of parser size, the SLR and LALR tables for a grammar always have the same number of states, and this number is typically several hundred states for a language like C. The canonical LR table would typically have several thousand states for the same-size language. Thus, it is much easier and more economical to construct SLR and LALR tables than the canonical LR tables.

By way of introduction, let us again consider grammar (4.55), whose sets of LR(1) items were shown in Fig. 4.41. Take a pair of similar-looking states, such as I4 and I7. Each of these states has only items with first component C→d@. In I4, the lookaheads are c or d; in I7, $ is the only lookahead.

To see the difference between the roles of I4 and I7 in the parser, note that the grammar generates the regular language c*dc*d. When reading an input cc···cdcc···cd, the parser shifts the first group of c's and their following d onto the stack, entering state 4 after reading the d. The parser then calls for a reduction by C→d, provided the next input symbol is c or d. The requirement that c or d follow makes sense, since these are the symbols that could begin strings in c*d. If $ follows the first d, we have an input like ccd, which is not in the language, and state 4 correctly declares an error if $ is the next input.

The parser enters state 7 after reading the second d. Then, the parser must see $ on the input, or it started with a string not of the form c*dc*d. It thus makes sense that state 7 should reduce by C→d on input $ and declare error on inputs c or d.

Let us now replace I4 and I7 by I47, the union of I4 and I7, consisting of the set of three items represented by [C→d@, c/d/$]. The goto’s on d to I4 or I7 from I0, I2, I3, and I6 now enter I47. The action of state 47 is to reduce on any input. The revised parser behaves essentially like the original, although it might reduce d to C in circumstances where the original would declare error, for example, on inputs like ccd or cdcdc. The error will eventually be caught; in fact, it will be caught before any more input symbols are shifted.

More generally, we can look for sets of LR(1) items having the same core, that is, set of first components, and we may merge these sets with common cores into one set of items. For example, in Fig. 4.41, I4 and I7 form such a pair, with core {C→d@}. Similarly, I3 and I6 form another pair, with core {C→c@C, C→@cC, C→@d}. There is one more pair, I8 and I9, with common core {C→cC@}. Note that, in general, a core is a set of LR(0) items for the grammar at hand, and that an LR(1) grammar may produce more than two sets of items with the same core.
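The merging step above can be sketched in a few lines of Python. This is a minimal illustration, assuming an LR(1) item is modeled as a (production, dot position, lookahead) triple and a state is a set of such triples; that representation is my own choice, not the book's notation.

```python
# Sketch: merging sets of LR(1) items that share a core.
# An item is (production, dot_position, lookahead); a state is a set of items.

def core(items):
    """The core of a set of LR(1) items: the set of LR(0) first components."""
    return frozenset((prod, dot) for (prod, dot, _look) in items)

def merge_by_core(states):
    """Group states with equal cores and replace each group by its union."""
    groups = {}
    for state in states:
        groups.setdefault(core(state), set()).update(state)
    return [frozenset(s) for s in groups.values()]

# I4 and I7 from Fig. 4.41: both have core {C -> d @}.
I4 = {("C->d", 1, "c"), ("C->d", 1, "d")}
I7 = {("C->d", 1, "$")}
merged = merge_by_core([I4, I7])
# The two states collapse into one state I47 with lookaheads c/d/$.
```

Applied to the full collection of Fig. 4.41, the same grouping would also fuse I3 with I6 and I8 with I9.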

Since the core of GOTO(I, X) depends only on the core of I, the goto’s of merged sets can themselves be merged. Thus, there is no problem revising the goto function as we merge sets of items. The action functions are modified to reflect the non-error actions of all sets of items in the merger.

Suppose we have an LR(1) grammar, that is, one whose sets of LR(1) items produce no parsing-action conflicts. If we replace all states having the same core with their union, it is possible that the resulting union will have a conflict, but it is unlikely for the following reason: Suppose in the union there is a conflict on lookahead a because there is an item [A→α@, a] calling for a reduction by A→α, and there is another item [B→β@aγ, b] calling for a shift. Then some set of items from which the union was formed has item [A→α@, a], and since the cores of all these states are the same, it must have an item [B→β@aγ, c] for some c. But then this state has the same shift/reduce conflict on a, and the grammar was not LR(1) as we assumed. Thus, the merging of states with common cores can never produce a shift/reduce conflict that was not present in one of the original states, because shift actions depend only on the core, not the lookahead.

It is possible, however, that a merger will produce a reduce/reduce conflict, as the following example shows.

Example 4.58: Consider the grammar

S’→S

S→a A d | b B d | a B e | b A e

A→c

B→c

which generates the four strings acd, ace, bcd, and bce. The reader can check that the grammar is LR(1) by constructing the sets of items. Upon doing so, we find the set of items {[A→c@, d], [B→c@, e]} valid for viable prefix ac and {[A→c@, e], [B→c@, d]} valid for bc. Neither of these sets has a conflict, and their cores are the same. However, their union, which is

A→c@, d/e

B→c@, d/e

generates a reduce/reduce conflict, since reductions by both A→c and B→c are called for on inputs d and e. □
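The conflict in this example can be checked mechanically. The sketch below, using the same hypothetical (production, dot, lookahead) item encoding as before, shows that each of the two states is conflict-free on its own, while their union calls for two different reductions on both d and e.

```python
# Example 4.58: a merge that introduces a reduce/reduce conflict.
ac_state = {("A->c", 1, "d"), ("B->c", 1, "e")}   # valid for viable prefix ac
bc_state = {("A->c", 1, "e"), ("B->c", 1, "d")}   # valid for viable prefix bc

def reduce_conflicts(items):
    """Lookaheads on which more than one completed item calls for a reduction."""
    by_look = {}
    for (prod, dot, look) in items:
        body = prod.split("->")[1]
        if dot == len(body):                  # dot at the right end: completed item
            by_look.setdefault(look, set()).add(prod)
    return {a for a, prods in by_look.items() if len(prods) > 1}

# Each state alone is conflict-free, but the union conflicts on d and e.
no_conflicts = not reduce_conflicts(ac_state) and not reduce_conflicts(bc_state)
merged_conflicts = reduce_conflicts(ac_state | bc_state)
```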

We are now prepared to give the first of two LALR table-construction algorithms. The general idea is to construct the sets of LR(1) items, and if no conflicts arise, merge sets with common cores. We then construct the parsing table from the collection of merged sets of items. The method we are about to describe serves primarily as a definition of LALR(1) grammars. Constructing the entire collection of LR(1) sets of items requires too much space and time to be useful in practice.

Algorithm 4.59: An easy, but space-consuming LALR table construction.

INPUT: An augmented grammar G’.

OUTPUT: The LALR parsing-table functions ACTION and GOTO for G’.

METHOD:

1. Construct C = {I0, I1, …, In}, the collection of sets of LR(1) items.

2. For each core present among the set of LR(1) items, find all sets having that core, and replace these sets by their union.

3. Let C’ = {J0, J1, …, Jm} be the resulting sets of LR(1) items. The parsing actions for state i are constructed from Ji in the same manner as in Algorithm 4.56. If there is a parsing action conflict, the algorithm fails to produce a parser, and the grammar is said not to be LALR(1).

4. The GOTO table is constructed as follows. If J is the union of one or more sets of LR(1) items, that is, J = I1∪I2∪…∪Ik, then the cores of GOTO(I1, X), GOTO(I2, X), …, GOTO(Ik, X) are the same, since I1, I2, …, Ik all have the same core. Let K be the union of all sets of items having the same core as GOTO(I1, X). Then GOTO(J, X) = K.
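Steps (2) and (4) of the algorithm can be sketched as follows. This is a hedged illustration, not the book's pseudocode: it assumes a precomputed LR(1) collection and its goto function are handed in, items are (production, dot, lookahead) triples, and the example states I3, I6, I4, I7 are abbreviated to single items (the real sets in Fig. 4.41 are larger).

```python
# Sketch of Algorithm 4.59, steps 2 and 4, over a precomputed LR(1) collection.

def lalr_states_and_goto(lr1_states, lr1_goto, symbols):
    """Merge LR(1) states with a common core and rebuild GOTO on the merged states."""
    def core(items):
        return frozenset((prod, dot) for (prod, dot, _look) in items)

    # Step 2: union all states sharing a core.
    merged = {}
    for I in lr1_states:
        merged.setdefault(core(I), set()).update(I)
    state_of = {c: frozenset(s) for c, s in merged.items()}

    # Step 4: GOTO on a merged state J may follow any constituent I,
    # since the core of GOTO(I, X) depends only on the core of I.
    goto = {}
    for I in lr1_states:
        for X in symbols:
            target = lr1_goto.get((I, X))
            if target is not None:
                goto[(state_of[core(I)], X)] = state_of[core(target)]
    return set(state_of.values()), goto

# Abbreviated states from Fig. 4.41 (single representative items only).
I4 = frozenset({("C->d", 1, "c"), ("C->d", 1, "d")})
I7 = frozenset({("C->d", 1, "$")})
I3 = frozenset({("C->cC", 1, "c")})
I6 = frozenset({("C->cC", 1, "$")})
states, goto = lalr_states_and_goto(
    [I3, I6, I4, I7], {(I3, "d"): I4, (I6, "d"): I7}, ["d"])
# I3 and I6 merge into I36, I4 and I7 into I47, and GOTO(I36, d) = I47.
```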

The table produced by Algorithm 4.59 is called the LALR parsing table for G. If there are no parsing action conflicts, then the given grammar is said to be an LALR(1) grammar. The collection of sets of items constructed in step (3) is called the LALR(1) collection.

Example 4.60: Again consider grammar (4.55) whose GOTO graph was shown in Fig. 4.41. As we mentioned, there are three pairs of sets of items that can be merged. I3 and I6 are replaced by their union:

I36 :

C→c@C, c/d/$

C→@cC, c/d/$

C→@d, c/d/$

I4 and I7 are replaced by their union:

I47 :

C→d@, c/d/$

and I8 and I9 are replaced by their union:

I89 :

C→cC@, c/d/$

The LALR action and goto functions for the condensed sets of items are shown in Fig. 4.43.

                ACTION            GOTO
    STATE     c    d    $        S    C

      0      s36  s47            1    2
      1                 acc
      2      s36  s47                 5
     36      s36  s47                89
     47      r3   r3   r3
      5                 r1
     89      r2   r2   r2

Figure 4.43: LALR parsing table for the grammar of Example 4.54

To see how the GOTO's are computed, consider GOTO(I36, C). In the original set of LR(1) items, GOTO(I3, C) = I8, and I8 is now part of I89, so we make GOTO(I36, C) be I89. We could have arrived at the same conclusion if we considered I6, the other part of I36. That is, GOTO(I6, C) = I9, and I9 is now part of I89. For another example, consider GOTO(I2, c), an entry that is exercised after the shift action of I2 on input c. In the original sets of LR(1) items, GOTO(I2, c) = I6. Since I6 is now part of I36, GOTO(I2, c) becomes I36. Thus, the entry in Fig. 4.43 for state 2 and input c is made s36, meaning shift and push state 36 onto the stack.

When presented with a string from the language c*dc*d, both the LR parser of Fig. 4.42 and the LALR parser of Fig. 4.43 make exactly the same sequence of shifts and reductions, although the names of the states on the stack may differ. For instance, if the LR parser puts I3 or I6 on the stack, the LALR parser will put I36 on the stack. This relationship holds in general for an LALR grammar. The LR and LALR parsers will mimic one another on correct inputs.

When presented with erroneous input, the LALR parser may proceed to do some reductions after the LR parser has declared an error. However, the LALR parser will never shift another symbol after the LR parser declares an error.

For example, on input ccd followed by $, the LR parser of Fig. 4.42 will put

0 3 3 4

on the stack, and in state 4 will discover an error, because $ is the next input symbol and state 4 has action error on $. In contrast, the LALR parser of Fig. 4.43 will make the corresponding moves, putting

0 36 36 47

on the stack. But state 47 on input $ has action reduce C→d. The LALR parser will thus change its stack to

0 36 36 89

Now the action of state 89 on input $ is reduce C→cC. The stack becomes

0 36 89

whereupon a similar reduction is called for, obtaining stack

0 2

Finally, state 2 has action error on input $, so the error is now discovered.
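The trace above can be replayed with a small LR driver over the table of Fig. 4.43. The encoding of actions and productions below is my own sketch (states are named as in the text; production 1 is S→CC, 2 is C→cC, 3 is C→d), not the book's notation.

```python
# Replaying the LALR parser of Fig. 4.43 on the erroneous input ccd$.

ACTION = {
    (0, "c"): ("s", 36), (0, "d"): ("s", 47),
    (1, "$"): ("acc",),
    (2, "c"): ("s", 36), (2, "d"): ("s", 47),
    (36, "c"): ("s", 36), (36, "d"): ("s", 47),
    (47, "c"): ("r", 3), (47, "d"): ("r", 3), (47, "$"): ("r", 3),
    (5, "$"): ("r", 1),
    (89, "c"): ("r", 2), (89, "d"): ("r", 2), (89, "$"): ("r", 2),
}
GOTO = {(0, "S"): 1, (0, "C"): 2, (2, "C"): 5, (36, "C"): 89}
PRODS = {1: ("S", 2), 2: ("C", 2), 3: ("C", 1)}   # number: (head, body length)

def trace(tokens):
    """Run the parser; return the outcome, the final stack, and a log of moves."""
    stack, log, i = [0], [], 0
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return "error", stack, log          # blank entry: declare error
        if act[0] == "acc":
            return "accept", stack, log
        if act[0] == "s":                        # shift: push state, advance input
            stack.append(act[1]); i += 1
            log.append("shift")
        else:                                    # reduce: pop body, push GOTO state
            head, n = PRODS[act[1]]
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], head)])
            log.append("reduce %d" % act[1])

outcome, stack, log = trace(list("ccd") + ["$"])
# Three reductions follow the last shift before the error surfaces in state 2,
# matching the stacks 0 36 36 47, 0 36 36 89, 0 36 89, 0 2 in the text.
```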
