LEX
Notes on building a C compiler with lex and yacc:
- There is 1 shift/reduce conflict, correctly resolved by default: IF '(' expression ')' statement ELSE statement (the dangling else).
- Resolve the precedence of unary operators with %prec.
- Handle "++" in the lexer by returning a single INCOP token.
- Use %union to define the token value types, and pass yylval.str from lex to yacc.
- Use %option yylineno so the current line number is available for error reporting.
- Grammar rules in yacc are recursive; actions refer to values as $$, $1, and so on.
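The points above can be sketched as a yacc declarations fragment. This is a minimal illustration, not a complete grammar; token names such as INCOP and NUMBER are placeholders:

```yacc
%{
#include <stdio.h>
%}

/* %union defines the types yylval may carry; the lexer fills yylval.str */
%union {
    char *str;
    int   num;
}

%token <str> IDENTIFIER
%token <num> NUMBER
%token IF ELSE INCOP          /* the lexer returns one INCOP token for "++" */
%type  <num> expression

%left '+' '-'
%left '*' '/'
%right UMINUS                 /* dummy token giving unary minus its precedence */

%%
expression
    : expression '+' expression     { $$ = $1 + $3; }
    | '-' expression %prec UMINUS   { $$ = -$2; }
    | NUMBER                        { $$ = $1; }
    ;
%%
```

On the lex side, `%option yylineno` in the definitions section keeps `yylineno` updated automatically, which the parser's error routine can then report.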
Lex specifications:
A Lex program (the .l file) consists of three parts:
declarations
%%
translation rules
%%
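The three parts above can be seen in a minimal, self-contained .l file. This is a sketch (a word and line counter), not from any particular course example:

```lex
%{
/* declarations: C header code and counters used by the rules */
#include <stdio.h>
int lines = 0, words = 0;
%}

%%
\n          { lines++; }
[a-zA-Z]+   { words++; }
.           { /* ignore everything else */ }
%%

/* user subroutines: a main that drives the generated scanner */
int main(void) {
    yylex();
    printf("%d lines, %d words\n", lines, words);
    return 0;
}

int yywrap(void) { return 1; }
```

Running `lex` (or `flex`) on this file produces C source containing `yylex()`, which is then compiled normally.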
YACC
Instruction Formats
These formats are classified by length in bytes, use of the base registers, and object code format. The five instruction classes of use to the general user are listed below.
Format   Length     Use
Name     in bytes
RR       2          Register to register transfers.
RS       4          Register to storage and register from storage.
RX       4          Register to indexed storage and register from indexed storage.
SI       4          Storage immediate.
SS       6          Storage-to-storage. These have two variants, each of which we shall discuss soon.
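The length column of the table above can be captured in a small lookup function. This is an illustrative sketch, not part of any assembler:

```c
#include <assert.h>
#include <string.h>

/* Length in bytes of each instruction format from the table above.
   Returns -1 for an unknown format name. */
int format_length(const char *fmt) {
    if (strcmp(fmt, "RR") == 0) return 2;
    if (strcmp(fmt, "RS") == 0) return 4;
    if (strcmp(fmt, "RX") == 0) return 4;
    if (strcmp(fmt, "SI") == 0) return 4;
    if (strcmp(fmt, "SS") == 0) return 6;
    return -1;
}
```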
The compilation process is a sequence of phases. Each phase takes its input from the previous stage, has its own representation of the source program, and feeds its output to the next phase of the compiler. Let us go through the phases of a compiler.
The first phase, lexical analysis, works as a text scanner. It reads the source code as a stream of characters and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens, of the form <token-name, attribute-value>.
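As a rough illustration of what the scanner does, the following hand-written sketch (not lex output, and without bounds checking on long lexemes) splits an input string into identifier, number, and operator lexemes:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Tiny hand-written scanner sketch: walks the input string and
   stores one lexeme per slot in tokens[]. Returns the token count. */
int scan(const char *src, char tokens[][32], int max) {
    int n = 0;
    const char *p = src;
    while (*p && n < max) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        int i = 0;
        if (isdigit((unsigned char)*p)) {
            while (isdigit((unsigned char)*p)) tokens[n][i++] = *p++;
        } else if (isalpha((unsigned char)*p)) {
            while (isalnum((unsigned char)*p)) tokens[n][i++] = *p++;
        } else {
            tokens[n][i++] = *p++;   /* single-character operator */
        }
        tokens[n][i] = '\0';
        n++;
    }
    return n;
}
```

For the input `x1 = 42` this yields the three lexemes `x1`, `=`, and `42`, which a real lexer would tag with token names such as IDENTIFIER, '=', and NUMBER.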
The next phase is called syntax analysis, or parsing. It takes the tokens produced by lexical analysis as input and generates a parse tree (or syntax tree). In this phase, the token arrangement is checked against the grammar of the source language, i.e. the parser checks whether the expression formed by the tokens is syntactically correct.
Semantic analysis checks whether the parse tree follows the rules of the language: for example, that values are assigned between compatible data types, and that a string is not added to an integer. The semantic analyzer also keeps track of identifiers, their types, and expressions, and checks whether identifiers are declared before use. It produces an annotated syntax tree as output.
After semantic analysis, the compiler generates an intermediate code of the source program for the target machine. It represents a program for some abstract machine and sits between the high-level language and the machine language. This intermediate code should be generated in such a way that it is easy to translate into the target machine code.
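For instance, a common intermediate form is three-address code, where each statement has at most one operator on its right-hand side. The assignment a = b + c * d might be translated as:

```
t1 = c * d
t2 = b + t1
a  = t2
```

The temporaries t1 and t2 are introduced by the compiler; limiting each line to one operation is what makes the later mapping to machine instructions straightforward.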
The next phase performs code optimization on the intermediate code. Optimization removes unnecessary code lines and rearranges the sequence of statements to speed up program execution without wasting resources (CPU, memory).
In this phase, the code generator takes the optimized representation of the intermediate code and maps it to the target machine language. The code generator translates the intermediate code into a sequence of (generally) relocatable machine code. This sequence of machine instructions performs the same task as the intermediate code.
The symbol table is a data structure maintained throughout all the phases of a compiler. All identifier names along with their types are stored here, which makes it easy for the compiler to quickly search for an identifier's record and retrieve it. The symbol table is also used for scope management.
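A minimal symbol table can be sketched as an array of (name, type) records with insert and lookup operations. Real compilers use hashing and per-scope tables; this is only an illustration:

```c
#include <assert.h>
#include <string.h>

#define MAX_SYMS 64

/* One record per declared identifier. */
struct symbol { char name[32]; char type[16]; };
static struct symbol table[MAX_SYMS];
static int nsyms = 0;

/* Records a declaration; returns the entry index, or -1 if full. */
int sym_insert(const char *name, const char *type) {
    if (nsyms >= MAX_SYMS) return -1;
    strncpy(table[nsyms].name, name, 31);
    strncpy(table[nsyms].type, type, 15);
    return nsyms++;
}

/* Returns the type of 'name', or NULL if it was never declared --
   exactly the "declared before use" check semantic analysis needs. */
const char *sym_lookup(const char *name) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].type;
    return NULL;
}
```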
Lex & Yacc for Compiler Writing
Some of the most time-consuming and tedious parts of writing a compiler involve the lexical scanning and syntax analysis. Luckily there is freely available software to assist in these functions. While these tools will not do everything for you, they will enable faster implementation of the basic functions. Lex and Yacc are the most commonly used packages, with Lex managing the token recognition and Yacc handling the syntax. They work well together, but conceivably can be used individually as well.
Both operate in a similar manner, in which instructions for token recognition or grammar are written in a special file format. The text files are then read by lex and/or yacc to produce C code. This resulting source code is compiled to make the final application. In practice the lexical instruction file has a ".l" suffix and the grammar file has a ".y" suffix. This process is shown in Figure 1.
Figure 1.
The file format for a lex file consists of (4) basic sections:
%{
//header c code
%}
//definitions
%%
//rules
%%
//subroutines
The format for a yacc file is similar, but includes a few extras.
%token RESERVED WORDS GO HERE
%{
//header c code
%}
//definitions
%%
//rules
%%
//subroutines
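The skeleton above can be filled in as a minimal, self-contained .y file. This sketch is a one-line, single-digit calculator; the hand-written yylex keeps it independent of any lex file:

```yacc
%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
%}

%token NUMBER
%left '+' '-'
%left '*' '/'

%%
line : expr '\n'        { printf("= %d\n", $1); }
     ;
expr : expr '+' expr    { $$ = $1 + $3; }
     | expr '-' expr    { $$ = $1 - $3; }
     | expr '*' expr    { $$ = $1 * $3; }
     | expr '/' expr    { $$ = $1 / $3; }
     | NUMBER           { $$ = $1; }
     ;
%%

int yylex(void) {
    int c;
    while ((c = getchar()) == ' ')
        ;                           /* skip blanks */
    if (c == EOF) return 0;
    if (isdigit(c)) { yylval = c - '0'; return NUMBER; }
    return c;                       /* single-character tokens and '\n' */
}

int main(void) { return yyparse(); }
```

The %left/%right declarations are the "extras": they let yacc resolve the ambiguity in the expression grammar by operator precedence instead of reporting shift/reduce conflicts.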
These formats and general usage will be covered in greater detail in the following (4) sections. In general it is best not to modify the resulting C code, as it is overwritten each time lex or yacc is run. Most desired functionality can be handled within the lexical and grammar files, but there are some things that are difficult to achieve that may require editing of the C file.
As a side note, the functionality of these programs has been duplicated by the GNU open source projects Flex and Bison. These can be used interchangeably with Lex and Yacc for everything this document will cover, and most other uses as well.
Here are some good references for further study:
The Lex & Yacc page – has great links to references for lex and yacc
Nice tutorial for use of lex & yacc together:
http://epaperpress.com/lexandyacc