E-Book Details:
Title: | Compilers: Principles, Techniques, and Tools |
Publisher: | Addison Wesley Publishing Company |
Author: | Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman |
Edition: | Hardcover, 2nd Edition Aug 2006 |
Format: | PDF |
ISBN: | 0321486811 |
EAN: | 9780321486813, 978-0321486813 |
No. of Pages: | 1040 |
Book Description:
This book provides the foundation for understanding the theory and practice of compilers. Revised and updated, it reflects the current state of compilation. Every chapter has been completely revised to reflect developments in software engineering, programming languages, and computer architecture that have occurred since 1986, when the last edition was published. The authors, recognizing that few readers will ever go on to construct a compiler, retain their focus on the broader set of problems faced in software design and software development. The book is aimed at computer scientists, developers, and students who want to learn how to build, maintain, and execute a compiler for a major programming language.
ABOUT THE AUTHOR:
Alfred V. Aho is Lawrence Gussman Professor of Computer Science at Columbia University. Professor Aho has won several awards including the Great Teacher Award for 2003 from the Society of Columbia Graduates and the IEEE John von Neumann Medal. He is a member of the National Academy of Engineering and a fellow of the ACM and IEEE.
Monica S. Lam is a Professor of Computer Science at Stanford University. She was the Chief Scientist at Tensilica and the founding CEO of moka5. She led the SUIF project, which produced one of the most popular research compilers, and pioneered numerous compiler techniques used in industry.
Ravi Sethi launched the research organization in Avaya and is president of Avaya Labs. Previously, he was a senior vice president at Bell Labs in Murray Hill and chief technical officer for communications software at Lucent Technologies. He has held teaching positions at the Pennsylvania State University and the University of Arizona, and has taught at Princeton University and Rutgers. He is a fellow of the ACM.
Jeffrey Ullman is CEO of Gradiance and a Stanford W. Ascherman Professor of Computer Science at Stanford University. His research interests include database theory, database integration, data mining, and education using the information infrastructure. He is a member of the National Academy of Engineering, a fellow of the ACM, and winner of the Karlstrom Award and Knuth Prize.
Table of Contents:
It takes at least two quarters or even two semesters to cover all or most of the material in this book. It is common to cover the first half in an undergraduate course and the second half of the book, stressing code optimization, in a second course at the graduate or mezzanine level. Here is an outline of the chapters:
Chapter 1 contains motivational material and also presents some background issues in computer architecture and programming-language principles.
Chapter 2 develops a miniature compiler and introduces many of the important concepts, which are then developed in later chapters. The compiler itself appears in the appendix.
Chapter 3 covers lexical analysis, regular expressions, finite-state machines, and scanner-generator tools. This material is fundamental to text-processing of all sorts.
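To give a taste of the scanner material in Chapter 3, here is a minimal Python sketch of a regular-expression-based tokenizer. The token names and the tiny language are assumptions made for this illustration, not code from the book.

import re

# Hypothetical token specification for a tiny language (illustrative only).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"[ \t]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(text):
    # Yield (token_type, lexeme) pairs, skipping whitespace.
    for match in MASTER.finditer(text):
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()

print(list(tokenize("x = 42 + y")))
# [('ID', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('ID', 'y')]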
Chapter 4 covers the major parsing methods, top-down (recursive-descent, LL) and bottom-up (LR and its variants).
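For a flavor of the top-down (recursive-descent) methods in Chapter 4, the sketch below parses the hypothetical grammar expr -> term { '+' term }, term -> NUMBER. The grammar and token format are assumptions chosen only to illustrate the technique.

# Recursive-descent parser for the toy grammar
#   expr -> term { '+' term }
#   term -> NUMBER

def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if not tok.isdigit():
        raise SyntaxError(f"expected a number, got {tok!r}")
    return int(tok), pos + 1

print(parse_expr(["1", "+", "2", "+", "3"])[0])   # prints 6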
Chapter 5 introduces the principal ideas in syntax-directed definitions and syntax-directed translations.
Chapter 6 takes the theory of Chapter 5 and shows how to use it to generate intermediate code for a typical programming language.
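As a rough illustration of the intermediate-code generation in Chapter 6, this sketch walks a tiny expression tree and emits three-address code. The tuple-based tree and the temporary-naming scheme are assumptions made for this example.

import itertools

_temps = itertools.count(1)

def gen_code(node, code):
    # Leaves are variable names (strings); interior nodes are (op, left, right)
    # tuples -- a representation assumed for this sketch.  Returns the name
    # that holds the node's value, appending instructions to `code`.
    if isinstance(node, str):
        return node
    op, left, right = node
    l = gen_code(left, code)
    r = gen_code(right, code)
    t = f"t{next(_temps)}"
    code.append(f"{t} = {l} {op} {r}")
    return t

code = []
gen_code(("+", "a", ("*", "b", "c")), code)   # a + b * c
print("\n".join(code))
# t1 = b * c
# t2 = a + t1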
Chapter 7 covers run-time environments, especially management of the run-time stack and garbage collection.
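Chapter 7's garbage-collection material is, at its simplest, a question of reachability; here is a hedged mark-and-sweep sketch with an invented object representation, meant only to hint at the idea.

class Obj:
    # Heap object for a toy mark-and-sweep collector (illustrative only).
    def __init__(self, *refs):
        self.refs = list(refs)
        self.marked = False

def collect(roots, heap):
    # Mark every object reachable from the roots, then sweep the rest.
    stack = list(roots)
    while stack:                           # mark phase
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)
    live = [o for o in heap if o.marked]   # sweep phase keeps only marked objects
    for o in live:
        o.marked = False                   # reset marks for the next cycle
    return live

a = Obj(); b = Obj(a); garbage = Obj()
print(len(collect(roots=[b], heap=[a, b, garbage])))   # 2 objects survive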
Chapter 8 is on object-code generation. It covers construction of basic blocks, generation of code from expressions and basic blocks, and register-allocation techniques.
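To show what "construction of basic blocks" in Chapter 8 involves, here is a minimal sketch that partitions a list of instructions at the usual leaders: the first instruction, jump targets, and instructions that follow jumps. The instruction encoding is invented for the example.

def basic_blocks(instrs):
    # Assumes jumps look like ("goto", label) or ("if", cond, label) and
    # jump targets look like ("label", name) -- a format made up for this sketch.
    leaders = {0}
    for i, ins in enumerate(instrs):
        if ins[0] == "label":              # a jump target starts a block
            leaders.add(i)
        if ins[0] in ("goto", "if") and i + 1 < len(instrs):
            leaders.add(i + 1)             # the instruction after a jump starts a block
    cuts = sorted(leaders) + [len(instrs)]
    return [instrs[a:b] for a, b in zip(cuts, cuts[1:])]

prog = [("label", "L0"), ("add", "x", "1"), ("if", "x<10", "L0"), ("return",)]
for block in basic_blocks(prog):
    print(block)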
Chapter 9 introduces the technology of code optimization, including flow graphs, data-flow frameworks, and iterative algorithms for solving these frameworks.
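The "iterative algorithms" Chapter 9 refers to have a simple core. Below is a hedged sketch of round-robin live-variable analysis over a flow graph; the graph encoding and the use/def sets are assumptions made for the example, not the book's notation.

def live_variables(succ, use, defs):
    # succ[b]: successor blocks of b; use[b]: variables read in b before any
    # write; defs[b]: variables written in b.  Returns live-in for each block.
    live_in = {b: set() for b in succ}
    live_out = {b: set() for b in succ}
    changed = True
    while changed:
        changed = False
        for b in succ:
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            inn = use[b] | (out - defs[b])
            if inn != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in

# Tiny two-block example: B1 writes x and falls through to B2, which reads x.
succ = {"B1": ["B2"], "B2": []}
use  = {"B1": set(), "B2": {"x"}}
defs = {"B1": {"x"}, "B2": set()}
print(live_variables(succ, use, defs))   # {'B1': set(), 'B2': {'x'}}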
Chapter 10 covers instruction-level optimization. The emphasis is on the extraction of parallelism from small sequences of instructions and scheduling them on single processors that can do more than one thing at once.
Chapter 11 talks about larger-scale parallelism detection and exploitation. Here, the emphasis is on numeric codes that have many tight loops that range over multidimensional arrays.
Chapter 12 is on interprocedural analysis. It covers pointer analysis, aliasing, and data-flow analysis that takes into account the sequence of procedure calls that reach a given point in the code.