This paper addresses the problem of automating the process of programming in general, and offers a solution for the scientific computation application domain. The goal of an automated programming system is to minimize...
ISBN (Print): 0897915925
CORAL is a deductive database system that supports a rich declarative language, provides a wide range of evaluation methods, and allows a combination of declarative and imperative programming. The data can be persistent on disk or can reside in main memory. We describe the architecture and implementation of CORAL. There were two important goals in the design of the CORAL architecture: (1) to integrate the different evaluation strategies in a reasonable fashion, and (2) to allow users to influence the optimization techniques used so as to exploit the full power of the CORAL implementation. A CORAL declarative program can be organized as a collection of interacting modules, and this modular structure is the key to satisfying both these goals. The high-level module interface allows modules with different evaluation techniques to interact in a transparent fashion. Further, users can optionally tailor the execution of a program by selecting from among a wide range of control choices at the level of each module. CORAL also has an interface with C++, and users can program in a combination of declarative CORAL and C++ extended with CORAL primitives. A high degree of extensibility is provided by allowing C++ programmers to use the class structure of C++ to enhance the CORAL implementation.
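The abstract above describes extensibility via C++ classes. As a rough illustration of that idea only (the interface below is hypothetical and is not CORAL's actual C++ API), a user-defined type could be made usable by a deductive engine by implementing the equality, hashing, and printing hooks an evaluator needs for joins and duplicate elimination:

#include <cstddef>
#include <functional>
#include <string>

// Hypothetical extensibility hook: the engine manipulates values only through
// this abstract interface, so user classes can take part in joins and
// duplicate elimination without the engine knowing their layout.
struct EngineValue {
    virtual ~EngineValue() = default;
    virtual bool equals(const EngineValue& other) const = 0;
    virtual std::size_t hash() const = 0;
    virtual std::string print() const = 0;
};

// A user-defined complex-number type made visible to declarative rules.
struct Complex : EngineValue {
    double re, im;
    Complex(double r, double i) : re(r), im(i) {}
    bool equals(const EngineValue& o) const override {
        auto* c = dynamic_cast<const Complex*>(&o);
        return c && c->re == re && c->im == im;
    }
    std::size_t hash() const override {
        return std::hash<double>{}(re) ^ (std::hash<double>{}(im) << 1);
    }
    std::string print() const override {
        return "(" + std::to_string(re) + "," + std::to_string(im) + ")";
    }
};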
ISBN (Print): 9780897915984
An implementation of interprocedural constant propagation must model the transmission of values through each procedure. In the framework proposed by Callahan, Cooper, Kennedy, and Torczon in 1986, this intraprocedural propagation is modeled with a jump function. While Callahan et al. propose several kinds of jump functions, they give no data to help choose between them. This paper reports on a comparative study of jump function implementations. It shows that different jump functions produce different numbers of useful constants; it suggests a particular function, called the pass-through parameter jump function, as the most cost-effective in practice.
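As a rough sketch of the distinction studied here (not the paper's implementation), a pass-through parameter jump function simply forwards the constant-propagation lattice value of one of the caller's formals to the callee, without evaluating any expression over it; the names below are illustrative:

#include <map>
#include <optional>
#include <string>

// Lattice value at a call site: either "not known to be constant" or a
// known integer constant.
using LatticeVal = std::optional<long>;

// Pass-through parameter jump function: the value reaching an actual
// argument of a call is just the value of one of the caller's own formal
// parameters, forwarded unchanged.
struct PassThroughJump {
    std::string callerFormal;   // which caller formal is passed straight through

    LatticeVal eval(const std::map<std::string, LatticeVal>& callerFormals) const {
        auto it = callerFormals.find(callerFormal);
        if (it == callerFormals.end()) return std::nullopt;
        return it->second;
    }
};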
ISBN (Print): 9780897915984
A data breakpoint associates debugging actions with programmer-specified conditions on the memory state of an executing program. Data breakpoints provide a means for discovering program bugs that are tedious or impossible to isolate using control breakpoints alone. In practice, programmers rarely use data breakpoints, because they are either unimplemented or prohibitively slow in available debugging software. In this paper, we present the design and implementation of a practical data breakpoint facility. A data breakpoint facility must monitor all memory updates performed by the program being debugged. We implemented and evaluated two complementary techniques for reducing the overhead of monitoring memory updates. First, we checked write instructions by inserting checking code directly into the program being debugged. The checks use a segmented bitmap data structure that minimizes address lookup complexity. Second, we developed data flow algorithms that eliminate checks on some classes of write instructions but may increase the complexity of the remaining checks. We evaluated these techniques on the SPARC using the SPEC benchmarks. Checking each write instruction using a segmented bitmap achieved an average overhead of 42%. This overhead is independent of the number of breakpoints in use. Data flow analysis eliminated an average of 79% of the dynamic write checks. For scientific programs such as the NAS kernels, analysis reduced write checks by a factor of ten or more. On the SPARC these optimizations reduced the average overhead to 25%.
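A minimal sketch of a segmented bitmap of the kind described, assuming a two-level scheme in which the high bits of an address select a lazily allocated segment and the low bits select a bit within it (segment size and names here are illustrative, not the paper's exact parameters):

#include <cstddef>
#include <cstdint>
#include <vector>

// Two-level "segmented bitmap": one bit per monitored byte of memory.
// Segments are allocated lazily, so unmonitored regions cost almost nothing.
class SegmentedBitmap {
    static constexpr std::size_t kSegBits = 16;                    // bytes per segment = 2^16
    static constexpr std::size_t kSegBytes = std::size_t(1) << kSegBits;
    std::vector<std::vector<std::uint8_t>> segments_;              // indexed by high address bits

    std::vector<std::uint8_t>& segment(std::uintptr_t addr) {
        std::size_t idx = addr >> kSegBits;
        if (idx >= segments_.size()) segments_.resize(idx + 1);
        if (segments_[idx].empty()) segments_[idx].assign(kSegBytes / 8, 0);
        return segments_[idx];
    }

public:
    // Mark one byte as covered by a data breakpoint.
    void watch(std::uintptr_t addr) {
        std::size_t off = addr & (kSegBytes - 1);
        segment(addr)[off / 8] |= std::uint8_t(1u << (off % 8));
    }

    // The test inserted before each write instruction: is this byte watched?
    bool isWatched(std::uintptr_t addr) const {
        std::size_t idx = addr >> kSegBits;
        if (idx >= segments_.size() || segments_[idx].empty()) return false;
        std::size_t off = addr & (kSegBytes - 1);
        return (segments_[idx][off / 8] >> (off % 8)) & 1u;
    }
};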
ISBN (Print): 0897914759
Alphonse is a program transformation system that uses dynamic dependency analysis and incremental computation techniques to automatically generate efficient dynamic implementations from simple exhaustive imperative program specifications.
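The abstract is brief, but the general idea of deriving an efficient dynamic implementation from an exhaustive specification can be illustrated roughly as follows (a schematic hand-written example, not Alphonse's transformation): rather than re-running the exhaustive computation after every input change, the derived version updates only what the change affects.

#include <cstddef>
#include <utility>
#include <vector>

// Exhaustive specification: recompute the whole result from scratch.
long sumAll(const std::vector<long>& xs) {
    long s = 0;
    for (long x : xs) s += x;
    return s;
}

// A "dynamic" version of the same computation: maintain the result
// incrementally as individual inputs change instead of re-running the
// exhaustive loop. A transformation system would derive code of this shape
// by tracking which parts of the output depend on which inputs.
struct IncrementalSum {
    std::vector<long> xs;
    long total;

    explicit IncrementalSum(std::vector<long> init)
        : xs(std::move(init)), total(sumAll(xs)) {}

    void update(std::size_t i, long newValue) {
        total += newValue - xs[i];   // adjust by the delta only
        xs[i] = newValue;
    }
};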
ISBN (Print): 0818625856
The design and implementation of Distributed Eiffel, a language designed and implemented for distributed programming on top of the Clouds operating system by extending the object-oriented language Eiffel, are discussed. The language presents a programming paradigm based on objects of multiple granularity. While large-grained persistent objects serve as units of distribution, fine-grained objects are used to describe and manipulate entities within these units. The language design makes it possible to implement both shared-memory and message-passing models of parallel programming within a single programming paradigm. The language provides features with which the programmer can declaratively fine-tune synchronization at any desired object granularity and maximize concurrency. With the primitives provided, it is possible to combine and control both data migration and computation migration effectively, at the language level. The design addresses such issues as parameter passing, asynchronous invocations and result claiming, and concurrency control.
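For the asynchronous invocations and result claiming mentioned above, a rough analogue in C++ (illustrative only; Distributed Eiffel expresses this with its own language constructs, not this API) is to issue the invocation asynchronously and claim the result only at the point it is needed:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Stand-in for a method on a remote, large-grained object.
int expensiveQuery(int key) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return key * 2;
}

int main() {
    // Asynchronous invocation: the caller continues immediately.
    std::future<int> pending = std::async(std::launch::async, expensiveQuery, 21);

    // ... the caller does other work here, possibly invoking other objects ...

    // Result claiming: block only where the value is actually required.
    std::cout << "result = " << pending.get() << "\n";
    return 0;
}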
ISBN (Print): 0897914813
Tachyon Common Lisp is an efficient and portable implementation of Common Lisp, 2nd Edition. The design objective of Tachyon is to apply both advanced optimization technology developed for RISC processors and Lisp optimization techniques. The compiler generates very fast code, comparable to, and sometimes faster than, the code generated by the UNIX C compiler. Compared with the most widely used commercial Common Lisp, Tachyon Common Lisp compiled code is 2 times faster and the interpreter is 6 times faster on the Gabriel benchmark suite. Tachyon Common Lisp is the fastest among the Lisp systems known to the authors.
ISBN (Print): 0818625856
Two major paradigms in computer programming languages are imperative and declarative programming. The authors describe a scheme for languages that integrate specific features from these two paradigms into a new framework: constraint imperative programming. The authors discuss the design and implementation of a particular instance of this framework, Kaleidoscope'90. From the imperative paradigm, constraint imperative programming adopts explicit control flow, state, and assignment. From the declarative paradigm, it adopts explicit, system-maintained constraints (relations that should hold). There is a strong practical motivation for making this integration: in a typical application, some portions are most clearly described using imperative constructs, while other portions are most clearly described using constraints. By using a constraint imperative language, the most suitable paradigm can be used, as appropriate.
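A very small illustration of the combination (a schematic in C++, not Kaleidoscope'90 syntax or semantics): the program assigns imperatively, while a system-maintained constraint re-establishes the relation fahrenheit = celsius * 9/5 + 32 after every change.

#include <cassert>

// Schematic constraint maintenance: the relation between the two fields is
// re-established after every imperative assignment, instead of the programmer
// updating both variables by hand.
struct Temperature {
    double celsius = 0.0;
    double fahrenheit = 32.0;

    void enforceFromCelsius()    { fahrenheit = celsius * 9.0 / 5.0 + 32.0; }
    void enforceFromFahrenheit() { celsius = (fahrenheit - 32.0) * 5.0 / 9.0; }

    // Imperative assignments; each one triggers constraint satisfaction.
    void setCelsius(double c)    { celsius = c;    enforceFromCelsius(); }
    void setFahrenheit(double f) { fahrenheit = f; enforceFromFahrenheit(); }
};

int main() {
    Temperature t;
    t.setCelsius(100.0);
    assert(t.fahrenheit == 212.0);   // constraint holds after the assignment
    t.setFahrenheit(32.0);
    assert(t.celsius == 0.0);
    return 0;
}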
ISBN (Print): 0897914759
A simple and efficient algorithm for generating bottom-up rewrite system (BURS) tables is described. A small prototype implementation produces tables 10 to 30 times more quickly than the best current techniques. The algorithm does not require novel data structures or complicated algorithmic techniques. Previously published methods for on-the-fly elimination of states are generalized and simplified to create a new method, triangle trimming, that is employed in the algorithm.
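The abstract concerns table generation rather than matching itself, but the bottom-up labeling that BURS tables encode can be sketched directly (the grammar, rule names, and costs below are invented for illustration): each tree node is labeled, children first, with the cheapest rule that matches it.

#include <limits>
#include <memory>
#include <string>
#include <vector>

// Tiny expression tree: leaves are registers or constants, interior nodes are operators.
struct Node {
    std::string op;                                 // "reg", "const", "add"
    std::vector<std::unique_ptr<Node>> kids;
    int cost = std::numeric_limits<int>::max();     // cheapest cost found so far
    std::string rule;                               // rule achieving that cost
};

// Bottom-up labeling: visit children first, then try each (invented) rule at
// this node and keep the cheapest match. A table-driven BURS matcher
// precomputes these cost comparisons into states when the tables are
// generated, so matching itself only does table lookups.
void label(Node& n) {
    for (auto& k : n.kids) label(*k);
    auto consider = [&](const std::string& rule, int cost) {
        if (cost < n.cost) { n.cost = cost; n.rule = rule; }
    };
    if (n.op == "reg")   consider("use register", 0);
    if (n.op == "const") consider("LOAD imm", 1);   // materialize the constant in a register
    if (n.op == "add" && n.kids.size() == 2) {
        consider("ADD r,r", 1 + n.kids[0]->cost + n.kids[1]->cost);
        if (n.kids[1]->op == "const")
            consider("ADD r,imm", 1 + n.kids[0]->cost);   // folds the constant, saving its load
    }
}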