ISBN: 9781595930644 (print)
What does it mean for a programming language to exist? Usually languages are defined by an informal description augmented by a reference compiler whose behavior is regarded as normative. This approach works well so long as the one true implementation suffices, but as soon as we wish to have multiple compilers for the same language, we must agree on what the language is independently of its implementations. Most often this is accomplished through social processes such as standardization committees for building consensus. These processes have served us well, and will continue to be important for language design. But they are not sufficient to support the level of rigor required to prove theorems about languages and programs written in them. For that we need a semantics, which provides an objective foundation for such analyses, typically in the form of a type system and an operational semantics. But merely having such a rigorous definition for a language is not enough; it must be validated by a body of meta-theory that establishes its coherence and consistency. But how are we to develop and maintain this body of theory? For full-scale languages the task is so onerous as to inhibit innovation and foster stagnation. The way forward is to take advantage of the recent advances in mechanized reasoning. By representing a language definition within a logical framework we may subject it to formal analysis, much as we use types to express and enforce crucial invariants in our programs. I will describe our use of the Twelf implementation of the LF logical framework, and discuss our successes and difficulties in using it as a tool for mechanizing the meta-theory of programming languages.
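The flavor of this approach can be suggested, outside LF itself, by a minimal OCaml sketch (the GADT below is an illustrative analogue, not Harper's Twelf encoding): the typing rules of a tiny object language are pushed into the host type system, so only well-typed terms can be constructed and evaluation cannot go wrong.

```ocaml
(* A tiny object language whose typing rules are enforced by the host
   type system: the GADT index tracks the object-level type, so an
   ill-typed term such as "if 3 then ... else ..." cannot be built. *)
type _ exp =
  | Int  : int  -> int exp
  | Bool : bool -> bool exp
  | Add  : int exp * int exp -> int exp
  | If   : bool exp * 'a exp * 'a exp -> 'a exp

(* The evaluator is total over well-typed terms; no runtime type errors
   can occur, which is a (shallow) analogue of a type-safety theorem. *)
let rec eval : type a. a exp -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (e1, e2) -> eval e1 + eval e2
  | If (c, t, e) -> if eval c then eval t else eval e

let _ = eval (If (Bool true, Add (Int 1, Int 2), Int 0))  (* = 3 *)
```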
ISBN: 9780897913638 (print)
SKILL is a programming language that supports both command entry and procedural customization in the Opus™ Design Framework™. After briefly considering some related work, we examine the requirements that motivate making a programming language available to the user and describe some of the technical characteristics of the language design and implementation. Finally, we describe our experience with the language and outline future work. A number of programming examples are appended.
ISBN: 9781581131994 (print)
In this paper we present a modular interprocedural pointer analysis algorithm based on access paths for C programs. We argue that access paths can reduce the overhead of representing context-sensitive transfer functions and effectively distinguish non-recursive heap objects. When the modular analysis paradigm is used together with other techniques to handle type casts and function pointers, we are able to handle significant programs like those in the SPECcint92 and SPECcint95 suites. We have implemented the algorithm and tested it on a Pentium II 450 PC running Linux. The observed resource consumption and performance improvement are very encouraging.
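As a concrete illustration of the central data structure, here is a minimal OCaml sketch (the type names and the printing convention are assumptions for illustration, not the paper's implementation): an access path is a base variable extended by field and dereference selectors, which is what allows transfer functions to be phrased symbolically in terms of a callee's formal parameters.

```ocaml
(* An access path: a base variable extended by field selections and
   pointer dereferences, e.g.  p->next.data *)
type selector = Field of string | Deref

type access_path = {
  base : string;            (* variable name, e.g. a formal parameter *)
  sels : selector list;     (* selectors applied left to right        *)
}

(* Pretty-print in C-like notation, collapsing Deref;Field into "->". *)
let to_string { base; sels } =
  let rec go acc = function
    | [] -> acc
    | Deref :: Field f :: rest -> go (acc ^ "->" ^ f) rest
    | Deref :: rest -> go ("(*" ^ acc ^ ")") rest
    | Field f :: rest -> go (acc ^ "." ^ f) rest
  in
  go base sels

let () =
  print_endline
    (to_string { base = "p"; sels = [ Deref; Field "next"; Field "data" ] })
  (* prints: p->next.data *)
```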
ISBN: 9780897914758 (print)
HoME is a version of Smalltalk that can be executed efficiently on a multiprocessor and can be executed in parallel by combining each Smalltalk process with a Mach thread and executing the process on that thread. HoME is nearly the same as ordinary Smalltalk except that multiple processes may execute in parallel. Thus, almost all applications running on ordinary Smalltalk can be executed on HoME without changes in their code. HoME was designed and implemented based on the following fundamental policies: (1) theoretically, an infinite number of processes can become active; (2) the moment a process is scheduled, it becomes active; (3) no process switching occurs; (4) HoME is equivalent to ordinary Smalltalk except for the previous three points. The performance of the current implementation of HoME running on an OMRON LUNA-88K, which has four processors, was measured with benchmarks that execute multiple processes in parallel. In all benchmarks, the results showed that HoME's performance is much better than that of HPS on the same workstation.
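The core mapping, one language-level process running on its own kernel thread with no process switching, can be suggested by a small OCaml 5 sketch (Domain.spawn stands in for a Mach thread; the function name is an illustrative assumption):

```ocaml
(* Each logical "process" is handed to its own OS-level execution unit
   the moment it is scheduled, and runs there until completion: no
   process switching by the language runtime. *)
let schedule_process body = Domain.spawn body   (* stand-in for a Mach thread *)

let () =
  let workers =
    List.init 4 (fun i ->
        schedule_process (fun () ->
            Printf.printf "process %d running in parallel\n%!" i))
  in
  List.iter Domain.join workers
```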
To keep up with the frantic pace at which devices come out, drivers need to be quickly developed, debugged and tested. Although a driver is a critical system component, the driver development process has made little (if any) progress. The situation is particularly disastrous when considering the hardware operating code (i.e., the layer interacting with the device). Writing this code often relies on inaccurate or incomplete device documentation and involves assembly-level operations. As a result, hardware operating code is tedious to write, prone to errors, and hard to debug and maintain. This paper presents a new approach to developing hardware operating code based on an Interface Definition Language (IDL) for hardware functionalities, named Devil. This IDL allows a high-level definition of the communication with a device. A compiler automatically checks the consistency of a Devil definition and generates efficient low-level code. Because the Devil compiler checks safety-critical properties, the long-awaited notion of robustness for hardware operating code is made possible. Finally, the wide variety of devices that we have already specified (mouse, sound, DMA, interrupt, Ethernet, video, and IDE disk controllers) demonstrates the expressiveness of the Devil language.
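A rough sense of what such an IDL definition captures, and of the consistency checking described above, can be given by a small OCaml sketch (the record layout and the overlap check are illustrative assumptions, not Devil's actual notation): a register is described declaratively as named bit ranges, and descriptions with out-of-range or overlapping fields are rejected.

```ocaml
(* A declarative description of a device register as named bit ranges. *)
type field = { name : string; lo : int; hi : int }          (* bits lo..hi *)
type register = { addr : int; width : int; fields : field list }

(* Consistency checks in the spirit of an IDL compiler: every field must
   fit in the register and no two fields may overlap. *)
let check (r : register) =
  List.iter
    (fun f ->
      if f.lo < 0 || f.hi >= r.width || f.lo > f.hi then
        failwith (Printf.sprintf "field %s out of range" f.name))
    r.fields;
  let overlap a b = a.lo <= b.hi && b.lo <= a.hi in
  List.iteri
    (fun i a ->
      List.iteri
        (fun j b ->
          if i < j && overlap a b then
            failwith (Printf.sprintf "fields %s and %s overlap" a.name b.name))
        r.fields)
    r.fields

let status_reg =
  { addr = 0x3f8; width = 8;
    fields = [ { name = "ready"; lo = 0; hi = 0 };
               { name = "error"; lo = 1; hi = 3 } ] }

let () = check status_reg   (* accepted: ranges are disjoint and in bounds *)
```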
ISBN: 9781581137569 (print)
We present the functional language CDuce, discuss some design issues, and show its adequacy for working with XML documents. Distinctive features of CDuce are powerful pattern matching, first-class functions, overloaded functions, a very rich type system (arrows, sequences, pairs, records, intersections, unions, differences), precise type inference for patterns and error localization, and a natural interpretation of types as sets of values. We also outline some important implementation issues; in particular, a dispatch algorithm that demonstrates how static type information can be used to obtain very efficient compilation schemas.
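The combination of XML values, pattern matching, and types interpreted as sets of values can be hinted at outside CDuce by a minimal OCaml sketch (the xml variant and the query below are illustrative assumptions, not CDuce syntax): documents are ordinary values and queries are written by pattern matching over them.

```ocaml
(* XML documents as ordinary values: an element has a tag, attributes,
   and a sequence of children; leaves are text nodes. *)
type xml =
  | Elem of string * (string * string) list * xml list
  | Text of string

(* Collect the text of every <title> element, by pattern matching. *)
let rec titles = function
  | Elem ("title", _, [ Text t ]) -> [ t ]
  | Elem (_, _, children) -> List.concat_map titles children
  | Text _ -> []

let doc =
  Elem ("bib", [],
        [ Elem ("book", [ ("year", "1994") ],
                [ Elem ("title", [], [ Text "TCP/IP Illustrated" ]) ]) ])

let () = List.iter print_endline (titles doc)
(* prints: TCP/IP Illustrated *)
```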
ISBN: 9781605587943 (print)
Functional programming presents several important advantages in the design, analysis and implementation of parallel algorithms: it discourages iteration and encourages recursion; it supports persistence and hence easy speculation; it encourages higher-order aggregate operations; it is well suited for defining cost models tied to the programming language rather than the machine; it can avoid false sharing; it can use cheaper weak consistency models; and, most importantly, it supports safe deterministic parallelism. In fact, functional programming supports a level of abstraction in which parallel algorithms are often as easy to design and analyze as sequential algorithms. The recent widespread advent of parallel machines therefore presents a great opportunity for functional programming languages. However, any changes will require significant education at all levels and involvement of the functional programming community. In this talk I will discuss an approach to designing and analyzing parallel algorithms in a strict functional and fully deterministic setting. Key ideas include a cost model defined in terms of analyzing work and span, the use of divide-and-conquer and contraction, the need for (immutable) arrays to achieve asymptotic efficiency, and the power of (deterministic) randomized algorithms. These are all ideas I believe can be taught at any level.
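A minimal OCaml sketch of the style of analysis (illustrative only; the parallelism is indicated in comments rather than realized by a particular runtime): a divide-and-conquer reduction over an immutable array whose work is O(n) and whose span is O(log n).

```ocaml
(* Divide-and-conquer sum over an array.  The two recursive calls are
   independent and could run in parallel; with a fork/join primitive
   this gives work W(n) = O(n) and span S(n) = O(log n). *)
let reduce_sum (a : int array) =
  let rec go lo hi =                 (* sums a.(lo) .. a.(hi - 1) *)
    if hi - lo <= 1 then (if hi > lo then a.(lo) else 0)
    else
      let mid = (lo + hi) / 2 in
      let left = go lo mid in        (* these two calls are the        *)
      let right = go mid hi in       (* parallel fork/join opportunity *)
      left + right
  in
  go 0 (Array.length a)

let () = Printf.printf "%d\n" (reduce_sum [| 1; 2; 3; 4; 5 |])  (* 15 *)
```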
With advances in modern multi-core processors and accelerators, many modern applications are increasingly turning to compiler-assisted parallel and vector programming models such as OpenMP, OpenCL, Halide, Python and T...
ISBN: 0897914457 (print)
The paper reports on the underlying concepts of a system for software reverse engineering. Although the immediate goal is translation from CMS-2 to Ada, the system is envisaged more broadly as a comprehensive environment for the software lifecycle: for initial development, maintenance, re-engineering and re-documentation. This environment must assure consistency at all times of design, programs and documentation. The paper describes the three phases of the system: (1) extracting design and documentation from existing software; (2) user visualization and design/redesign of the software; and (3) Ada program generation from the design/redesign. The first phase translates source programs into an object-oriented Entity-Relation-Attribute (ERA) diagram, which is the main vehicle for the graphic visualization. The translation uses a concise set of objects and relations. The second phase consists of user query and retrieval of subdiagrams that provide views of the software, needed to visualize or redesign specific aspects of the software. This phase is divided into in-the-large and in-the-small parts. The in-the-large part involves high-level objects such as systems, packages, tasks, procedures (or functions), external variables, input/output, and comments that specify requirements or provide explanations. The user may need to use design tools that optimize the design. The in-the-small part consists of execution statements within an individual program unit. These are translated into the MODEL equational language and visualized through a Petri-net-like graph. The equational representation is considerably easier to comprehend, test and verify. Finally, the code generation phase uses the graphics and text from the previous phase to generate the respective parts of Ada programs. There are three code generators: packages are generated from object-usage views; tasks are generated from dataflow views; and individual procedures are generated from equations and Petri-net-like diagrams. The approach ...
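The kind of intermediate representation involved can be suggested by a minimal OCaml sketch (the constructors and the query are illustrative assumptions, not the system's actual ERA schema): extracted design information is held as entities, attributes, and typed relations, from which sub-diagram views are selected by simple queries.

```ocaml
(* A tiny Entity-Relation-Attribute store: entities carry attributes,
   and relations are typed edges between entities. *)
type entity = { id : string; kind : string;                  (* e.g. "package", "procedure" *)
                attrs : (string * string) list }
type relation = { rel : string; src : string; dst : string } (* e.g. "calls", "contains" *)

type era = { entities : entity list; relations : relation list }

(* A "view" in the large: the sub-diagram of everything a unit contains. *)
let contained_in (d : era) (parent : string) =
  List.filter (fun r -> r.rel = "contains" && r.src = parent) d.relations
  |> List.map (fun r -> r.dst)

let diagram =
  { entities =
      [ { id = "NAV"; kind = "package"; attrs = [] };
        { id = "update_position"; kind = "procedure"; attrs = [] } ];
    relations = [ { rel = "contains"; src = "NAV"; dst = "update_position" } ] }

let () = List.iter print_endline (contained_in diagram "NAV")
```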
ISBN: 9781581136623 (print)
This paper presents the design and implementation of a compiler algorithm that effectively optimizes programs for energy usage using dynamic voltage scaling (DVS). The algorithm identifies program regions where the CPU can be slowed down with negligible performance loss. It is implemented as a source-to-source transformation using the SUIF2 compiler infrastructure. Physical measurements on a high-performance laptop show that total system (i.e., laptop) energy savings of up to 28% can be achieved with performance degradation of less than 5% for the SPECfp95 benchmarks. On average, system energy and the energy-delay product are reduced by 11% and 9%, respectively, with a performance slowdown of 2%. It was also discovered that the energy usage of programs using our DVS algorithm is within 6% of the theoretical lower bound. To the best of our knowledge, this is one of the first works to evaluate DVS algorithms by physical measurement.
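The per-region decision such an algorithm makes can be suggested by a small OCaml sketch (the analytical model and the numbers are illustrative assumptions, not the paper's algorithm): for a memory-bound region, choose the lowest available frequency whose predicted slowdown stays within the allowed performance loss.

```ocaml
(* A simple analytical model for dynamic voltage scaling: the CPU-bound
   part of a region scales with 1/f, the memory-bound part does not.
   Pick the lowest available frequency whose predicted slowdown stays
   within the allowed performance loss. *)
let predicted_slowdown ~t_cpu ~t_mem ~f_max ~f =
  (t_cpu *. (f_max /. f) +. t_mem) /. (t_cpu +. t_mem)

let choose_frequency ~t_cpu ~t_mem ~f_max ~budget freqs =
  freqs
  |> List.filter (fun f -> predicted_slowdown ~t_cpu ~t_mem ~f_max ~f <= 1. +. budget)
  |> List.fold_left min f_max

let () =
  (* Region is 30% CPU work, 70% memory stalls; allow 5% slowdown. *)
  let f = choose_frequency ~t_cpu:0.3 ~t_mem:0.7 ~f_max:1000.
            ~budget:0.05 [ 600.; 700.; 800.; 900.; 1000. ] in
  Printf.printf "run region at %.0f MHz\n" f   (* prints: run region at 900 MHz *)
```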