This study shows that it is indeed possible to use declarative formulations in practical systems when they are combined judiciously with the appropriate evaluation tools. This paper describes the implementation of logical formulations of two existing analysis techniques: groundness analysis of logic programs and strictness analysis of functional programs. The XSB system is used as the evaluation tool. Experimental evidence shows that the resulting groundness and strictness analysis systems are practical in terms of both time and space.
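The abstract does not spell out the analyses themselves; as a rough illustration only (ours, not the paper's approach of running a tabled logic program under XSB, and with an invented clause encoding), here is a toy groundness fixpoint written directly in Python:

    def is_var(term):
        return term[0].isupper()

    def groundness(clauses):
        # arity of every predicate mentioned in the program
        arity = {atom[0]: len(atom[1]) for head, body in clauses for atom in [head] + body}
        # Start by assuming every argument position is ground, then retract
        # positions with no justification; the final sets are a sound
        # approximation of "ground in every answer".
        ground = {p: set(range(n)) for p, n in arity.items()}
        changed = True
        while changed:
            changed = False
            for (p, args), body in clauses:
                for i in list(ground[p]):
                    justified = (not is_var(args[i])) or any(
                        b_arg == args[i] and j in ground[q]
                        for q, b_args in body for j, b_arg in enumerate(b_args))
                    if not justified:
                        ground[p].discard(i)
                        changed = True
        return ground

    # p(a).   p(X) :- q(X).   q(b).   r(Y).
    prog = [(("p", ["a"]), []),
            (("p", ["X"]), [("q", ["X"])]),
            (("q", ["b"]), []),
            (("r", ["Y"]), [])]
    print(groundness(prog))   # p and q are ground at position 0; r is not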
Conventional dataflow analysis computes information about what facts may or will not hold during the execution of a program. Sometimes it is useful, for program optimization, to know how often or with what probability a fact holds true during program execution. In this paper, we provide a precise formulation of this problem for a large class of dataflow problems: the class of finite bi-distributive subset problems. We show how it can be reduced to a generalization of the standard dataflow analysis problem, one that requires a sum-over-all-paths quantity instead of the usual meet-over-all-paths quantity. We show that Kildall's result expressing the meet-over-all-paths value as a maximal fixed point carries over to the generalized setting. We then outline ways to adapt the standard dataflow analysis algorithms to solve this generalized problem, both in the intraprocedural and the interprocedural case.
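As a rough illustration of the "sum over all paths" quantity (ours, not the paper's formulation or algorithm), the sketch below computes the expected frequency with which a single fact holds at each node of an acyclic control-flow graph, summing over incoming edges weighted by edge probabilities instead of taking a meet; handling loops, as the paper does, needs more machinery:

    from collections import defaultdict

    def fact_frequency(edges, transfer, entry):
        """edges: {(m, n): probability of taking m->n when control is at m}
        transfer: {node: 'gen' | 'kill' | 'id'}  effect of the node on the fact
        Returns {node: expected frequency with which the fact holds on entry}."""
        succs, preds = defaultdict(list), defaultdict(list)
        for (m, n), p in edges.items():
            succs[m].append((n, p))
            preds[n].append((m, p))
        reach = defaultdict(float)        # expected number of visits to a node
        holds_in = defaultdict(float)     # expected visits on which the fact holds
        holds_out = defaultdict(float)
        reach[entry] = 1.0
        for n in _topo(succs, entry):
            if n != entry:                # a sum over incoming edges, not a meet
                reach[n] = sum(reach[m] * p for m, p in preds[n])
                holds_in[n] = sum(holds_out[m] * p for m, p in preds[n])
            holds_out[n] = {'gen': reach[n], 'kill': 0.0, 'id': holds_in[n]}[transfer[n]]
        return dict(holds_in)

    def _topo(succs, entry):              # topological order of the acyclic CFG
        seen, order = set(), []
        def visit(n):
            if n not in seen:
                seen.add(n)
                for m, _ in succs[n]:
                    visit(m)
                order.append(n)
        visit(entry)
        return list(reversed(order))

    # entry branches to a (p=0.6, generates the fact) or b (p=0.4, kills it);
    # both rejoin at exit, so the fact holds at exit with expected frequency 0.6.
    edges = {('entry', 'a'): 0.6, ('entry', 'b'): 0.4, ('a', 'exit'): 1.0, ('b', 'exit'): 1.0}
    transfer = {'entry': 'id', 'a': 'gen', 'b': 'kill', 'exit': 'id'}
    print(fact_frequency(edges, transfer, 'entry')['exit'])   # 0.6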
GUM is a portable, parallel implementation of the Haskell functional language. Despite sustained research interest in parallel functional programming, GUM is one of the first such systems to be made publicly available. GUM is message-based, and portability is facilitated by using the PVM communications harness that is available on many multi-processors. As a result, GUM is available for both shared-memory (Sun SPARCserver multiprocessors) and distributed-memory (networks of workstations) architectures. The high message latency of distributed machines is ameliorated by sending messages asynchronously, and by sending large packets of related data in each message. Initial performance figures demonstrate absolute speedups relative to the best sequential compiler technology. To improve the performance of a parallel Haskell program, GUM provides tools for monitoring and visualising the behaviour of threads and of processors during execution.
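A back-of-the-envelope illustration (numbers assumed, not taken from the paper) of why packing related data into one large message amortises the per-message latency:

    # assumed: 5 ms per-message latency, 1 us per word, 1000 related words to ship
    L, c, n = 5e-3, 1e-6, 1000
    print(n * (L + c))   # one message per word:  ~5.001 s dominated by latency
    print(L + n * c)     # one packed message:    ~0.006 s, latency paid once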
ISBN (print): 9780897917889
Large software systems are often built on system platforms that support or enforce specific characteristics of the source code or actual design. These characteristics are either captured informally in design guideline documents or in specialized design and implementation languages. In our view, both approaches are unsatisfactory. Informal descriptions do not allow automated analysis and lead to vague constraint descriptions. The language-based approach leads to different languages for different platforms and even for different versions of the same basic platform. Our approach is to describe and name the constraints separately in a design constraint language called CDL, which is based on an extraordinarily concise logic of parse trees. Designs are then annotated with the names of the constraints they are supposed to satisfy. We discuss how the design constraint language is integrated into a design language environment. We exhibit industrial and experimental evidence that our choice of design constraint language allows us to formalize naturally and succinctly common design characteristics.
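CDL itself is not shown in the abstract; as a loose analogue only (Python's ast module standing in for the logic of parse trees, with a constraint invented for illustration), here is a named design constraint checked mechanically over parse trees:

    import ast

    def check_handler_constraint(source):
        """Constraint (invented): every class whose name ends in 'Handler'
        must define a handle() method. Returns the names of violators."""
        violations = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.ClassDef) and node.name.endswith("Handler"):
                methods = {n.name for n in node.body if isinstance(n, ast.FunctionDef)}
                if "handle" not in methods:
                    violations.append(node.name)
        return violations

    code = """
    class LoginHandler:
        def handle(self, request): ...

    class AuditHandler:
        def log(self, event): ...
    """
    print(check_handler_constraint(code))   # ['AuditHandler']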
ISBN (print): 089791788X
In this paper we present a method of code implementation that works in conjunction with collaboration- and responsibility-based analysis modeling techniques to achieve better code reuse and resilience to change. Our approach maintains a closer mapping from responsibilities in the analysis model to entities in the implementation. In so doing, it leverages the features of flexible design and design reuse found in collaboration-based design models to provide similar adaptability and reuse in the implementation. Our approach requires no special development tools and uses only standard features available in the C++ language. In an earlier paper we described the basic mechanisms used by our approach and discussed its advantages in comparison to the framework approach. In this paper we show how our approach combines code and design reuse, describing specific techniques that can be used in the development of larger applications.
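The paper's mechanisms rely on standard C++ features; as a rough Python analogue only (class and method names invented), each responsibility from the analysis model is kept in its own small class and an implementation entity is assembled by composing the responsibilities it plays:

    class Persistable:                       # responsibility: expose state for saving
        def save(self):
            return dict(vars(self))

    class Observable:                        # responsibility: notify collaborators
        def __init__(self, *args, **kw):
            super().__init__(*args, **kw)
            self._observers = []
        def subscribe(self, fn):
            self._observers.append(fn)
        def notify(self, event):
            for fn in self._observers:
                fn(event)

    class Account(Observable, Persistable):  # entity = composition of responsibilities
        def __init__(self, owner):
            super().__init__()
            self.owner = owner
        def deposit(self, amount):
            self.notify(("deposit", amount))

    acct = Account("ada")
    acct.subscribe(print)
    acct.deposit(10)        # ('deposit', 10)
    print(acct.save())      # {'_observers': [...], 'owner': 'ada'}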
Recent shared-memory parallel computer systems offer the exciting possibility of customizing memory coherence protocols to fit an application's semantics and sharing patterns. Custom protocols have been used to achieve message-passing performance - while retaining the convenient programming model of a global address space - and to implement high-level language constructs. Unfortunately, coherence protocols written in a conventional language such as C are difficult to write, debug, understand, or modify. This paper describes Teapot, a small, domain-specific language for writing coherence protocols. Teapot uses continuations to help reduce the complexity of writing protocols. Simple static analysis in the Teapot compiler eliminates much of the overhead of continuations and results in protocols that run nearly as fast as hand-written C code. A Teapot specification can be compiled both to an executable coherence protocol and to input for a model checking system, which permits the specification to be verified. We report our experiences coding and verifying several protocols written in Teapot, along with measurements of the overhead incurred by writing a protocol in a higher-level language.
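Teapot is a compiled domain-specific language, so the sketch below is only a loose analogue (ours, not Teapot code, with invented names): a Python generator plays the role of the continuation, letting an invalidation handler be written in straight-line style and suspended while acknowledgements are outstanding.

    def write_request(line, sharers, send):
        # Invalidate every sharer, then wait for all acknowledgements before
        # granting exclusive ownership; each 'yield' is where a continuation-based
        # protocol handler would suspend.
        for node in sharers:
            send(node, "invalidate", line)
        acks = 0
        while acks < len(sharers):
            msg = yield                      # suspend until the next message arrives
            if msg == ("ack", line):
                acks += 1
        line_state[line] = "exclusive"

    line_state = {"A": "shared"}
    outbox = []
    handler = write_request("A", sharers=["n1", "n2"],
                            send=lambda node, kind, l: outbox.append((node, kind, l)))
    next(handler)                            # run up to the first suspension
    print(outbox)                            # invalidations for n1 and n2
    for _ in ["n1", "n2"]:
        try:
            handler.send(("ack", "A"))       # resume the suspended handler with an ack
        except StopIteration:                # handler finished after the last ack
            pass
    print(line_state["A"])                   # exclusive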
This paper describes the distributed memory implementation of a shared memory parallel functional language. The language is Id, an implicitly parallel, mostly functional language that is currently evolving into a dialect of Haskell. The target is a distributed memory machine, because we expect these to be the most widely available parallel platforms in the future. The difficult problem is to bridge the gap between the shared memory language model and the distributed memory machine model. The language model assumes that all data is uniformly accessible, whereas the machine has a severe memory hierarchy: a processor's access to remote memory (using explicit communication) is orders of magnitude slower than its access to local memory. Thus, avoiding communication is crucial for good performance. The Id language and its general dataflow-inspired compilation to multithreaded code are described elsewhere. In this paper, we focus on our new parallel runtime system and its features for avoiding communication and for tolerating its latency when necessary: multithreading, scheduling, and load balancing; the distributed heap model and distributed coherent caching; and parallel garbage collection. We have completed the first implementation, and we present some preliminary performance measurements.
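Of the mechanisms listed, distributed coherent caching is the easiest to caricature; a toy sketch (ours, not the Id runtime, with invented names): remote reads go through a local cache, and an invalidation discards the cached copy so the next read refetches.

    class CachingHeap:
        def __init__(self, fetch_remote):
            self.fetch_remote = fetch_remote   # expensive: crosses the network
            self.cache = {}
        def read(self, addr):
            if addr not in self.cache:         # miss: pay the communication cost once
                self.cache[addr] = self.fetch_remote(addr)
            return self.cache[addr]
        def invalidate(self, addr):            # called when the owner updates addr
            self.cache.pop(addr, None)

    remote = {"x": 41}
    heap = CachingHeap(lambda a: remote[a])
    print(heap.read("x"), heap.read("x"))      # second read is a local hit
    remote["x"] = 42
    heap.invalidate("x")
    print(heap.read("x"))                      # refetched after invalidation: 42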
ISBN (print): 0897916972
The proceedings contain 28 papers. The topics discussed include: efficient building and placing of gating functions; avoiding conditional branches by code replication; accurate static branch prediction by value range propagation; improving balanced scheduling with compiler optimizations that increase instruction-level parallelism; selective specialization for object-oriented languages; corpus-based static branch prediction; flow-sensitive interprocedural constant propagation; efficient context-sensitive pointer analysis for C programs; simple and effective link-time optimization of Modula-3 programs; APT: a data structure for optimal control dependence computation; implementation of the data-flow synchronous language Signal; scheduling and mapping: software pipelining in the presence of structural hazards; register allocation using lazy saves, eager restores, and greedy shuffling; context-insensitive alias analysis reconsidered; a type-based compiler for Standard ML; unifying data and control transformations for distributed shared-memory machines; storage assignment to decrease code size; optimizing parallel programs with explicit synchronization; the LRPD test: speculative run-time parallelization of loops with privatization and reduction parallelization; and the power of assignment motion.
Supporting both task and data parallelism in one programming system is useful, since many applications need both types of parallelism. We present a programming model that integrates task and data parallelism using shared objects. The model is a generalization of shared objects in Orca. Orca is a task-parallel language that uses shared objects for communication between processes and for storing shared (possibly replicated) data. Our new model also uses shared objects for partitioning of shared data and for distribution of work in a data-parallel way. Data parallelism is introduced by executing operations on a partitioned object in parallel. The paper describes the design of the new model, its implementation, and its usage for parallel applications that use mixed task and data parallelism.
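A small sketch (ours, not Orca, with invented names) of the core idea, data parallelism obtained by applying an operation to every partition of a partitioned object in parallel:

    from concurrent.futures import ThreadPoolExecutor

    class PartitionedVector:
        def __init__(self, data, parts):
            step = (len(data) + parts - 1) // parts
            self.partitions = [data[i:i + step] for i in range(0, len(data), step)]
        def parallel_apply(self, op):
            # one worker per partition, mimicking a data-parallel operation
            # on a partitioned shared object
            with ThreadPoolExecutor(max_workers=len(self.partitions)) as pool:
                self.partitions = list(pool.map(lambda part: [op(x) for x in part],
                                                self.partitions))
        def gather(self):
            return [x for part in self.partitions for x in part]

    v = PartitionedVector(list(range(8)), parts=4)
    v.parallel_apply(lambda x: x * x)
    print(v.gather())    # [0, 1, 4, 9, 16, 25, 36, 49]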
We present an object-oriented design model and its mapping to Ada 95 in order to build Active Information Systems. This represents part of a methodology based, for each step of the software life cycle, on a model and ...