Haskell is a modern, functional programming language with an interesting story to tell about parallelism: rather than using concurrent threads and locks, Haskell offers a variety of libraries that enable concise, high-level parallel programs with results that are guaranteed to be deterministic (independent of the number of cores and the scheduling being used).
A framework is presented for designing parallel programming languages whose semantics is functional and where communications are explicit. To this end, Brookes and Geva's generalized concrete data structures are specialized with a notion of explicit data layout to yield a CCC of distributed structures called arrays. Arrays' symmetric replicated structures, suggested by the data-parallel SPMD paradigm, are found to be incompatible with sum types. We then outline a functional language with explicitly distributed (monomorphic) concrete types, including higher-order, sum and recursive ones. In this language, programs can be as large as the network and can observe communication events in other programs. Such flexibility is missing from current data-parallel languages and amounts to a fusion with their so-called annotations, directives or meta-languages. (C) 2001 Elsevier Science B.V. All rights reserved.
A commonly used approach to developing deterministic parallel programs is to augment a sequential program with compiler directives that indicate which program blocks may potentially be executed in parallel. This paper develops a verification technique to reason about such compiler directives, in particular to show that they do not change the behaviour of the program. Moreover, the verification technique is tool-supported and can be combined with proving functional correctness of the program. To develop our verification technique, we propose a simple intermediate representation (syntax and semantics) that captures the main forms of deterministic parallel programs. This language distinguishes three kinds of basic blocks: parallel, vectorised and sequential blocks, which can be composed using three different composition operators: sequential, parallel and fusion composition. We show how a widely used subset of OpenMP can be encoded into this intermediate representation. Our verification technique builds on the notion of an iteration contract to specify the behaviour of basic blocks; we show that if iteration contracts are manually specified for single blocks, then that is sufficient to automatically reason about data race freedom of the composed program. Moreover, we also show that it is sufficient to establish functional correctness on a linearised version of the original program to conclude functional correctness of the parallel program. Finally, we exemplify our approach on an example OpenMP program, and we discuss how tool support is provided.