An introduction to the UNITY syntax and logic is given. The idea of abstract programs is outlined. Two examples illustrate the usefulness of abstract programs: sorting an array and summing an array.
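The sorting example can be given a flavor in code. Below is a minimal sketch, assuming a UNITY-style program is a set of guarded assignments applied nondeterministically until a fixpoint; this is our illustration, not the paper's notation.

```python
import random

def unity_sort(a):
    """Repeatedly pick any adjacent out-of-order pair and swap it;
    stop at the fixpoint where no statement can change the state."""
    a = list(a)
    while True:
        # Statements whose guards are enabled: indices with an inversion.
        enabled = [i for i in range(len(a) - 1) if a[i] > a[i + 1]]
        if not enabled:              # fixpoint reached: array is sorted
            return a
        i = random.choice(enabled)   # nondeterministic (fair) choice
        a[i], a[i + 1] = a[i + 1], a[i]

result = unity_sort([3, 1, 4, 1, 5, 9, 2, 6])
```

Termination is guaranteed because every swap removes at least one inversion, which mirrors the kind of progress argument a UNITY proof would make.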
NASA Technical Reports Server (NTRS) 20010097883: Parallel Programming Strategies for Irregular Adaptive Applications; by NASA Technical Reports Server (NTRS)
NASA Technical Reports Server (NTRS) 20020063612: F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming; by NASA Technical Reports Server (NTRS)
Parallel iterative linear solvers for unstructured grids in FEM applications, developed for the Earth Simulator (ES), have been ported to various types of SMP cluster supercomputers. The performance of the flat-MPI and hybrid parallel programming models has been compared using more than 100 SMP nodes of the ES, Hitachi SR8000 and IBM SP-3. The effects of coloring and of the storage method for coefficient matrices have also been evaluated in various types of applications. Generally speaking, codes developed for vector processors using multicolor ordering demonstrate better performance with many colors on scalar processors. Flat-MPI and hybrid are competitive on each supercomputer. On the ES, hybrid outperforms flat-MPI when the number of SMP nodes is large and the problem size is small, because of MPI latency. In parallel computation of FEM-type applications, MPI latency is a very important parameter in choosing between flat-MPI and hybrid.
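The multicolor ordering mentioned in the abstract can be sketched briefly. The idea, shown here as a minimal illustration (not the paper's solver code), is to color the unknowns so that no two coupled unknowns share a color; all unknowns of one color can then be updated concurrently. The graph and greedy strategy below are assumptions for demonstration.

```python
def greedy_coloring(adjacency):
    """Greedily color graph nodes so that no two neighbors share a color.
    adjacency: dict node -> list of neighbor nodes."""
    color = {}
    for node in sorted(adjacency):
        used = {color[n] for n in adjacency[node] if n in color}
        c = 0
        while c in used:
            c += 1
        color[node] = c
    return color

# A 1D chain 0-1-2-3-4: the classic red-black (two-color) case.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
colors = greedy_coloring(chain)

# Group nodes by color; each group is an independent update set
# whose members can be processed in parallel within one sweep.
groups = {}
for node, c in colors.items():
    groups.setdefault(c, []).append(node)
```

For unstructured FEM grids the adjacency comes from the sparsity pattern of the coefficient matrix, and using more colors (as the abstract notes) can change cache and vectorization behavior on scalar versus vector processors.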
The CAPSE environment for Computer Aided Parallel Software Engineering is intended to assist the developer in the crucial task of parallel programming. The methodology of CAPSE is based on direct-manipulation graphical creation and editing of scalable workload characterizations of MIMD algorithms. This paper presents the basic concepts of this methodology and an example of a parallel Poisson solver. The workload characterization, representing the computation and communication behavior of the algorithm, is based on directed acyclic task graphs, which achieve scalability by composing the task graph from scalable basic patterns instead of single nodes and arcs. The composition and usage of these basic patterns are described in the light of designing the Poisson solver algorithm. The resulting task graph is used to predict the program's performance on an nCUBE 2 distributed-memory machine and the PAPS simulator.
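The task-graph idea behind this kind of workload characterization can be sketched concretely. The following is a hedged illustration, not CAPSE's actual representation: tasks carry costs, edges carry precedence, and the critical path gives a best-case parallel time used for performance prediction. The task names and costs are invented for the example.

```python
def critical_path(costs, edges):
    """costs: dict task -> time; edges: list of (pred, succ) pairs.
    Returns the longest cost-weighted path length, i.e. the ideal
    parallel makespan with unlimited processors."""
    succs = {t: [] for t in costs}
    indeg = {t: 0 for t in costs}
    for a, b in edges:
        succs[a].append(b)
        indeg[b] += 1
    finish = {}
    ready = [t for t in costs if indeg[t] == 0]
    while ready:
        t = ready.pop()
        # A task may start once all predecessors have finished.
        start = max((finish[p] for p, s in edges if s == t), default=0)
        finish[t] = start + costs[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())

# A fork-join pattern: setup, two independent sweeps, then a reduce.
costs = {"setup": 1.0, "sweep_a": 4.0, "sweep_b": 3.0, "reduce": 1.0}
edges = [("setup", "sweep_a"), ("setup", "sweep_b"),
         ("sweep_a", "reduce"), ("sweep_b", "reduce")]
span = critical_path(costs, edges)   # longest path: setup -> sweep_a -> reduce
work = sum(costs.values())           # total sequential work
```

Composing such graphs from reusable fork-join and pipeline patterns, rather than individual nodes and arcs, is what makes the characterization scale with problem size.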
We describe Actors, a flexible, scalable and efficient model of computation, and develop a framework for analyzing the parallel complexity of programs written in it. Actors are asynchronous, autonomous objects which interact by message passing. The data and process decomposition inherent in Actors simplifies the modeling of real-world systems. High-level concurrent programming abstractions have been developed to simplify program development using Actors; such abstractions do not compromise an efficient and portable implementation. In this paper, we define a parallel complexity model for Actors. The model gives an accurate measure of performance on realistic architectures. We illustrate its use by analyzing a number of examples.
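The core of the model described above can be sketched in a few lines. This is a minimal illustration of the actor idea under our own assumptions (a thread per actor, a queue as mailbox, `None` as a stop message), not the paper's framework or API.

```python
import queue
import threading

class Actor:
    """Each actor drains a private mailbox on its own thread; actors
    interact only by asynchronous message passing, never shared state."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        # Asynchronous: the sender never blocks on the receiver.
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # stop message: terminate the actor
                break
            self.behavior(msg)

# A counter actor that accumulates numbers and reports on request.
results = queue.Queue()
total = [0]
def counting(msg):
    if msg == "report":
        results.put(total[0])
    else:
        total[0] += msg

counter = Actor(counting)
for n in (1, 2, 3):
    counter.send(n)
counter.send("report")
counter.send(None)
counter.thread.join()
answer = results.get()
```

Because messages to one actor are processed one at a time in arrival order, the state update needs no locks, which is the decomposition property the abstract credits with simplifying real-world modeling.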
We present the HOOL system, which is an object-oriented hypercomputer environment designed for workstation clusters. An HOOL application is modeled as a hierarchy of cooperating objects. Appropriate object allocations...
Writing a parallel program can be a difficult task which has to meet several, sometimes conflicting goals. While the manual approach is time-consuming and error-prone, the use of compilers reduces the programmer’s co...
ISBN (print): 9781538678800
OCaml is a multi-paradigm (functional, imperative, object-oriented) high-level sequential language. Types are statically inferred by the compiler, and the type system is expressive and strong. These features make OCaml a very productive language for developing efficient and safe programs. In this tutorial we present three frameworks for using OCaml to program scalable parallel architectures: BSML, Multi-ML and Spoc.
ISBN (print): 9780818681172
An important development in cluster computing is the availability of multiprocessor workstations. These provide additional computational power to the cluster without increasing network overhead and allow multiparadigm parallelism, which we define as the simultaneous application of both distributed- and shared-memory parallel processing techniques to a single problem. In this paper we compare the execution times and speedup of parallel programs written in a pure message-passing paradigm with those that combine message passing and shared-memory primitives in the same application. We consider three basic applications that are common building blocks for many scientific and engineering problems: numerical integration, matrix multiplication and Jacobi iteration. Our results indicate that combining shared- and distributed-memory programming methods in the same program does not improve performance enough to justify the added programming complexity.
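The numerical-integration building block named in the abstract uses the same decomposition in both paradigms: split the interval among workers and combine partial sums. The sketch below is our illustration (not the paper's benchmark code); a thread pool stands in for MPI ranks or shared-memory threads, and the function and chunk sizes are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_trapezoid(f, a, b, n):
    """Trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def integrate(f, a, b, workers=4, n_per_worker=1000):
    """Split [a, b] into `workers` equal chunks, evaluate each partial
    integral concurrently, and sum the results (the reduction step)."""
    width = (b - a) / workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(partial_trapezoid, f,
                               a + i * width, a + (i + 1) * width,
                               n_per_worker)
                   for i in range(workers)]
        return sum(fut.result() for fut in futures)

approx = integrate(lambda x: x * x, 0.0, 1.0)   # integral of x^2 on [0, 1]
```

In a pure message-passing version each chunk would live on a separate process and the sum would be a reduce over the network; in the hybrid version workers on one node would share the array in memory, which is exactly the trade-off the paper measures.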