ISBN: 0818684275 (print)
The proceedings contain 22 papers. The topics discussed include: MPI-2: standards beyond the message-passing model; a parallel program execution model supporting modular software construction; performance driven programming models; parallel programming and complexity analysis using actors; the high performance solution of irregular problems; aspects of the compilation of nested parallel imperative languages; massively parallel computing in Java; market-based massively parallel internet computing; variable grain architectures for MPP computation and structured parallel programming; and Datarol: a parallel machine architecture for fine-grain multithreading.
The proceedings contain 27 papers from the IEEE conference on Programming Models for Massively Parallel Computers. Topics discussed include: symmetric multiprocessors; space-limited programming; the ALPHA language; optimal data distributions; group parallel numerical algorithms; stream-processing functional programs; compile-time information; a concurrent object-based programming model; parallel virtual shared memory architectures; high-level parallel algorithms; elliptic partial differential equations; the C programming language; the FORTRAN programming language; high-dimension iteration; data sets; and nested-parallel programs.
ISBN: 0818684275 (print)
This paper argues for the development of more general and user-friendly parallel programming models, independent of hardware structures and of the concurrency concepts of operating systems theory, leading to portable programs and easy-to-use languages. It then presents the BaLinda model, based on last-in/first-out threads that interact via a shared tuplespace, and argues that it is simple enough to be both general and easy to use. It also discusses the idea of using function-based objects as the basic unit of parallel execution and a hierarchical structure to partition tuplespaces.
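As a rough illustration of the tuplespace style of thread interaction the abstract refers to (not the BaLinda interface itself, whose operations are not given here), the following hedged Java sketch shows two threads coordinating through a shared tuple store with Linda-like out/in operations; all class and method names are hypothetical.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical, minimal tuplespace: out() deposits a tuple, in() blocks until a
// tuple whose first field matches the given tag is present, then removes and returns it.
class TupleSpace {
    private final List<Object[]> tuples = new CopyOnWriteArrayList<>();

    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll();
    }

    public synchronized Object[] in(Object tag) throws InterruptedException {
        while (true) {
            for (Object[] t : tuples) {
                if (t.length > 0 && t[0].equals(tag)) {
                    tuples.remove(t);
                    return t;
                }
            }
            wait();   // release the lock and block until another thread calls out()
        }
    }
}

public class TupleSpaceDemo {
    public static void main(String[] args) throws InterruptedException {
        TupleSpace ts = new TupleSpace();

        // Worker thread: consume a "task" tuple, deposit a "result" tuple.
        Thread worker = new Thread(() -> {
            try {
                Object[] task = ts.in("task");
                int n = (Integer) task[1];
                ts.out("result", n * n);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        ts.out("task", 7);                  // main thread posts work
        Object[] result = ts.in("result");  // and waits for the answer
        System.out.println("result = " + result[1]);
        worker.join();
    }
}

The point of the sketch is that all coordination is expressed through the shared tuplespace rather than through hardware- or OS-level concurrency primitives visible to the programmer.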
ISBN: 0818684275 (print)
The paper discusses the relationships between hierarchically composite MPP architectures and the software technology derived from the structured parallel programming methodology, in particular the architectural support for successive modular refinements of parallel applications and for the parallel programming paradigms and their combinations. The structured parallel programming methodology referred to here is an application of the Skeletons model. The hierarchically composite architectures considered are MPP machine models for PetaFlops computing, composed of proper combinations of current architectural models of different granularities, where the Processors-In-Memory model is adopted at the finest granularity level. The methodologies are discussed with reference to the current PQE2000 Project on MPP general-purpose systems.
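As a loose illustration of what structured parallel programming with skeletons means (not the PQE2000 tool set, which the abstract does not detail), the hedged Java sketch below composes two common paradigms, a task farm feeding into a second pipeline stage; all names are illustrative.

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative skeletons: a "farm" applies a worker function to every task in
// parallel; a "pipe" composes two stages so the output of one feeds the next.
final class Skeletons {
    static <A, B> Function<List<A>, List<B>> farm(Function<A, B> worker) {
        return tasks -> tasks.parallelStream().map(worker).collect(Collectors.toList());
    }

    static <A, B, C> Function<A, C> pipe(Function<A, B> stage1, Function<B, C> stage2) {
        return stage1.andThen(stage2);
    }
}

class SkeletonDemo {
    public static void main(String[] args) {
        // Farm: square each input in parallel.  Pipe: then sum the results.
        Function<List<Integer>, List<Integer>> squares = Skeletons.farm(x -> x * x);
        Function<List<Integer>, Integer> sum =
                results -> results.stream().mapToInt(Integer::intValue).sum();

        Function<List<Integer>, Integer> program = Skeletons.pipe(squares, sum);
        System.out.println(program.apply(List.of(1, 2, 3, 4)));   // prints 30
    }
}

The modular refinement the abstract mentions corresponds to replacing a skeleton or a worker by a more detailed (or differently mapped) implementation without changing the composition.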
ISBN: 0818684275 (print)
Although Java was not specifically designed for the computationally intensive numeric applications that are the typical fodder of highly parallel machines, its widespread popularity and portability make it an interesting candidate vehicle for massively parallel programming. With the advent of high-performance optimizing Java compilers, the open question is: how can Java programs best exploit massive parallelism? The authors have been contemplating this question via libraries of Java routines for specifying and coordinating parallel codes. It would be most desirable to have these routines written in 100%-Pure Java; however, a more expedient solution is to provide Java wrappers (stubs) to existing parallel coordination libraries, such as MPI. MPI is an attractive alternative since, like Java, it is portable. We discuss both approaches here. In undertaking this study, we have also identified some minor modifications of the current language specification that would make 100%-Pure Java parallel programming more natural.
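A minimal sketch of the wrapper (stub) approach the abstract mentions, assuming a native MPI implementation is reachable through JNI; the class name, native method names, and library name below are hypothetical and do not correspond to any particular Java/MPI binding.

// Hypothetical JNI stub layer over a native MPI implementation.  The native
// methods would be implemented in C and forwarded to MPI_Init, MPI_Comm_rank,
// MPI_Comm_size, and MPI_Finalize.
public final class MpiStub {
    static {
        System.loadLibrary("mpistub");   // hypothetical native wrapper library
    }

    public static native void init(String[] args);
    public static native int rank();      // rank within MPI_COMM_WORLD
    public static native int size();      // number of processes
    public static native void finish();   // calls MPI_Finalize

    private MpiStub() { }
}

class HelloMpi {
    public static void main(String[] args) {
        MpiStub.init(args);
        System.out.println("process " + MpiStub.rank() + " of " + MpiStub.size());
        MpiStub.finish();
    }
}

The 100%-Pure Java alternative discussed in the abstract would replace this native layer with coordination code written entirely in Java, at the cost of re-implementing the collective operations that MPI already provides.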
ISBN: 0818684275 (print)
This paper is concerned with the use of Massively Parallel Processing (MPP) systems by industry and commerce. In this context, it is argued that the definition of MPP should be extended to include LAN/WAN clusters or 'meta-computers'. The frontier of research for industry has moved on from mere parallel implementations of scientific simulations or commercial databases; rather, it is concerned with the problem of integrating computational and informational resources in a seamless and effective manner. Examples taken from recent research projects at the Parallel Applications Centre (PAC) are used to illustrate these points.
ISBN: 0818684275 (print)
We study the use of well-defined building blocks for SPMD programming of machines with distributed memory. Our general framework is based on homomorphisms, functions that capture the idea of data parallelism and have a close correspondence with collective operations of the MPI standard, e.g., scan and reduction. We prove two composition rules: under certain conditions, a composition of a scan and a reduction can be transformed into one reduction, and a composition of two scans into one scan. As an example of decomposition, we transform a segmented reduction into a composition of partial reduction and all-gather. The performance gain and overhead of the proposed composition and decomposition rules are assessed analytically for the hypercube and compared with the estimates for some other parallel models.
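For orientation, a commonly cited form of a scan-reduction fusion rule is sketched below in LaTeX; it is given only as an illustration of the kind of transformation the abstract describes, and the paper's own conditions and formulation may differ. Here $\oplus$ and $\otimes$ are associative binary operators, and the sketch assumes $\otimes$ distributes over $\oplus$.

\[
  \mathrm{red}(\oplus) \circ \mathrm{scan}(\otimes)
  \;=\;
  \pi_1 \circ \mathrm{red}(\odot) \circ \mathrm{map}\,(\lambda x.\,(x,x)),
  \qquad\text{where}\quad
  (s_1, r_1) \odot (s_2, r_2) \;=\; \bigl(s_1 \oplus (r_1 \otimes s_2),\; r_1 \otimes r_2\bigr).
\]

The fused right-hand side needs a single collective reduction over pairs instead of a full scan followed by a separate reduction, which is the source of the performance gain that the abstract says is assessed analytically.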
Author:
Dennis, J.B.
MIT, Computer Science Lab, Cambridge, MA 02139, USA
ISBN: 0818684275 (print)
A watershed is near in the architecture of computer systems. There is overwhelming demand for systems that support a universal format for computer programs and software components, so users may benefit from their use on a wide variety of computing platforms. At present this demand is being met by commodity microprocessors together with standard operating system interfaces. However, current systems do not offer a standard API (application program interface) for parallel programming, and the popular interfaces for parallel computing violate essential principles of modular or component-based software construction. Moreover, microprocessor architecture is reaching the limit of what can be done usefully within the framework of superscalar and VLIW processor models. The next step is to put several processors (or the equivalent) on a single chip. This paper presents a set of principles for modular software construction, and describes a program execution model based on functional programming that satisfies these principles. The implications of the principles for computer system architecture are discussed, together with a sketch of the architecture of a multithread processing chip which promises to provide efficient execution of parallel computations while providing a sound base for modular software construction.