Loop tiling and communication optimizations, such as message pipelining and aggregation, can achieve optimized and robust memory performance by proactively managing storage and data movement. In this paper, we generalize these techniques to pointer-based data structures (PBDSs). Our approach, dynamic pointer alignment (DPA), has two components. At compile time, the compiler decomposes a program into non-blocking threads that operate on specific pointers and labels thread creation sites with their corresponding pointers. At runtime, an explicit mapping from pointers to dependent threads is updated at thread creation and is used to dynamically schedule both threads and communication, such that threads using the same objects execute together, communication overlaps with local work, and messages are aggregated. We have implemented DPA to optimize remote reads to global PBDSs on parallel machines. Our empirical results on the force computation phases of two applications that use sophisticated PBDSs, Barnes-Hut and FMM, show that DPA achieves good absolute performance and speedups by enabling tiling and communication optimizations on the CRAY T3D.
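To make the mechanism concrete, here is a minimal C++ sketch of the pointer-to-dependent-threads mapping that DPA maintains at runtime; the Thread type, DpaScheduler, and the send_bulk_read hook are illustrative stand-ins of ours, not the paper's actual interface.

```cpp
#include <functional>
#include <unordered_map>
#include <vector>

// A thread body that runs once its remote object has arrived locally.
using Thread = std::function<void(void* local_copy)>;

class DpaScheduler {
  std::unordered_map<void*, std::vector<Thread>> waiting_;  // remote ptr -> dependent threads
  std::vector<void*> pending_reads_;                        // reads not yet sent

public:
  // Thread creation site: the compiler labels the thread with its pointer.
  void spawn_on(void* remote_ptr, Thread t) {
    auto& q = waiting_[remote_ptr];
    if (q.empty()) pending_reads_.push_back(remote_ptr);    // first waiter triggers a read
    q.push_back(std::move(t));
  }

  // Aggregation: flush all outstanding reads as one bulk message,
  // overlapping the communication with whatever local work remains.
  void flush_reads() {
    // send_bulk_read(pending_reads_);  // hypothetical communication call
    pending_reads_.clear();
  }

  // On reply: every thread waiting on this object runs back-to-back,
  // so accesses to the same object are effectively tiled together.
  void on_reply(void* remote_ptr, void* local_copy) {
    auto it = waiting_.find(remote_ptr);
    if (it == waiting_.end()) return;
    for (auto& t : it->second) t(local_copy);
    waiting_.erase(it);
  }
};
```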
Some of the most common parallel programming idioms include locks, barriers, and reduction operations. The interaction of these programming idioms with the multiprocessor's coherence protocol has a significant impact on performance. In addition, the advent of machines that support multiple coherence protocols prompts the question of how best to implement such parallel constructs, i.e., what combination of implementation and coherence protocol yields the best performance. In this paper we study the running time and communication behavior of (1) centralized (ticket) and MCS spin locks, (2) centralized, dissemination, and tree-based barriers, and (3) parallel and sequential reductions, under pure and competitive update coherence protocols; results for a write-invalidate protocol are presented mostly for comparison purposes. Our experiments indicate that parallel programming techniques that are well established for write-invalidate protocols, such as MCS locks and parallel reductions, are often inappropriate for update-based protocols. In contrast, techniques such as dissemination and tree barriers achieve superior performance under update-based protocols. Our results also show that the implementation of parallel programming idioms must take the coherence protocol into account, since update-based protocols often lead to different design decisions than write-invalidate protocols. Our main conclusion is that protocol-conscious implementation of parallel programming structures can significantly improve application performance; for multiprocessors that support more than one coherence protocol, both the protocol and the implementation should be taken into account when exploiting parallel constructs.
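Of the primitives studied, the MCS lock illustrates the protocol sensitivity best: each waiter spins on a flag in its own queue node, which is precisely the local spinning that pays off under write-invalidate but matters less under update protocols. A standard C++ rendering of the Mellor-Crummey & Scott algorithm (ours, not the paper's code):

```cpp
#include <atomic>

struct McsNode {
  std::atomic<McsNode*> next{nullptr};
  std::atomic<bool> locked{false};
};

struct McsLock {
  std::atomic<McsNode*> tail{nullptr};

  void acquire(McsNode* me) {
    me->next.store(nullptr, std::memory_order_relaxed);
    McsNode* prev = tail.exchange(me, std::memory_order_acq_rel);
    if (prev) {                       // queue non-empty: wait our turn
      me->locked.store(true, std::memory_order_relaxed);
      prev->next.store(me, std::memory_order_release);
      while (me->locked.load(std::memory_order_acquire)) {
        // spin on our own node only: no remote traffic while waiting
      }
    }
  }

  void release(McsNode* me) {
    McsNode* succ = me->next.load(std::memory_order_acquire);
    if (!succ) {
      McsNode* expected = me;
      if (tail.compare_exchange_strong(expected, nullptr,
                                       std::memory_order_acq_rel))
        return;                       // no successor: lock is now free
      // a successor is mid-enqueue; wait for its link to appear
      while (!(succ = me->next.load(std::memory_order_acquire))) { }
    }
    succ->locked.store(false, std::memory_order_release);  // hand off
  }
};
```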
Global addressing of shared data simplifies parallel programming and complements message passing models commonly found in distributed memory machines. A number of programming systems have been designed that synthesize global addressing purely in software on such machines. These systems provide a number of communication mechanisms to mitigate the effect of high communication latencies and overheads. This study compares the mechanisms in two representative all-software systems: CRL and Split-C. CRL uses region-based caching, while Split-C uses split-phase and push-based data transfers for optimizing communication performance. Both systems take advantage of bulk data transfers. By implementing a set of parallel applications in both CRL and Split-C, and running them on the IBM SP2, Meiko CS-2, and two simulated architectures, we find that split-phase and push-based bulk data transfers are essential for good performance. Region-based caching benefits applications with irregular structure and sufficient temporal locality, especially under high communication latencies. However, caching also hurts performance when there is insufficient data reuse or when the caching granularity is mismatched with the communication granularity. We find the programming complexity of the communication mechanisms in the two languages to be comparable. Based on our results, we recommend that an ideal system intended to support diverse applications on parallel platforms should incorporate the communication mechanisms of both CRL and Split-C.
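The split-phase idiom at issue can be sketched as follows; GlobalPtr, get_async, and sync_all are illustrative stand-ins (Split-C itself expresses this with split-phase assignment and sync()), with the transfer simulated by a local copy so the sketch runs.

```cpp
#include <cstddef>
#include <cstring>

struct GlobalPtr { int node; const double* addr; };  // assumed global address form

void get_async(double* dst, GlobalPtr src, std::size_t n) {
  // Stand-in: a real runtime would start a non-blocking network transfer here.
  std::memcpy(dst, src.addr, n * sizeof(double));
}
void sync_all() { /* stand-in: wait for all outstanding gets to complete */ }

// Phase 1: issue the read. Overlap: do unrelated local work. Phase 2: sync.
void add_remote_row(double* local, GlobalPtr remote_row, std::size_t n) {
  double buf[1024];                     // assumes n <= 1024 for the sketch
  get_async(buf, remote_row, n);        // initiate the transfer
  for (std::size_t i = 0; i < n; ++i)   // local work overlaps communication
    local[i] *= 0.5;
  sync_all();                           // transfer guaranteed complete here
  for (std::size_t i = 0; i < n; ++i)
    local[i] += buf[i];
}
```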
This paper describes the design and implementation of a garbage collection scheme on large-scale distributed-memory computers and reports various experimental results. The collector is based on the conservative GC library by Boehm & Weiser. Each processor traces local pointers using the GC library, while remote pointers are traversed by exchanging 'mark messages' between processors. It exhibits promising performance: in the most space-intensive settings we tested, the total collection overhead ranges from 5% to 15% of the application running time (excluding idle time). We not only examine basic performance figures such as the total overhead and the latency of a global collection, but also demonstrate how local collection scheduling strategies affect application performance. In our collector, a local collection is scheduled either independently or synchronously. Experimental results show that the benefit of independent local collections has been overstated in the literature: independent local collections slowed down applications by up to 40% by increasing the average communication latency. Synchronized local collections exhibit much more robust performance characteristics than independent local collections, and the overhead of global synchronization is not significant. Furthermore, we show that an adaptive collection scheduler can select the appropriate local collection strategy based on the application's behavior. The collector has been used in the concurrent object-oriented language ABCL/f, and its performance is measured on a large-scale parallel computer (256 processors) using four non-trivial applications written in ABCL/f.
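The core of the remote-marking protocol can be sketched like this; GlobalRef, the outbox queue, and trace_children are assumed stand-ins for the collector's actual data structures, not its real interface.

```cpp
#include <cstdint>
#include <queue>
#include <unordered_set>

struct GlobalRef { int owner; std::uintptr_t addr; };

int self = 0;                                // this processor's id (assumed)
std::unordered_set<std::uintptr_t> marked;   // local mark set
std::queue<GlobalRef> outbox;                // mark messages awaiting send

void mark(GlobalRef r);

void trace_children(std::uintptr_t /*addr*/) {
  // Stand-in: walk the object's fields and call mark() on each reference.
}

void mark(GlobalRef r) {
  if (r.owner != self) {                     // remote pointer: don't trace here,
    outbox.push(r);                          // forward a mark message to the owner
    return;
  }
  if (!marked.insert(r.addr).second) return; // already marked: stop
  trace_children(r.addr);                    // conventional local tracing
}

// A received mark message is handled exactly like a local root.
void on_mark_message(GlobalRef r) { mark(r); }
```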
The proceedings contain 21 papers from the Fifth ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming (PPOPP). Topics discussed include data parallel programs; data libraries; data caches; data access; distributed and shared memory multiprocessors; dataflow analysis; scheduling; optimization; and synchronization.
Author:
PHILIPPSEN, M.
International Computer Science Institute, Berkeley, CA, and Dept. of Informatics, University of Karlsruhe
ISBN (print): 9780897917001
This paper investigates the problem of aligning array data and processes in a distributed-memory implementation. We present complete algorithms for compile-time analysis, the necessary program restructuring, and subsequent code generation, and discuss their complexity. We finally evaluate the practical usefulness with quantitative experiments. The technique presented analyzes complete programs, including branches, loops, and nested parallelism. Alignment is determined with respect to offset, stride, and general axis relations. Placements of both data and processes are computed in a unifying framework based on an extended preference graph and its analysis. Dynamic redistributions are derived. The experimental results are very encouraging: the optimization algorithms implemented in our Modula-2* compiler improved the execution times of the programs by an average of over 40% on a MasPar MP-1 with 16384 processors.
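As a one-loop illustration of the offset and stride preferences involved (our notation, not the paper's): in forall i: A[i] = B[i+1] + C[2i], placing each iteration's operands on one processor yields

```latex
\begin{align*}
\text{offset:} &\quad \mathrm{home}(B[j]) = \mathrm{home}(A[j-1])
  && \text{(align } B \text{ to } A \text{ shifted by one cell)}\\
\text{stride:} &\quad \mathrm{home}(C[2j]) = \mathrm{home}(A[j])
  && \text{(align every second cell of } C \text{ to } A\text{)}
\end{align*}
```

Conflicting preferences of this kind are what the extended preference graph arbitrates.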
It is widely acknowledged in high-performance computing circles that parallel input/output needs substantial improvement in order to make scalable computers truly usable. We present a data storage model that allows processors independent access to their own data and a corresponding compilation strategy that integrates data-parallel computation with data distribution for out-of-core problems. Our results compare several communication methods and I/O optimizations using two out-of-core problems, Jacobi iteration and LU factorization.
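The kind of code the compilation strategy targets resembles the following hand-written sketch of one out-of-core Jacobi sweep; the slab-plus-halo staging and the row-major file layout are assumptions for illustration, not the paper's generated code.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One slab of `slab` rows (plus halo rows) is resident at a time; the rest
// of the grid stays on disk as rows*cols doubles in row-major order.
void jacobi_sweep_ooc(std::FILE* in, std::FILE* out,
                      long rows, long cols, long slab) {
  std::vector<double> a((slab + 2) * cols);   // input slab plus halo rows
  std::vector<double> b(slab * cols);         // output slab
  for (long r0 = 0; r0 < rows; r0 += slab) {
    long r1 = std::min(rows, r0 + slab);      // this slab covers rows [r0, r1)
    long lo = std::max(0L, r0 - 1);           // halo row above
    long hi = std::min(rows, r1 + 1);         // halo row below
    std::fseek(in, static_cast<long>(lo * cols * sizeof(double)), SEEK_SET);
    std::fread(a.data(), sizeof(double), (hi - lo) * cols, in);
    for (long r = r0; r < r1; ++r)
      for (long c = 0; c < cols; ++c) {
        long lr = r - lo;                     // row index within the buffer
        bool edge = (r == 0 || r == rows - 1 || c == 0 || c == cols - 1);
        b[(r - r0) * cols + c] = edge
            ? a[lr * cols + c]                // boundary: copy through
            : 0.25 * (a[(lr - 1) * cols + c] + a[(lr + 1) * cols + c] +
                      a[lr * cols + c - 1]  + a[lr * cols + c + 1]);
      }
    std::fseek(out, static_cast<long>(r0 * cols * sizeof(double)), SEEK_SET);
    std::fwrite(b.data(), sizeof(double), (r1 - r0) * cols, out);
  }
}
```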
In this paper, we propose a straightforward solution to the problems of compositional parallel programming by using skeletons as the uniform mechanism for structured composition. In our approach, parallel programs are constructed by composing procedures in a conventional base language using a set of high-level, predefined, functional, parallel computational forms known as skeletons. The ability to compose skeletons provides us with the essential tools for building further, more complex, application-oriented skeletons specifying important aspects of parallel computation. Compared with process-network-based composition approaches such as PCN, the skeleton approach abstracts away the fine details of connecting communication ports in favor of the higher-level mechanism of making data distributions conform, thus avoiding the complexity of using lower-level ports as the means of interaction. The framework thereby provides a natural integration of the compositional programming approach with the data-parallel programming paradigm.
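A toy C++ rendering of the idea, with sequential bodies standing in for the data-parallel implementations a real skeleton library would supply:

```cpp
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

// Two predefined parallel forms (skeletons).
template <class T, class F>
std::vector<T> zip_skel(const std::vector<T>& a, const std::vector<T>& b, F f) {
  std::vector<T> out(a.size());
  for (std::size_t i = 0; i < a.size(); ++i) out[i] = f(a[i], b[i]);
  return out;                               // elementwise, freely parallel
}

template <class T, class F>
T reduce_skel(const std::vector<T>& xs, T init, F f) {
  return std::accumulate(xs.begin(), xs.end(), init, f);  // tree-parallelizable
}

// Composition: an application-level skeleton built from the primitive forms,
// with distributions made to conform by the forms themselves rather than
// by wiring communication ports.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
  return reduce_skel(zip_skel(a, b, std::multiplies<double>()),
                     0.0, std::plus<double>());
}

int main() {
  std::printf("%g\n", dot({1, 2, 3}, {4, 5, 6}));  // prints 32
}
```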
Data-parallel languages, such as High Performance Fortran, are widely regarded as a promising means of writing portable programs for distributed-memory machines. Novel features of these languages call for the development of new techniques in both compilers and run-time systems. In this paper, we present an improved algorithm for finding the local memory access sequence in computations involving regular sections of arrays with cyclic(k) distributions. After establishing that regular section indices correspond to elements of an integer lattice, we show how to find a lattice basis that allows for simple and fast enumeration of memory accesses. The complexity of our algorithm is shown to be lower than that of the previous solution for the same problem. In addition, experimental results demonstrate the efficiency of our method in practice.
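The quantity being computed can be pinned down by a naive reference enumeration: for an array distributed cyclic(k) over P processors, list processor p's local offsets for the regular section l:h:s. The paper's contribution is producing this same monotone sequence from a lattice basis without scanning every section element, as this sketch (ours) does.

```cpp
#include <cstdio>
#include <vector>

std::vector<long> local_access_sequence(long l, long h, long s,
                                        long k, long P, long p) {
  std::vector<long> offs;
  for (long i = l; i <= h; i += s) {          // every element of the section
    if ((i / k) % P != p) continue;           // owner under cyclic(k)
    long block = (i / k) / P;                 // p's local block number
    offs.push_back(block * k + i % k);        // local memory offset
  }
  return offs;                                // already in increasing order
}

int main() {
  // Section 3:40:5 of an array distributed cyclic(4) over 3 processors;
  // processor 1 touches local offsets 6 and 8.
  for (long off : local_access_sequence(3, 40, 5, 4, 3, 1))
    std::printf("%ld ", off);
  std::printf("\n");
}
```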
Scalable busy-wait synchronization algorithms are essential for achieving good parallel program performance on large-scale multiprocessors. Such algorithms include mutual exclusion locks, reader-writer locks, and barrier synchronization. Unfortunately, scalable synchronization algorithms are particularly sensitive to the effects of multiprogramming: their performance degrades sharply when processors are shared among different applications, or even among processes of the same application. In this paper we describe the design and evaluation of scalable scheduler-conscious mutual exclusion locks, reader-writer locks, and barriers, and show that by sharing information across the kernel/application interface we can improve the performance of scheduler-oblivious implementations by more than an order of magnitude.
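The flavor of the kernel/application information sharing can be sketched as follows; the running[] array is an assumed stand-in for a kernel-exported "is this process currently scheduled" flag, not the paper's actual interface or lock design.

```cpp
#include <atomic>
#include <sched.h>

// Assumed kernel/application shared state: running[i] says whether process i
// currently holds a processor (stand-in; a real kernel would maintain this).
std::atomic<bool> running[64];

struct SchedConsciousLock {
  std::atomic<int> holder{-1};            // owning process id, -1 when free

  void acquire(int my_id) {
    int expected = -1;
    while (!holder.compare_exchange_weak(expected, my_id,
                                         std::memory_order_acquire)) {
      // Scheduler-conscious step: if the holder was preempted, spinning only
      // burns the processor it needs to make progress, so yield instead.
      if (expected >= 0 && !running[expected].load(std::memory_order_relaxed))
        sched_yield();
      expected = -1;                      // retry the compare-and-swap
    }
  }

  void release() { holder.store(-1, std::memory_order_release); }
};
```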