Many computationally intensive programs, such as those for differential equations, spatial interpolation, and dynamic programming, spend a large portion of their execution time in multiply-nested loops that have a regular stencil of data dependences. Tiling is a well-known optimization that improves performance on such loops, particularly for computers with a multi-level hierarchy of parallelism and memory. Most previous work on tiling restricts the tile shape to be rectangular. Our previous work, and its extension by Desprez, Dongarra, Rastello and Robert, showed that for doubly nested loops, using parallelograms can improve parallel execution time by decreasing idle time, the time a processor spends waiting for data or synchronization. In this paper, we extend that work to more deeply nested loops, as well as to more complex loop bounds. We introduce a model in which determining the execution time of a tiling is equivalent in complexity to linear programming. We then identify a sub-class of these loops, those with rectilinear iteration spaces, for which we derive a closed-form formula for execution time. This formula can be used by a compiler to predict the execution time of a loop nest. We then derive the tile shape that minimizes this formula. Using the duality property of linear programming, we also study how the longest path of dependent tiles within a rectilinear iteration space changes with the tile shape. Finally, we observe that the execution time of a rectilinear iteration space depends on the slope of only four of the facets defining the iteration space, independent of its dimensionality.
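To make the setting concrete, here is a minimal sketch of conventional rectangular tiling on a doubly nested stencil loop of the kind this abstract describes; the array, bounds, and tile sizes (N, TILE_I, TILE_J) are hypothetical, and the paper's contribution is precisely to replace such rectangles with optimally sloped parallelogram tiles.

```c
#include <stdio.h>

#define N 1024
#define TILE_I 64   /* hypothetical tile height */
#define TILE_J 64   /* hypothetical tile width  */

static double a[N][N];

int main(void) {
    /* a[i][j] depends on a[i-1][j] and a[i][j-1]: a regular
       stencil of data dependences, as in the loops the paper
       targets. The boundary row and column hold initial data. */
    for (int i = 0; i < N; i++) a[i][0] = a[0][i] = 1.0;

    /* Rectangular tiling: execute one TILE_I x TILE_J tile at a
       time for locality. Tiles must be scheduled in an order that
       respects the dependences (e.g., anti-diagonal waves when
       tiles run on different processors).                        */
    for (int ii = 1; ii < N; ii += TILE_I)
        for (int jj = 1; jj < N; jj += TILE_J)
            for (int i = ii; i < ii + TILE_I && i < N; i++)
                for (int j = jj; j < jj + TILE_J && j < N; j++)
                    a[i][j] = 0.5 * (a[i-1][j] + a[i][j-1]);

    printf("a[N-1][N-1] = %f\n", a[N-1][N-1]);
    return 0;
}
```

With rectangular tiles, processors assigned to later anti-diagonal waves sit idle while earlier tiles complete; skewing the tile shape into a parallelogram is what lets the paper reduce that idle time.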
Sparse matrix computations are ubiquitous in science and engineering. In particular, solving a sparse linear system of equations is the bottleneck in many computations. Of the many available algorithms, some are easy to parallelize, because they reduce to sparse matrix-vector multiplication, for which good algorithms based on graph partitioning exist. But others, which are widely used sequentially because of their numerical properties, remain hard to parallelize scalably. In this talk we highlight the challenges of parallelizing two widely used methods: sparse Gaussian elimination with pivoting, and multigrid for linear systems arising from solid mechanics problems on irregular meshes. In both cases our current algorithms are among the fastest available (the first achieving parallel efficiencies up to 20% on 512 processors, and the latter up to 50% on 960 processors), but many algorithm design and performance analysis problems remain. Both of these problems are much more challenging to parallelize than their simpler counterparts (sparse Cholesky, and multigrid on regular meshes, which we will briefly review), and to some extent the parallelizability of these simpler algorithms provides 'upper bounds' for us. For both of our algorithms we exploit certain tradeoffs between scalability and 'accuracy,' so that the parallel algorithms are not ones we would run sequentially. We close with some open problems that we plan to discuss, particularly in the case of sparse Gaussian elimination.
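Since the abstract's dividing line is whether a solver reduces to sparse matrix-vector multiplication, a minimal sketch of that kernel in compressed sparse row (CSR) form may help; this is generic background, not the authors' code, and all names are illustrative.

```c
#include <stdio.h>

/* Compressed sparse row (CSR) y = A*x: the kernel to which the
   easily parallelized solvers mentioned in the abstract reduce.
   Each row's dot product is independent of the others, so rows
   can be divided among processors.                              */
static void spmv_csr(int n, const int *rowptr, const int *col,
                     const double *val, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            sum += val[k] * x[col[k]];
        y[i] = sum;
    }
}

int main(void) {
    /* 3x3 example matrix [[2,0,1],[0,3,0],[4,0,5]] in CSR form. */
    int rowptr[]  = {0, 2, 3, 5};
    int col[]     = {0, 2, 1, 0, 2};
    double val[]  = {2, 1, 3, 4, 5};
    double x[3]   = {1, 1, 1}, y[3];
    spmv_csr(3, rowptr, col, val, x, y);
    for (int i = 0; i < 3; i++) printf("%g ", y[i]);  /* 3 3 9 */
    printf("\n");
    return 0;
}
```

Because each output row is an independent dot product, graph-partitioning approaches parallelize this kernel well; pivoting in Gaussian elimination destroys exactly this kind of static independence, which is one source of the difficulty the talk addresses.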
The papers submitted to the Tenth Annual ACM Symposium on Parallel Algorithms and Architectures are presented. The issues considered include network protocols, parallel algorithms, multiprocessing systems, parallel processing systems, distributed computer systems, program compilers, sorting, computer science, data storage equipment, and video signal processing.
In this paper we present a simple parallel sorting algorithm and illustrate two applications. The algorithm, called the (l, m)-merge sort (LMM), is an extension of the bitonic and odd-even merge sorts. Literature on parallel sorting is abundant. Many of the proposed algorithms, though theoretically important, may not perform satisfactorily in practice owing to large constants in their time bounds. The algorithm presented in this paper, due partly to its simplicity, results in small constants. We present an implementation for the parallel disk sorting problem. The algorithm is asymptotically optimal (assuming that N is a polynomial in M, where N is the number of records to be sorted and M is the internal memory size), and the underlying constant is very small. This algorithm performs better than the disk-striped mergesort (DSM) algorithm when the number of disks is large, yet our implementation is as simple as that of DSM (requiring no fancy data structures or prefetch techniques). As a second application, we prove that we can get a sparse enumeration sort on the hypercube that is simpler than the classical algorithm of Nassimi and Sahni. We also show that Leighton's columnsort algorithm is a special case of LMM.
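For context on what LMM generalizes, here is a compact sketch of Batcher's odd-even merge sort, one of the two networks the abstract names as its starting point; this is the classical iterative formulation for power-of-two input sizes, not the (l, m)-merge sort itself.

```c
#include <stdio.h>

static void compare_exchange(int *x, int *y) {
    if (*x > *y) { int t = *x; *x = *y; *y = t; }
}

/* Batcher's odd-even merge sort as a data-independent network of
   compare-exchanges; n must be a power of two. Because the
   comparison pattern is fixed in advance, the exchanges in each
   inner round can run in parallel.                               */
static void oddeven_merge_sort(int *a, int n) {
    for (int p = 1; p < n; p *= 2)
        for (int k = p; k >= 1; k /= 2)
            for (int j = k % p; j + k < n; j += 2 * k)
                for (int i = 0; i < k; i++)
                    if ((i + j) / (2 * p) == (i + j + k) / (2 * p))
                        compare_exchange(&a[i + j], &a[i + j + k]);
}

int main(void) {
    int a[8] = {5, 1, 7, 3, 8, 2, 6, 4};
    oddeven_merge_sort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```

On the sample input this prints 1 2 3 4 5 6 7 8. Per the abstract, LMM extends such merge-based networks, and its simplicity is what keeps the constant factors small in the disk-sorting setting.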
ISBN (print): 9780897919890
In this paper we develop models for, and analyze, several randomized work stealing algorithms in a dynamic setting. Our models use differential equations to represent the limiting behavior of systems as the number of processors grows to infinity. The advantages of this approach include the ability to model a large variety of systems and to provide accurate numerical approximations of system behavior even when the number of processors is relatively small. We show how this approach can yield significant intuition about the behavior of work stealing algorithms in realistic settings.
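To give a flavor of the differential-equation approach (a generic mean-field sketch, not the paper's actual work-stealing equations), one can track s_i(t), the fraction of processors whose queues hold at least i tasks, and integrate the resulting rate equations numerically. The arrival rate LAMBDA, the truncation MAXQ, and the plain M/M/1-style dynamics below are all assumptions for illustration; a work-stealing model would add steal terms.

```c
#include <math.h>
#include <stdio.h>

#define MAXQ 50        /* queue-length truncation (assumption)    */
#define LAMBDA 0.5     /* hypothetical per-processor arrival rate */
#define DT 0.001       /* Euler time step                         */
#define STEPS 200000   /* integrate to t = 200                    */

int main(void) {
    /* s[i] = fraction of processors with at least i tasks: the state
       of the limiting system as the number of processors -> infinity.
       These rate equations are a plain M/M/1 mean-field baseline;
       the paper's work-stealing models would add steal terms.       */
    double s[MAXQ + 2] = {0};
    s[0] = 1.0;    /* every processor has at least zero tasks */
    for (int t = 0; t < STEPS; t++) {
        double ds[MAXQ + 1];
        for (int i = 1; i <= MAXQ; i++)
            ds[i] = LAMBDA * (s[i - 1] - s[i])  /* task arrivals */
                  - (s[i] - s[i + 1]);          /* task services */
        for (int i = 1; i <= MAXQ; i++)
            s[i] += DT * ds[i];
    }
    for (int i = 1; i <= 4; i++)   /* equilibrium should be LAMBDA^i */
        printf("s[%d] = %.4f (expect %.4f)\n", i, s[i], pow(LAMBDA, i));
    return 0;
}
```

At equilibrium this baseline model gives s_i = LAMBDA^i, which the printed values approach; the appeal of the methodology, as the abstract notes, is that such ODE systems stay tractable and remain accurate even when the number of processors is relatively small.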