Sparse matrix computations are ubiquitous in science and engineering. In particular, solving a sparse linear system of equations is the bottleneck in many computations. Of the many available algorithms, some are easy to parallelize, because they reduce to sparse matrix-vector multiplication, for which good algorithms based on graph partitioning exist. But others, which are widely used sequentially because of their numerical properties, remain hard to parallelize scalably. In this talk we highlight the challenges of parallelizing two widely used methods: sparse Gaussian elimination with pivoting, and multigrid for linear systems arising from solid mechanics problems on irregular meshes. In both cases our current algorithms are among the fastest available (the former achieving parallel efficiencies up to 20% on 512 processors, the latter up to 50% on 960 processors), but many algorithm design and performance analysis problems remain. Both problems are much more challenging to parallelize than their simpler counterparts (sparse Cholesky, and multigrid on regular meshes, which we will briefly review), and to some extent the parallelizability of these simpler algorithms provides "upper bounds" for us. For both our algorithms we exploit certain tradeoffs between scalability and "accuracy," so that the parallel algorithms are not ones we would run sequentially. We conclude by discussing some open problems, in particular for sparse Gaussian elimination.
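As context for why the "easy" class parallelizes well, here is a minimal sketch of a sparse matrix-vector product over compressed sparse row (CSR) storage; each output row is an independent dot product, so rows can be distributed across processors by partitioning the matrix graph. The CSR struct, the spmv function, and the toy 3x3 matrix are illustrative, not from the talk.

```cpp
#include <cstdio>
#include <vector>

// CSR storage: row_ptr has n+1 entries; the nonzeros of row i
// occupy positions [row_ptr[i], row_ptr[i+1]) of col/val.
struct CSR {
    int n;
    std::vector<int> row_ptr, col;
    std::vector<double> val;
};

// y = A * x.  Each row's dot product touches only that row's data,
// which is why SpMV parallelizes well once rows are partitioned.
std::vector<double> spmv(const CSR& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (int i = 0; i < A.n; ++i)  // rows may be split across processors
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            y[i] += A.val[k] * x[A.col[k]];
    return y;
}

int main() {
    // 3x3 example: [[2,0,1],[0,3,0],[1,0,4]]
    CSR A{3, {0, 2, 3, 5}, {0, 2, 1, 0, 2}, {2, 1, 3, 1, 4}};
    std::vector<double> x{1, 1, 1};
    auto y = spmv(A, x);
    std::printf("%g %g %g\n", y[0], y[1], y[2]);  // prints 3 3 5
}
```

By contrast, Gaussian elimination with pivoting introduces fill and a data-dependent elimination order, which is what makes it hard to parallelize scalably.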
ISBN (print): 9781450315722
Efficient memory management of dynamic non-blocking data structures remains an important open question. Existing methods either sacrifice the ability to deallocate objects or notably reduce performance. In this paper, we present a novel technique, called Drop the Anchor, which significantly reduces the overhead associated with memory management while reclaiming objects even in the presence of thread failures. We demonstrate this memory management scheme on the common linked list data structure. In an extensive evaluation, we show that Drop the Anchor significantly outperforms Hazard Pointers, the widely used technique for non-blocking memory management.
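For concreteness, below is a minimal sketch of the Hazard Pointers baseline the paper compares against (Michael's scheme), not of Drop the Anchor itself. The single slot per thread, the fixed MAX_THREADS bound, and the malloc/free discipline for trivially destructible nodes are simplifying assumptions made here for illustration.

```cpp
#include <atomic>
#include <algorithm>
#include <cstdlib>
#include <vector>

// One hazard-pointer slot per thread (simplified: a single slot each).
constexpr int MAX_THREADS = 8;
std::atomic<void*> hazard[MAX_THREADS];

// Announce that thread `tid` is about to dereference the node `src`
// points at; the loop re-validates so a concurrent reclaimer cannot
// miss the announcement.
template <typename T>
T* protect(std::atomic<T*>& src, int tid) {
    T* p = src.load(std::memory_order_acquire);
    for (;;) {
        hazard[tid].store(p, std::memory_order_seq_cst);
        T* q = src.load(std::memory_order_acquire);
        if (p == q) return p;  // announcement visible and still current
        p = q;
    }
}

void clear(int tid) { hazard[tid].store(nullptr, std::memory_order_release); }

// Per-thread retire list: a retired node is freed only once no hazard
// slot points at it.
thread_local std::vector<void*> retired;

void retire(void* p) {
    retired.push_back(p);
    if (retired.size() < 2 * MAX_THREADS) return;  // amortize the scan
    std::vector<void*> live;
    for (auto& h : hazard) live.push_back(h.load(std::memory_order_seq_cst));
    retired.erase(std::remove_if(retired.begin(), retired.end(),
                      [&](void* q) {
                          if (std::find(live.begin(), live.end(), q) != live.end())
                              return false;  // still protected: keep for later
                          std::free(q);      // no hazard covers it: reclaim
                          return true;
                      }),
                  retired.end());
}

int main() {
    std::atomic<int*> shared{static_cast<int*>(std::malloc(sizeof(int)))};
    int* p = protect(shared, /*tid=*/0);
    *p = 42;                            // safe: hazard[0] protects the node
    clear(0);
    retire(shared.exchange(nullptr));   // reclaimed once no slot covers it
}
```

The overhead the paper targets is visible in protect(): every traversal step pays a store plus a re-validating load. Per the abstract, Drop the Anchor reduces this per-operation cost while still reclaiming objects even when threads fail.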
Many computationally intensive programs, such as those for differential equations, spatial interpolation, and dynamic programming, spend a large portion of their execution time in multiply-nested loops that have a regular stencil of data dependences. Tiling is a well-known optimization that improves performance on such loops, particularly for computers with a multi-level hierarchy of parallelism and memory. Most previous work on tiling restricts the tile shape to be rectangular. Our previous work, and its extension by Desprez, Dongarra, Rastello and Robert, showed that for doubly nested loops, using parallelograms can improve parallel execution time by decreasing idle time, the time a processor spends waiting for data or synchronization. In this paper, we extend that work to more deeply nested loops, as well as to more complex loop bounds. We introduce a model in which determining the execution time of a tiling is equivalent in complexity to linear programming. We then identify a subclass of these loops, those with rectilinear iteration spaces, for which we derive a closed-form formula for the execution time. This formula can be used by a compiler to predict the execution time of a loop nest, and we derive the tile shape that minimizes it. Using the duality property of linear programming, we also study how the longest path of dependent tiles within a rectilinear iteration space changes with the tile shape. Finally, we observe that the execution time of a rectilinear iteration space depends on the slope of only four of the facets defining the iteration space, independent of its dimensionality.
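To make the rectangular baseline concrete, the sketch below tiles a doubly nested loop whose stencil dependences are (i-1, j) and (i, j-1). Rectangular tiling is legal here because both dependence distances are non-negative; the tile sizes TI and TJ are illustrative placeholders, not the optimal shape derived in the paper.

```cpp
#include <algorithm>
#include <cstdio>

// Untiled form:  for i, for j:  A[i][j] = f(A[i-1][j], A[i][j-1])
// Tiled form:    iterate over TI x TJ rectangular tiles, then within each.
constexpr int N = 1024, TI = 64, TJ = 64;
static double A[N][N];  // zero-initialized global

int main() {
    for (int j = 0; j < N; ++j) A[0][j] = A[j][0] = 1.0;  // boundary data
    for (int ii = 1; ii < N; ii += TI)
        for (int jj = 1; jj < N; jj += TJ)
            // Each tile depends only on its west and north neighbor
            // tiles, so anti-diagonals of tiles can execute in
            // parallel (a wavefront schedule).
            for (int i = ii; i < std::min(ii + TI, N); ++i)
                for (int j = jj; j < std::min(jj + TJ, N); ++j)
                    A[i][j] = 0.5 * (A[i - 1][j] + A[i][j - 1]);
    std::printf("%f\n", A[N - 1][N - 1]);
}
```

The idle time the paper minimizes shows up in this wavefront schedule: a tile cannot start until its west and north neighbors finish, and changing the tile shape (e.g., to a parallelogram) changes the length of that longest path of dependent tiles.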