Matrix multiplication is an important kernel in linear algebra algorithms, and the performance of both serial and parallel implementations is highly dependent on the memory system behavior. Unfortunately, due to false sharing and cache conflicts, traditional column-major or row-major array layouts incur high variability in memory system performance as matrix size varies. This paper investigates the use of recursive array layouts for improving the performance of parallel recursive matrix multiplication algorithms. We extend previous work by Frens and Wise on recursive matrix multiplication to examine several recursive array layouts and three recursive algorithms: standard matrix multiplication, and the more complex algorithms of Strassen and Winograd. We show that while recursive array layouts significantly outperform traditional layouts (reducing execution times by a factor of 1.2-2.5) for the standard algorithm, they offer little improvement for Strassen's and Winograd's algorithms; we provide an algorithmic explanation of this phenomenon. We demonstrate that carrying the recursive layout down to the level of individual matrix elements is counter-productive, and that a combination of recursive layouts down to canonically ordered matrix tiles instead yields higher performance. We evaluate five recursive layouts with successively increasing complexity of address computation, and show that addressing overheads can be kept under control even for the most computationally demanding of these layouts. Finally, we provide a critique of the Cilk system that we used to parallelize our code.
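The recursive layouts in question are typified by Morton (Z-order) indexing. As an illustrative sketch only, not the paper's implementation, the address computation for the hybrid scheme the abstract favors (recursive ordering across tiles, canonical row-major ordering within each tile) might look like:

```python
def morton_index(row, col, bits=16):
    """Interleave the bits of (row, col) into a Z-order (Morton) index."""
    idx = 0
    for b in range(bits):
        idx |= ((row >> b) & 1) << (2 * b + 1)  # row bits -> odd positions
        idx |= ((col >> b) & 1) << (2 * b)      # col bits -> even positions
    return idx

def hybrid_offset(row, col, tile=32):
    """Z-order across tiles, row-major order inside each tile, so each
    tile occupies a contiguous block of memory."""
    tr, tc = row // tile, col // tile
    within = (row % tile) * tile + (col % tile)
    return morton_index(tr, tc) * tile * tile + within
```

The bit-interleaving loop is where the "complexity of address computation" the abstract measures comes from; real implementations typically replace it with lookup tables or dilated-integer arithmetic.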
Fast commodity network-connected PC or workstation clusters are becoming more and more popular. This popularity can be attributed to their ability to provide high-performance parallel computing on a relatively inexpensive platform. An accurate global clock is invaluable for these systems, both for measuring network performance and coordinating distributed applications. Typically, however, these systems do not include dedicated clock synchronization support. Previous clock synchronization methods are not suitable here in general, either because of extra, non-commodity hardware requirements or insufficient synchronized clock accuracy. In this paper we present and evaluate an adaptive clock synchronization algorithm. We have implemented and tested the algorithm on our Myrinet-based PC cluster. It is regularly used as part of a parallel performance tool running on the cluster. The algorithm has several important features. First, it does not require any extra hardware support. Second, we show that this algorithm imposes very low overhead on the system and has microsecond-level accuracy. Finally, our results indicate that adding the ability to adaptively adjust the clock's re-synchronization period causes almost no extra overhead while achieving a much better global clock accuracy.
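The offset-estimation idea underlying software clock synchronization can be sketched in the style of Cristian's algorithm; the function names and the minimum-round-trip filter below are illustrative assumptions, not the paper's protocol:

```python
def estimate_offset(t_send, t_server, t_recv):
    """Estimate the remote clock's offset from one probe, assuming the
    server's timestamp corresponds to the midpoint of the round trip."""
    rtt = t_recv - t_send
    offset = t_server - (t_send + t_recv) / 2.0
    return offset, rtt

def best_offset(samples):
    """Keep the probe with the smallest round-trip time; its midpoint
    assumption is least distorted by queueing and delay asymmetry."""
    return min((estimate_offset(*s) for s in samples), key=lambda e: e[1])[0]
```

An adaptive scheme, as the abstract describes, would additionally lengthen or shorten the interval between such probe exchanges based on the observed clock drift.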
In this paper we demonstrate that parallelism and fill can be traded off in orders for Gaussian elimination. While the well-known nested dissection algorithm produces very parallel elimination orders, we show that by reducing the parallelism it is possible to reduce the fill that the orders generate. In particular, we present a new "less parallel nested dissection" algorithm (LPND). We prove that, unlike standard nested dissection, when applied to a chordal graph LPND finds a zero-fill elimination order. Our implementation of LPND generates less fill than state-of-the-art implementations of the nested dissection (METIS), minimum-degree (AMD), and hybrid (BEND) algorithms on a large body of test matrices, at the cost of a small reduction in the parallelism in the orders that it produces. We have also implemented a nested dissection algorithm that is different from METIS and that uses the same separator algorithm used by our implementation of LPND. This algorithm, like LPND, generates less fill than METIS, and on large graphs generates significantly less fill than AMD. The latter comparison is notable, because although it is known that, for certain classes of graphs, minimum-degree produces asymptotically more fill than nested dissection, minimum-degree is believed to produce low-fill orderings in practice. Our experiments contradict this belief.
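The fill of an elimination order can be counted symbolically on the matrix's graph: eliminating a vertex pairwise-connects its remaining neighbors, and every edge so added is fill. A minimal sketch of this bookkeeping (illustrative only, not the LPND algorithm):

```python
def fill_of_order(adj, order):
    """Count fill edges produced by eliminating vertices in the given order.
    adj maps each vertex to the set of its neighbors (undirected graph)."""
    g = {v: set(ns) for v, ns in adj.items()}
    fill = 0
    for v in order:
        nbrs = sorted(g.pop(v))          # neighbors not yet eliminated
        for u in nbrs:
            g[u].discard(v)
        for i, a in enumerate(nbrs):     # clique the remaining neighborhood
            for b in nbrs[i + 1:]:
                if b not in g[a]:
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
    return fill
```

On a chordal graph (e.g. the path 0-1-2) a perfect elimination order such as [0, 1, 2] incurs zero fill, which is exactly the property the abstract proves LPND recovers; eliminating the interior vertex first incurs one fill edge.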
A computer system is useless unless it can interact with the outside world through input/output (I/O) devices. I/O systems are complex, including aspects such as memory-mapped operations, interrupts, and bus bridges. Often, I/O behavior is described for isolated devices without a formal description of how the complete I/O system behaves. The lack of an end-to-end system description makes the tasks of system programmers and hardware implementors more difficult to do correctly. This paper proposes a framework for formally describing I/O architectures called Wisconsin I/O (WIO). WIO extends work on memory consistency models (that formally specify the behavior of normal memory) to handle considerations such as memory-mapped operations, device operations, interrupts, and operations with side effects. Specifically, WIO asks each processor or device that can issue k operation types to specify ordering requirements in a k×k table. A system obeys WIO if there always exists a total order of all operations that respects processor and device ordering requirements and has the value of each "read" equal to the value of the most recent "write" to that address. This paper then presents examples of WIO specifications for systems with various memory consistency models including sequential consistency (SC), SPARC TSO, an approximation of Intel IA-32, and Compaq Alpha. Finally, we present a directory-based implementation of an SC system, and we sketch a proof which shows that the implementation conforms to its WIO specification.
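The read-value condition on a candidate total order can be checked mechanically. A minimal sketch of that check alone, leaving aside the per-processor and per-device k×k ordering tables the framework layers on top:

```python
def reads_see_latest_writes(ops, initial=0):
    """ops: a candidate total order, as a list of ('read'|'write', addr, value)
    tuples. Returns True iff every read returns the value of the most
    recent write to its address (or the initial value if none)."""
    last = {}
    for kind, addr, val in ops:
        if kind == 'write':
            last[addr] = val
        elif last.get(addr, initial) != val:
            return False
    return True
```

Verifying a WIO specification then amounts to showing that for every execution some total order exists that both passes this check and respects each issuer's ordering table.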
Many computationally-intensive programs, such as those for differential equations, spatial interpolation, and dynamic programming, spend a large portion of their execution time in multiply-nested loops which have a regular stencil of data dependences. Tiling is a well-known optimization that improves performance on such loops, particularly for computers with a multi-levelled hierarchy of parallelism and memory. Most previous work on tiling restricts the tile shape to be rectangular. Our previous work and its extension by Desprez, Dongarra, Rastello and Robert showed that for doubly nested loops, using parallelograms can improve parallel execution time by decreasing the idle time, the time that a processor spends waiting for data or synchronization. In this paper, we extend that work to more deeply nested loops, as well as to more complex loop bounds. We introduce a model which allows us to demonstrate the equivalence in complexity of linear programming and determining the execution time of a tiling in the model. We then identify a sub-class of these loops that constitute rectilinear iteration spaces for which we derive a closed form formula for their execution time. This formula can be used by a compiler to predict the execution time of a loop nest. We then derive the tile shape that minimizes this formula. Using the duality property of linear programming, we also study how the longest path of dependent tiles within a rectilinear iteration space changes with the tile shape. Finally, we observe that the execution time of a rectilinear iteration space depends on the slope of only four of the facets defining the iteration space, independent of its dimensionality.
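Rectangular tiling of a doubly nested loop, the baseline shape that this work generalizes away from, can be sketched as follows (a schematic restructuring of a simple reduction, not the paper's stencil codes):

```python
def tiled_sum(grid, T=4):
    """Sum a 2-D array by visiting it in T x T rectangular tiles: the outer
    loop pair enumerates tile origins, the inner pair the points within a
    tile, so each tile's data is reused while it is hot in cache."""
    n, m = len(grid), len(grid[0])
    total = 0
    for ii in range(0, n, T):
        for jj in range(0, m, T):
            for i in range(ii, min(ii + T, n)):
                for j in range(jj, min(jj + T, m)):
                    total += grid[i][j]
    return total
```

A parallelogram tile, as studied here, skews the inner bounds by a function of the outer indices; choosing that skew (the tile shape) to minimize processor idle time is what the closed-form execution-time formula makes possible.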
This paper deals with data management for parallel and distributed systems in which the computing nodes are connected by a relatively sparse network. We present the DIVA (Distributed Variables) library that provides fully transparent access to global variables, i.e., shared data objects, from the individual nodes in the network. The current implementations are based on mesh-connected massively parallel computers. The data management strategies implemented in the library use a non-standard approach based on a randomized but locality preserving embedding of "access trees" into the physical network. The access tree strategy was previously analyzed only in a theoretical model using competitive analysis, where it was shown that the strategy produces minimal network congestion up to small factors. In this paper, the access tree strategy is evaluated experimentally. We test several variations of this strategy on three different applications of parallel computing: matrix multiplication, bitonic sorting, and Barnes-Hut N-body simulation. We compare the congestion and the execution time of the access tree strategy and its variations with a standard caching strategy that uses a fixed home for each data object. Additionally, we compare against hand-optimized message passing strategies producing minimal communication overhead. First, we see that the execution time of the applications heavily depends on the congestion produced by the different data management strategies. Second, we see that the access tree strategy clearly outperforms the fixed home strategy and comes reasonably close to the performance of the hand-optimized message passing strategies. In particular, the larger the network, the greater the advantage of the access tree strategy over the fixed home strategy.
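The congestion metric compared here can be computed by counting how often each link of the mesh is crossed. A sketch for the fixed-home case, assuming dimension-order (X-then-Y) routing on a 2-D mesh (an illustrative model, not DIVA's actual router):

```python
from collections import Counter

def xy_route(src, dst):
    """Links crossed routing from src to dst on a 2-D mesh, X first then Y."""
    (sx, sy), (dx, dy) = src, dst
    links, x, y = [], sx, sy
    while x != dx:
        nx = x + (1 if dx > x else -1)
        links.append(((x, y), (nx, y)))
        x = nx
    while y != dy:
        ny = y + (1 if dy > y else -1)
        links.append(((x, y), (x, ny)))
        y = ny
    return links

def congestion(accesses, home):
    """Maximum link load when every access travels to one fixed home node."""
    load = Counter()
    for src in accesses:
        for link in xy_route(src, home):
            load[link] += 1
    return max(load.values()) if load else 0
```

The fixed-home strategy concentrates all traffic for an object onto the links near its home; the access tree strategy avoids exactly this hot spot, which is why its advantage grows with the network size.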
Sparse matrix computations are ubiquitous in science and engineering. In particular, solving a sparse linear system of equations is the bottleneck in many computations. Of the many available algorithms, some are easy to parallelize, because they reduce to sparse-matrix-vector multiplication, for which good algorithms based on graph partitioning exist. But others, which are widely used sequentially because of their numerical properties, remain hard to parallelize scalably. In this talk we highlight the challenges of parallelizing two widely used methods: sparse Gaussian elimination with pivoting, and multigrid for linear systems arising from solid mechanics problems on irregular meshes. In both cases our current algorithms are among the fastest available (the first one getting parallel efficiencies up to 20% on 512 processors, and the latter up to 50% on 960 processors), but many algorithm design and performance analysis problems remain. Both these problems are much more challenging to parallelize than their simpler counterparts (sparse Cholesky, and multigrid on regular meshes, which we will briefly review), and to some extent the parallelizability of these simpler algorithms provides "upper bounds" for us. For both our algorithms we exploit certain tradeoffs between scalability and "accuracy," so that the parallel algorithms are not ones we would run sequentially. Here are some open problems that we plan to discuss. In the case of sparse Gaussian elimination.
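The efficiency figures quoted above follow the standard definition: speedup over the serial time, divided by the processor count. For concreteness (the timings below are made-up numbers, not the talk's measurements):

```python
def parallel_efficiency(t_serial, t_parallel, p):
    """Fraction of the ideal p-fold speedup actually achieved:
    (t_serial / t_parallel) / p, so 1.0 means perfect scaling."""
    return (t_serial / t_parallel) / p
```

For example, a solver that takes 1024 s serially and 10 s on 512 processors achieves a speedup of about 102x, i.e. an efficiency of roughly 20%, the figure cited for the sparse Gaussian elimination code.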