Many computationally intensive programs, such as those for differential equations, spatial interpolation, and dynamic programming, spend a large portion of their execution time in multiply nested loops with a regular stencil of data dependences. Tiling is a well-known optimization that improves performance on such loops, particularly on computers with a multi-level hierarchy of parallelism and memory. Most previous work on tiling restricts the tile shape to be rectangular. Our previous work, and its extension by Desprez, Dongarra, Rastello and Robert, showed that for doubly nested loops, using parallelograms can improve parallel execution time by decreasing the idle time, i.e., the time a processor spends waiting for data or synchronization. In this paper, we extend that work to more deeply nested loops and to more complex loop bounds. We introduce a model in which determining the execution time of a tiling is equivalent in complexity to linear programming. We then identify a subclass of these loops, those with rectilinear iteration spaces, for which we derive a closed-form formula for the execution time. This formula can be used by a compiler to predict the execution time of a loop nest, and we derive the tile shape that minimizes it. Using the duality property of linear programming, we also study how the longest path of dependent tiles within a rectilinear iteration space changes with the tile shape. Finally, we observe that the execution time of a rectilinear iteration space depends on the slope of only four of the facets defining the iteration space, independent of its dimensionality.
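As a concrete illustration of the kind of loop nest being tiled, the sketch below tiles a doubly nested stencil recurrence with rectangular tiles. The recurrence, bounds, and tile size are hypothetical; the paper's contribution concerns non-rectangular (parallelogram) tile shapes, which this minimal sketch does not capture.

```python
# Illustrative rectangular tiling of a doubly nested stencil loop with
# dependences on (i-1, j) and (i, j-1). Sizes are arbitrary assumptions.
N, TILE = 8, 4

# Untiled reference loop.
A = [[1.0] * (N + 1) for _ in range(N + 1)]
for i in range(1, N + 1):
    for j in range(1, N + 1):
        A[i][j] = A[i - 1][j] + A[i][j - 1]

# Tiled version: iterate over TILE x TILE blocks in lexicographic order,
# then over the points within each block. Because both dependence vectors
# are non-negative, this order respects every dependence.
B = [[1.0] * (N + 1) for _ in range(N + 1)]
for ii in range(1, N + 1, TILE):
    for jj in range(1, N + 1, TILE):
        for i in range(ii, min(ii + TILE, N + 1)):
            for j in range(jj, min(jj + TILE, N + 1)):
                B[i][j] = B[i - 1][j] + B[i][j - 1]

assert A == B  # tiling preserves the dependence order, so results match
```

In a parallel execution, tiles along each anti-diagonal of the block grid are independent and can run concurrently; the tile shape then determines how long processors sit idle at the start and end of the wavefront.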
This paper deals with data management for parallel and distributed systems in which the computing nodes are connected by a relatively sparse network. We present the DIVA (Distributed Variables) library, which provides fully transparent access to global variables, i.e., shared data objects, from the individual nodes in the network. The current implementations are based on mesh-connected massively parallel computers. The data management strategies implemented in the library use a non-standard approach based on a randomized but locality-preserving embedding of `access trees' into the physical network. The access tree strategy was previously analyzed only in a theoretical model using competitive analysis, where it was shown that the strategy produces minimal network congestion up to small factors. In this paper, the access tree strategy is evaluated experimentally. We test several variations of this strategy on three different applications of parallel computing: matrix multiplication, bitonic sorting, and Barnes-Hut N-body simulation. We compare the congestion and the execution time of the access tree strategy and its variations with a standard caching strategy that uses a fixed home for each data object. Additionally, we compare against hand-optimized message passing strategies that produce minimal communication overhead. First, we show that the execution time of the applications depends heavily on the congestion produced by the different data management strategies. Second, we show that the access tree strategy clearly outperforms the fixed home strategy and comes reasonably close to the performance of the hand-optimized message passing strategies. In particular, the larger the network, the greater the advantage of the access tree strategy over the fixed home strategy.
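For contrast with the access tree strategy, the fixed home baseline can be sketched in a few lines: each shared variable is mapped to one node by a deterministic hash, and every remote access to it converges on that node. The function name and hashing choice are illustrative assumptions, not DIVA's API.

```python
import hashlib

# Fixed home baseline sketch: every shared variable lives on a single node
# chosen by a deterministic hash, so all remote accesses to it travel to
# that node. (The access tree strategy instead caches copies along a
# randomized tree embedded in the network, keeping traffic local.)
def home_node(var_name, num_nodes):
    digest = hashlib.sha256(var_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes

print(home_node("matrix_block_17", 64))  # same variable, same node, every time
```

The weakness this exposes is exactly what the paper measures: a frequently accessed variable turns its home node and the links around it into a congestion hot spot.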
Sparse matrix computations are ubiquitous in science and engineering. In particular, solving a sparse linear system of equations is the bottleneck in many computations. Of the many available algorithms, some are easy to parallelize, because they reduce to sparse matrix-vector multiplication, for which good algorithms based on graph partitioning exist. But others, which are widely used sequentially because of their numerical properties, remain hard to parallelize scalably. In this talk we highlight the challenges of parallelizing two widely used methods: sparse Gaussian elimination with pivoting, and multigrid for linear systems arising from solid mechanics problems on irregular meshes. In both cases our current algorithms are among the fastest available (the first getting parallel efficiencies up to 20% on 512 processors, and the latter up to 50% on 960 processors), but many algorithm design and performance analysis problems remain. Both these problems are much more challenging to parallelize than their simpler counterparts (sparse Cholesky, and multigrid on regular meshes, which we will briefly review), and to some extent the parallelizability of these simpler algorithms provides `upper bounds' for us. For both our algorithms we exploit certain tradeoffs between scalability and `accuracy,' so that the parallel algorithms are not ones we would run sequentially. We plan to discuss several open problems, particularly in the case of sparse Gaussian elimination.
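The "easy to parallelize" case the talk mentions, sparse matrix-vector multiplication, can be sketched as follows: each row's dot product is independent, which is why partitioning the rows (via graph partitioning) yields good parallel algorithms. The CSR layout shown is the standard one, but the function itself is an illustrative sketch, not code from the talk.

```python
def csr_matvec(indptr, indices, data, x):
    """Compute y = A @ x for A stored in compressed sparse row (CSR) form.

    indptr[row] .. indptr[row + 1] delimits row's nonzeros in data/indices.
    Rows are independent, so they can be distributed across processors.
    """
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Gaussian elimination with pivoting lacks this structure: the pivot choice at each step creates sequential dependences and changes the sparsity pattern dynamically, which is the crux of the parallelization difficulty the talk describes.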
The papers submitted to the Tenth Annual ACM Symposium on Parallel Algorithms and Architectures are presented. The issues considered include network protocols, parallel algorithms, multiprocessing systems, parallel processing systems, distributed computer systems, program compilers, sorting, computer science, data storage equipment, and video signal processing.
ISBN (print): 9780897919890
In this paper we develop models for, and analyze, several randomized work stealing algorithms in a dynamic setting. Our models use differential equations to represent the limiting behavior of systems as the number of processors grows to infinity. The advantages of this approach include the ability to model a large variety of systems and to provide accurate numerical approximations of system behavior even when the number of processors is relatively small. We show how this approach can yield significant intuition about the behavior of work stealing algorithms in realistic settings.
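To illustrate the methodology (not the paper's actual equations), the sketch below integrates a toy mean-field model with Euler's method: s[k] is the fraction of processors holding at least k tasks, tasks arrive at rate lam, and each busy processor completes work at rate 1. The equations, parameters, and truncation at K levels are all assumptions made for illustration.

```python
# Toy mean-field model of per-processor queue lengths, integrated by Euler's
# method. s[k] = fraction of processors with at least k tasks. An arrival
# moves a processor from k-1 to k tasks; a service completion moves it from
# k to k-1. (Illustrative equations only, not the paper's model.)
def euler_step(s, lam, dt):
    K = len(s)
    new = s[:]
    for k in range(1, K):
        above = s[k + 1] if k + 1 < K else 0.0
        ds = lam * (s[k - 1] - s[k]) - (s[k] - above)
        # clamp to keep the tail probabilities valid and monotone
        new[k] = min(s[k - 1], max(0.0, s[k] + dt * ds))
    return new

s = [1.0] + [0.0] * 9          # all processors start idle
for _ in range(2000):          # integrate to t = 20
    s = euler_step(s, lam=0.5, dt=0.01)
print(round(s[1], 3))          # fraction of busy processors settles near lam
```

The appeal of the approach is visible even in this toy: one small ODE system summarizes the behavior of arbitrarily many processors, and its fixed point can be read off directly instead of simulating each processor.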
This paper envisions an extension to a standard instruction set that efficiently implements PRAM-style algorithms using explicit multi-threaded instruction-level parallelism (ILP). It introduces Explicit Multi-Threading (XMT), a fine-grained computational paradigm covering the spectrum from algorithms through architecture to implementation; new elements are added where needed.
We present lower bounds for the time needed to solve basic problems on three general-purpose models of parallel computation: the shared-memory models QSM and s-QSM, and the distributed-memory model, the BSP. For each of these models, we also obtain lower bounds for the number of rounds needed to solve these problems using a randomized algorithm on a p-processor machine. Our results on `rounds' are of special interest in the context of designing work-efficient algorithms on a machine where latency and synchronization costs are high. Many of our lower bound results are complemented by upper bounds that match the lower bound or are close to it.
This paper analyzes job scheduling for parallel computers by theoretical and experimental means. Based on existing architectures, we first present a machine and a job model. Then, we propose a simple on-line algorithm employing job preemption without migration and derive theoretical bounds for the performance of the algorithm. The algorithm is experimentally evaluated with trace data from a large computing facility. These experiments show that the algorithm is highly sensitive to parameter selection and that substantial performance improvements over existing (non-preemptive) scheduling methods are possible.
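A minimal sketch of what preemption buys a scheduler, on a single processor with shortest-remaining-time as a (hypothetical) priority rule: a short job that arrives while a long job is running preempts it instead of waiting behind it. The paper's algorithm schedules parallel jobs across many processors without migration, which this toy deliberately does not model.

```python
import heapq

def srpt(jobs):
    """Preemptive shortest-remaining-time scheduling on one processor.

    jobs: list of (arrival_time, work). Returns {job_index: finish_time}.
    A preempted job keeps its remaining work and resumes later.
    """
    events = sorted((a, w, idx) for idx, (a, w) in enumerate(jobs))
    ready, finish, t, i = [], {}, 0.0, 0
    while i < len(events) or ready:
        if not ready:                          # idle until the next arrival
            t = max(t, events[i][0])
        while i < len(events) and events[i][0] <= t:
            a, w, idx = events[i]
            heapq.heappush(ready, [w, idx])    # keyed by remaining work
            i += 1
        rem, idx = heapq.heappop(ready)
        next_arrival = events[i][0] if i < len(events) else float("inf")
        run = min(rem, next_arrival - t)       # run until done or next arrival
        t += run
        if run < rem:
            heapq.heappush(ready, [rem - run, idx])  # preempted, resumes later
        else:
            finish[idx] = t
    return finish

print(srpt([(0, 5), (1, 1)]))  # → {1: 2.0, 0: 6.0}: the short job jumps ahead
```

Without preemption the short job would finish at time 6 instead of 2; this is the kind of responsiveness gain, here without any migration of work between processors, that the paper quantifies on real workload traces.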
As databases increasingly integrate non-textual information, it is becoming necessary to support efficient similarity searching in addition to range searching. Recently, declustering techniques have been proposed for improving the performance of similarity searches through parallel I/O. In this paper, we propose a new scheme that provides good declustering for similarity searching. In particular, it performs global declustering as opposed to local declustering, exploits the availability of extra disks, and does not limit the partitioning of the data space. Our technique is based on the Cyclic declustering schemes that were developed for range and partial match queries. We establish, in general, that Cyclic declustering techniques outperform previously proposed techniques.
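The idea behind Cyclic declustering can be sketched in a few lines: cell (i, j) of a gridded data space is assigned to disk (i + skip * j) mod num_disks, so the cells touched by a query region spread across many disks and can be fetched in parallel. The grid, skip value, and disk count below are illustrative assumptions, not the paper's tuned parameters.

```python
# Cyclic declustering sketch for a 2D gridded data space.
def disk_of(i, j, skip, num_disks):
    """Assign grid cell (i, j) to a disk; a well-chosen skip ensures that
    nearby cells land on different disks."""
    return (i + skip * j) % num_disks

num_disks, skip = 5, 2
# Disks touched by a 3x3 query region anchored at cell (0, 0):
touched = {disk_of(i, j, skip, num_disks) for i in range(3) for j in range(3)}
print(sorted(touched))  # → [0, 1, 2, 3, 4]: all five disks serve the query
```

Because the nine cells of the query fall on all five disks, the I/O for the query proceeds at close to five-disk parallelism; a poor assignment (e.g., striping whole rows to one disk) would serialize much of it.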
Mining association rules from large databases is an important problem in data mining. Parallel algorithms are needed for this problem because the computation is very costly. However, all previously proposed parallel algorithms for mining association rules follow the conventional level-wise approach. On a shared-memory multiprocessor, they impose a synchronization in every iteration, which greatly degrades their performance. The deficiency comes from contention on the shared I/O channel when all processors access their database partitions in shared storage synchronously. We propose an asynchronous algorithm, APM, for mining association rules on shared-memory multiprocessors. All participating processors in APM generate candidates and count their supports independently, without synchronization. Furthermore, it can finish the computation with less I/O than the level-wise approach requires. The algorithm has been implemented on a Sun Enterprise 4000 multiprocessor with 12 nodes. The experiments show that APM outperforms the previously proposed synchronous algorithms.
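For reference, one counting pass of the synchronous level-wise (Apriori-style) baseline that APM is contrasted with can be sketched as follows; the tiny dataset and function names are illustrative. In the level-wise approach, every such pass over the data must complete on all processors before the next candidate level is generated, which is the synchronization point the abstract identifies.

```python
from itertools import combinations

# One level-wise pass: count the support of each candidate k-itemset by
# scanning every transaction. In a parallel level-wise miner, all
# processors must finish this scan before level k+1 candidates exist.
def count_supports(transactions, candidates):
    counts = {c: 0 for c in candidates}
    for t in transactions:
        tset = set(t)
        for c in candidates:
            if set(c) <= tset:
                counts[c] += 1
    return counts

transactions = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
candidates = list(combinations(["a", "b", "c"], 2))  # level k = 2
print(count_supports(transactions, candidates))
# → {('a', 'b'): 2, ('a', 'c'): 3, ('b', 'c'): 3}
```

APM's contribution is to break this lock-step structure so that processors generate and count candidates asynchronously, which reduces contention on the shared I/O channel.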