The classic balls-into-bins game considers the experiment of placing m balls independently and uniformly at random (i.u.r.) into n bins. For m = n, it is well known that the maximum load, i.e., the number of balls in the fullest bin, is Theta(log n / log log n), with high probability. It is also known (see [S2]) that a maximum load of O(m/n) can be obtained for all m >= n if each ball is allocated to one (suitably chosen) of two (i.u.r.) bins. Stemann presents a distributed algorithm to find the "suitable" bin for each ball. The algorithm uses r communication rounds to achieve a maximum load of max{(log n)^{1/r}, O(m/n)}, with high probability. Adler et al. [acmR] show that Stemann's protocol is optimal up to a constant factor for constant r. In this paper we extend the above results in two directions: we generalize the lower bound to arbitrary r <= log log n. This implies that Stemann's protocol is optimal for all r. Our key result is a generalization of Stemann's upper bound to weighted balls: Let W_A (resp. W_M) denote the average (resp. maximum) weight of the balls, and let Delta = W_A / W_M. Then the optimal maximum load is Omega(m/n * W_A + W_M). We present a protocol that achieves a maximum load of gamma * (m/n * W_A + W_M) using O(log log n / log(gamma * (m/n * Delta + 1))) communication rounds. For uniform weights this matches the results of Stemann. In particular, we achieve a load of O(m/n * W_A + W_M) using log log n communication rounds, which is optimal up to a constant factor. An extension of our lower bound shows that our algorithm also reaches a load within a constant factor of the optimal load in the case of weighted balls. All balls-into-bins games model load balancing problems: the balls are jobs, the bins are resources, and the task is to allocate the jobs to the resources in such a way that the maximum load is minimized. Our extension to weighted balls allows us to extend previous bounds to models with weighted jobs.
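The two-choice allocation described in the abstract (each ball samples two bins i.u.r. and joins the less loaded one) can be illustrated with a toy sequential simulation; this is a sketch of the basic process only, not Stemann's distributed protocol, and the function names are illustrative:

```python
import random

def one_choice_max_load(m, n, seed=0):
    """Baseline: each of m unit-weight balls goes into a single i.u.r. bin."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        load[rng.randrange(n)] += 1
    return max(load)

def two_choice_max_load(m, n, seed=0):
    """Each ball samples two bins i.u.r. and is placed in the currently
    less loaded of the two (ties go to the first sample)."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        target = i if load[i] <= load[j] else j
        load[target] += 1
    return max(load)
```

For m = n, the one-choice maximum load is Theta(log n / log log n) with high probability, whereas two choices bring it down exponentially, to O(log log n); running both functions for n = m = 10^5 or so makes the gap visible.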
ISBN: (print) 1581130864
Previous techniques for out-of-order execution and speculative execution use modified versions of Tomasulo's algorithm and a reorder buffer. An extension of these algorithms to multithreading is necessary in order to issue instructions dynamically from multiple instruction streams while keeping the machine's state consistent. We present an architecture that has a wide instruction-issue rate and can issue from multiple threads at a time. This asynchronous architecture has a mechanism to dynamically resolve data dependences and executes instructions out of order and speculatively, without any special help from an optimizing compiler. Instructions from various threads are interleaved simultaneously to dynamically exploit as much ILP as the architecture can provide. Consistency of state is maintained by precise interrupts and in-order commitment of instructions for all threads in execution.
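The in-order commitment that preserves precise state can be illustrated with a toy reorder buffer: instructions enter in program order, may complete out of order, but only the completed prefix retires. The class and method names below are illustrative, not taken from the paper:

```python
from collections import deque

class ReorderBuffer:
    """Toy sketch of in-order commit: issue in program order, complete
    out of order, retire (commit) strictly in order so that architectural
    state always reflects a precise point in the program."""

    def __init__(self):
        self.rob = deque()  # entries in program order: {"tag", "done"}

    def issue(self, tag):
        self.rob.append({"tag": tag, "done": False})

    def complete(self, tag):
        # Execution may finish in any order.
        for e in self.rob:
            if e["tag"] == tag:
                e["done"] = True

    def commit(self):
        # Retire only the longest completed prefix, in program order.
        retired = []
        while self.rob and self.rob[0]["done"]:
            retired.append(self.rob.popleft()["tag"])
        return retired
```

For example, if A, B, C are issued in order and B finishes first, nothing commits until A also finishes; a multithreaded design would keep such an in-order retirement point per thread.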
Many computationally intensive programs, such as those for differential equations, spatial interpolation, and dynamic programming, spend a large portion of their execution time in multiply-nested loops that have a regular stencil of data dependences. Tiling is a well-known optimization that improves performance on such loops, particularly for computers with a multi-level hierarchy of parallelism and memory. Most previous work on tiling restricts the tile shape to be rectangular. Our previous work and its extension by Desprez, Dongarra, Rastello and Robert showed that for doubly nested loops, using parallelograms can improve parallel execution time by decreasing the idle time, the time that a processor spends waiting for data or synchronization. In this paper, we extend that work to more deeply nested loops, as well as to more complex loop bounds. We introduce a model in which determining the execution time of a tiling is equivalent in complexity to linear programming. We then identify a sub-class of these loops, the rectilinear iteration spaces, for which we derive a closed-form formula for the execution time. This formula can be used by a compiler to predict the execution time of a loop nest. We then derive the tile shape that minimizes this formula. Using the duality property of linear programming, we also study how the longest path of dependent tiles within a rectilinear iteration space changes with the tile shape. Finally, we observe that the execution time of a rectilinear iteration space depends on the slope of only four of the facets defining the iteration space, independent of its dimensionality.
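As a concrete illustration of (rectangular) tiling on a doubly nested loop with a regular dependence stencil, the following sketch runs the same recurrence untiled and tile by tile; the recurrence and the lexicographic tile order are illustrative assumptions, and the paper's point is precisely that non-rectangular (parallelogram) tiles can further reduce idle time in a parallel schedule:

```python
def sweep_untiled(n):
    # Toy 2D recurrence with a regular stencil of dependences:
    # a[i][j] depends on a[i-1][j] (north) and a[i][j-1] (west).
    a = [[1] * n for _ in range(n)]
    for i in range(1, n):
        for j in range(1, n):
            a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a

def sweep_tiled(n, t):
    # Same computation, traversed in rectangular t x t tiles.
    # Lexicographic order over tile coordinates respects the inter-tile
    # dependences here, because both dependences point north/west.
    a = [[1] * n for _ in range(n)]
    for ii in range(1, n, t):
        for jj in range(1, n, t):
            for i in range(ii, min(ii + t, n)):
                for j in range(jj, min(jj + t, n)):
                    a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a
```

Both traversals compute identical results; what tiling changes is the order of the iterations, and hence locality and the shape of the parallel wavefront.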
This paper deals with data management for parallel and distributed systems in which the computing nodes are connected by a relatively sparse network. We present the DIVA (Distributed Variables) library that provides fully transparent access to global variables, i.e., shared data objects, from the individual nodes in the network. The current implementations are based on mesh-connected massively parallel computers. The data management strategies implemented in the library use a non-standard approach based on a randomized but locality-preserving embedding of `access trees' into the physical network. The access tree strategy was previously analyzed only in a theoretical model using competitive analysis, where it was shown that the strategy produces minimal network congestion up to small factors. In this paper, the access tree strategy is evaluated experimentally. We test several variations of this strategy on three different applications of parallel computing: matrix multiplication, bitonic sorting, and Barnes-Hut N-body simulation. We compare the congestion and the execution time of the access tree strategy and its variations with a standard caching strategy that uses a fixed home for each data object. Additionally, we compare against hand-optimized message passing strategies producing minimal communication overhead. First, we see that the execution time of the applications depends heavily on the congestion produced by the different data management strategies. Second, we see that the access tree strategy clearly outperforms the fixed home strategy and comes reasonably close to the performance of the hand-optimized message passing strategies. In particular, the larger the network, the greater the advantage of the access tree strategy over the fixed home strategy.
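The baseline `fixed home' strategy that the access trees are compared against can be sketched in a few lines; the class and the toy hash function are illustrative assumptions, not DIVA's actual interface:

```python
class FixedHomeDirectory:
    """Sketch of the fixed-home strategy: each global variable lives at a
    single home node chosen by hashing its name, and every access is routed
    to that node regardless of where the accessing node sits in the network.
    This is the locality that a locality-preserving access tree embedding
    tries to recover."""

    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.store = [dict() for _ in range(n_nodes)]

    def home(self, name):
        # Deterministic toy hash (built-in str hashing is randomized per run).
        return sum(name.encode()) % self.n_nodes

    def write(self, name, value):
        self.store[self.home(name)][name] = value

    def read(self, name):
        return self.store[self.home(name)].get(name)
```

With a fixed home, a variable accessed mostly by one distant node still generates traffic across the whole network on every access, which is why congestion grows with network size relative to the access tree strategy.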
Sparse matrix computations are ubiquitous in science and engineering. In particular, solving a sparse linear system of equations is the bottleneck in many computations. Of the many available algorithms, some are easy to parallelize, because they reduce to sparse-matrix-vector multiplication, for which good algorithms based on graph partitioning exist. But others, which are widely used sequentially because of their numerical properties, remain hard to parallelize scalably. In this talk we highlight the challenges of parallelizing two widely used methods: sparse Gaussian elimination with pivoting, and multigrid for linear systems arising from solid mechanics problems on irregular meshes. In both cases our current algorithms are among the fastest available (the first getting parallel efficiencies up to 20% on 512 processors, and the latter up to 50% on 960 processors), but many algorithm design and performance analysis problems remain. Both these problems are much more challenging to parallelize than their simpler counterparts (sparse Cholesky, and multigrid on regular meshes, which we will briefly review), and to some extent the parallelizability of these simpler algorithms provides `upper bounds' for us. For both our algorithms we exploit certain tradeoffs between scalability and `accuracy,' so that the parallel algorithms are not ones we would run sequentially. Here are some of the open problems that we plan to discuss in the case of sparse Gaussian elimination.
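The sparse-matrix-vector product named above as the comparatively easy-to-parallelize kernel can be sketched in CSR (compressed sparse row) form; each row's dot product is independent, which is exactly what graph-partitioning-based parallelization exploits. The function name and the plain-list representation are illustrative:

```python
def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a sparse matrix A stored in CSR format:
    indptr[r]..indptr[r+1] delimits row r's nonzeros, whose column
    indices and values are in `indices` and `data`. Rows are independent,
    so each could be assigned to a different processor."""
    n = len(indptr) - 1
    y = [0.0] * n
    for row in range(n):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y[row] = s
    return y
```

For example, A = [[2,0,1],[0,3,0],[4,0,5]] is stored as indptr=[0,2,3,5], indices=[0,2,1,0,2], data=[2,1,3,4,5]; Gaussian elimination with pivoting, by contrast, creates sequential dependences between rows that make this kind of row-parallel decomposition inapplicable.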