This paper deals with data management for parallel and distributed systems in which the computing nodes are connected by a relatively sparse network. We present the DIVA (Distributed Variables) library, which provides fully transparent access to global variables, i.e., shared data objects, from the individual nodes in the network. The current implementations are based on mesh-connected massively parallel computers. The data management strategies implemented in the library use a non-standard approach based on a randomized but locality-preserving embedding of `access trees' into the physical network. The access tree strategy was previously analyzed only in a theoretical model using competitive analysis, where it was shown that the strategy produces minimal network congestion up to small factors. In this paper, we evaluate the access tree strategy experimentally. We test several variations of this strategy on three parallel applications: matrix multiplication, bitonic sorting, and Barnes-Hut N-body simulation. We compare the congestion and the execution time of the access tree strategy and its variations with a standard caching strategy that uses a fixed home for each data object. Additionally, we compare against hand-optimized message-passing strategies that produce minimal communication overhead. First, we observe that the execution time of the applications depends heavily on the congestion produced by the different data management strategies. Second, we see that the access tree strategy clearly outperforms the fixed-home strategy and comes reasonably close to the performance of the hand-optimized message-passing strategies. In particular, the larger the network, the greater the advantage of the access tree strategy over the fixed-home strategy.
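To make the contrast between the two caching strategies concrete, the following is a minimal sketch, not the DIVA API: it compares a fixed-home placement (hash each object to one static node) with a simplified access-tree lookup path on a 2D mesh, where each enclosing submesh has a pseudo-randomly chosen representative for the object. All names, the hash, and the hierarchy construction are illustrative assumptions rather than the library's actual implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

struct Node { int x, y; };                 // a processor in a meshSize x meshSize mesh

// Fixed-home strategy: every object is mapped to one statically chosen node,
// so all remote accesses converge on that node regardless of where they originate.
Node fixedHome(std::uint64_t objectId, int meshSize) {
    std::size_t h = std::hash<std::uint64_t>{}(objectId);
    return { static_cast<int>(h % meshSize),
             static_cast<int>((h / meshSize) % meshSize) };
}

// Access-tree strategy (simplified): the mesh is decomposed recursively into
// submeshes; for a given object, each submesh has a pseudo-randomly chosen
// representative node. A request issued at `origin` climbs this hierarchy,
// so communication stays inside the smallest submesh that already holds a copy.
std::vector<Node> accessPath(std::uint64_t objectId, Node origin, int meshSize) {
    std::vector<Node> path{ origin };      // leaf of the access tree
    for (int side = 2; side <= meshSize; side *= 2) {
        int bx = (origin.x / side) * side; // corner of the enclosing submesh
        int by = (origin.y / side) * side;
        std::size_t h = std::hash<std::uint64_t>{}(
            objectId ^ (static_cast<std::uint64_t>(bx) << 32) ^ static_cast<std::uint64_t>(by));
        path.push_back({ bx + static_cast<int>(h % side),
                         by + static_cast<int>((h / side) % side) });
    }
    return path;                           // origin -> ... -> root representative
}

int main() {
    const int meshSize = 16;               // 16 x 16 mesh of processors
    Node home = fixedHome(42, meshSize);
    std::cout << "fixed home of object 42: (" << home.x << "," << home.y << ")\n";
    for (Node n : accessPath(42, {5, 9}, meshSize))
        std::cout << "access-tree hop: (" << n.x << "," << n.y << ")\n";
}
```

In this simplified picture, a read that can be served inside a small submesh never leaves it, which is the locality property the randomized embedding is designed to preserve; the fixed-home mapping, by contrast, may route every access across the whole mesh.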
ISBN (print): 9781581134094
A new block algorithm for the triangularization of regular or singular matrices of dimension m × n is proposed. Taking advantage of fast block multiplication algorithms, it achieves the best known sequential complexity O(m^{ω−1} n), where ω is the exponent of matrix multiplication, for any dimensions and any rank. Moreover, the block strategy improves locality with respect to previous algorithms, as exhibited by practical performance.
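As a hedged sketch of where the O(m^{ω−1} n) bound typically comes from (assuming m ≤ n, m a power of two, and a divide-and-conquer split over blocks of m/2 rows, which the abstract does not spell out):

```latex
% Triangularizing each half recursively and combining the halves with one
% fast block multiplication of an (m/2) x (m/2) block by an (m/2) x n block
% yields the recurrence
\[
  T(m, n) \;=\; 2\,T\!\left(\tfrac{m}{2},\, n\right)
            \;+\; O\!\left(\left(\tfrac{m}{2}\right)^{\omega - 1} n\right),
\]
% which, since \omega > 2, is dominated by the top level and solves to
\[
  T(m, n) \;=\; O\!\left(m^{\omega - 1} n\right).
\]
```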