In order to solve a problem in parallel, we must undertake the fundamental step of splitting the computational tasks into parts, i.e. decomposing the problem. Not every decomposition, however, leads to a parallel algorithm with the highest performance. This issue becomes even more important when complex parallel algorithms must be developed for hybrid or heterogeneous architectures. We present an innovative approach which starts from a decomposition of the problem into parts (sub-problems). These parts are regarded as elements of an algebraic structure and are related to each other according to a suitably defined dependency relationship. The main outcome of this framework is a set of block matrices (dependency, decomposition, memory access and execution) which highlight fundamental characteristics of the corresponding algorithm, such as its inherent parallelism and its sources of overhead. We provide a mathematical formulation of this approach, and we perform a feasibility analysis of the performance of a parallel algorithm in terms of its time complexity and scalability. We compare our results with the standard expressions for speedup, efficiency, overhead, and so on. Finally, we show how the multilevel structure of this framework eases the choice of the abstraction level (both for the problem decomposition and for the algorithm description) in order to determine the granularity of the tasks within the performance analysis. This feature helps in better understanding the mapping of parallel algorithms onto novel hybrid and heterogeneous architectures.
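For reference, the standard expressions mentioned at the end of the abstract are usually the textbook definitions below; the notation here (T_1, T_p, S_p, E_p, O_p) is illustrative and not necessarily the paper's own.

```latex
% Textbook performance metrics (illustrative notation, not the paper's):
% T_1 = best sequential time, T_p = parallel time on p processors.
\[
S_p = \frac{T_1}{T_p}, \qquad
E_p = \frac{S_p}{p} = \frac{T_1}{p\,T_p}, \qquad
O_p = p\,T_p - T_1,
\]
% where S_p is the speedup, E_p the efficiency, and O_p the total
% parallel overhead accumulated over the p processors.
```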
In this paper we address the challenge of metacomputing with two distant parallel computers linked by a slow network and running the numerical approximation of two sets of coupled PDEs. Several software tools are available for coupling codes, and large-scale computing on a network of parallel computers seems to be mature from a computer science point of view. From an algorithmic point of view, the key to obtaining parallel efficiency is the ability to overlap communication with computation: a priori, the speed of communication between the processors that run the two different codes must be of the same order as that between processors that run the same code in parallel. However, a local network of processors is still faster than a long-distance network used for metacomputing by at least one or two orders of magnitude. In this paper, to overcome this limitation, we study some new adaptive time-marching schemes for coupling codes so that efficient metacomputing may be obtained. We focus on stability and accuracy issues in order to minimize the communication processes and to define the conditions under which our schemes are numerically efficient. We give several examples of applications chosen as representative test cases for the numerical validation of our algorithms. Finally, efficient metacomputing with two distant computers linked by a slow network is demonstrated for an application in combustion. (C) 2000 Academic Press.
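To make the communication-reduction idea concrete, here is a minimal toy sketch, not the schemes studied in the paper: two "codes" march a coupled pair of ODEs but exchange their coupling values only every `exchange_every` steps, holding the last received value in between. All names and the model problem are purely illustrative; the trade-off between exchange frequency and accuracy is the kind of issue the adaptive schemes are designed to control.

```python
# Toy illustration (not the paper's scheme): two explicit-Euler "codes"
# integrate the coupled system  du/dt = -u + v,  dv/dt = -v + u,
# but exchange their coupling values only every `exchange_every` steps,
# mimicking a slow inter-machine network.  Between exchanges each code
# marches with the last value it received (a zeroth-order hold).

def coupled_march(dt=1e-3, n_steps=5000, exchange_every=50):
    u, v = 1.0, 0.0                     # states owned by code A and code B
    u_seen_by_B, v_seen_by_A = u, v     # stale copies held across the network
    for step in range(n_steps):
        if step % exchange_every == 0:  # expensive long-distance exchange
            u_seen_by_B, v_seen_by_A = u, v
        # each code advances one local step using the (possibly stale) copy
        u += dt * (-u + v_seen_by_A)
        v += dt * (-v + u_seen_by_B)
    return u, v

if __name__ == "__main__":
    # Tight coupling (exchange every step) vs. loose coupling (every 50 steps):
    # the loose run communicates 50x less; comparing the outputs shows the
    # accuracy cost of using stale coupling data.
    print("exchange every step :", coupled_march(exchange_every=1))
    print("exchange every 50   :", coupled_march(exchange_every=50))
```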
The computational cost, in the bit model of computation, of evaluating a real function f(x) at a point x is analyzed as the number d of correct digits of the result increases asymptotically. We want to study how the cost depends on x, in particular when x approaches a critical point of the function f. We investigate the hypotheses under which it is possible to give upper bounds on the cost as functions of "separated variables" d and x, that is, as products of two functions, each of a single variable. We examine in particular the case of elementary functions.
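The "separated variables" form of the bound can be written as follows; the symbols C_f, g and h are illustrative and not taken from the paper.

```latex
% Separated-variables cost bound (illustrative notation):
% C_f(x,d) = bit cost of computing f(x) to d correct digits.
\[
C_f(x,d) \;\le\; g(d)\, h(x),
\]
% i.e. the dependence on the precision d and on the evaluation point x
% factors into two single-variable functions, even as x approaches a
% critical point of f.
```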