Parallel analogs of the variants of the incomplete Cholesky-conjugate gradient method and the modified incomplete Cholesky-conjugate gradient method for solving elliptic equations on uniform triangular and unstructure...
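The entry above is truncated, but the preconditioned iteration it names follows a standard pattern: build an incomplete Cholesky factor L restricted to the sparsity pattern of A and use M = L L^T inside the conjugate gradient method. Below is a minimal serial sketch in Python/NumPy, assuming a dense symmetric positive definite matrix A; the paper's parallel and modified (MIC) variants are not reproduced, and all function names are illustrative only.

```python
import numpy as np

def incomplete_cholesky(A):
    """IC(0): a Cholesky factor restricted to the sparsity pattern of tril(A)."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:          # update only entries already in the pattern
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def ic_cg(A, b, tol=1e-8, maxit=1000):
    """Conjugate gradients preconditioned by M = L L^T (serial IC-CG sketch)."""
    L = incomplete_cholesky(A)

    def apply_Minv(r):
        return np.linalg.solve(L.T, np.linalg.solve(L, r))

    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The modified (MIC) variant differs mainly in compensating dropped fill-in on the diagonal; that modification is not shown here.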
We describe the construction of parallel iterative solvers for finite-element approximations of the Navier-Stokes equations on unstructured grids using domain decomposition methods. The iterative method used is FGMRES, preconditioned by a parallel adaptation of a block preconditioner recently proposed by Kay et al. The parallelization is achieved by adapting the technology of our domain decomposition solver DOUG (previously used for scalar problems) to block systems. The iterative solver is applied to shifted linear systems that arise in eigenvalue calculations. To illustrate the performance of the solver, we compare several strategies, both theoretically and practically, for the calculation of the eigenvalues of large sparse non-symmetric matrices arising in the assessment of the stability of flow past a cylinder. Copyright (C) 2003 John Wiley & Sons, Ltd.
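As a concrete illustration of where such shifted systems come from, here is a hedged serial sketch of shift-invert Arnoldi: eigenvalues of A near a shift sigma are recovered from Ritz values of (A - sigma*I)^{-1}, and every Arnoldi step requires one shifted linear solve. In the paper that solve is performed by the parallel preconditioned FGMRES solver; in this NumPy sketch a dense direct solve stands in for it, and the function name and parameters are assumptions.

```python
import numpy as np

def shift_invert_arnoldi(A, sigma, m=30):
    """Arnoldi on (A - sigma*I)^{-1}: each step needs a shifted linear solve.
    A dense direct solve stands in for the paper's preconditioned FGMRES."""
    n = A.shape[0]
    shifted = A - sigma * np.eye(n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    v = np.random.default_rng(0).standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = np.linalg.solve(shifted, V[:, j])      # the shifted linear system
        for i in range(j + 1):                     # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                    # breakdown: Krylov space exhausted
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    mu = np.linalg.eigvals(H[:m, :m])              # Ritz values of the inverted operator
    return sigma + 1.0 / mu                        # eigenvalue estimates of A near sigma
```

A Ritz value mu of the inverted operator corresponds to an eigenvalue sigma + 1/mu of A, so the eigenvalues closest to the shift converge first.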
The paper describes the implementation of the Successive Overrelaxation (SOR) method on an asynchronous multiprocessor computer for solving large linear systems. The parallel algorithm is derived by dividing the serial SOR method into noninterfering tasks, which are then combined with an optimal schedule on a feasible number of processors. The important features of the algorithm are: (i) it achieves a speedup S_p ≅ O(N/3) and an efficiency E_p ≅ 2/3 using p = [N/2] processors, where N is the number of equations; (ii) it contains a high level of inherent parallelism, while the convergence theory of the parallel SOR method remains the same as that of its sequential counterpart; and (iii) it may be modified to use block methods in order to minimise the overhead due to communication and synchronisation of the processors.
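For reference, a minimal serial SOR sketch (dense NumPy storage assumed); the paper's contribution is the scheduling of such sweeps into noninterfering tasks, which is not reproduced here.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-8, maxit=5000):
    """Forward SOR sweeps:
    x_i <- (1 - omega)*x_i + omega*(b_i - sum_{j != i} a_ij x_j)/a_ii.
    With omega = 1 this reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        for i in range(n):
            sigma = A[i, :] @ x - A[i, i] * x[i]   # uses already-updated x_j for j < i
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x
```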
We present a new parallel implementation of the Gauss-Seidel iteration for solving systems of linear equations, improving the results presented in two recent papers.
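One common way to expose parallelism in Gauss-Seidel is a red-black ordering, in which all points of one colour can be updated simultaneously. The sketch below shows this for a model Poisson problem; it is only an illustration of the idea, not necessarily the scheme of the cited implementation.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=200):
    """Gauss-Seidel for the 5-point stencil of -laplace(u) = f with red-black
    ordering: all 'red' interior points (i+j even) are mutually independent and
    could be updated in parallel, then all 'black' points (i+j odd)."""
    for _ in range(sweeps):
        for colour in (0, 1):
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1]
                                          + h * h * f[i, j])
    return u
```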
In this note we improve results presented in the paper: N.M. Missirlis, Scheduling parallel iterative methods on multiprocessor systems, Parallel Computing 5 (1987) 295–302.
We present convergence and comparison theorems on parallel iterative multisplitting methods with different weighting schemes. In particular, we show that certain Gauss-Seidel multisplittings cannot converge faster than the usual Gauss-Seidel method. We also give numerical results on a 64-processor local-memory computer. These experiments show that the 'naive' use of multisplittings can easily produce unsatisfactory results on parallel computers with more than just a few processors.
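A minimal sketch of one weighted multisplitting step, assuming dense NumPy storage; the helper that builds block-Jacobi-style splittings is illustrative only and is included just to make the weighting matrices E_k concrete.

```python
import numpy as np

def multisplitting_step(A, b, x, splittings, weights):
    """One step of  x <- sum_k E_k M_k^{-1} (N_k x + b),  where A = M_k - N_k for
    every k and the nonnegative diagonal weights satisfy sum_k E_k = I.
    The K local solves are independent, which is where the parallelism lies."""
    x_new = np.zeros_like(x)
    for (M, N), E in zip(splittings, weights):
        x_new += E @ np.linalg.solve(M, N @ x + b)
    return x_new

def block_jacobi_splittings(A, blocks):
    """Illustrative splittings: processor k keeps its own diagonal block of A
    (plus the diagonal elsewhere) in M_k; the rest goes to N_k = M_k - A.
    E_k selects the rows owned by block k, so the weights sum to the identity."""
    n = A.shape[0]
    splittings, weights = [], []
    for rows in blocks:
        M = np.diag(np.diag(A)).astype(float)
        M[np.ix_(rows, rows)] = A[np.ix_(rows, rows)]
        splittings.append((M, M - A))
        E = np.zeros((n, n))
        E[rows, rows] = 1.0
        weights.append(E)
    return splittings, weights
```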
The Projected Aggregation methods generate the new point x^(k+1) as the projection of x^(k) onto an "aggregate" hyperplane, usually arising from linear combinations of the hyperplanes defined by the blocks. The aim of this paper is to improve the speed of convergence of a particular kind of them by projecting the directions given by the blocks onto the aggregate hyperplane defined in the last iteration. For that purpose we apply the scheme introduced in "A new method for solving large sparse systems of linear equations using row projections" [11], for a given block projection algorithm, to some new methods introduced here whose main feature is that the projections do not need to be computed accurately. Adaptive splitting schemes are applied which take into account the structure and conditioning of the matrix. It is proved that these new, highly parallel algorithms improve the original convergence rate, and numerical results are presented which show their computational efficiency.
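A hedged sketch of a basic projected aggregation step, written as an accelerated-Cimmino-type update with exact block projections computed by least squares; the inexact projections and adaptive splittings that are the actual subject of the paper are not reproduced.

```python
import numpy as np

def pam_step(A_blocks, b_blocks, x, weights):
    """One projected-aggregation step: project x onto each block's affine set
    A_i y = b_i, aggregate the corrections d_i with weights w_i, then project x
    onto the aggregate hyperplane {y : <d, y - x> = sum_i w_i ||d_i||^2}
    with d = sum_i w_i d_i."""
    d = np.zeros_like(x)
    num = 0.0
    for Ai, bi, wi in zip(A_blocks, b_blocks, weights):
        di = np.linalg.lstsq(Ai, bi - Ai @ x, rcond=None)[0]   # = A_i^+ (b_i - A_i x)
        d += wi * di
        num += wi * (di @ di)
    lam = num / (d @ d)       # step length >= 1: the acceleration over plain Cimmino
    return x + lam * d
```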
Asynchronous iterative algorithms can remove much of the data dependence associated with synchronization barriers. The reported study investigates the potential of asynchronous iterative algorithms by quantifying the critical parallel-processing factors. Specifically, a time-complexity-based analysis is used to understand the inherent interdependencies between computation and communication overheads for the parallel asynchronous algorithm. The results show not only that the computational experiments closely match the analytical results, but also that the use of asynchronous iterative algorithms can be beneficial for a vast number of parallel processing environments. The choice of local stopping criteria, which is critically important to overall system performance, is investigated in depth. (C) 1999 Academic Press.
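A small thread-based sketch of an asynchronous block iteration with purely local stopping criteria, assuming a partition `blocks` of the row indices; it is only meant to make barrier-free updates and local stopping concrete, and convergence requires the usual conditions on A (for example, an H-matrix).

```python
import numpy as np
import threading

def async_block_jacobi(A, b, blocks, tol=1e-8, max_sweeps=10000):
    """Asynchronous block Jacobi: each thread repeatedly re-solves its own block
    equations using whatever values of the other unknowns it currently sees in
    the shared vector x, with no synchronisation barrier between sweeps."""
    x = np.zeros_like(b, dtype=float)

    def worker(rows):
        Aii = A[np.ix_(rows, rows)]
        for _ in range(max_sweeps):
            r_local = b[rows] - A[rows, :] @ x          # reads possibly stale values
            if np.linalg.norm(r_local) < tol:           # purely local stopping criterion
                break
            x[rows] += np.linalg.solve(Aii, r_local)    # writes only its own block

    threads = [threading.Thread(target=worker, args=(rows,)) for rows in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

A purely local criterion can stop a block too early while its neighbours are still changing, which is exactly the trade-off the paper analyses.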
A class of Generalized Approximate Inverse Matrix (GAIM) techniques, based on the concept of LU-sparse factorization procedures, is introduced for explicitly computing approximate inverses of large sparse unsymmetric matrices of irregular structure, without inverting the decomposition factors. Explicit preconditioned iterative methods, in conjunction with modified forms of the GAIM techniques, are presented for numerically solving initial/boundary value problems on multiprocessor systems. The application of the new methods to linear boundary-value problems is discussed and numerical results are given.
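The GAIM construction itself is not reproduced here; as a generic stand-in, the sketch below builds a least-squares (SPAI-style) approximate inverse on the pattern of A and uses it as an explicit preconditioner. It illustrates why explicit methods suit multiprocessor systems: the preconditioner is applied by matrix-vector multiplication only.

```python
import numpy as np

def approximate_inverse(A, pattern=None):
    """Column-by-column least-squares approximate inverse: for each column j,
    minimise ||A m_j - e_j||_2 over entries of m_j restricted to a sparsity
    pattern (here the pattern of A).  Generic SPAI-style construction, not the
    LU-based GAIM factorisation of the paper."""
    n = A.shape[0]
    pattern = (A != 0) if pattern is None else pattern
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        rows = np.nonzero(pattern[:, j])[0]          # allowed nonzeros in column j
        ej = np.zeros(n)
        ej[j] = 1.0
        mj, *_ = np.linalg.lstsq(A[:, rows], ej, rcond=None)
        M[rows, j] = mj
    return M

def preconditioned_richardson(A, b, M, tol=1e-8, maxit=1000):
    """Explicitly preconditioned Richardson iteration x <- x + M (b - A x);
    converges when the spectral radius of (I - M A) is below one."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x += M @ r                                   # preconditioner applied by multiplication only
    return x
```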