Dissipative particle dynamics (DPD) and its generalization, the fluid particle model (FPM), represent the 'fluid particle' approach for simulating fluid-like behavior at the mesoscale. Unlike particles in the molecular dynamics (MD) method, a 'fluid particle' can be viewed as a 'droplet' consisting of liquid molecules. In the FPM, 'fluid particles' interact by both central and non-central, short-range forces of conservative, dissipative and Brownian character. In comparison to MD, the FPM method in three dimensions requires two to three times more memory and a three times greater communication overhead. The computational load per step per particle is comparable to MD because a shorter interaction range is allowed between 'fluid particles' than between MD atoms. The classical linked-cells technique and decomposition of the computational box into strips allow for rapid modification of the code and for implementing non-cubic computational boxes. We show that the efficiency of the FPM code depends strongly on the number of particles simulated, the geometry of the box and the computer architecture. We give a few examples from long FPM simulations involving up to 8 million fluid particles and 32 processors. Results from three-dimensional FPM simulations of phase separation in a binary fluid and dispersion of a colloidal slab are presented. A scaling law for symmetric quench in phase separation has been properly reconstructed. We also show that the microstructure of the dispersed fluid depends strongly on the contrast between the kinematic viscosities of this fluid phase and the bulk phase. This FPM code can be applied to simulating mesoscopic flow dynamics in capillary pipes or critical flow phenomena in narrow blood vessels. Copyright (C) 2002 John Wiley & Sons, Ltd.
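The linked-cells bookkeeping mentioned above can be sketched as follows. This is a generic, minimal Python illustration of the technique (cell construction, 27-cell stencil, minimum-image distance test), not the authors' parallel FPM code; the cell size equal to the cutoff and the periodic cubic box are assumptions of the sketch.

```python
# Minimal linked-cells neighbour search for short-range pair interactions
# (generic sketch; not the FPM implementation described in the abstract).
import numpy as np

def build_cells(positions, box, cutoff):
    """Assign each particle to a cell whose side is at least the cutoff."""
    ncell = np.maximum((box // cutoff).astype(int), 1)
    cell_size = box / ncell
    idx = (positions // cell_size).astype(int) % ncell
    cells = {}
    for i, c in enumerate(map(tuple, idx)):
        cells.setdefault(c, []).append(i)
    return cells, ncell

def neighbour_pairs(positions, box, cutoff):
    """Yield particle pairs closer than the cutoff using a 27-cell stencil."""
    cells, ncell = build_cells(positions, box, cutoff)
    cutoff2 = cutoff * cutoff
    for (cx, cy, cz), members in cells.items():
        seen = set()                      # avoid revisiting cells when the box is small
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % ncell[0], (cy + dy) % ncell[1], (cz + dz) % ncell[2])
                    if nb in seen:
                        continue
                    seen.add(nb)
                    for i in members:
                        for j in cells.get(nb, []):
                            if j <= i:    # count each pair once
                                continue
                            r = positions[i] - positions[j]
                            r -= box * np.round(r / box)   # minimum-image convention
                            if np.dot(r, r) < cutoff2:
                                yield i, j

# Example (assumed setup):
# pos = np.random.default_rng(0).random((1000, 3)) * 10.0
# pairs = list(neighbour_pairs(pos, np.array([10.0, 10.0, 10.0]), 1.0))
```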
We investigated the usefulness of a parallel genetic algorithm for phylogenetic inference under the maximum-likelihood (ML) optimality criterion. Parallelization was accomplished by assigning each "individual" in the genetic algorithm "population" to a separate processor, so that the number of processors used was equal to the size of the evolving population (plus one additional processor for the control of operations). The genetic algorithm incorporated branch-length and topological mutation, recombination, selection on the ML score, and (in some cases) migration and recombination among subpopulations. We tested this parallel genetic algorithm with large (228 taxa) data sets of both empirically observed DNA sequence data (for angiosperms) and simulated DNA sequence data. For both observed and simulated data, search-time improvement was nearly linear with respect to the number of processors, so the parallelization strategy appears to be highly effective at improving computation time for large phylogenetic problems using the genetic algorithm. We also explored various ways of optimizing and tuning the parameters of the genetic algorithm. Under the conditions of our analyses, we did not find the best-known solution using the genetic algorithm approach before terminating each run. We discuss some possible limitations of the current implementation of this genetic algorithm as well as avenues for its future improvement.
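A minimal sketch of the "one individual per processor" idea, using Python's multiprocessing pool as a stand-in for the paper's processor assignment; the toy fitness function and Gaussian mutation below are placeholders, not the ML score or the tree operators used in the study.

```python
# Sketch of a parallel genetic algorithm: fitness of every individual is
# evaluated concurrently, one worker per individual (illustrative only).
import random
from multiprocessing import Pool

def fitness(individual):
    # Placeholder objective; the paper uses the maximum-likelihood score of a tree.
    return -sum((x - 0.5) ** 2 for x in individual)

def mutate(individual, rate=0.1):
    return [x + random.gauss(0, rate) for x in individual]

def evolve(pop_size=8, genome_len=16, generations=50):
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    with Pool(processes=pop_size) as pool:            # one worker per individual
        for _ in range(generations):
            scores = pool.map(fitness, population)    # parallel fitness evaluation
            ranked = [ind for _, ind in
                      sorted(zip(scores, population), key=lambda t: -t[0])]
            survivors = ranked[:pop_size // 2]        # truncation selection
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
```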
In recent years, there have been exciting advances in estimation methods based on Monte Carlo techniques. The particle filtering technique can cope with non-linear models as well as non-Gaussian dynamic and observation noises. It recursively constructs the conditional probability density of the state variables, with respect to all available measurements, through a random exploration of the state space by entities called particles. A weight is assigned to each particle by a Bayesian correction term based on the measurements. The main drawback of this procedure is the large number of particles needed, which limits its application to on-line filtering. A data-parallel algorithm is proposed to achieve real-time particle filtering. Extensive results are presented; they show that the accuracy of the method is preserved and that the computing times of the parallel algorithm are compatible with the real-time constraints of the most challenging applications. (C) 2002 Elsevier Science (USA).
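For reference, a minimal bootstrap particle filter (predict, Bayesian reweighting, resample) on a toy scalar model; the model and noise levels are illustrative assumptions, and the per-particle steps are exactly the operations a data-parallel implementation distributes across processors.

```python
# Bootstrap particle filter sketch (sequential importance resampling);
# the state/observation model is a toy example, not the paper's application.
import numpy as np

def particle_filter(observations, n_particles=1000, proc_std=0.5, obs_std=1.0):
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)      # samples from the prior
    estimates = []
    for y in observations:
        # Prediction: propagate each particle through a nonlinear dynamic model.
        particles = np.sin(particles) + rng.normal(0, proc_std, n_particles)
        # Correction: Bayesian reweighting by the observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2) + 1e-12
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))   # posterior-mean estimate
        # Resampling: duplicate likely particles, drop unlikely ones.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)
```

The prediction and weighting loops are independent per particle, which is why the method parallelizes naturally over the particle set.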
Some new local and parallel finite element algorithms are proposed and analyzed in this paper for eigenvalue problems. With these algorithms, the solution of an eigenvalue problem on a fine grid is reduced to the solution of an eigenvalue problem on a relatively coarse grid together with the solution of some linear algebraic systems on the fine grid, using a local and parallel procedure. A theoretical tool for analyzing these algorithms is a local error estimate, also obtained in this paper, for finite element approximations of eigenvectors on general shape-regular grids.
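A simplified illustration of the coarse-eigenproblem / fine-linear-solve splitting, here for the smallest eigenvalue of a 1D finite-difference Laplacian rather than the paper's finite element setting; the grid sizes and the single shifted solve are assumptions made for the sketch.

```python
# Two-grid idea: solve the eigenproblem only on a coarse grid, then improve
# the eigenpair with one linear solve on the fine grid (illustrative sketch).
import numpy as np

def laplacian_1d(n):
    """Standard second-order finite-difference Laplacian on (0,1) with n interior points."""
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid_eig(n_coarse=16, refine=4):
    n_fine = (n_coarse + 1) * refine - 1
    # Step 1: eigenvalue problem on the coarse grid only.
    vals, vecs = np.linalg.eigh(laplacian_1d(n_coarse))
    lam_c, u_c = vals[0], vecs[:, 0]
    # Step 2: interpolate the coarse eigenvector to the fine grid.
    xc = np.linspace(0, 1, n_coarse + 2)[1:-1]
    xf = np.linspace(0, 1, n_fine + 2)[1:-1]
    u_f = np.interp(xf, xc, u_c)
    # Step 3: one shifted linear solve on the fine grid, then a Rayleigh
    # quotient gives the corrected eigenvalue.
    Af = laplacian_1d(n_fine)
    w = np.linalg.solve(Af - lam_c * np.eye(n_fine), u_f)
    w /= np.linalg.norm(w)
    return lam_c, w @ Af @ w    # coarse and corrected eigenvalue estimates
```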
ISBN (print edition): 0769515126
We study parallel solutions to the problem of weighted multiselection: selecting r elements on given weighted ranks from a set S of n weighted elements, where an element is on weighted rank k if it is the smallest element such that the aggregated weight of all elements not greater than it in S is not smaller than k. We propose efficient algorithms on two of the most popular parallel architectures, the hypercube and the mesh. For a hypercube with p < n processors, we present a parallel algorithm running in O(n^epsilon min{r, log p}) time for p = n^(1-epsilon), 0 < epsilon < 1, which is cost-optimal when r ≥ p. Our algorithm on a sqrt(p) x sqrt(p) mesh runs in O(sqrt(p) + (n/p) log^3 p) time, which is the same as multiselection on the mesh when r ≥ log p, and thus has the same optimality as multiselection in this case.
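A plain sequential check of the weighted-rank definition above (a reference sketch only; the hypercube and mesh algorithms of the paper are not shown).

```python
# Sequential weighted selection by one sorted cumulative-weight pass.
def weighted_select(items, k):
    """items: (value, weight) pairs; return the element on weighted rank k,
    i.e. the smallest value whose cumulative weight over values <= it reaches k."""
    total = 0.0
    for value, weight in sorted(items):
        total += weight
        if total >= k:
            return value
    raise ValueError("k exceeds the total weight of the set")

# Example: weighted_select([(5, 2), (1, 1), (3, 4)], 4) -> 3
```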
ISBN (print edition): 0769515126
For a Markov cipher, the transition probability matrix is doubly stochastic. The eigenvalue of the matrix with maximum magnitude less than one plays an important role in designing a Markov cipher. This paper provides a parallel algorithm for computing this eigenvalue of the doubly stochastic matrix A of size 65535 x 65535, which comes from a shrunken Markov cipher model with 16-bit plaintext and ciphertext; an analysis of the complexity of the parallel algorithm is also given.
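The subdominant eigenvalue of a doubly stochastic matrix can be estimated by deflated power iteration, since the dominant eigenvalue 1 belongs to the all-ones vector. The dense toy sketch below illustrates that idea only; it says nothing about how the paper distributes the 65535 x 65535 matrix-vector products across processors.

```python
# Deflated power iteration for the eigenvalue of largest magnitude below one
# in a doubly stochastic matrix (generic sketch, small dense matrices only).
import numpy as np

def subdominant_eigenvalue(A, iters=200, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    ones = np.ones(n) / np.sqrt(n)       # eigenvector of the dominant eigenvalue 1
    for _ in range(iters):
        v -= (ones @ v) * ones           # project out the dominant eigenvector
        v = A @ v                        # power step (the parallelisable mat-vec)
        v /= np.linalg.norm(v)
    v -= (ones @ v) * ones
    v /= np.linalg.norm(v)
    return v @ A @ v                     # Rayleigh-quotient estimate

# Example: subdominant_eigenvalue(np.array([[0.9, 0.1], [0.1, 0.9]])) -> ~0.8
```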
ISBN (print edition): 3540437924
In this paper we present two simulated annealing algorithms (sequential and parallel) for the permutation flow shop sequencing problem with the objective of minimizing the flowtime. We propose a neighbourhood using the so-called blocks of jobs on a critical path and a specific acceptance function. We also use a lower bound on the cost function. Computer simulations on Taillard [17] and other random problems show that the performance of the proposed algorithms is comparable with the random heuristic techniques discussed in the literature. The proposed properties can be applied in any local search procedure.
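A generic simulated-annealing sketch for the permutation flow shop with the total-flowtime objective; the simple insertion neighbourhood, cooling schedule and parameters below are illustrative assumptions and do not reproduce the block-based neighbourhood or the lower bound of the paper.

```python
# Simulated annealing for the permutation flow shop, minimizing total flowtime
# (generic sketch with a random insertion move).
import math
import random

def flowtime(perm, p):
    """p[j][m]: processing time of job j on machine m; returns the sum of job completion times."""
    n_mach = len(p[0])
    comp = [0.0] * n_mach        # completion times of the previous job on each machine
    total = 0.0
    for j in perm:
        for m in range(n_mach):
            comp[m] = max(comp[m], comp[m - 1] if m else 0.0) + p[j][m]
        total += comp[-1]
    return total

def anneal(p, iters=20000, t0=50.0, alpha=0.9995, seed=0):
    rng = random.Random(seed)
    perm = list(range(len(p)))
    rng.shuffle(perm)
    best, best_cost = perm[:], flowtime(perm, p)
    cost, t = best_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(len(perm)), 2)
        cand = perm[:]
        cand.insert(j, cand.pop(i))                       # insertion move
        c = flowtime(cand, p)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            perm, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= alpha                                        # geometric cooling
    return best, best_cost

# Example (3 jobs x 2 machines): anneal([[2, 3], [4, 1], [3, 2]], iters=2000)
```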
ISBN (print edition): 0769517455
This paper presents a low communication-overhead parallel algorithm for pattern matching in biological sequences. Given such a sequence of length n and a pattern of length m, we present an algorithm with five computation/communication phases, each requiring O(n) computation time and only O(p) message units. The low communication overhead of the algorithm is essential to achieving reasonable speedups on clusters, where the interprocessor communication latency is usually higher. Previous parallel implementations use straightforward domain decomposition based on existing sequential algorithms and rely on parallel machines with a low-latency interconnection network and fast hardware support for processor synchronization.
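The basic domain-decomposition idea, with an (m-1)-character overlap so that matches straddling block boundaries are not lost and each worker returns a single result message, can be sketched as follows; this is a generic illustration, not the five-phase algorithm of the paper.

```python
# Parallel exact pattern matching by block decomposition with overlap
# (generic sketch; one result message per worker).
from multiprocessing import Pool

def find_all(args):
    text, pattern, offset = args
    hits, i = [], text.find(pattern)
    while i != -1:
        hits.append(offset + i)
        i = text.find(pattern, i + 1)
    return hits

def parallel_match(text, pattern, workers=4):
    n, m = len(text), len(pattern)
    step = (n + workers - 1) // workers
    chunks = [(text[s:s + step + m - 1], pattern, s)   # overlap of m-1 characters
              for s in range(0, n, step)]
    with Pool(workers) as pool:
        results = pool.map(find_all, chunks)           # one message per worker
    return sorted(set(i for r in results for i in r))  # dedupe matches in the overlap

if __name__ == "__main__":
    print(parallel_match("abababcabab", "abab"))       # -> [0, 2, 7]
```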
ISBN (print edition): 0780374908
Weighted multiselection requires selecting r elements from a given set of n elements, each associated with a weight, such that each element selected is on a pre-specified weighted rank, where an element is on weighted rank k if it is the smallest element such that the aggregated weight of all elements not greater than it in the set is not smaller than k. This paper presents efficient algorithms for solving this problem both sequentially and in parallel on the EREW PRAM. Our sequential algorithm solves this problem in O(n log r) time, which is optimal. Our parallel algorithm runs in O(T_1 log r) time on an EREW PRAM with 1 < p ≤ n processors, and is optimal with respect to T_1, the time complexity of single-element weighted selection using p processors. We give a parallel algorithm for single-element weighted selection using p EREW processors which runs cost-optimally in O(n/p) time for 1 < p ≤ n log log n / log n, and time-optimally in O(log n / log log n) time for n log log n / log n < p < n.
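A divide-and-conquer sketch of weighted multiselection: locate the element on the middle requested rank, partition the set around it, and recurse on each half with re-offset ranks. With a linear-time single weighted selection this recursion gives the O(n log r) behaviour mentioned above; the sorting-based helper below is only a stand-in for brevity.

```python
# Weighted multiselection by divide and conquer over the requested ranks
# (illustrative sketch; the selection helper uses sorting instead of a
# linear-time weighted selection).
def weighted_select(items, k):
    total = 0.0
    for v, w in sorted(items):
        total += w
        if total >= k:
            return v
    raise ValueError("k exceeds the total weight")

def weighted_multiselect(items, ranks):
    """items: (value, weight) pairs; ranks: sorted list of positive weighted ranks."""
    if not ranks:
        return []
    mid = len(ranks) // 2
    x = weighted_select(items, ranks[mid])                 # element on the middle rank
    left = [(v, w) for v, w in items if v < x]
    right = [(v, w) for v, w in items if v > x]
    w_lt = sum(w for _, w in left)                         # weight strictly below x
    w_le = w_lt + sum(w for v, w in items if v == x)       # weight up to and including x
    lo = [r for r in ranks if r <= w_lt]                   # ranks resolved inside `left`
    hi = [r - w_le for r in ranks if r > w_le]             # ranks resolved inside `right`
    mids = [x] * sum(1 for r in ranks if w_lt < r <= w_le) # ranks landing on x itself
    return weighted_multiselect(left, lo) + mids + weighted_multiselect(right, hi)

# Example: weighted_multiselect([(5, 2), (1, 1), (3, 4)], [1, 4, 7]) -> [1, 3, 5]
```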
This paper focuses on BSR (Broadcasting with Selective Reduction) implementations of algorithms solving basic convex polygon problems. More precisely, constant-time solutions on a linear number, max(N, M) (where N and M are the numbers of edges of the two considered polygons), of processors are described for computing the maximum distance between two convex polygons, finding the critical support lines of two convex polygons, and computing the diameter, the width of a convex polygon, and the vector sum of two convex polygons. These solutions are based on the merging-slopes technique, using one-criterion BSR operations.
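For comparison, a conventional sequential rotating-calipers routine for one of the listed problems, the diameter of a convex polygon given in counter-clockwise order; the constant-time BSR formulation of the paper is not reproduced here.

```python
# Rotating calipers: diameter of a convex polygon with CCW vertices (O(N) sketch).
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def diameter(poly):
    n = len(poly)
    if n < 2:
        return 0.0
    best, j = 0, 1
    for i in range(n):
        nxt = (i + 1) % n
        # Advance the opposite caliper while the triangle area keeps growing.
        while cross(poly[i], poly[nxt], poly[(j + 1) % n]) > cross(poly[i], poly[nxt], poly[j]):
            j = (j + 1) % n
        best = max(best, dist2(poly[i], poly[j]), dist2(poly[nxt], poly[j]))
    return best ** 0.5

# Example: diameter([(0, 0), (1, 0), (1, 1), (0, 1)]) -> ~1.414 (unit-square diagonal)
```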