We present a high-level language called SequenceL. The language allows a programmer to describe functions in terms of abstract relationships between their inputs and outputs, and the semantics of the language are capa...
The traditional zero-one principle for sorting networks states that "if a network with n input lines sorts all 2^n binary sequences into nondecreasing order, then it will sort any arbitrary sequence of n numbers into nondecreasing order". We generalize this to the situation when a network sorts almost all binary sequences and relate it to the behavior of the sorting network on arbitrary inputs. We also present an application to mesh sorting. (c) 2004 Elsevier B.V. All rights reserved.
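The principle is easy to exercise on a concrete network. The sketch below (the function names and the particular 5-comparator network are illustrative choices, not taken from the paper) checks a 4-input network against all 2^4 binary inputs, which by the zero-one principle is enough to certify that it sorts arbitrary inputs.

```python
from itertools import product

# A 4-input sorting network given as a list of comparators (i, j), i < j.
# This is a standard 5-comparator network for n = 4, used here purely as
# an example; any candidate network can be tested the same way.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(network, values):
    """Run the comparator network on a sequence, swapping out-of-order pairs."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def sorts_all_binary_inputs(network, n):
    """Zero-one check: test the network on all 2^n sequences of 0s and 1s."""
    return all(
        apply_network(network, bits) == sorted(bits)
        for bits in product((0, 1), repeat=n)
    )

if __name__ == "__main__":
    # 16 binary tests suffice, instead of checking arbitrary real inputs.
    print(sorts_all_binary_inputs(NETWORK_4, 4))  # expected: True
```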
By using a chaotic encryption-hash parallel algorithm and the semi-group property of the Chebyshev chaotic map, we propose a secure and efficient scheme for deniable authentication. The scheme is efficient, practicable and reliable, with high potential to be adopted for e-commerce. (C) 2004 Elsevier Ltd. All rights reserved.
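The semi-group property the scheme relies on is T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)), which lets two parties commute their secret indices much as exponents commute in Diffie-Hellman. A minimal numeric check, with arbitrary illustrative values of r, s and x that are not drawn from the paper:

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n evaluated at x in [-1, 1] via the
    trigonometric identity T_n(x) = cos(n * arccos(x))."""
    return math.cos(n * math.acos(x))

# Semi-group property: T_r(T_s(x)) = T_s(T_r(x)) = T_{r*s}(x).
# The values of r, s and x below are arbitrary illustrative choices.
r, s, x = 7, 11, 0.3
lhs = chebyshev(r, chebyshev(s, x))
rhs = chebyshev(s, chebyshev(r, x))
direct = chebyshev(r * s, x)
print(abs(lhs - rhs) < 1e-9, abs(lhs - direct) < 1e-9)  # expected: True True
```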
ISBN (print): 0769523129
Stabilized explicit-implicit domain decomposition (SEIDD) is a class of globally non-iterative domain decomposition methods for the numerical simulation of unsteady diffusion processes on parallel computers. By adding a communication-cost-free stabilization step to the explicit-implicit domain decomposition (EIDD) methods, the SEIDD methods achieve high stability, but with the restriction that the interface boundaries have no crossing-overs inside the domain. In this paper, we present a parallelized SEIDD algorithm with parallelism higher than the number of subdomains, eliminating the disadvantage of non-crossing-over interface boundaries at a slight computational cost.
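As a rough illustration of the explicit-implicit idea that SEIDD builds on, the sketch below advances the 1D heat equation one step with two subdomains: the interface value is predicted explicitly, after which each subdomain's implicit solve is independent and could run in parallel. This is an assumed, simplified reading of EIDD in one space dimension; the stabilization step that distinguishes SEIDD, and the extra parallelism contributed by the paper, are not reproduced.

```python
import numpy as np

def eidd_step(u, lam, m):
    """One explicit-implicit domain decomposition (EIDD) step for the 1D heat
    equation u_t = u_xx with zero Dirichlet boundaries.

    u   -- current solution on nodes 0..N as a NumPy array (boundaries included)
    lam -- dt / dx**2
    m   -- interface node splitting the domain into two subdomains
    """
    N = len(u) - 1
    u_new = u.copy()

    # 1. Explicit (forward Euler) prediction at the interface node only.
    u_new[m] = u[m] + lam * (u[m - 1] - 2.0 * u[m] + u[m + 1])

    # 2. Implicit (backward Euler) solves on the two subdomains, decoupled by
    #    the predicted interface value, so they could run in parallel.
    for lo, hi in ((0, m), (m, N)):
        idx = np.arange(lo + 1, hi)            # interior nodes of this subdomain
        n = len(idx)
        if n == 0:
            continue
        A = (np.diag((1.0 + 2.0 * lam) * np.ones(n))
             + np.diag(-lam * np.ones(n - 1), 1)
             + np.diag(-lam * np.ones(n - 1), -1))
        rhs = u[idx].copy()
        rhs[0] += lam * u_new[lo]              # known boundary / interface values
        rhs[-1] += lam * u_new[hi]
        u_new[idx] = np.linalg.solve(A, rhs)
    return u_new

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 21)
    u0 = np.sin(np.pi * x)                     # initial condition, zero at both ends
    print(eidd_step(u0, lam=0.4, m=10)[:5])
```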
ISBN (print): 0889865361
Mine scheduling is a multi-objective, highly constrained optimization problem. Mine planners often spend months arriving at a single feasible ore production solution. To assist in the process and to present alternatives with a higher likelihood of optimality, a parallel genetic algorithm for long-term scheduling of underground mines is developed. For the mine scheduling problems considered, a 2-dimensional map of stopes is given, along with the mineral properties of each. It is required to schedule the extraction sequence of the ore from the stopes to meet the mine's objectives. The appropriateness of a schedule is determined by applying a fitness function, which assesses how well the schedule meets objectives and satisfies the given constraints. In practice, the scheduling problem is simplified in order to obtain a solution within given time bounds. By modularizing the problem and employing a parallel algorithm with minimal communication requirements, a higher-quality mine schedule may be found within the same time bounds.
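As a hypothetical illustration of the kind of fitness evaluation described (the stope model, capacity and grade target below are invented for the example and are far simpler than a real mine schedule), a GA individual can be a permutation of stope indices scored by how well the resulting period-by-period blend meets a target:

```python
import random

# Hypothetical, simplified model: each stope has an ore tonnage and a grade;
# a schedule is an extraction order, packed greedily into periods with a
# fixed tonnage capacity.
STOPES = [{"tons": random.uniform(5e3, 2e4), "grade": random.uniform(1, 6)}
          for _ in range(30)]
CAPACITY_PER_PERIOD = 6e4      # assumed mill capacity (tons per period)
TARGET_GRADE = 3.5             # assumed blending target (g/t)

def fitness(order):
    """Lower is better: penalize period-to-period deviation from the target
    head grade.  Real mine-scheduling fitness functions also encode sequencing
    and geotechnical constraints that are omitted here."""
    penalty, period_tons, period_metal = 0.0, 0.0, 0.0
    for i in order:
        s = STOPES[i]
        if period_tons + s["tons"] > CAPACITY_PER_PERIOD:
            penalty += abs(period_metal / period_tons - TARGET_GRADE)
            period_tons, period_metal = 0.0, 0.0
        period_tons += s["tons"]
        period_metal += s["tons"] * s["grade"]
    if period_tons > 0:
        penalty += abs(period_metal / period_tons - TARGET_GRADE)
    return penalty

# A GA individual is simply a permutation of stope indices.
print(fitness(random.sample(range(len(STOPES)), len(STOPES))))
```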
The use of parallel computation for scientific research is so widespread today that it's easy to forget it began on a large scale as recently as the late 1980s. With the increasing attention given to parallel algorithms for scientific research, researchers soon realized that certain common applications, such as Monte Carlo algorithms, were embarrassingly parallel: they required very little interprocessor communication and could be effectively deployed on networks of modest communications bandwidth, including the Internet. This quickly led to the idea of using large, geographically distributed networks of computers for scientific research, a concept that first captured the public's imagination with the SETI@home project.
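A toy example of why Monte Carlo work is embarrassingly parallel: each worker below estimates π from its own independent random stream, and the only communication is the final sum. The worker count and sample sizes are arbitrary illustrative values.

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """One independent Monte Carlo task: count darts landing inside the unit
    quarter circle.  No communication with other workers is needed until the
    final reduction, which is what 'embarrassingly parallel' means."""
    n_samples, seed = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

if __name__ == "__main__":
    n_workers, per_worker = 4, 250_000
    with Pool(n_workers) as pool:
        hits = pool.map(count_hits, [(per_worker, seed) for seed in range(n_workers)])
    print("pi ~", 4.0 * sum(hits) / (n_workers * per_worker))
```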
We consider three approaches for estimating the rates of nonsynonymous and synonymous changes at each site in a sequence alignment in order to identify sites under positive or negative selection: (1) a suite of fast likelihood-based "counting methods" that employ either a single most likely ancestral reconstruction, weighting across all possible ancestral reconstructions, or sampling from ancestral reconstructions; (2) a random effects likelihood (REL) approach, which models variation in nonsynonymous and synonymous rates across sites according to a predefined distribution, with the selection pressure at an individual site inferred using an empirical Bayes approach; and (3) a fixed effects likelihood (FEL) method that directly estimates nonsynonymous and synonymous substitution rates at each site. All three methods incorporate flexible models of nucleotide substitution bias and variation in both nonsynonymous and synonymous substitution rates across sites, facilitating comparison between the methods. We demonstrate that the results obtained using these approaches show broad agreement in levels of Type I and Type II error and in estimates of substitution rates. Counting methods are well suited for large alignments, for which there is high power to detect positive and negative selection, but appear to underestimate the substitution rate. A REL approach, which is more computationally intensive than counting methods, has higher power than counting methods to detect selection in data sets of intermediate size but may suffer from higher rates of false positives for small data sets. A FEL approach appears to capture the pattern of rate variation better than counting methods or random effects models, does not suffer from as many false positives as random effects models for data sets comprising few sequences, and can be efficiently parallelized. Our results suggest that previously reported differences between results obtained by counting methods and random effects models a
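For concreteness, the per-site test at the heart of a FEL-style analysis can be sketched as a likelihood-ratio test (the notation below is illustrative rather than quoted from the paper): fit separate synonymous and nonsynonymous rates at each site, then compare against the constrained model in which they are equal.

```latex
% Sketch of a FEL-style per-site test.  At site s with data D_s, estimate a
% synonymous rate alpha_s and a nonsynonymous rate beta_s by maximum
% likelihood, then compare nested models with a likelihood-ratio test.
\[
  \mathrm{LRT}_s \;=\; 2\left[\,\ln L(\hat\alpha_s, \hat\beta_s \mid D_s)
    \;-\; \ln L(\hat\alpha_s = \hat\beta_s \mid D_s)\,\right]
  \;\sim\; \chi^2_1,
\]
\[
  \text{evidence of positive selection at } s \iff
  \hat\beta_s > \hat\alpha_s \ \text{and}\ \mathrm{LRT}_s \text{ is significant}.
\]
```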
An efficient parallel algorithm is presented to find a maximum weight independent set of a permutation graph which takes O(log n) time using O(n^2 / log n) processors on an EREW PRAM, provided the graph has at most O(n) maximal independent sets. The best known parallel algorithm takes O(log^2 n) time and O(n^3 / log n) processors on a CREW PRAM.
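A small sequential sketch of the underlying problem may help: in a permutation graph, an independent set corresponds to an increasing subsequence of the permutation, so a maximum weight independent set is a maximum-weight increasing subsequence. The simple O(n^2) dynamic program below solves the same problem the cited PRAM algorithms parallelize; the example permutation and weights are arbitrary.

```python
def max_weight_independent_set(perm, weights):
    """Maximum-weight increasing subsequence of `perm`, with one weight per
    position.  In the permutation graph of `perm`, two vertices are adjacent
    exactly when they form an inversion, so this equals the weight of a
    maximum weight independent set."""
    n = len(perm)
    best = list(weights)                  # best[i]: max weight ending at i
    for i in range(n):
        for j in range(i):
            if perm[j] < perm[i]:
                best[i] = max(best[i], best[j] + weights[i])
    return max(best) if n else 0

# Example: permutation (3, 1, 4, 2, 5) with per-position vertex weights.
print(max_weight_independent_set([3, 1, 4, 2, 5], [2, 7, 3, 1, 4]))  # expected: 14
```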
Genetic sequence data typically exhibit variability in substitution rates across sites. In practice, there is often too little variation to fit a different rate for each site in the alignment, but the distribution of rates across sites may not be well modeled using simple parametric families. Mixtures of different distributions can capture more complex patterns of rate variation, but are often parameter-rich and difficult to fit. We present a simple hierarchical model in which a baseline rate distribution, such as a gamma distribution, is discretized into several categories, the quantiles of which are estimated using a discretized beta distribution. Although this approach involves adding only two extra parameters to a standard distribution, a wide range of rate distributions can be captured. Using simulated data, we demonstrate that a beta-gamma model can reproduce the moments of the rate distribution more accurately than the distribution used to simulate the data, even when the baseline rate distribution is misspecified. Using hepatitis C virus and mammalian mitochondrial sequences, we show that a beta-gamma model can fit as well as or better than a model with multiple discrete rate categories, and compares favorably with a model which fits a separate rate category to each site. We also demonstrate this discretization scheme in the context of codon models specifically aimed at identifying individual sites undergoing adaptive or purifying evolution.
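One plausible reading of the discretization, sketched with assumed details (the use of class medians, the unit-mean gamma parameterization, and the SciPy dependency below are choices for illustration, not taken from the paper), is to place the baseline gamma's rate categories at quantile points supplied by a discretized beta:

```python
from scipy.stats import beta, gamma

def beta_gamma_rates(alpha, p, q, k):
    """Hedged sketch of a hierarchical discretization: the quantile points at
    which a unit-mean gamma(alpha) baseline is evaluated are the medians of k
    equiprobable classes of a Beta(p, q) distribution."""
    # Medians of the k equal-probability classes of Beta(p, q).
    quantile_points = [beta.ppf((i + 0.5) / k, p, q) for i in range(k)]
    # Evaluate the baseline gamma (shape alpha, mean 1) at those quantiles.
    return [gamma.ppf(u, alpha, scale=1.0 / alpha) for u in quantile_points]

# With p = q = 1 the beta is uniform and this collapses to the familiar
# median-based "discrete gamma" categories; other (p, q) skew the placement.
print(beta_gamma_rates(alpha=0.5, p=1.0, q=1.0, k=4))
print(beta_gamma_rates(alpha=0.5, p=2.0, q=1.0, k=4))
```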
ISBN (print): 3540440496
PowerList, ParList and PList data structures are efficient tools for functional descriptions of parallel programs that are divide & conquer in nature. The goal of this work is to develop three parallel variants of the Fast Fourier Transform using these theories. The variants are implied by the degree of the polynomial, which can be a power of two, a prime number, or a product of prime factors. The last variant includes the first two and represents a general, efficient parallel algorithm for the Fast Fourier Transform. This general algorithm has a very good time complexity and can be mapped onto a recursive interconnection network.
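The power-of-two variant corresponds to the familiar radix-2 divide-and-conquer recursion, whose even/odd split is the kind of balanced deconstruction that PowerList-style descriptions capture. A minimal sequential sketch (not the paper's PowerList formulation, and omitting the prime and mixed-radix cases):

```python
import cmath

def fft_pow2(xs):
    """Recursive radix-2 FFT for inputs whose length is a power of two.
    The split into even- and odd-indexed halves mirrors the balanced
    divide-and-conquer structure of PowerList descriptions."""
    n = len(xs)
    if n == 1:
        return list(xs)
    evens = fft_pow2(xs[0::2])
    odds = fft_pow2(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

print(fft_pow2([1, 0, 0, 0, 0, 0, 0, 0]))  # impulse -> flat spectrum of ones
```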