We deduce the law of nonstationary recursion which makes it possible, for a given primitive set A = {a1, ..., ak}, k > 2, to construct an algorithm for finding the set of numbers outside the additive semigroup generated by A.
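The abstract does not reproduce the recursion itself; as a point of reference, the gap set of such a semigroup can be computed by a standard shortest-path construction over residues modulo the smallest generator. The sketch below is our own Python illustration (the name `semigroup_gaps` is ours, not the paper's), assuming gcd(a1, ..., ak) = 1 so that the gap set is finite.

```python
import heapq
from math import gcd
from functools import reduce

def semigroup_gaps(generators):
    """Positive integers NOT representable as non-negative integer
    combinations of the generators, via shortest paths on residues
    modulo the smallest generator (the Apery-set construction)."""
    assert reduce(gcd, generators) == 1, "gap set is finite only if gcd = 1"
    a = min(generators)
    # dist[r] = least representable integer congruent to r (mod a)
    dist = [0] + [float("inf")] * (a - 1)
    pq = [(0, 0)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist[r]:
            continue
        for g in generators:
            nd, nr = d + g, (r + g) % a
            if nd < dist[nr]:
                dist[nr] = nd
                heapq.heappush(pq, (nd, nr))
    # n is a gap exactly when n < dist[n mod a]
    return sorted(n for r in range(a) for n in range(r, dist[r], a) if n > 0)

print(semigroup_gaps([3, 5]))   # [1, 2, 4, 7] -> Frobenius number 7
```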
A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been reported in the original research papers, yet this often neglected property has not previously been reviewed systematically and for a wider audience. We review the space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as genomic data continue to grow in scale and complexity, and as robust biological simulation at the cell level and above becomes a reality.
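As a concrete instance of the kind of complexity accounting such a review performs: the classic dynamic program for edit distance runs in O(nm) time, and keeping only two DP rows cuts its space from O(nm) to O(min(n, m)). A minimal sketch (our own illustration, not taken from the review):

```python
# Levenshtein distance by dynamic programming: O(n*m) time,
# O(min(n, m)) space by retaining only the previous DP row.
def edit_distance(s, t):
    if len(s) < len(t):
        s, t = t, s                      # make t the shorter string
    prev = list(range(len(t) + 1))       # row for the empty prefix of s
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```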
This paper considers the computational complexity of the design of voting rules, formulated in terms of simple games. We prove that it is NP-complete to decide whether a given simple game is stable.
We show that Van der Heyden's variable dimension algorithm and Dantzig and Cottle's principal pivoting method require $2^n - 1$ pivot steps to solve a class of linear complementarity problems of order $n$. Murty and Fathi have previously shown that the computational effort required to solve a linear complementarity problem of order $n$ by Lemke's complementary pivot algorithm or by Murty's Bard-type algorithm is not bounded above by a polynomial in $n$. Our study shows that the variable dimension algorithm and the principal pivoting method have similar worst case computational requirements.
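For context on why pivot counts matter, the naive alternative of testing every complementary basis already costs $2^n$ linear solves in the worst case. A brute-force sketch of that enumeration (illustrative only; this is neither Van der Heyden's nor Dantzig and Cottle's method, and the function name is ours):

```python
import itertools
import numpy as np

def solve_lcp_brute_force(M, q, tol=1e-9):
    """Find z >= 0 with w = M z + q >= 0 and z.w = 0 by trying all
    2^n choices of which components of z may be positive."""
    n = len(q)
    for support in itertools.product([False, True], repeat=n):
        S = [i for i in range(n) if support[i]]
        z = np.zeros(n)
        if S:
            try:
                z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
            except np.linalg.LinAlgError:
                continue                  # singular principal submatrix
        w = M @ z + q
        if (z >= -tol).all() and (w >= -tol).all():
            return z, w
    return None                           # no complementary solution found

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -5.0])
z, w = solve_lcp_brute_force(M, q)
print(z, w)   # z = [1, 2], w = [0, 0]
```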
In this paper we describe a method for computing the Discrete Fourier Transform (DFT) of a sequence of n elements over a finite field ${\text{GF}}(p^m)$ with a number of bit operations $O(nm \log(nm) \cdot P(q))$, where $P(q)$ is the number of bit operations required to multiply two q-bit integers and $q \cong 2\log_2 n + 4\log_2 m + 4\log_2 p$. This method is uniformly applicable to all instances and its order of complexity is not inferior to that of methods whose success depends upon the existence of certain primes. Our algorithm is a combination of known and novel techniques. In particular, the finite-field DFT is at first converted into a finite field convolution; the latter is then implemented as a two-dimensional Fourier transform over the complex field. The key feature of the method is the fusion of these two basic operations into a single integrated procedure centered on the Fast Fourier Transform algorithm.
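The conversion described here rests on the fact that a finite-field convolution can be carried out exactly over the complex field, with results rounded back to integers. A toy demonstration of that idea (our own sketch; the paper bounds the required precision rigorously, whereas numpy's double-precision FFT suffices only for small inputs):

```python
import numpy as np

def cyclic_convolution_mod_p(a, b, p):
    """Cyclic convolution of integer sequences a, b reduced mod p,
    computed over the complex field with numpy's FFT and rounded."""
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    c = np.fft.ifft(fa * fb).real
    return np.rint(c).astype(np.int64) % p

def cyclic_convolution_direct(a, b, p):
    """Reference O(n^2) computation for comparison."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n)) % p
                     for i in range(n)])

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
p = 17
print(cyclic_convolution_mod_p(a, b, p))
print(cyclic_convolution_direct(a, b, p))   # identical output
```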
The first-order logical theory $\mathrm{Th}(\mathbb{N}, x+1, F(x))$ is proved to be complete for the class ATIME-ALT$(2^{O(n)}, O(n))$ when $F(x) = 2^x$, and the same result holds for $F(x) = c^x$, $x^c$ ($c \in \mathbb{N}$, $c \ge 2$), and $F(x)$ a tower of $x$ powers of two. The difficult part is the upper bound, which is obtained by using a bounded Ehrenfeucht-Fraïssé game.
In combinatorial game theory, the winning player for a position in normal play is analyzed and characterized via algebraic operations. Such analyses define a value for each position, called a game value. A game (ruleset) is called universal if any game value is achievable in some position in a play of the game. Although the universality of a game implies that the ruleset is rich enough (i.e., sufficiently complex), it does not immediately imply that the game is intractable in the sense of computational complexity. This paper proves that the universal game Turning Tiles is PSPACE-complete. We also give other positive and negative results on the computational complexity of Turning Tiles.
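The normal-play convention has a direct recursive reading: the player to move wins iff some move leads to a position whose mover loses. A generic memoized solver along those lines is sketched below for impartial rulesets (our own illustration; partizan game values, as needed for Turning Tiles, require the full algebraic theory, and the state space is what drives the PSPACE-hardness):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(position, moves):
    """position: hashable state; moves: function from a position to a
    tuple of successors. Normal play: no move available means a loss."""
    return any(not mover_wins(nxt, moves) for nxt in moves(position))

# Toy ruleset: a single heap from which you may take 1 or 2 tokens.
def take_1_or_2(n):
    return tuple(n - k for k in (1, 2) if k <= n)

print([mover_wins(n, take_1_or_2) for n in range(6)])
# [False, True, True, False, True, True] -> losses at multiples of 3
```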
The problem of estimating the increase in computational effort as the system size n increases is studied for methods requiring the solution of Ax = b, where A is sparse and topology-symmetric. The expected value of the total number of upper triangular nonzero elements after factorization is assumed to grow as $n^{1+\gamma}$. The expected computational effort for the factorization itself is shown to grow as $n^{1+2\gamma}$, while that for each repeat solution grows as $n^{1+\gamma}$. Values of $\gamma$ for typical power systems are experimentally determined by generating a variety of random networks and ordering the resultant matrices according to "scheme 2". For typical power systems a reasonable value for $\gamma$ is 0.2. Therefore, the effort of methods requiring repeated refactorization of A (such as Newton's method) can be expected to increase as $n^{1.4}$, while that of methods requiring merely repeat solutions (such as fast decoupled methods) can be expected to increase as $n^{1.2}$. Several other important comparisons are included.
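The exponent $\gamma$ can be recovered from measured fill-in counts by a log-log least-squares fit, as sketched below with synthetic stand-in numbers (not the paper's measurements):

```python
import numpy as np

sizes = np.array([100, 200, 400, 800, 1600])
fill_in = np.array([280, 640, 1470, 3380, 7760])   # hypothetical counts ~ n^1.2

# fit log(fill_in) = (1 + gamma) * log(n) + const
slope, _ = np.polyfit(np.log(sizes), np.log(fill_in), 1)
gamma = slope - 1.0
print(f"gamma ~= {gamma:.2f}")
# With gamma ~= 0.2, doubling n scales refactorization cost (n^(1+2*gamma))
# by 2**1.4 ~= 2.64 and each repeat solution (n^(1+gamma)) by 2**1.2 ~= 2.30.
```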
We study the computational complexity of the vertex cover problem in the class of planar graphs (planar triangulations) admitting a plane representation whose faces are triangles. It is shown that the problem is strongly NP-hard in the class of 4-connected planar triangulations in which the degrees of vertices are of order $O(\log n)$, where $n$ is the number of vertices, and in the class of plane 4-connected Delaunay triangulations based on the Minkowski triangular distance. A pair of vertices in such a triangulation is adjacent if and only if there is an equilateral triangle $\nabla(p, \lambda)$ with $p \in \mathbb{R}^2$ and $\lambda > 0$ whose interior does not contain triangulation vertices and whose boundary contains this pair of vertices and only it, where $\nabla(p, \lambda) = p + \lambda\nabla = \{x \in \mathbb{R}^2 : x = p + \lambda a,\ a \in \nabla\}$; here $\nabla$ is the equilateral triangle with unit sides such that its barycenter is the origin and one of its vertices belongs to the negative $y$-axis.
Keywords: computational complexity, Delaunay triangulation, Delaunay TD-triangulation.
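The triangular distance underlying such triangulations is the Minkowski functional of the triangle $\nabla$: $\mathrm{td}(p, x) = \inf\{\lambda > 0 : x \in \nabla(p, \lambda)\}$. Since the origin lies inside $\nabla$, this gauge is a maximum of three linear functionals, one per edge. A small sketch of the definition above (our own illustration):

```python
import numpy as np

R = 3 ** -0.5 / 2          # inradius of the unit-side triangle, sqrt(3)/6
# outward unit normals of the three edges (one vertex on the negative y-axis)
NORMALS = np.array([[0.0, 1.0],
                    [ np.sqrt(3) / 2, -0.5],
                    [-np.sqrt(3) / 2, -0.5]])

def triangular_distance(p, x):
    """Smallest lambda with x in p + lambda * T. Asymmetric: the triangle
    is not centrally symmetric, so td(p, x) != td(x, p) in general."""
    return (NORMALS @ (np.asarray(x) - np.asarray(p))).max() / R

print(triangular_distance((0, 0), (0, R)))    # 1.0: midpoint of the top edge
print(triangular_distance((0, 0), (0, -R)))   # 0.5: the asymmetry at work
```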
After verifying the accuracy and reliability of the algorithm for a 2D adaptive mesh refinement method against exact and numerical benchmark results, we consider the computational complexity of this algorithm using 2D steady incompressible lid-driven cavity flows. The algorithm for the 2D adaptive mesh refinement method is proposed based on the qualitative theory of differential equations. The method performs mesh refinement based on the numerical solutions of the Navier-Stokes equations computed by Navier2D, an open-source vertex-centered finite volume code that uses the median dual mesh to form the control volumes about each vertex. We compare the computational complexity of applying the adaptive mesh refinement method twice with that of a uniform mesh whose cells have the same size as the twice-refined cells, for Reynolds numbers 100, 1000, and 2500. The adaptive mesh refinement method can be applied to find accurate numerical solutions of any mathematical model containing continuity equations for incompressible fluids or steady-state fluid flows.
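In general, an adaptive refinement step amounts to computing an error indicator from the current numerical solution and flagging the worst cells. The sketch below uses a plain gradient indicator on a uniform grid (our own generic illustration; the paper's criterion comes from the qualitative theory of differential equations and operates on Navier2D's vertex-centered meshes):

```python
import numpy as np

def flag_for_refinement(u, frac=0.2):
    """u: solution sampled on a uniform 2D grid. Flag the grid points
    whose gradient magnitude lies in the top `frac` fraction."""
    gy, gx = np.gradient(u)                 # finite-difference gradients
    indicator = np.hypot(gx, gy)
    threshold = np.quantile(indicator, 1.0 - frac)
    return indicator >= threshold

# lid-driven-cavity-like toy field: sharp layer near the moving lid (y = 1)
y, x = np.mgrid[0:1:65j, 0:1:65j]
u = np.tanh(20 * (y - 0.9))
flags = flag_for_refinement(u)
print(flags.sum(), "of", flags.size, "points flagged")  # ~20%, near the lid
```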