We answer the following question posed by Lechuga: given a simply connected space X with both H^*(X; Q) and π_*(X) ⊗ Q finite dimensional, what is the computational complexity of an algorithm computing the cup length and the rational Lusternik-Schnirelmann category of X? By a reduction from the decision problem of whether a given graph is k-colourable for k ≥ 3, we show that even stricter versions of these problems are NP-hard.
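The source problem of the reduction mentioned here is graph k-colourability, the canonical NP-complete decision problem for k ≥ 3. As a point of reference only (the encoding and function name below are illustrative and not taken from the paper), a minimal brute-force decider for that source problem looks like this:

```python
from itertools import product

def is_k_colourable(num_vertices, edges, k):
    """Brute-force check: does some assignment of k colours to the vertices
    give every edge two differently coloured endpoints?
    Runs in O(k^n * |E|) time, reflecting the problem's hardness."""
    for colouring in product(range(k), repeat=num_vertices):
        if all(colouring[u] != colouring[v] for u, v in edges):
            return True
    return False

# Example: a triangle is 3-colourable but not 2-colourable.
triangle = [(0, 1), (1, 2), (2, 0)]
assert is_k_colourable(3, triangle, 3)
assert not is_k_colourable(3, triangle, 2)
```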
Experiments show that evolutionary fitness landscapes can have a rich combinatorial structure due to epistasis. For some landscapes, this structure can produce a computational constraint that prevents evolution from finding local fitness optima, thus overturning the traditional assumption that local fitness peaks can always be reached quickly if no other evolutionary forces challenge natural selection. Here, I introduce a distinction between easy landscapes of traditional theory, where local fitness peaks can be found in a moderate number of steps, and hard landscapes, where finding local optima requires an infeasible amount of time. Hard examples exist even among landscapes with no reciprocal sign epistasis; on these semismooth fitness landscapes, strong-selection weak-mutation dynamics cannot find the unique peak in polynomial time. More generally, on hard rugged fitness landscapes that include reciprocal sign epistasis, no evolutionary dynamics, even ones that do not follow adaptive paths, can find a local fitness optimum quickly. Moreover, on hard landscapes, the fitness advantage of nearby mutants cannot drop off exponentially fast but has to follow a power law that long-term evolution experiments have associated with unbounded growth in fitness. Thus, the constraint of computational complexity enables open-ended evolution on finite landscapes. Knowing this constraint allows us to use the tools of theoretical computer science and combinatorial optimization to characterize the fitness landscapes that we expect to see in nature. I present candidates for hard landscapes at scales from single genes, to microbes, to complex organisms with costly learning (Baldwin effect) or maintained cooperation (Hankshaw effect). Just how ubiquitous hard landscapes (and the corresponding ultimate constraint on evolution) are in nature becomes an open empirical question.
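To make the "number of adaptive steps" notion concrete, the sketch below simulates a greedy fittest-neighbour adaptive walk on a random (house-of-cards) landscape over binary genotypes and reports how many steps it takes to reach a local peak. The landscape model, parameters, and greedy update rule are my own illustrative choices, not the constructions analysed in the paper (uncorrelated landscapes like this one are in fact easy):

```python
import random

def random_landscape(n, seed=0):
    """Assign an i.i.d. random fitness to each of the 2^n genotypes."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(2 ** n)]

def adaptive_walk(fitness, n, start=0):
    """Greedy uphill walk: repeatedly move to the fittest one-bit
    neighbour until no neighbour is fitter (a local peak)."""
    genotype, steps = start, 0
    while True:
        neighbours = [genotype ^ (1 << i) for i in range(n)]
        best = max(neighbours, key=lambda g: fitness[g])
        if fitness[best] <= fitness[genotype]:
            return genotype, steps          # reached a local optimum
        genotype, steps = best, steps + 1

n = 12
fitness = random_landscape(n)
peak, steps = adaptive_walk(fitness, n)
print(f"reached local peak {peak:0{n}b} after {steps} adaptive steps")
```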
The computational complexity of the graph approximation problem is investigated. It is shown that the different variants of this problem are NP-hard both for undirected and directed graphs. A polynomial-time approxima...
In the present chapter we focus our attention on the computational complexity of proving regulatory compliance of business process models. While the topic has never received the deserved attention, we argue that the t...
Continuous-variable quantum key distribution has been considered to have the potential to provide a high secret key rate. However, in present experimental demonstrations, the secret key can be distilled only under very small loss rates. Here, by explicitly calculating how the computational complexity scales with the channel transmission, we show that under a high loss rate it is hard to distill the secret key in the present continuous-variable scheme, and one of its advantages, the potential to provide a high secret key rate, may therefore be limited.
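For readers unfamiliar with the quantity involved, the channel transmission T relates to the loss rate in the standard way, T = 10^(-loss[dB]/10), with optical fibre contributing roughly 0.2 dB/km. The snippet below only illustrates how quickly T decays with distance; it is not taken from the paper's analysis:

```python
def transmittance(distance_km, attenuation_db_per_km=0.2):
    """Channel transmission T for a fibre of the given length,
    using the standard relation T = 10^(-loss_dB / 10)."""
    loss_db = attenuation_db_per_km * distance_km
    return 10 ** (-loss_db / 10)

for d in (10, 25, 50, 100):
    print(f"{d:4d} km -> T = {transmittance(d):.4f}")
```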
Computational complexity results and exact polynomial algorithms are reported for the problem of stabbing a set of straight line segments with a least-cardinality set of disks of fixed radius r > 0, where the set of segments forms a straight-line drawing G = (V, E) of a plane graph without edge crossings. Similar geometric problems arise in network security applications (Agarwal et al., 2013). We establish the strong NP-hardness of the problem for edge sets of Delaunay triangulations, Gabriel graphs, and other subgraphs (which are often used in network design) for r ∈ [d_min, d_max] and some constant, where d_max and d_min are the Euclidean lengths of the longest and shortest graph edges, respectively.
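As a concrete picture of the stabbing condition: a disk of radius r centred at c stabs a segment exactly when the Euclidean distance from c to the segment is at most r. The helper below is an illustrative sketch (not the paper's algorithm) that verifies a candidate set of disk centres against a set of segments:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))              # clamp the projection to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def stabs_all(centres, segments, r):
    """Do disks of radius r at the given centres intersect every segment?"""
    return all(
        any(point_segment_distance(c, a, b) <= r for c in centres)
        for a, b in segments
    )

segments = [((0, 0), (2, 0)), ((2, 0), (2, 2))]
print(stabs_all([(2, 0)], segments, r=0.5))   # True: one disk stabs both edges
```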
This paper studies the complexity of computing (or approximating, or bounding) the various inner and outer radii of an n-dimensional convex polytope in the space R^n equipped with an l_p norm or a polytopal norm. The polytope P is assumed to be presented as the convex hull of finitely many points with rational coordinates (V-presented) or as the intersection of finitely many closed halfspaces defined by linear inequalities with rational coefficients (H-presented). The inner j-radius of P is the radius of a largest j-ball contained in P; it is P's inradius when j = n and half of P's diameter when j = 1. The outer j-radius measures how well P can be approximated, in a minimax sense, by an (n - j)-flat; it is P's circumradius when j = n and half of P's width when j = 1. The binary (Turing machine) model of computation is employed. The primary concern is not with finding optimal algorithms, but with establishing polynomial-time computability or NP-hardness. Special attention is paid to the case in which P is centrally symmetric. When the dimension n is permitted to vary, the situation is roughly as follows: (a) for general H-presented polytopes in l_p spaces with 1 < p < ∞, all outer radius computations are NP-hard; (b) in the remaining cases (including symmetric H-presented polytopes), some radius computations can be accomplished in polynomial time and others are NP-hard. These results are obtained by using a variety of tools from the geometry of convex bodies, from linear and nonlinear programming, and from the theory of computational complexity. Applications of the results to various problems in mathematical programming, computer science and other fields are included.
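One concrete instance of a polynomially computable radius is the Euclidean inradius (j = n, l_2 norm) of an H-presented polytope, which reduces to the classic Chebyshev-centre linear program: maximise r subject to a_i·c + r·||a_i|| ≤ b_i. The sketch below is an illustration of that reduction only, not code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Largest Euclidean ball {x : ||x - c|| <= r} inside {x : A x <= b},
    found by the LP  max r  s.t.  A c + r * ||a_i||_2 <= b_i."""
    norms = np.linalg.norm(A, axis=1)
    # variables are (c_1, ..., c_n, r); linprog minimises, so use -r
    c_obj = np.r_[np.zeros(A.shape[1]), -1.0]
    A_ub = np.c_[A, norms]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * (A.shape[1] + 1))
    return res.x[:-1], res.x[-1]

# Unit square 0 <= x, y <= 1, written as A x <= b.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
b = np.array([1, 0, 1, 0], float)
centre, r = chebyshev_center(A, b)
print(centre, r)   # roughly [0.5, 0.5] and inradius 0.5
```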
In this paper, we investigate the computational behavior of the exterior point simplex algorithm. Up until now, a major difference has been observed between the theoretical worst-case complexity and the practical performance of simplex-type algorithms. Computational tests have been carried out on randomly generated sparse linear problems and on a small set of benchmark problems. Specifically, 6780 linear problems were randomly generated in order to form a respectable number of experiments. Our study consists of measuring the number of iterations that the exterior point simplex algorithm needs to solve the above-mentioned problems and the benchmark dataset. Our purpose is to formulate representative regression models for these measurements, which would play a significant role in the evaluation of an algorithm's efficiency. For this examination, specific characteristics of each linear problem, such as the number of constraints and variables, the sparsity, the bit length, and the condition of the matrix A, were taken into account. What drew our attention was that the formulated model for the randomly generated problems reveals a linear relation among these characteristics.
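The general shape of such a regression study can be sketched as follows: collect one row of problem characteristics per solved LP together with the observed iteration count, then fit a linear model. The data below is synthetic and the feature list is only my reading of the characteristics named in the abstract, not the paper's dataset or model:

```python
import numpy as np

# Hypothetical measurements: one row per solved LP, with the problem
# characteristics named above (constraints, variables, sparsity,
# bit length, condition of A) and the observed iteration count.
rng = np.random.default_rng(1)
m_rows = 200
features = np.column_stack([
    rng.integers(50, 500, m_rows),        # number of constraints
    rng.integers(50, 500, m_rows),        # number of variables
    rng.uniform(0.05, 0.5, m_rows),       # sparsity of A
    rng.integers(8, 32, m_rows),          # bit length
    rng.uniform(1.0, 1e3, m_rows),        # condition of A
])
# Synthetic iteration counts with noise, standing in for measured data.
iterations = features @ np.array([0.3, 0.5, 40.0, 1.0, 0.01]) \
             + rng.normal(0, 5, m_rows)

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(m_rows), features])
coeffs, *_ = np.linalg.lstsq(X, iterations, rcond=None)
print("intercept and per-feature coefficients:", np.round(coeffs, 3))
```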
We establish that in the worst case, the computational effort required for solving a parametric linear program is not bounded above by a polynomial in the size of the problem.
The purpose of this work is to promote a programming-language approach to studying computability and complexity, with an emphasis on time complexity. The essence of the approach is: a programming language, with semantics and a complexity measure, can serve as a computational model that has several advantages over the currently popular models, in particular the Turing machine. An obvious advantage is a stronger relevance to the practice of programming. In this paper we demonstrate other advantages: certain proofs and constructions that are hard to do precisely and clearly with Turing machines become clearer and easier in our approach, and sometimes lead to finer results. In particular, we prove several time hierarchy theorems for deterministic and non-deterministic time complexity which show that, in contrast with Turing machines, constant factors do matter in this framework. This feature, too, brings the theory closer to practical considerations. The above result suggests that this framework may be appropriate for studying low complexity classes, such as linear time. As an example we give a problem complete for non-deterministic linear time under deterministic linear-time reductions. Finally, we consider some extensions and modifications of our programming language and their effect on time complexity results.
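The core idea, that a language together with its own cost measure is itself a computational model, can be illustrated with a toy interpreter that charges one time unit per executed construct. This sketch is my own illustration under that reading; it is not the language or cost model studied in the paper:

```python
def run(program, env=None, budget=10_000):
    """Evaluate a tiny WHILE-style program, charging one time unit per
    executed construct, so a program's cost is defined directly by the
    language's semantics rather than by a Turing-machine encoding."""
    env = dict(env or {})
    steps = 0

    def ev(expr):                          # expression evaluation
        nonlocal steps
        steps += 1
        op = expr[0]
        if op == "const": return expr[1]
        if op == "var":   return env[expr[1]]
        if op == "add":   return ev(expr[1]) + ev(expr[2])
        if op == "sub":   return max(0, ev(expr[1]) - ev(expr[2]))
        raise ValueError(op)

    def ex(stmt):                          # statement execution
        nonlocal steps
        steps += 1
        if steps > budget:
            raise RuntimeError("time budget exceeded")
        op = stmt[0]
        if op == "assign":
            env[stmt[1]] = ev(stmt[2])
        elif op == "seq":
            for s in stmt[1:]:
                ex(s)
        elif op == "while":                # while expr != 0: body
            while ev(stmt[1]) != 0:
                ex(stmt[2])
        else:
            raise ValueError(op)

    ex(program)
    return env, steps

# x := 5; y := 0; while x: y := y + 2; x := x - 1
prog = ("seq",
        ("assign", "x", ("const", 5)),
        ("assign", "y", ("const", 0)),
        ("while", ("var", "x"),
         ("seq",
          ("assign", "y", ("add", ("var", "y"), ("const", 2))),
          ("assign", "x", ("sub", ("var", "x"), ("const", 1))))))
env, cost = run(prog)
print(env["y"], cost)   # final y = 10, plus the step count under this cost model
```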