We propose a fast Fourier transform algorithm that removes two steps of twiddle-factor multiplications from the conventional five-step FFT algorithm. The proposed FFT algorithm not only reduces the computational complexity of the five-step FFT algorithm by O(n) operations, but also reduces its memory requirement. (c) 2006 Elsevier B.V. All rights reserved.
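To make the setting concrete, here is a minimal sketch (not this paper's algorithm) of the classic Cooley-Tukey "multi-step" decomposition that the four-, five-, and six-step FFT families start from: split a length-N transform with N = n1*n2 into column FFTs, a twiddle-factor multiplication, and row FFTs. The step the abstract targets is exactly the explicit twiddle multiplication in the middle.

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """Compute the length n1*n2 DFT of x via the transpose ("four-step")
    decomposition: small FFTs, twiddle multiplication, small FFTs."""
    N = n1 * n2
    # A[a, b] = x[n1*b + a]  (input decimation n = n1*b + a)
    A = x.reshape(n2, n1).T
    B = np.fft.fft(A, axis=1)                       # n1 FFTs of length n2
    # twiddle factors W_N^(a*k2) -- the step multi-step variants try to shrink
    tw = np.exp(-2j * np.pi * np.outer(np.arange(n1), np.arange(n2)) / N)
    C = B * tw
    D = np.fft.fft(C, axis=0)                       # n2 FFTs of length n1
    return D.reshape(-1)                            # X[n2*k1 + k2]
```

The output ordering X[n2*k1 + k2] matches the natural order of `np.fft.fft`, so the two agree elementwise.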
Creating target structures through the coordinated efforts of teams of autonomous robots (possibly aided by specific features in their environments) is a very important problem in distributed robotics. Many specific instances of distributed robotic construction teams have been developed manually. An important issue is whether automated controller design algorithms can both quickly produce robot controllers and guarantee that teams using these controllers will build arbitrary requested target structures correctly; this task may also involve specifying features in the environment that can aid the construction process. In this paper, we give the first computational and parameterized complexity analyses of several problems associated with the design of robot controllers and environments for creating target structures. These problems use a simple finite-state robot controller model that moves in a non-continuous deterministic manner in a grid-based environment. Our goal is to establish whether algorithms exist that are both fast and correct for all inputs and if not, under which restrictions such algorithms are possible. We prove that none of these problems are efficiently solvable in general and remain so under a number of plausible restrictions on controllers, environments, and target structures. We also give the first restrictions relative to which these problems are efficiently solvable and discuss what theoretical solvability and unsolvability results derived relative to the problems examined here mean for real-world construction using robot teams.
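A deterministic finite-state grid robot of the kind described can be formalized in a few lines. The sketch below is our own illustrative toy model (not the paper's exact formalization): a controller is a transition table mapping (state, cell contents) to (next state, value to write, move), and the robot builds a structure by writing block values into grid cells.

```python
# Toy model: a deterministic finite-state robot on an unbounded grid.
# Grid cells default to 0 (empty); the robot writes values as it moves.
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1), "H": (0, 0)}

def run_robot(controller, grid, pos, state, max_steps=1000):
    """controller: dict (state, cell) -> (next_state, write_value, move).
    Runs until the controller reaches the 'halt' state or max_steps elapse."""
    for _ in range(max_steps):
        if state == "halt":
            break
        cell = grid.get(pos, 0)
        state, write, move = controller[(state, cell)]
        grid[pos] = write
        dr, dc = MOVES[move]
        pos = (pos[0] + dr, pos[1] + dc)
    return grid

# A controller that builds a 1x3 line of blocks heading east, then halts.
line3 = {
    ("s0", 0): ("s1", 1, "E"),
    ("s1", 0): ("s2", 1, "E"),
    ("s2", 0): ("halt", 1, "H"),
}
```

The design questions the paper studies are then: given a target structure (a set of grid cells and values), does a table like `line3` exist that is guaranteed to produce it, and can such a table be found quickly?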
We study the minimum-concave-cost flow problem on a two-dimensional grid. We characterize the computational complexity of this problem based on the number of rows and columns of the grid, the number of different capacities over all the arcs, and the location of sources and sinks. The concave cost over each arc is assumed to be evaluated through an oracle machine, i.e., the oracle machine returns the cost over an arc in a single computational step, given the flow value and the arc index. We propose an algorithm whose running time is polynomial in the number of columns of the grid for the following cases: (1) the grid has a constant number of rows, a constant number of different capacities over all the arcs, and sources and sinks in at most two rows; (2) the grid has two rows and a constant number of different capacities over all the arcs connecting rows; (3) the grid has a constant number of rows and all sources in one row, with infinite capacity over each arc. These are presumably the most general polynomially solvable cases, since we show that the problem becomes NP-hard when any condition in these cases is removed. Our results apply to several variants and generalizations of the single item dynamic lot sizing model and answer several questions raised in serial supply chain optimization.
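The single-item dynamic lot sizing model mentioned at the end is the textbook special case of concave-cost flow on a two-row grid, and its classical polynomial algorithm is the Wagner-Whitin dynamic program. A minimal sketch (standard algorithm, our own notation) for the uncapacitated version with setup costs and linear holding costs:

```python
def wagner_whitin(d, K, h):
    """O(T^2) DP for single-item uncapacitated lot sizing.
    d[t]: demand in period t; K[t]: fixed setup cost of ordering in t;
    h[t]: per-unit cost of holding inventory from period t to t+1.
    Exploits the zero-inventory-ordering property: each demand is served
    entirely by the most recent order."""
    T = len(d)
    INF = float("inf")
    f = [0.0] + [INF] * T          # f[t] = min cost to cover demands 0..t-1
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # last order placed in period j-1 (0-indexed)
            # cost of carrying d[i] from period j-1 until period i
            hold = sum(d[i] * sum(h[j - 1:i]) for i in range(j, t))
            f[t] = min(f[t], f[j - 1] + K[j - 1] + hold)
    return f[T]
```

The setup-plus-linear cost structure is concave in the order quantity, which is exactly why this model embeds into the minimum-concave-cost flow framework above.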
Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X = x is a cause of Y = y is NP-complete in binary models (where all variables can take on only two values) and Σ_2^P-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause, in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing whether X = x is a cause of Y = y (where X may now be a vector of variables). To characterize the complexity, a new family D_k^P, k = 1, 2, 3, ..., of complexity classes is introduced, which generalizes the class D^P introduced by Papadimitriou and Yannakakis (D^P is just D_1^P). We show that the complexity of computing causality under the updated definition is D_2^P-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame, and characterized the complexity of determining the degree of responsibility and blame using the original definition of causality. Here, we completely characterize the complexity using the updated definition of causality. In contrast to the results on causality, we show that moving to the updated definition does not result in a difference in the complexity of computing responsibility and blame.
We clarify, extend, and solve a long-standing open problem concerning the computational complexity for packet scheduling algorithms to achieve tight end-to-end delay bounds. We first focus on the difference between the time a packet finishes service in a scheduling algorithm and its virtual finish time under a GPS (Generalized Processor Sharing) scheduler, called the GPS-relative delay. We prove that, under a slightly restrictive but reasonable computational model, the lower bound on the computational complexity of any scheduling algorithm that guarantees an O(1) GPS-relative delay bound is Ω(log n). We also discover that, surprisingly, the complexity lower bound remains the same even if the delay bound is relaxed to O(n^a) for 0 < a < 1. This implies that the delay-complexity tradeoff curve is flat in the "interval" [O(1), O(n)). We later conditionally extend both complexity results (for O(1) or O(n^a) delay) to a much stronger computational model, the linear decision tree. Finally, we show that the same complexity lower bounds are conditionally applicable to guaranteeing tight end-to-end delay bounds, if the delay bounds are provided through the Latency-Rate (LR) framework.
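As background for what GPS-tracking schedulers compute, here is a minimal sketch (our own simplified illustration, not the paper's model) of weighted fair queueing finish tags for a single busy period in which all packets are backlogged at time 0: each packet of flow f gets tag F = F_prev(f) + length/weight(f), and packets are served in increasing tag order. The hard part the paper analyzes, maintaining the exact GPS virtual time under dynamic arrivals, is deliberately absent from this sketch.

```python
def fq_order(packets, weights):
    """Service order under per-flow finish tags, all packets backlogged at
    time 0. packets: list of (flow, length); weights: dict flow -> weight.
    Returns packet indices in service order."""
    last = {f: 0.0 for f in weights}   # last finish tag per flow
    tagged = []
    for idx, (f, length) in enumerate(packets):
        last[f] += length / weights[f]
        tagged.append((last[f], idx))
    return [idx for _, idx in sorted(tagged)]
```

With weights 2.0 and 1.5 (hypothetical values for illustration), the heavier flow's packets are interleaved ahead of the lighter flow's, approximating the GPS fluid schedule.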
We study the computational aspects of weak saddles, an ordinal set-valued solution concept proposed by Shapley. F. Brandt et al. recently gave a polynomial-time algorithm for computing weak saddles in a subclass of matrix games, and showed that certain problems associated with weak saddles of bimatrix games are NP-hard. The important question of whether weak saddles can be found efficiently was left open. We answer this question in the negative by showing that finding weak saddles of bimatrix games is NP-hard, under polynomial-time Turing reductions. We moreover prove that recognizing weak saddles is coNP-complete, and that deciding whether a given action is contained in some weak saddle is hard for parallel access to NP and thus not even in NP unless the polynomial hierarchy collapses. Most of our hardness results are shown to carry over to a natural weakening of weak saddles.
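For small games, weak saddles can be found by brute force, which also makes the definitions concrete. The sketch below follows one common formalization of Shapley's weak generalized saddle points (our reading of the literature, hedged accordingly): a pair (X, Y) of nonempty action subsets such that every row outside X is weakly dominated on Y by some row in X, and symmetrically for columns under the column player's payoffs; a weak saddle is an inclusion-minimal such pair.

```python
from itertools import combinations

def is_wgs(A, B, X, Y):
    """(X, Y) is a weak generalized saddle point of the bimatrix game (A, B)."""
    m, n = len(A), len(A[0])
    for a in range(m):
        if a not in X and not any(
                all(A[x][y] >= A[a][y] for y in Y) for x in X):
            return False
    for b in range(n):
        if b not in Y and not any(
                all(B[x][y] >= B[x][b] for x in X) for y in Y):
            return False
    return True

def weak_saddles(A, B):
    """All inclusion-minimal weak generalized saddle points (exponential
    brute force; the paper shows no efficient algorithm is likely)."""
    m, n = len(A), len(A[0])
    cands = [(set(X), set(Y))
             for i in range(1, m + 1) for X in combinations(range(m), i)
             for j in range(1, n + 1) for Y in combinations(range(n), j)
             if is_wgs(A, B, set(X), set(Y))]
    return [(X, Y) for X, Y in cands
            if not any((P <= X and Q <= Y and (P, Q) != (X, Y))
                       for P, Q in cands)]
```

In a game where row 0 dominates row 1 and column 1 dominates column 0 (for the respective players), the unique weak saddle is the singleton pair of dominant actions.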
Bondarenko et al. have recently proposed an abstract framework for default reasoning. Besides capturing most existing formalisms and proving that their standard semantics all coincide, the framework extends these formalisms by generalising the semantics of admissible and preferred arguments, originally proposed for logic programming only. In this paper we analyse the computational complexity of credulous and sceptical reasoning under the semantics of admissible and preferred arguments for (the propositional variant of) the instances of the abstract framework capturing theorist, circumscription, logic programming, default logic, and autoepistemic logic. Although the new semantics have been tacitly assumed to mitigate the computational hardness of default reasoning under the standard semantics of stable extensions, we show that in many cases reasoning under the admissibility and preferability semantics is computationally harder than under the standard semantics. In particular, in the case of autoepistemic logic, sceptical reasoning under preferred arguments is located at the fourth level of the polynomial hierarchy, whereas the same form of reasoning under stable extensions is located at the second level. (C) 2002 Elsevier Science B.V. All rights reserved.
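The admissibility and preferability semantics can be illustrated in Dung-style abstract argumentation, the graph-based setting underlying these notions (a simplification of the assumption-based framework the paper actually studies): a set S of arguments is admissible if it is conflict-free and defends each of its members against every attacker, and a preferred extension is a maximal admissible set.

```python
from itertools import combinations

def admissible_sets(args, attacks):
    """All admissible sets of the framework (args, attacks), by brute force.
    attacks: list of (attacker, attacked) pairs."""
    atk = set(attacks)
    def conflict_free(S):
        return not any((a, b) in atk for a in S for b in S)
    def defends(S, a):
        # every attacker b of a is counter-attacked by some c in S
        return all(any((c, b) in atk for c in S)
                   for b in args if (b, a) in atk)
    result = []
    for r in range(len(args) + 1):
        for S in combinations(args, r):
            S = set(S)
            if conflict_free(S) and all(defends(S, a) for a in S):
                result.append(S)
    return result

def preferred_extensions(args, attacks):
    """Inclusion-maximal admissible sets."""
    adm = admissible_sets(args, attacks)
    return [S for S in adm if not any(S < T for T in adm)]
```

Credulous reasoning asks whether an argument belongs to some preferred extension; sceptical reasoning asks whether it belongs to all of them. The paper's point is that, at the level of the full default-reasoning instances, these questions climb the polynomial hierarchy rather than getting easier.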
The pooling problem is an extension of the minimum cost flow problem defined on a directed graph with three layers of nodes, where quality constraints are introduced at each terminal node. Flow entering the network at the source nodes has a given quality, at the internal nodes (pools) the entering flow is blended, and then sent to the terminal nodes where all entering flow streams are blended again. The resulting flow quality at the terminals has to satisfy given bounds. The objective is to find a cost-minimizing flow assignment that satisfies network capacities and the terminals' quality specifications. Recently, it was proved that the pooling problem is NP-hard, and that the hardness persists when the network has a unique pool. In contrast, instances with only one source or only one terminal can be formulated as compact linear programs, and thereby solved in polynomial time. In this work, it is proved that the pooling problem remains NP-hard even if there is only one quality constraint at each terminal. Further, it is proved that the NP-hardness also persists if the number of sources and the number of terminals are no more than two, and it is proved that the problem remains hard if all in-degrees or all out-degrees are at most two. Examples of special cases in which the problem is solvable by linear programming are also given. Finally, some open problems, which need to be addressed in order to identify more closely the borderlines between polynomially solvable and NP-hard variants of the pooling problem, are pointed out.
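The structure described above is commonly written as the so-called P-formulation. The sketch below uses our own notation (not necessarily this paper's): x_ij is flow from source i to pool j, y_jt is flow from pool j to terminal t, q_ik is the known quality of source i in attribute k, p_jk is the (decision-variable) quality at pool j, and Q_tk is the upper quality bound at terminal t.

```latex
\begin{align}
  \min_{x,\,y,\,p}\quad
    & \sum_{i,j} c_{ij}\, x_{ij} + \sum_{j,t} c_{jt}\, y_{jt} \\
  \text{s.t.}\quad
    & \sum_i x_{ij} = \sum_t y_{jt}
      && \text{(flow balance at each pool } j\text{)} \\
    & p_{jk} \sum_t y_{jt} = \sum_i q_{ik}\, x_{ij}
      && \text{(blending at pool } j\text{, attribute } k\text{)} \\
    & \sum_j p_{jk}\, y_{jt} \le Q_{tk} \sum_j y_{jt}
      && \text{(quality bound at terminal } t\text{)} \\
    & 0 \le x,\, y \le \text{arc capacities.}
\end{align}
```

The products p_jk * y_jt in the blending and quality constraints are bilinear, which makes the feasible region nonconvex; this is the structural source of the NP-hardness results above, and it disappears in the one-source or one-terminal cases, where the pool qualities are fixed or irrelevant and the model collapses to a linear program.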
In this work we study a special minimax problem where there are linear constraints that couple both the minimization and maximization decision variables. The problem is a generalization of the traditional saddle point problem (which does not have the coupling constraint), and it finds applications in wireless communication, game theory, and transportation, to name just a few. We show that the considered problem is challenging, in the sense that it violates the classical max-min inequality, and that it is NP-hard even under very strong assumptions (e.g., when the objective is strongly-convex-strongly-concave). We then develop a duality theory for it, and analyze conditions under which the duality gap becomes zero. Finally, we study a class of stationary solutions defined based on the dual problem, and evaluate their practical performance in an application on adversarial attacks on network flow problems.
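In our own notation (a sketch of the problem class, not the paper's exact statement), the problem reads:

```latex
\begin{align}
  \min_{x \in X}\ \max_{y \in Y}\quad & f(x, y)
  \qquad \text{s.t.}\quad A x + B y \le c ,
\end{align}
```

where the linear constraint couples x and y. For a fixed function over fixed sets, the classical max-min inequality $\sup_y \inf_x f \le \inf_x \sup_y f$ always holds; here, however, the inner player's feasible set $Y(x) = \{\, y \in Y : B y \le c - A x \,\}$ depends on the outer variable, so swapping min and max changes the feasible regions themselves and the usual relationship between the two problems can break down, which is the sense in which the inequality is violated.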
For a fixed pattern graph H, let H-CONTRACTIBILITY denote the problem of deciding whether a given input graph is contractible to H. This paper is part I of our study on the computational complexity of the H-CONTRACTIBILITY problem. We continue a line of research that was started in 1987 by Brouwer and Veldman, and we determine the computational complexity of the H-CONTRACTIBILITY problem for certain classes of pattern graphs. In particular, we pinpoint the complexity for all graphs H with five vertices except for two graphs, whose polynomial time algorithms are presented in part II. Interestingly, in all connected cases that are known to be polynomially solvable, the pattern graph H has a dominating vertex, whereas in all cases that are known to be NP-complete, the pattern graph H does not have a dominating vertex. (C) 2007 Wiley Periodicals, Inc.
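The H-CONTRACTIBILITY problem has a simple witness characterization that a brute-force checker makes concrete (a naive exponential sketch for tiny graphs, our own illustration; the paper is about when this can be done efficiently): G is contractible to H exactly when V(G) can be partitioned into connected "witness sets", one per vertex of H, whose quotient graph is H.

```python
from itertools import product

def contractible_to(G_vertices, G_edges, H_vertices, H_edges):
    """Brute force: search all maps phi: V(G) -> V(H); accept if every
    preimage induces a connected subgraph of G and contracting each
    preimage to a point yields exactly H."""
    GE = {frozenset(e) for e in G_edges}
    HE = {frozenset(e) for e in H_edges}

    def connected(S):
        S = set(S)
        if not S:
            return False
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in S:
                if v not in seen and frozenset((u, v)) in GE:
                    seen.add(v)
                    stack.append(v)
        return seen == S

    for phi in product(H_vertices, repeat=len(G_vertices)):
        if set(phi) != set(H_vertices):
            continue                      # phi must be surjective
        m = dict(zip(G_vertices, phi))
        if not all(connected([u for u in G_vertices if m[u] == h])
                   for h in H_vertices):
            continue
        quotient = {frozenset((m[u], m[v]))
                    for u, v in G_edges if m[u] != m[v]}
        if quotient == HE:
            return True
    return False
```

For example, the 4-cycle is contractible to the triangle (contract one edge), while the 4-vertex path is not: contracting edges of a path only ever yields shorter paths.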