Approximation schemes are commonly classified as being either a polynomial-time approximation scheme (ptas) or a fully polynomial-time approximation scheme (fptas). To properly differentiate between approximation schemes for concrete problems, several subclasses have been identified: (optimum-)asymptotic schemes (ptas(a), fptas(a)), efficient schemes (eptas), and size-asymptotic schemes. We explore the structure of these subclasses, their mutual relationships, and their connection to the classic approximation classes. We prove that several of the classes are in fact equivalent. Furthermore, we prove the equivalence of eptas to so-called convergent polynomial-time approximation schemes. The results are used to refine the hierarchy of polynomial-time approximation schemes considerably and demonstrate the central position of eptas among approximation schemes. We also present two ways to bridge the hardness gap between asymptotic approximation schemes and classic approximation schemes. First, using notions from fixed-parameter complexity theory, we provide new characterizations of when problems have a ptas or fptas. Simultaneously, we prove that a large class of problems (including all MAX-SNP-complete problems) cannot have an optimum-asymptotic approximation scheme unless P = NP, thus strengthening results of Arora et al. (J. ACM 45(3):501-555, 1998). Second, we identify a new property exhibited by many optimization problems: pumpability. With this notion, we considerably generalize several problem-specific approaches to improve the effectiveness of approximation schemes with asymptotic behavior.
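The dividing line between a ptas and an fptas is whether the running time is polynomial in 1/ε as well as in the input size. As a concrete illustration of that distinction (a textbook profit-scaling fptas for 0/1 knapsack, not material from the paper above):

```python
# Classic profit-scaling FPTAS for 0/1 knapsack: a sketch illustrating why
# "fully polynomial" means polynomial in both n and 1/eps.

def knapsack_fptas(profits, weights, capacity, eps):
    n = len(profits)
    mu = eps * max(profits) / n          # scaling factor
    scaled = [int(p // mu) for p in profits]
    # DP: minimum weight needed to achieve each scaled profit value
    P = sum(scaled)
    INF = float("inf")
    minw = [0] + [INF] * P
    for p, w in zip(scaled, weights):
        for q in range(P, p - 1, -1):    # iterate downwards: 0/1 items
            if minw[q - p] + w < minw[q]:
                minw[q] = minw[q - p] + w
    best = max(q for q in range(P + 1) if minw[q] <= capacity)
    return best * mu                     # within (1 - eps) of the optimum
```

The DP table has at most n^2/ε entries, so the running time is polynomial in both n and 1/ε, which is exactly the fptas requirement; a ptas need only run in polynomial time for each fixed ε.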
The method of abstraction is arguably one of the most important methods in modern science for analysing and understanding complex phenomena. In his book The Philosophy of Information, Floridi (The philosophy of information. Oxford University Press, Oxford, 2011) presents the method of levels of abstraction as the main method of the Philosophy of Information. His discussion of abstraction as a method seems inspired by the formal methods and frameworks of computer science, in which abstraction is operationalised extensively in programming languages and design methodologies. Is it really clear what we should understand by levels of abstraction? How should they be specified? We will argue that levels of abstraction should be augmented with annotations, in order to express semantic information for them and to reconcile the method of levels of abstraction (LoAs) with other approaches. We discuss the extended method when applied, e.g., to the analysis of abstract machines. This will lead to an example in which the number of LoAs is unbounded.
We determine tight bounds on the smallest-size integer grid needed to represent the n-node intersection graphs of a convex polygon P with P given in rational coordinates. The intersection graphs use only polygons that are geometrically similar to P (translates or homothets) and must be represented such that each corner of each polygon lies on a point of the grid. We show the following generic results: if P is a parallelogram and only translates of P are used, then a Θ(n^2) × Θ(n^2) grid is sufficient and is needed for some graphs; if P is any other convex polygon and only translates of P are used, then a 2^Θ(n) × 2^Θ(n) grid is sufficient and is needed for some graphs; if P is any convex polygon and arbitrary homothets of P are allowed, then a 2^Θ(n) × 2^Θ(n) grid is sufficient and is needed for some graphs. The results substantially improve earlier bounds and settle the complexity of representing convex polygon intersection graphs. The results also imply small polynomial certificates for the recognition problem for all graph classes considered.
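The parallelogram/translates case can be made concrete with axis-aligned unit squares (translates of a single parallelogram) whose corners lie on integer grid points: two closed unit squares intersect exactly when their lower-left corners differ by at most 1 in each coordinate. A toy sketch (illustrative only, not the paper's construction):

```python
# Intersection graph of axis-aligned unit squares with integer corners.
# Squares are translates of one parallelogram, so this is the quadratic-grid
# case of the abstract above.

def square_intersection_graph(positions):
    """positions: list of (x, y) integer lower-left corners of unit squares.
    Returns the edge set as pairs (i, j) with i < j."""
    edges = set()
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            # closed squares overlap iff corners differ by <= 1 per axis
            if i < j and abs(xi - xj) <= 1 and abs(yi - yj) <= 1:
                edges.add((i, j))
    return edges
```

For instance, squares at (0,0), (1,0) and (3,0) yield the single edge (0, 1).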
Shortcutting is the operation of adding edges to a network with the intent to decrease its diameter. We are interested in shortcutting networks while keeping degree increases in the network bounded, a problem first posed by Chung and Garey. Improving on a result of Bokhari and Raza, we show that, for any δ >= 1, every undirected graph G can be shortcut in linear time to a diameter of at most O(log_{1+δ} n) by adding no more than O(n/log_{1+δ} n) edges such that degree increases remain bounded by δ. The result extends an estimate due to Alon et al. for the unconstrained case. Degree increases can be limited to 1 at only a small extra cost. For strongly connected, bounded-degree directed graphs Flaxman and Frieze proved that, if εn random arcs are added, then the resulting graph has diameter O(ln n) with high probability. We prove that O(n/ln n) edges suffice to shortcut any strongly connected directed graph to a graph with diameter O(ln n) while keeping the degree increases bounded by O(1) per node. The result is proved in a stronger, parameterized form. For general directed graphs with stability number α, we show that all distances can be shortcut to O(α ⌈ln(n/α)⌉) by adding only 4n/ln(n/α) + αφ edges while keeping degree increases bounded by at most O(1) per node, where φ is equal to the so-called feedback-dimension of the graph. Finally, we prove bounds for various special classes of graphs, including graphs with Hamiltonian cycles or paths. Shortcutting with a degree constraint is proved to be strongly NP-complete and W[2]-hard, implying that the problem is neither likely to be fixed-parameter tractable nor efficiently approximable unless FPT = W[2]. (C) 2016 Elsevier B.V. All rights reserved.
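The diameter/degree trade-off is easy to see on a toy instance. The sketch below is not the paper's logarithmic-diameter construction; it merely shortcuts a path with one shortcut every `step` nodes, so no vertex gains more than two new edges while the diameter drops to roughly n/step + step:

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected undirected graph given as {node: set(neighbours)}."""
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(bfs(u) for u in adj)

def shortcut_path(n, step):
    """Path on n nodes, plus shortcut edges between every step-th node.
    Degree increase is at most 2 per node."""
    adj = {i: set() for i in range(n)}
    for i in range(n - 1):               # the path itself
        adj[i].add(i + 1); adj[i + 1].add(i)
    for i in range(0, n - step, step):   # the shortcuts
        adj[i].add(i + step); adj[i + step].add(i)
    return adj
```

With n = 100 and shortcuts every 10 nodes, the diameter drops from 99 to roughly 2√n, while maximum degree stays at 4; the constructions in the paper achieve logarithmic diameter under similarly bounded degree increases.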
We treat PNE-GG, the problem of deciding the existence of a Pure Nash Equilibrium in a graphical game, and the role of treewidth in this problem. PNE-GG is known to be NP-complete in general, but polynomially solvable for graphical games of bounded treewidth. We prove that PNE-GG is W[1]-hard when parameterized by treewidth. On the other hand, we give a dynamic programming approach that solves the problem in O*(α^w) time, where α is the cardinality of the largest strategy set and w is the treewidth of the input graph (and O* hides polynomial factors). This proves that PNE-GG is in FPT for the combined parameter (α, w). Moreover, we prove that there is no algorithm that solves PNE-GG in O*((α − ε)^w) time for any ε > 0, unless the Strong Exponential Time Hypothesis fails. Our lower bounds implicitly assume that α >= 3; we show that for α = 2 the problem can be solved in polynomial time. Finally, we discuss the implications for computing pure Nash equilibria in graphical games (PNE-GG) of O(log n) treewidth, the existence of polynomial kernels for PNE-GG parameterized by treewidth, and the construction of a sample and maximum-payoff pure Nash equilibrium.
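The defining feature of a graphical game is that each player's payoff depends only on its neighbours in the game graph. For intuition, here is a brute-force pure-equilibrium check (exponential in the number of players, unlike the treewidth-based dynamic programming of the paper; all names are illustrative):

```python
from itertools import product

def pure_nash_equilibria(players, strategies, payoff):
    """Enumerate all pure Nash equilibria by brute force.
    payoff(p, profile) is assumed to depend only on p's graph neighbourhood."""
    equilibria = []
    for choice in product(*(strategies[p] for p in players)):
        prof = dict(zip(players, choice))
        def improves(p):  # can player p gain by a unilateral deviation?
            cur = payoff(p, prof)
            return any(payoff(p, {**prof, p: s}) > cur
                       for s in strategies[p] if s != prof[p])
        if not any(improves(p) for p in players):
            equilibria.append(prof)
    return equilibria

# Coordination game on the path 0 - 1 - 2: a player earns 1 per matching neighbour.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
strategies = {p: ["A", "B"] for p in neighbours}
payoff = lambda p, prof: sum(prof[q] == prof[p] for q in neighbours[p])
```

The three-player coordination game on a path has exactly two pure Nash equilibria, all-A and all-B.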
We develop a model of computation as an unbounded process, measuring complexity by the number of observed behavioural changes during the computation. In a natural way, the model brings effective unbounded computation up to the second level of the Arithmetical Hierarchy, unifying several earlier concepts like trial-and-error predicates and relativistic computing. The roots of the model can be traced back to the circular a-machines already distinguished by Turing in 1936. The model allows one to introduce nondeterministic unbounded computations and to formulate an analogue of the P-versus-NP question. We show that under reasonable assumptions, the resource-bounded versions of deterministic and nondeterministic unbounded computation have equal computational power but that in general, the corresponding complexity classes are different (P-mind ⊊ NP-mind). (C) 2011 Elsevier B.V. All rights reserved.
Question-answering systems like Watson beat humans when it comes to processing speed and memory. But what happens if we compensate for this? What are the fundamental differences in power between human and artificial agents in question answering? We explore this issue by defining new computational models for both agents and comparing their computational efficiency in interactive sessions. Concretely, human agents are modeled by means of cognitive automata, augmented with a form of background intelligence which gives the automata the possibility to query a given Turing machine and use the answers from one interaction to the next. On the other hand, artificial question-answering agents are modeled by QA-machines, which are Turing machines that can access a predefined, potentially infinite knowledge base ('advice') and have a bounded amount of learning space at their disposal. We show that cognitive automata and QA-machines have exactly the same potential in realizing question-answering sessions, provided the resource bounds in one model are sufficient to match the abilities of the other. In particular, polynomially bounded cognitive automata with background intelligence (i.e. human agents) prove to be equivalent to polynomially bounded QA-machines with logarithmic learning space. This generalizes Pippenger's theorem on the computational power of switching circuits (without background intelligence) to a foundational result for question answering in cognitive science. The framework reveals why QA-machines have a fundamental advantage. (C) 2018 Elsevier B.V. All rights reserved.
We resolve an old problem, namely to design a 'natural' machine model for accepting the complements of recursively enumerable languages. The model we present is based on non-deterministic Turing machines with 'one-sided' advice. We prove that these machines precisely accept the co-RE languages, without restriction on the advice functions that are used. We show that for accepting a co-RE language, one-sided advice is as powerful as arbitrary advice, but also that linearly bounded one-sided advice is sufficient. However, 'long' sublinear advice can make Turing machines with one-sided advice accept more co-RE languages than 'short' sublinear advice. We prove that infinite proper hierarchies of language classes inside co-RE can be devised, using suitable increasing sequences of bounding functions for the allowed advice. The machine model and the results dualize for the family of recursively enumerable languages.
We consider the multiprocessor scheduling problem in which one must schedule n independent tasks nonpreemptively on m identical, parallel machines, such that the completion time of the last task is minimal. For this well-studied problem the Largest Differencing Method of Karmarkar and Karp outperforms other existing polynomial-time approximation algorithms from an average-case perspective. For m >= 3 the worst-case performance of the Largest Differencing Method has remained a challenging open problem. In this paper we show that the worst-case performance ratio is bounded between 4/3 - 1/(3(m-1)) and 4/3 - 1/(3m). For fixed m we establish further refined bounds in terms of n.
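The Largest Differencing Method itself is easy to state: repeatedly take the two partial schedules whose loads are most unbalanced and merge them, pairing the largest loads of one with the smallest loads of the other. A compact sketch of this standard formulation of Karmarkar-Karp differencing (not code from the paper):

```python
import heapq

def ldm_makespan(jobs, m):
    """Largest Differencing Method for scheduling jobs on m identical machines.
    Returns the makespan (completion time of the last task) of the schedule."""
    # Heap entries: (-(max load - min load), load vector sorted ascending).
    heap = [(-t, [0] * (m - 1) + [t]) for t in jobs]
    heapq.heapify(heap)
    while len(heap) > 1:
        _, a = heapq.heappop(heap)       # two most unbalanced partial schedules
        _, b = heapq.heappop(heap)
        # pair smallest loads of a with largest loads of b, machine by machine
        merged = sorted(x + y for x, y in zip(a, reversed(b)))
        heapq.heappush(heap, (-(merged[-1] - merged[0]), merged))
    return heap[0][1][-1]
```

On jobs {4, 5, 6, 7, 8} with m = 2, LDM produces loads 14 and 16 (makespan 16) while the optimum is 15, a ratio of 16/15, comfortably below the bound 4/3 - 1/(3m) = 7/6.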
ISBN: 9783642184680 (print)
Geometric intersection graphs are graphs determined by intersections of geometric objects. We study the complexity of visualizing the arrangements of objects that induce such graphs. We give a general framework for describing geometric intersection graphs, using arbitrary finite base sets of rationally given convex polygons and affine transformations. We prove that for every class of intersection graphs that fits the framework, the graphs in the class have a representation using polynomially many bits. Consequently, the recognition problem of these classes is in NP (and thus NP-complete). We also give an algorithm to find a drawing of the objects in the plane, if a graph class fits the framework.
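A standard primitive underlying any such intersection graph is deciding whether two rationally given convex polygons intersect; the separating-axis theorem says two convex shapes are disjoint iff the projections onto some edge normal are disjoint. A hedged sketch of that test (illustrative; the paper's framework additionally handles affine transformations and bounds representation size):

```python
# Separating-axis intersection test for convex polygons, and the induced
# intersection graph of a set of polygons.

def _project(poly, axis):
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_intersect(p, q):
    """p, q: lists of (x, y) vertices in boundary order. True iff they overlap."""
    for poly in (p, q):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)          # normal to this edge
            amin, amax = _project(p, axis)
            bmin, bmax = _project(q, axis)
            if amax < bmin or bmax < amin:     # separating axis found
                return False
    return True

def intersection_graph(polys):
    return {(i, j) for i in range(len(polys)) for j in range(i + 1, len(polys))
            if convex_intersect(polys[i], polys[j])}
```

For unit squares at (0,0), (0.5,0) and (3,0), only the first pair overlaps, giving the single edge (0, 1).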