Given a set X of species, a phylogenetic tree is an unrooted binary tree whose leaves are bijectively labelled by X. Such trees can be used to show the way species evolve over time. One way of understanding how topologically different two phylogenetic trees are is to construct a minimum-size agreement forest: a partition of X into the smallest number of blocks, such that the blocks induce homeomorphic, non-overlapping subtrees in both trees. This is called a maximum agreement forest. This comparison yields insight into commonalities and differences in the evolution of X across the two trees. Computing a maximum agreement forest is NP-hard [8]. In this work we study the problem on caterpillars, which are path-like phylogenetic trees. We demonstrate that, even restricted to this highly structured subclass, the problem remains NP-hard and is in fact APX-hard. Furthermore, we show that for caterpillars two standard reduction rules well known in the literature yield a tight kernel of size at most 7k, compared to 15k for general trees [11]. Finally, we demonstrate that we can determine whether two caterpillars have an agreement forest with at most k blocks in O*(2.49^k) time, compared to O*(3^k) for general trees [4], where O*(·) suppresses polynomial factors.
Phylogenetic trees are used to model evolution: leaves are labelled to represent contemporary species ("taxa"), and interior vertices represent extinct ancestors. Informally, convex characters are measurements on the contemporary species in which the subset of species (both contemporary and extinct) that share a given state form a connected subtree. Kelk and Stamoulis [Adv. Appl. Math., 84 (2017), pp. 34–46] showed how to efficiently count, list, and sample certain restricted subfamilies of convex characters, and algorithmic applications were given. We continue this work in a number of directions. First, we show how combining the enumeration of convex characters with existing parameterized algorithms can be used to speed up exponential-time algorithms for the maximum agreement forest problem in phylogenetics. Second, we revisit the quantity g_2(T), defined as the number of convex characters on T in which each state appears on at least 2 taxa. We use this to give an algorithm with running time O(φ^n · poly(n)) for computing the maximum parsimony distance on two-state characters, where φ ≈ 1.6181 is the golden ratio and n is the number of taxa in the input trees. By further restricting the characters counted by g_2(T) we open an interesting bridge to the literature on enumeration of matchings. By crossing this bridge we improve the running time of the aforementioned parsimony distance algorithm to O(1.5895^n · poly(n)) and obtain a number of new results relevant in themselves to the enumeration of matchings in at-most-binary trees.
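The bridge to matching enumeration rests on a classic tree dynamic program. As an illustration only (this is not the paper's algorithm, and the function name is our own), a minimal sketch that counts all matchings of a tree in linear time:

```python
# Count matchings (sets of pairwise non-adjacent edges, including the
# empty matching) in a tree given as an adjacency dict, via the classic
# two-state DP: for each subtree track matchings leaving the root free
# versus all matchings of the subtree.
def count_matchings(adj, root=0):
    def dp(v, parent):
        free, total = 1, 1  # free: v unmatched; total: all matchings
        for u in adj[v]:
            if u == parent:
                continue
            u_free, u_total = dp(u, v)
            new_free = free * u_total                 # edge (v,u) unused
            new_total = total * u_total + free * u_free  # or match v with u
            free, total = new_free, new_total
        return free, total
    return dp(root, -1)[1]
```

On a path with three vertices this counts the empty matching plus each single edge; on paths the counts follow the Fibonacci numbers.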
In the Boolean maximum constraint satisfaction problem, Max CSP(Γ), one is given a collection of weighted applications of constraints from a finite constraint language Γ, over a common set of variables, and the goal is to assign Boolean values to the variables so that the total weight of satisfied constraints is maximized. There exists a concise dichotomy theorem providing a criterion on Γ for the problem to be polynomial-time solvable and stating that otherwise it becomes NP-hard. We study the NP-hard cases through the lens of kernelization and provide a complete characterization of Max CSP(Γ) with respect to the optimal compression size. Namely, we prove that Max CSP(Γ) parameterized by the number of variables n is either polynomial-time solvable, or there exists an integer d ≥ 2 depending on Γ, such that: (1) an instance of Max CSP(Γ) can be compressed into an equivalent instance with O(n^d log n) bits in polynomial time; (2) Max CSP(Γ) does not admit such a compression to O(n^(d−ε)) bits unless NP ⊆ coNP/poly. Our reductions are based on interpreting constraints as multilinear polynomials combined with the framework of "constraint implementations", formerly used in the context of APX-hardness. As another application of our reductions, we reveal tight connections between optimal running times for solving Max CSPs. More precisely, we show that obtaining a running time of the form O(2^((1−ε)n)) for particular classes of Max CSPs is as hard as breaching this barrier for Max d-SAT for some d.
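The key tool mentioned above, viewing constraints as multilinear polynomials over {0,1}, can be illustrated on two-variable constraints. A toy sketch (function names are ours, and this is not the paper's construction):

```python
# Every Boolean constraint has a unique multilinear polynomial over
# {0,1} that agrees with its truth table, e.g.:
#   OR(x, y)  = x + y - x*y
#   AND(x, y) = x*y
def or_poly(x, y):
    return x + y - x * y

def and_poly(x, y):
    return x * y

# Verify both polynomials match the truth tables on all Boolean inputs.
for x in (0, 1):
    for y in (0, 1):
        assert or_poly(x, y) == int(bool(x) or bool(y))
        assert and_poly(x, y) == int(bool(x) and bool(y))
```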
ISBN (print): 9783959773096
In this paper, we give a framework to design exponential-time approximation schemes for basic graph partitioning problems such as k-WAY CUT, MULTIWAY CUT, STEINER k-CUT and MULTICUT, where the goal is to minimize the number of edges going across the parts. Our motivation to focus on approximation schemes for these problems comes from the fact that while it is possible to solve them exactly in 2^n · n^O(1) time (note that this is already faster than brute-forcing over all partitions or edge sets), it is not known whether one can do better. Using our framework, we design the first (1+ε)-approximation algorithms for the above problems that run in time 2^(f(ε)n) · n^O(1) (for f(ε) < 1). As part of our framework, we present two compression procedures. The first of these is a "lossless" procedure, which is inspired by the seminal randomized contraction algorithm for Global Min-Cut of Karger [SODA '93]. Here, we reduce the graph to an equivalent instance where the total number of edges is linearly bounded in the number of edges in an optimal solution of the original instance. Following this, we show how a careful combination of greedy choices and the best exact algorithm for the respective problems can exploit this structure and lead to our approximation schemes. Our first compression procedure bounds the number of edges linearly in the optimal solution, but this could still leave a dense graph, as the solution size could be superlinear in the number of vertices. However, for several problems it is known that they admit significantly faster algorithms on instances where the solution size is linear in the number of vertices, in contrast to general instances. Hence, a natural question arises here: could one reduce the solution size to linear in the number of vertices, at least when we are willing to settle for a near-optimal solution, so that the aforementioned faster algorithms could be exploited? In the second compression procedure, using cut sparsifiers
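The "lossless" procedure is inspired by Karger's randomized contraction. A minimal, self-contained sketch of that underlying contraction idea for Global Min-Cut (function names and the trial count are our own choices; this is an illustration of Karger's algorithm, not the paper's procedure):

```python
import random

# Karger-style randomized contraction: repeatedly contract a uniformly
# random edge (tracked with union-find) until two super-vertices remain;
# the surviving crossing edges form a cut, which is minimum with
# probability >= 2/(n(n-1)) per trial, so we repeat and take the best.
def contract_once(edges, n, rng):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    remaining = n
    pool = list(edges)                     # pool holds only crossing edges
    while remaining > 2:
        u, v = rng.choice(pool)
        parent[find(u)] = find(v)          # contract the chosen edge
        remaining -= 1
        pool = [(a, b) for (a, b) in pool if find(a) != find(b)]
    return sum(1 for (a, b) in edges if find(a) != find(b))

def min_cut(edges, n, trials=300, seed=0):
    rng = random.Random(seed)
    return min(contract_once(edges, n, rng) for _ in range(trials))
```

Keeping parallel edges in the pool after contraction is essential: it makes dense super-vertex pairs proportionally more likely to be contracted, which is what the success probability analysis relies on.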
This paper focuses on the Sort & Search method for solving a particular class of single-criterion optimization problems called multiple constraints problems. This general method makes it possible to derive exponential-time algorithms for NP-hard optimization problems. In this paper, the method is extended to the class of multicriteria multiple constraints problems, for which the set of Pareto optima is enumerated. The application of this new method to some multicriteria scheduling problems is then proposed, leading to new exponential-time algorithms for those problems.
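The enumeration target, the Pareto front, can be made concrete with a naive filter over a finite set of cost vectors (quadratic time, for illustration only; the paper's method avoids this brute force):

```python
# Pareto front of a finite set of cost vectors under minimization:
# keep a point iff no other point is <= it componentwise and strictly
# smaller in at least one coordinate.
def pareto_front(points):
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the bicriteria costs (1,5), (2,2), (5,1), (3,3), the point (3,3) is dominated by (2,2) and is filtered out.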
Given a 1.5-dimensional terrain T, also known as an x-monotone polygonal chain, the Terrain Guarding problem seeks a set of points of minimum size on T that guards all of the points on T. Here, we say that a point p guards a point q if no point of the line segment pq is strictly below T. The Terrain Guarding problem has been extensively studied for over 20 years. In 2005 it was already established that this problem admits a constant-factor approximation algorithm (SODA 2005). However, only in 2010 did King and Krohn (SODA 2010) finally show that Terrain Guarding is NP-hard. In spite of the remarkable developments in approximation algorithms for Terrain Guarding, next to nothing is known about its parameterized complexity. In particular, the most intriguing open questions in this direction ask whether, parameterized by the size k of a solution guard set, it admits a subexponential-time algorithm and whether it is fixed-parameter tractable. In this article, we answer the first question affirmatively by developing an n^(O(√k))-time algorithm for both Discrete Terrain Guarding and Continuous Terrain Guarding. We also make non-trivial progress with respect to the second question: we show that Discrete Orthogonal Terrain Guarding, a well-studied special case of Terrain Guarding, is fixed-parameter tractable.
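In the discrete variant the guarding predicate reduces to a linear scan. A minimal sketch (our own code, assuming the terrain is given as vertices with strictly increasing x and that both query points are terrain vertices; then the segment and terrain are both piecewise linear, so it suffices to compare heights at the terrain vertices in between):

```python
# Vertex i guards vertex j on a 1.5D terrain iff the segment between
# them is nowhere strictly below the terrain. Check the segment's
# height against every terrain vertex strictly between the endpoints.
def guards(terrain, i, j):
    (x1, y1), (x2, y2) = sorted((terrain[i], terrain[j]))
    for (x, y) in terrain:
        if x1 < x < x2:
            seg_y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
            if seg_y < y:          # segment dips strictly below terrain
                return False
    return True
```

On the zigzag terrain (0,0), (1,2), (2,0), (3,2), (4,0), the two peaks see each other, but the leftmost and rightmost valley vertices do not.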
We study feedback vertex sets (FVS) in tournaments, which are orientations of complete graphs. As our main result, we show that any tournament on n nodes has at most 1.5949^n minimal FVS. This significantly improves the previously best upper bounds of 1.6667^n by Fomin et al. [STOC 2016] and 1.6740^n by Gaspers and Mnich [J. Graph Theory 72(1):72–89, 2013]. Our new upper bound almost matches the best-known lower bound of 21^(n/7) ≈ 1.5448^n, due to Gaspers and Mnich. Our proof is algorithmic and shows that all minimal FVS of tournaments can be enumerated in time O(1.5949^n).
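For intuition, the objects being counted can be enumerated by brute force on a small tournament, using the fact that a tournament is acyclic iff it contains no directed triangle (a toy sketch, not the paper's O(1.5949^n) algorithm; names are ours):

```python
from itertools import combinations

# beats[a][b] is True iff the tournament edge goes a -> b.
# A tournament is acyclic iff it has no directed 3-cycle, which gives
# a cheap acyclicity test after deleting a vertex set.
def is_acyclic(n, beats, removed):
    verts = [v for v in range(n) if v not in removed]
    return not any(beats[a][b] and beats[b][c] and beats[c][a]
                   for a in verts for b in verts for c in verts)

# S is a minimal FVS iff deleting S leaves an acyclic tournament and
# no single vertex of S can be returned (valid because the FVS property
# is monotone under adding vertices).
def minimal_fvs(n, beats):
    sols = []
    for k in range(n + 1):
        for s in combinations(range(n), k):
            s = set(s)
            if is_acyclic(n, beats, s) and \
               all(not is_acyclic(n, beats, s - {v}) for v in s):
                sols.append(frozenset(s))
    return sols
```

On the directed 3-cycle tournament, every single vertex is a minimal FVS, giving three minimal FVS in total.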
A dominating set in a graph is a subset of vertices such that each vertex is either in the dominating set or adjacent to some vertex in the dominating set. It is known that graphs have at most O(1.7159^n) minimal dominating sets. In this paper, we establish upper bounds on this maximum number of minimal dominating sets for split graphs, cobipartite graphs and interval graphs. For each of these graph classes, we provide an algorithm to enumerate them. For split and interval graphs, we show that the number of minimal dominating sets is at most 3^(n/3) ≈ 1.4423^n, which is the best possible bound. This settles a conjecture by Couturier et al. (SOFSEM 2012) [11]. For cobipartite graphs, we lower the O(1.5875^n) upper bound from Couturier et al. to O(1.4511^n). (C) 2014 Elsevier B.V. All rights reserved.
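The counted objects can again be made concrete with a brute-force enumerator (exponential time, for illustration only; not the paper's class-specific algorithms):

```python
from itertools import combinations

# D dominates iff every vertex is in D or has a neighbour in D.
def dominates(adj, d):
    return all(v in d or any(u in d for u in adj[v]) for v in adj)

# D is a minimal dominating set iff D dominates and no single vertex
# can be removed (valid because domination is monotone under adding
# vertices).
def minimal_dominating_sets(adj):
    verts = list(adj)
    sols = []
    for k in range(1, len(verts) + 1):
        for s in combinations(verts, k):
            s = set(s)
            if dominates(adj, s) and \
               all(not dominates(adj, s - {v}) for v in s):
                sols.append(frozenset(s))
    return sols
```

On the path 0-1-2, the minimal dominating sets are {1} and {0, 2}; note that {0, 1} dominates but is not minimal, since {1} already does.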
In this paper, we focus on the Sort & Search method initially proposed by Horowitz and Sahni (1974) [6] to solve the knapsack problem, which has already shown its applicability for deriving exponential-time algorithms for some scheduling problems. We propose an extension of this method to a general class of problems called Multiple Constraint Problems and show that the extended Sort & Search method enables one to derive new exponential-time algorithms, with O*(m^(n/2)) worst-case complexity, for two scheduling problems. (C) 2013 Elsevier B.V. All rights reserved.
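The underlying Horowitz–Sahni idea is easiest to see on Subset Sum. A minimal meet-in-the-middle sketch (function names are our own; this illustrates the original 1974 technique, not the paper's extension):

```python
from bisect import bisect_left

# Enumerate all 2^|items| subset sums of a list of items.
def subset_sums(items):
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

# Sort & Search: split the items in two halves, enumerate the 2^(n/2)
# subset sums of each, sort one side, and binary-search the complement
# of each left-side sum -- O*(2^(n/2)) instead of O*(2^n).
def has_subset_with_sum(items, target):
    half = len(items) // 2
    left = subset_sums(items[:half])
    right = sorted(subset_sums(items[half:]))
    for s in left:
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False
```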
We present an exact algorithm for a PSPACE-complete problem, denoted CONN-kSAT, which asks whether the solution space of a given k-CNF formula is connected on the n-dimensional hypercube. The problem is known to be PSPACE-complete for k ≥ 3, and polynomially solvable for k ≤ 2 (Gopalan et al., 2009) [6]. We show that CONN-kSAT for k ≥ 3 is solvable in time O((2 − ε_k)^n) for some constant ε_k > 0, where ε_k depends only on k, not on n. This result is interesting in light of the following fact shown by Calabro [5]: QBF-3-SAT, a typical PSPACE-complete problem, is not solvable in time O((2 − ε)^n) for any constant ε > 0, provided that the SAT problem (with no restriction on the clause length) is not solvable in time O((2 − ε)^n) for any constant ε > 0. (C) 2011 Elsevier B.V. All rights reserved.
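The connectivity question itself is easy to state algorithmically. A brute-force sketch for small n (not the paper's O((2 − ε_k)^n) algorithm; names and the DIMACS-style clause encoding are our choices):

```python
from itertools import product

# Clauses are lists of non-zero ints, DIMACS-style: literal v means
# variable v is true, -v means variable v is false.
def solutions(clauses, n):
    sols = []
    for bits in product((False, True), repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            sols.append(bits)
    return sols

# The solution space is connected iff the satisfying assignments form a
# connected subgraph of the hypercube, where assignments are adjacent
# iff they differ in exactly one bit; check by graph search.
def solution_space_connected(clauses, n):
    sols = set(solutions(clauses, n))
    if len(sols) <= 1:
        return True
    start = next(iter(sols))
    seen, stack = {start}, [start]
    while stack:
        cur = stack.pop()
        for i in range(n):
            nb = cur[:i] + (not cur[i],) + cur[i + 1:]
            if nb in sols and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(sols)
```

For example, (x1 ∨ x2) has the connected solution space {TT, TF, FT}, while (x1 ∨ x2) ∧ (¬x1 ∨ ¬x2) leaves only {TF, FT}, two assignments at Hamming distance 2, hence disconnected.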