In phylogenetics, a central problem is to infer the evolutionary relationships between a set of species X; these relationships are often depicted via a phylogenetic tree, that is, a tree whose leaves are labeled bijectively by the elements of X and which has no degree-2 nodes, called the "species tree." One common approach for reconstructing a species tree consists in first constructing several phylogenetic trees from primary data (e.g., DNA sequences originating from some species in X), and then constructing a single phylogenetic tree maximizing the "concordance" with the input trees. The obtained tree is our estimation of the species tree and, when the input trees are defined on overlapping, but not identical, sets of labels, is called a "supertree." In this paper, we focus on two problems that are central when combining phylogenetic trees into a supertree: the compatibility and the strict compatibility problems for unrooted phylogenetic trees. These problems are strongly related, respectively, to the notions of "containing as a minor" and "containing as a topological minor" from graph theory. Both problems are known to be fixed-parameter tractable in the number of input trees k, by using their expressibility in monadic second-order logic and a reduction to graphs of bounded treewidth. Motivated by the fact that the dependency on k of these algorithms is prohibitively large, we give the first explicit dynamic programming algorithms for solving these problems, both running in time 2^{O(k^2)} · n, where n is the total size of the input.
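To make the underlying tree operation concrete, here is a minimal Python sketch, not taken from the paper, of restricting an unrooted, leaf-labeled tree to a subset of its labels and suppressing the resulting degree-2 nodes; roughly speaking, a supertree is compatible with an input tree when this restriction recovers the input tree's topology. The adjacency-dictionary representation and the helper name restrict are illustrative assumptions.

```python
def restrict(adj, labels, keep):
    """adj: {node: set of neighbours}, labels: {leaf: label}, keep: set of labels.
    Returns the adjacency of the tree restricted to the selected labels."""
    adj = {v: set(nb) for v, nb in adj.items()}
    # 1. Keep only the minimal subtree spanning the selected leaves:
    #    repeatedly prune leaves whose label is not selected.
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 1 and labels.get(v) not in keep:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    # 2. Suppress degree-2 nodes (they carry no topological information).
    for v in list(adj):
        if v in adj and len(adj[v]) == 2:
            a, b = adj[v]
            adj[a].discard(v); adj[b].discard(v)
            adj[a].add(b); adj[b].add(a)
            del adj[v]
    return adj

# Quartet ab|cd on leaves a, b, c, d with internal nodes x, y:
adj = {'a': {'x'}, 'b': {'x'}, 'c': {'y'}, 'd': {'y'},
       'x': {'a', 'b', 'y'}, 'y': {'c', 'd', 'x'}}
labels = {v: v for v in 'abcd'}
print(restrict(adj, labels, {'a', 'b', 'c'}))  # star on a, b, c centered at x
```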
Given a directed graph G and a set R of vertex pairs, the MINIMUM CERTIFICATE DISPERSAL problem asks for an assignment of arcs to vertices ("terminals") such that, for each (u, v) ∈ R, a u-v path can be constructed using only arcs assigned to u or v. Herein, the total number k of assignments should be minimal. The problem is well motivated by key-exchange protocols for asymmetric cryptography. We provide a first parameterized complexity analysis of this NP-hard problem and of its variant CHAINED MINIMUM CERTIFICATE DISPERSAL, where, instead of pairs of terminals, a set of paths ("chains") to be constructed is prescribed. Although polynomial-time solvable for constant values of k, the former variant appears to be much harder: we prove that it is W[1]-hard with respect to k, whereas CHAINED MINIMUM CERTIFICATE DISPERSAL admits a polynomial-size problem kernel. We even show fixed-parameter tractability of the latter with respect to the stronger parameter "number t of terminals". In particular, while there is no polynomial-size kernel with respect to t, the problem admits a polynomial-size Turing kernel. Towards answering the question of whether MINIMUM CERTIFICATE DISPERSAL can be solved in polynomial time when t is constant, we show polynomial-time solvability for at most two requests (comprising all instances with t ≤ 2) using an algorithm for the STRONG DISTANCE problem, which asks for a minimum-size subdigraph in which two given vertices are strongly connected. Finally, we emphasize the hardness of MINIMUM CERTIFICATE DISPERSAL by proving it NP-hard for very restricted sets of instances, thereby excluding many parameters and combinations thereof from consideration for efficient multivariate algorithms.
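As an illustration of the problem's defining condition, the following sketch checks whether a given assignment of arcs to terminals satisfies a set of requests; the dictionary-based representation and the function name dispersal_ok are assumptions for illustration, not taken from the paper.

```python
from collections import deque

def dispersal_ok(assign, requests):
    """assign: {vertex: set of arcs (a, b)} stored at that vertex; requests: pairs (u, v).
    Returns True iff every request (u, v) has a u->v path that uses only arcs
    assigned to u or to v. The cost k of the assignment is the total number of
    stored arcs, i.e. sum(len(s) for s in assign.values())."""
    for u, v in requests:
        allowed = assign.get(u, set()) | assign.get(v, set())
        succ = {}
        for a, b in allowed:
            succ.setdefault(a, []).append(b)
        seen, queue = {u}, deque([u])          # BFS restricted to the allowed arcs
        while queue and v not in seen:
            for y in succ.get(queue.popleft(), []):
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        if v not in seen:
            return False
    return True

# One arc (u -> v) assigned to u certifies the single request (u, v), so k = 1.
print(dispersal_ok({'u': {('u', 'v')}}, [('u', 'v')]))  # True
```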
In this paper we study Domination Games, a class of games introduced by Fomin, Kratsch, and Muller in [8]. Domination games are a variant of the well-known graph searching games (also called cops-and-robber games), in which a number of cops try to capture a robber hiding on the vertices of a graph. Variants of these games are often used to provide a game-theoretic characterization of important graph parameters such as pathwidth, treewidth, and hypertreewidth. We are primarily interested in questions concerning the complexity and monotonicity of these games. We show that domination games are computationally much harder than standard cops-and-robber games and establish strong non-monotonicity results for various notions of monotonicity that arise naturally in the context of domination games. Answering a question of [8], we show that there are graphs on which the shortest winning strategy for a minimal number of cops must necessarily be of exponential length.
The generalized function matching (GFM) problem has been intensively studied starting with Ehrenfeucht and Rozenberg (Inf Process Lett 9(2):86-88, 1979). Given a pattern p and a text t, the goal is to find a mapping from the letters of p to non-empty substrings of t such that applying the mapping to p results in t. Very recently, the problem has been investigated within the framework of parameterized complexity (Fernau et al. in FSTTCS, 2013). In this paper we study the parameterized complexity of the optimization variant of GFM (called Max-GFM), which was introduced by Amir and Nor (J Discrete Algorithms 5(3):514-523, 2007). Here, one is allowed to replace some of the pattern letters with special symbols "?", termed wildcards or don't cares, which can be mapped to an arbitrary substring of the text. The goal is to minimize the number of wildcards used. We give a complete classification of the parameterized complexity of Max-GFM and its variants under a wide range of parameterizations, such as the number of occurrences of a letter in the text, the size of the text alphabet, the number of occurrences of a letter in the pattern, the size of the pattern alphabet, the maximum length of a string matched to any pattern letter, the number of wildcards, and the maximum size of a string that a wildcard can be mapped to.
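To make the matching condition concrete, here is a brute-force sketch of plain GFM (without wildcards): it searches for a mapping from pattern letters to non-empty substrings whose application reproduces the text. The function name and representation are illustrative assumptions; the worst-case running time is exponential, which is precisely why the parameterized analysis is of interest.

```python
def gfm_match(pattern, text):
    """Return a mapping {letter: substring} such that replacing every letter of
    `pattern` by its image yields `text`, or None if no such mapping exists.
    Images must be non-empty; the same letter always gets the same image."""
    def rec(i, j, mapping):
        if i == len(pattern):
            return dict(mapping) if j == len(text) else None
        c = pattern[i]
        if c in mapping:                       # letter already bound: image must reoccur verbatim
            img = mapping[c]
            if text.startswith(img, j):
                return rec(i + 1, j + len(img), mapping)
            return None
        for end in range(j + 1, len(text) + 1):  # try every non-empty image for c
            mapping[c] = text[j:end]
            result = rec(i + 1, end, mapping)
            if result is not None:
                return result
            del mapping[c]
        return None
    return rec(0, 0, {})

print(gfm_match("aba", "xyzxy"))  # {'a': 'xy', 'b': 'z'}
```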
We study the computational problem of checking whether a quantified conjunctive query (a first-order sentence built using only conjunction as its Boolean connective) is true in a finite poset (a reflexive, antisymmetric, and transitive directed graph). We prove that the problem is already NP-hard on a certain fixed poset, and we investigate structural properties of posets that yield fixed-parameter tractability when the problem is parameterized by the query. Our main algorithmic result is that model checking quantified conjunctive queries on posets is fixed-parameter tractable when parameterized by the sentence and the width of the poset (the maximum size of a subset of pairwise incomparable elements). We complement our algorithmic result with complexity results with respect to classes of finite posets in a hierarchy of natural poset invariants, establishing its tightness in this sense.
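The width parameter from the algorithmic result can be illustrated with a small sketch; the representation of a poset as a set of (a, b) pairs with a ≤ b and the brute-force search over subsets are assumptions for illustration only (Dilworth's theorem gives a polynomial-time algorithm via bipartite matching).

```python
from itertools import combinations

def poset_width(elements, leq):
    """elements: list of elements; leq: set of pairs (a, b) meaning a <= b
    (reflexive, antisymmetric, transitive). Returns the width: the maximum size
    of an antichain, i.e. a set of pairwise incomparable elements."""
    def incomparable(a, b):
        return (a, b) not in leq and (b, a) not in leq
    for size in range(len(elements), 0, -1):
        for subset in combinations(elements, size):
            if all(incomparable(a, b) for a, b in combinations(subset, 2)):
                return size
    return 0

# Example: the antichain {a, b} below a common top element t has width 2.
elems = ['a', 'b', 't']
leq = {(x, x) for x in elems} | {('a', 't'), ('b', 't')}
print(poset_width(elems, leq))  # 2
```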
The boxicity of a graph G is the least integer d such that G has an intersection model of axis-aligned d-dimensional boxes. Boxicity, the problem of deciding whether a given graph G has boxicity at most d, is NP-complete for every fixed d ≥ 2. We show that Boxicity is fixed-parameter tractable when parameterized by the cluster vertex deletion number of the input graph. This generalizes the result of Adiga et al. (2010) that Boxicity is fixed-parameter tractable in the vertex cover number. Moreover, we show that Boxicity admits an additive 1-approximation when parameterized by the pathwidth of the input graph. Finally, we provide evidence in favor of a conjecture of Adiga et al. (2010) that Boxicity remains NP-complete even on graphs of constant treewidth.
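A d-dimensional box intersection model is a certificate that the boxicity is at most d; the following sketch, with an illustrative representation not taken from the paper, verifies such a certificate by checking that two boxes intersect exactly when the corresponding vertices are adjacent.

```python
from itertools import combinations

def is_box_model(edges, boxes):
    """edges: set of frozensets {u, v}; boxes: {vertex: list of (lo, hi) intervals,
    one per dimension}. Returns True iff boxes of u and v intersect exactly when
    {u, v} is an edge, i.e. the boxes witness boxicity(G) <= d."""
    def intersect(bu, bv):
        # Closed axis-aligned boxes intersect iff their intervals overlap in every dimension.
        return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(bu, bv))
    return all(intersect(boxes[u], boxes[v]) == (frozenset((u, v)) in edges)
               for u, v in combinations(boxes, 2))

# The path a-b-c has boxicity 1: a one-dimensional (interval) model suffices.
edges = {frozenset('ab'), frozenset('bc')}
boxes = {'a': [(0, 1)], 'b': [(1, 2)], 'c': [(2, 3)]}
print(is_box_model(edges, boxes))  # True
```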
The workflow satisfiability problem (WSP) is a problem of practical interest that arises whenever tasks need to be performed by authorized users, subject to constraints defined by business rules. We are required to decide whether there exists a plan, an assignment of tasks to authorized users, such that all constraints are satisfied. The WSP is, in fact, the conservative constraint satisfaction problem (i.e., for each variable, here called a task, we have a unary authorization constraint) and is, thus, NP-complete. It was observed by Wang and Li (ACM Trans Inf Syst Secur 13(4):40, 2010) that the number k of tasks is often quite small and so can be used as a parameter, and several subsequent works have studied the parameterized complexity of WSP with respect to the parameter k. We take a more detailed look at the kernelization complexity of WSP(Γ), where Γ denotes a finite or infinite set of allowed constraints. Our main result is a dichotomy for the case that all constraints in Γ are regular: (1) we are able to reduce the number of users to at most k, which entails a kernelization to size polynomial in k for finite Γ and, under mild technical conditions, to size polynomial in k + m for infinite Γ, where m denotes the number of constraints; (2) already WSP(Γ) for some such Γ allows no polynomial kernelization in k + m unless the polynomial hierarchy collapses.
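The role of the parameter k can be seen from a naive solver that enumerates all plans; the representation below (authorization sets per task, constraints as predicates on a plan) is an illustrative assumption, and its roughly (number of users)^k running time is what the user reduction and kernelization results aim to tame.

```python
from itertools import product

def wsp_satisfiable(tasks, authorized, constraints):
    """tasks: list of task names; authorized: {task: set of allowed users};
    constraints: list of predicates over the plan dict {task: user}.
    Returns a satisfying plan or None; brute force over all plans."""
    domains = [sorted(authorized[t]) for t in tasks]
    for choice in product(*domains):
        plan = dict(zip(tasks, choice))
        if all(c(plan) for c in constraints):
            return plan
    return None

# Two tasks that must be performed by different users (a separation-of-duty rule).
plan = wsp_satisfiable(
    ["t1", "t2"],
    {"t1": {"alice", "bob"}, "t2": {"alice"}},
    [lambda p: p["t1"] != p["t2"]],
)
print(plan)  # {'t1': 'bob', 't2': 'alice'}
```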
We deal with the Regenerator Location Problem in optical networks. We are given a network G = (V, E) and a set Q of communication requests between pairs of terminals in V. We investigate two variations: one in which we are given a routing P of the requests in Q, and one in which we are also required to find the routing. In both cases, for some d > 0, each path in P must contain at least one regenerator in every d consecutive internal vertices, in order to cope with the loss of signal quality. The goal is to minimize the number of vertices that contain regenerators used by the solution. Both variations of the problem are NP-hard in the general case. In this work we investigate the parameterized complexity of the problem. We present several fixed-parameter tractability results and polynomial algorithms for fixed parameter values, as well as several NP-hardness results. The parameters under consideration are the treewidth of the input graph, the sizes d and |Q|, and the load of the vertices, i.e., the number of paths passing through any vertex.
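For concreteness, a small verifier for the first variation (routing given) is sketched below; the list-of-paths representation and the function name placement_ok are illustrative assumptions. It checks the defining constraint that every window of d consecutive internal vertices on a routed path contains a regenerator.

```python
def placement_ok(paths, regenerators, d):
    """paths: list of vertex sequences, one per routed request; regenerators: set of
    vertices carrying a regenerator; d: maximum number of consecutive internal
    vertices allowed without one. The cost of the placement is len(regenerators)."""
    for path in paths:
        internal = path[1:-1]                   # endpoints need no regenerator
        for start in range(len(internal) - d + 1):
            if not any(v in regenerators for v in internal[start:start + d]):
                return False
    return True

# A single request routed along s-a-b-c-t with d = 2: one regenerator at b suffices.
print(placement_ok([["s", "a", "b", "c", "t"]], {"b"}, 2))   # True
print(placement_ok([["s", "a", "b", "c", "t"]], set(), 2))   # False
```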
Successful solvers for integer linear programs (ILPs) demonstrate that preprocessing can greatly speed up the computation. We study preprocessing for ILPs via the theoretical notion of kernelization from parameterized complexity. Prior to our work, there were only implied lower bounds from other problems, which hold only for dense instances and do not take the domain size into account. We consider the feasibility problem for ILPs Ax ≤ b, where A is an r-row-sparse matrix (i.e., A has at most r nonzero entries per row), parameterized by the number n of variables, and show that its kernelizability depends strongly on the domain size. If the domain is unbounded, then this problem does not admit a polynomial kernelization unless NP ⊆ coNP/poly. If, on the other hand, the domain size d of each variable is polynomially bounded in n, or if d is an additional parameter, then we do get a polynomial kernelization.
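A tiny brute-force feasibility check makes the role of the domain size d explicit; the matrix-as-list-of-rows representation is an assumption for illustration, and the d^n enumeration is only meant to make the feasibility question Ax ≤ b concrete, not to compete with preprocessing or solving.

```python
from itertools import product

def ilp_feasible(A, b, d):
    """A: list of rows (each a list of n integers, at most r of them nonzero);
    b: list of right-hand sides; d: domain bound, each variable ranges over 0..d-1.
    Returns a feasible x or None; naive enumeration of all d^n candidate vectors."""
    n = len(A[0])
    for x in product(range(d), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi for row, bi in zip(A, b)):
            return list(x)
    return None

# x1 + x2 <= 1 and -x1 <= -1 (i.e. x1 >= 1), over the domain {0, 1, 2}.
print(ilp_feasible([[1, 1], [-1, 0]], [1, -1], 3))  # [1, 0]
```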
An author's profile on Google Scholar consists of indexed articles and associated data, such as the number of citations and the H-index. The author is allowed to merge articles; this may affect the H-index. We analyze the (parameterized) computational complexity of maximizing the H-index using article merges. Herein, to model realistic manipulation scenarios, we define a compatibility graph whose edges correspond to plausible merges. Moreover, we consider several different measures for computing the citation count of a merged article. For the measure used by Google Scholar, we give an algorithm that maximizes the H-index in linear time if the compatibility graph has constant-size connected components. In contrast, if arbitrary articles may be merged (that is, if the compatibility graph is a clique), then already increasing the H-index by one is NP-hard. Experiments on Google Scholar profiles of AI researchers show that the H-index can be manipulated substantially only if one merges articles with highly dissimilar titles.
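The objective can be made concrete with a short sketch: computing the H-index and exhaustively trying all merge partitions of a profile in which every pair of articles is compatible (a clique compatibility graph). The merged article's citation count is passed in as a function, since the paper studies several measures; using sum here is purely an illustrative assumption, as is the rest of the representation.

```python
def h_index(citations):
    """The largest h such that at least h articles have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((h for h, c in enumerate(cites, start=1) if c >= h), default=0)

def partitions(items):
    """Every way of grouping `items`; each group becomes one merged article."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def best_h_after_merges(citations, merged_count=sum):
    """Brute force over all merge partitions of a profile in which every pair of
    articles may be merged. `merged_count` maps the citation counts of a group to
    the merged article's count; `sum` is only a stand-in for the studied measures."""
    return max(h_index([merged_count(group) for group in part])
               for part in partitions(list(citations)))

print(h_index([3, 1, 1, 1]))             # 1
print(best_h_after_merges([3, 1, 1, 1])) # 2: merging the three 1-citation articles gives [3, 3]
```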