One approach to confronting computational hardness is to try to understand the contribution of various parameters to the running time of algorithms and the complexity of computational tasks. Almost no computational task in real life is specified by its size alone. It is not hard to imagine that some parameters contribute more to intractability than others, and it seems reasonable to develop a theory of computational complexity that seeks to exploit this fact. Such a theory should be able to address the needs of practitioners in algorithmics. The last twenty years have seen the development of such a theory, which has a large number of successes in the form of a rich collection of algorithmic techniques, both practical and theoretical, and a fine-grained intractability theory. While the theory has been widely used in a number of application areas, including computational biology, linguistics, VLSI design, learning theory and many others, knowledge of the area varies widely. We hope that this article will present the basic theory and point at the wide array of techniques available. Naturally the treatment is condensed, and the reader who wants more should go to the texts of Downey and Fellows (1999) [2], Flum and Grohe (2006) [59], Niedermeier (2006) [28], and the upcoming undergraduate text (Downey and Fellows 2012) [278]. (C) 2011 Elsevier Inc. All rights reserved.
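To give a concrete flavor of what a parameterized algorithm looks like (an illustration of ours, not taken from the article), the following minimal Python sketch implements the classic bounded search tree for VERTEX COVER: pick any uncovered edge, branch on which endpoint joins the cover, and stop when the budget k is exhausted, giving a running time of O(2^k * m) that is exponential only in the parameter.

    # Bounded search tree for VERTEX COVER: is there a vertex cover of
    # size <= k? Running time O(2^k * m), i.e., fixed-parameter tractable.
    def vertex_cover(edges, k):
        if not edges:
            return True               # all edges covered
        if k == 0:
            return False              # edges remain, budget exhausted
        u, v = next(iter(edges))      # pick an arbitrary uncovered edge
        # any vertex cover must contain u or v: branch on both choices
        return (vertex_cover({e for e in edges if u not in e}, k - 1) or
                vertex_cover({e for e in edges if v not in e}, k - 1))

    # A 4-cycle has a vertex cover of size 2 but none of size 1:
    c4 = {(1, 2), (2, 3), (3, 4), (4, 1)}
    assert vertex_cover(c4, 2) and not vertex_cover(c4, 1)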
The problem of packing equal disks (or circles) into a rectangle is a fundamental geometric problem. (By a packing here we mean an arrangement of disks in a rectangle without overlapping.) We consider the following algorithmic generalization of the equal disk packing problem. In this problem, for a given packing of equal disks into a rectangle, the question is whether by changing the positions of a small number of disks we can allocate space for packing more disks. More formally, in the repacking problem, for a given set of $n$ equal disks packed into a rectangle and integers $k$ and $h$, we ask whether it is possible, by changing the positions of at most $h$ disks, to pack $n+k$ disks. Thus the problem of packing equal disks is the special case of our problem with $n=h=0$. While the computational complexity of packing equal disks into a rectangle remains open, we prove that the repacking problem is NP-hard already for $h=0$. Our main algorithmic contribution is an algorithm that solves the repacking problem in time $(h+k)^{O(h+k)} \cdot |I|^{O(1)}$, where $|I|$ is the input size.
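To make the notion of a packing concrete, here is a small illustrative Python checker (ours, not part of the paper) that verifies a candidate packing of equal disks of radius r into a w x h rectangle: every disk lies inside the rectangle and no two disks overlap.

    import math

    # Validity check for a packing of equal disks of radius r, given by
    # their centers, into the axis-aligned rectangle [0, w] x [0, h].
    def is_valid_packing(centers, w, h, r=1.0):
        for (x, y) in centers:
            if not (r <= x <= w - r and r <= y <= h - r):
                return False                      # disk leaves the rectangle
        for i, (x1, y1) in enumerate(centers):
            for (x2, y2) in centers[i + 1:]:
                if math.hypot(x1 - x2, y1 - y2) < 2 * r:
                    return False                  # two disks overlap
        return True

    # Two unit disks fit side by side in a 4 x 2 rectangle:
    print(is_valid_packing([(1.0, 1.0), (3.0, 1.0)], 4.0, 2.0))  # True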
In this paper a new family of cryptographic hash functions is described. The main goal was to create a hash function whose algorithm varies depending on the hash code length. The hash function Hamsi was taken as the basis of the parameterized algorithm. This hash function was analyzed in several ways. For the linear transformation, the whole class of linear transformations with the same branch number was determined, and invariant subspaces were found for this class. The second part of the analysis was a study of differential attacks on the Hamsi compression function. After an analysis of published work, changes were made to the compression function. With these changes, a parameterized hash function Hansi-n was described that produces n bits of hash code (e.g. 512, 1024, 2048). To determine the complexity of the different versions of the algorithm, an estimate of the number of bitwise operations needed for one compression function evaluation is given. These new hash functions can be used in many applications where hash codes of varying length are needed.
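The abstract gives no algorithmic details, so the sketch below only illustrates the general shape of a hash function parameterized by its output length n: the state and block size scale with n while the processing loop stays the same. This toy Merkle-Damgard-style loop is an assumption-laden stand-in of ours, not the Hamsi-based design, and its "compression" step is of course not cryptographically secure.

    # Toy Merkle-Damgard-style hash parameterized by output length n bits.
    # Illustrative only: NOT secure, and unrelated to the paper's design.
    def toy_hash(message: bytes, n: int) -> bytes:
        assert n % 8 == 0
        size = n // 8                          # n-bit state and block size
        state = bytes(size)
        padded = message + b"\x80"             # pad to a multiple of size
        padded += bytes(-len(padded) % size)
        for i in range(0, len(padded), size):
            block = padded[i:i + size]
            # toy "compression": XOR the block in, then rotate the state
            # (a real design would apply many nonlinear rounds here)
            state = bytes(s ^ b for s, b in zip(state, block))
            state = state[1:] + state[:1]
        return state

    print(toy_hash(b"hello", 512).hex())       # 512-bit digest
    print(toy_hash(b"hello", 1024).hex())      # 1024-bit digest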
This paper introduces the d-distance matching problem, in which we are given a bipartite graph $G = (S, T; E)$ with $S = \{s_1, \ldots, s_n\}$, a weight function on the edges, and an integer $d \in \mathbb{Z}_+$. The goal is to find a maximum-weight subset $M \subseteq E$ of the edges satisfying the following two conditions: (i) the degree of every node of $S$ is at most one in $M$, (ii) if $s_i t, s_j t \in M$, then $|j - i| \ge d$. This question arises naturally, for example, in various scheduling problems. We show that the problem is NP-complete in general and admits a simple 3-approximation. We give an FPT algorithm parameterized by $d$ and also show that the case when the size of $T$ is constant can be solved in polynomial time. From an approximability point of view, we show that the integrality gap of the natural integer programming model is at most $2 - \frac{1}{2d-1}$, and give an LP-based approximation algorithm for the weighted case with the same guarantee. A combinatorial $(2 - \frac{1}{d})$-approximation algorithm is also presented. Several greedy approaches are considered, and a local search algorithm is described that achieves an approximation ratio of $3/2 + \epsilon$ for any constant $\epsilon > 0$ in the unweighted case. The novel approaches used in the analysis of the integrality gap and the approximation ratio of locally optimal solutions might be of independent combinatorial interest.
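To pin down the definition, here is a small Python feasibility checker of ours (not from the paper): M is a d-distance matching if every s_i is matched at most once and any two edges sharing a node t in T have S-indices at least d apart.

    # Feasibility check for a d-distance matching. Edges are pairs (i, t)
    # meaning s_i is matched to node t in T.
    def is_d_distance_matching(M, d):
        s_indices = [i for (i, t) in M]
        if len(s_indices) != len(set(s_indices)):
            return False                       # some s_i has degree > 1
        by_t = {}
        for (i, t) in M:
            by_t.setdefault(t, []).append(i)
        for indices in by_t.values():
            indices.sort()
            if any(b - a < d for a, b in zip(indices, indices[1:])):
                return False                   # two edges at t too close
        return True

    # s_1 and s_3 may share t when d = 2 but not when d = 3:
    print(is_d_distance_matching([(1, "t"), (3, "t")], d=2))  # True
    print(is_d_distance_matching([(1, "t"), (3, "t")], d=3))  # False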
We give a randomized algorithm that determines if a given graph has a simple path of length at least $k$ in $O(2^k \cdot \mathrm{poly}(n))$ time. Our method extends a recent $O(2^{3k/2} \cdot \mathrm{poly}(n)) \le O(2.83^k \cdot \mathrm{poly}(n))$ algorithm of Koutis. (C) 2008 Elsevier B.V. All rights reserved.
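The algebraic method behind the $O(2^k \cdot \mathrm{poly}(n))$ bound does not compress into a few lines, but the flavor of randomized k-path detection is conveyed by the older, slower color-coding technique of Alon, Yuster, and Zwick, sketched below in Python (our illustration, and a deliberately different technique from the paper's): color the vertices with k colors at random, then run a dynamic program over color subsets to find a path whose k vertices receive k distinct colors; a fixed k-vertex path becomes colorful with probability k!/k^k >= e^{-k}, so e^{O(k)} independent trials suffice.

    import random

    # One color-coding trial: does the graph contain a path on k vertices
    # that is "colorful" under a random k-coloring? DP over color subsets.
    def one_trial(adj, k):
        color = {v: random.randrange(k) for v in adj}
        # paths[v] = bitmasks of color sets of colorful paths ending at v
        paths = {v: {1 << color[v]} for v in adj}
        for _ in range(k - 1):                 # extend paths by one vertex
            longer = {v: set() for v in adj}
            for u in adj:
                for mask in paths[u]:
                    for v in adj[u]:
                        if not mask & (1 << color[v]):
                            longer[v].add(mask | (1 << color[v]))
            paths = longer
        return any(paths[v] for v in adj)

    # The path 0-1-2-3 contains a simple path on k = 3 vertices:
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(any(one_trial(adj, 3) for _ in range(200)))  # True w.h.p.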
Covering all edges of a graph by a minimum number of cliques is a well-known NP-hard problem. For the parameter k, the maximal number of cliques to be used, the problem becomes fixed-parameter tractable. However, assuming the Exponential Time Hypothesis, there is no kernel of subexponential size in the worst case. We study the average kernel size for random intersection graphs with n vertices, edge probability p, and clique covers of size k. We consider the well-known set of reduction rules of Gramm, Guo, Hüffner, and Niedermeier (2009) [17] and show that with high probability they reduce the graph completely if p is bounded away from 1 and k < c log n for some constant c > 0. This shows that for large probabilistic graph classes like random intersection graphs the expected kernel size can be substantially smaller than the known exponential worst-case bounds. (C) 2015 Elsevier B.V. All rights reserved.
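As a taste of what such reduction rules do, the Python sketch below implements one simplified rule in the spirit of (though not verbatim from) the set in [17]: two vertices with identical closed neighborhoods ("twins") can be treated identically by a clique cover, so one of them may be removed and reinserted into the solution afterwards.

    from itertools import combinations

    # Simplified twin-reduction sketch (our illustration, not a verbatim
    # transcription of the rules in [17]): remove one of any two vertices
    # whose closed neighborhoods coincide.
    def remove_twins(adj):
        closed = {v: frozenset(adj[v]) | {v} for v in adj}
        removed = set()
        for u, v in combinations(list(adj), 2):
            if u not in removed and v not in removed and closed[u] == closed[v]:
                removed.add(v)                 # keep u, drop its twin v
        return {v: [w for w in adj[v] if w not in removed]
                for v in adj if v not in removed}

    # In a triangle, vertices 1 and 2 are both twins of 0 and get removed:
    print(remove_twins({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # {0: []}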
The GRAPH MOTIF problem asks whether a given multiset of colors appears on a connected subgraph of a vertex-colored graph. The fastest known parameterized algorithm for this problem is based on a reduction to the k-Multilinear Detection (k-MLD) problem: the detection of multilinear terms of total degree k in polynomials presented as circuits. We revisit k-MLD and define k-CMLD, a constrained version of it which reflects GRAPH MOTIF more faithfully. We then give a fast algorithm for k-CMLD. As a result we obtain faster parameterized algorithms for GRAPH MOTIF and variants of it. (C) 2012 Elsevier B.V. All rights reserved.
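The abstract leaves the machinery implicit, but the central trick of this line of work (due to Koutis and Williams) is easy to state: evaluate the circuit over the group algebra GF(2)[Z_2^k], substituting x_i -> (1 + v_i) for random v_i in Z_2^k. Any term containing a square then vanishes, since (1 + v)^2 = 1 + v^2 = 1 + 1 = 0 when coefficients live in GF(2) and v XOR v is the identity, while a multilinear term survives whenever its vectors are linearly independent. A minimal Python sketch of this arithmetic (our illustration, not the paper's k-CMLD algorithm):

    import random

    # Elements of the group algebra GF(2)[Z_2^k] as sets of k-bit masks
    # (the group elements with coefficient 1); the group operation is XOR,
    # and GF(2) coefficients make coinciding products cancel in pairs.
    def mul(a, b):
        out = set()
        for x in a:
            for y in b:
                out ^= {x ^ y}        # toggle membership: GF(2) cancellation
        return out

    k = 4
    v = random.randrange(1, 1 << k)   # random nonzero element of Z_2^k
    x = {0, v}                        # the substitution x -> 1 + v
    print(mul(x, x))                  # set(): squares vanish, (1+v)^2 = 0

    # A product of k distinct substituted variables is nonzero exactly
    # when the random vectors are linearly independent over GF(2):
    prod = {0}
    for _ in range(k):
        vi = random.getrandbits(k)
        prod = mul(prod, {0} ^ {vi})  # multiply by (1 + v_i); {} if v_i = 0
    print(bool(prod))                 # True iff the vectors were independent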
Partial cover problems are optimization versions of fundamental and well-studied problems like VERTEX COVER and DOMINATING SET. Here one is interested in covering (or dominating) the maximum number of edges (or vertices) using a given number k of vertices, rather than covering all edges (or vertices). In general graphs, these problems are hard for parameterized complexity classes when parameterized by k. It was recently shown by Amini et al. (2008) [1] that PARTIAL VERTEX COVER and PARTIAL DOMINATING SET are fixed-parameter tractable on large classes of sparse graphs, namely H-minor-free graphs, which include planar graphs and graphs of bounded genus. In particular, it was shown that on planar graphs both problems can be solved in time $2^{O(k)} n^{O(1)}$. During the last decade there has been an extensive study of parameterized subexponential algorithms. In particular, it was shown that the classical VERTEX COVER and DOMINATING SET problems can be solved in subexponential time on H-minor-free graphs. The techniques developed to obtain subexponential algorithms for classical problems do not apply to partial cover problems. It was left as an open problem by Amini et al. (2008) [1] whether there is a subexponential algorithm for PARTIAL VERTEX COVER and PARTIAL DOMINATING SET. In this paper, we answer the question affirmatively by solving both problems in time $2^{O(\sqrt{k})} n^{O(1)}$, not only on planar graphs but also on much larger classes of graphs, namely apex-minor-free graphs. Compared to previously known algorithms for these problems, our algorithms are significantly faster and simpler. (C) 2011 Elsevier B.V. All rights reserved.
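For concreteness (our illustration, not from the paper), the decision version of PARTIAL VERTEX COVER fits in a few lines of brute-force Python: is there a set of k vertices covering at least t edges? The brute force is exponential in n; the point of the paper is that on apex-minor-free graphs the problem is solvable in time $2^{O(\sqrt{k})} n^{O(1)}$.

    from itertools import combinations

    # Brute-force PARTIAL VERTEX COVER: do some k vertices cover >= t edges?
    def partial_vertex_cover(vertices, edges, k, t):
        return any(
            sum(1 for (u, v) in edges if u in c or v in c) >= t
            for c in map(set, combinations(vertices, k))
        )

    # In a 5-cycle, one vertex covers 2 edges; two vertices cover at most 4:
    cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(partial_vertex_cover(range(5), cycle, k=1, t=2))  # True
    print(partial_vertex_cover(range(5), cycle, k=2, t=5))  # False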
We study extremal questions on induced matchings in certain natural graph classes. We argue that these questions should be asked for twinless graphs, that is, graphs not containing two vertices with the same neighborhood. We show that planar twinless graphs always contain an induced matching of size at least $n/40$, while there are planar twinless graphs that do not contain an induced matching of size $(n + 10)/27$. We derive similar results for outerplanar graphs and graphs of bounded genus. These extremal results can be applied to the area of parameterized computation. For example, we show that the induced matching problem on planar graphs has a kernel of size at most $40k$ that is computable in linear time; this significantly improves the results of Moser and Sikdar (2007). We also show that we can decide in time $O(91^k n)$ whether a planar graph contains an induced matching of size at least $k$. (C) 2010 Elsevier Inc. All rights reserved.
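For concreteness, a short Python checker of ours (not from the paper) for the underlying definition: M is an induced matching if its edges are pairwise vertex-disjoint and no edge of the graph joins endpoints of two distinct edges of M.

    # Check that M is an induced matching: M is a matching, and the edge
    # set induced by the matched vertices is exactly M itself.
    def is_induced_matching(edges, M):
        matched = [v for e in M for v in e]
        if len(matched) != len(set(matched)):
            return False                       # two edges of M share a vertex
        matched = set(matched)
        induced = {tuple(sorted(e)) for e in edges
                   if e[0] in matched and e[1] in matched}
        return induced == {tuple(sorted(e)) for e in M}

    # On the path 0-1-2-3, {(0,1), (2,3)} is a matching but not induced,
    # since the edge (1,2) joins the two matching edges:
    path = [(0, 1), (1, 2), (2, 3)]
    print(is_induced_matching(path, [(0, 1), (2, 3)]))  # False
    print(is_induced_matching(path, [(1, 2)]))          # True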
It is known that a planar graph on $n$ vertices has branch-width/tree-width bounded by $\alpha \sqrt{n}$. In many algorithmic applications, it is useful to have a small bound on the constant $\alpha$. We give a proof of the best upper bound so far for the constant $\alpha$. In particular, for the case of tree-width, $\alpha < 3.182$, and for the case of branch-width, $\alpha < 2.122$. Our proof is based on the planar separator theorem of Alon, Seymour, and Thomas and some min-max theorems of Robertson and Seymour from the Graph Minors series. We also discuss some algorithmic consequences of this result. (c) 2005 Wiley Periodicals, Inc.
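To see why the constant matters (a worked example of ours, not from the paper): many problems admit $2^{O(\mathrm{tw})} n^{O(1)}$ dynamic programs over tree decompositions, and plugging in the bound makes such algorithms subexponential on planar graphs, with $\alpha$ sitting directly in the exponent:

    % Worked consequence of the tree-width bound (illustrative):
    \[
      \mathrm{tw}(G) \le 3.182\sqrt{n}
      \quad\Longrightarrow\quad
      2^{\mathrm{tw}(G)} \le 2^{3.182\sqrt{n}} = 2^{O(\sqrt{n})},
    \]

so for $n = 10^4$ the exponent is about $318$, and any improvement to $\alpha$ shrinks the exponent proportionally.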