A palindrome is a string that reads the same forward and backward. For a string x, let Pals(x) be the set of all maximal palindromes of x, where each maximal palindrome in Pals(x) is encoded by the pair (c, r) of its center c and its radius r. Given a text t of length n and a pattern p of length m, the palindrome pattern matching problem is to compute all positions i of t such that Pals(p) = Pals(t[i : i + m - 1]). We present linear-time algorithms to solve this problem. (C) 2012 Elsevier B.V. All rights reserved.
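To make the (center, radius) encoding concrete, here is a minimal Python sketch that computes Pals(x) by naive expansion around each of the 2n - 1 centers and matches the pattern by directly comparing the resulting sets. It is quadratic and purely illustrative; the paper's contribution is linear-time algorithms for the same problem, and all function names and indexing conventions below are assumptions of this sketch.

```python
def maximal_palindromes(x):
    """Return Pals(x): every maximal palindrome of x encoded as (center, radius).

    Centers are half-integers: the palindrome x[i..j] (0-indexed, inclusive)
    has center (i + j) / 2 and radius (j - i + 1) / 2.  This is a simple
    O(n^2) expansion around each of the 2n - 1 centers, shown only to make
    the encoding concrete -- not the paper's linear-time machinery.
    """
    n = len(x)
    pals = set()
    for k in range(2 * n - 1):           # k-th center: c = k / 2
        i, j = k // 2, (k + 1) // 2       # innermost candidate positions
        while i >= 0 and j < n and x[i] == x[j]:
            i -= 1
            j += 1
        # (i, j) has stepped one past the maximal palindrome x[i+1 .. j-1]
        pals.add((k / 2, (j - i - 1) / 2))
    return pals


def palindrome_pattern_match(t, p):
    """All 0-indexed positions i with Pals(p) == Pals(t[i : i + len(p)]).

    Naive O(n * m^2) check of the definition, for illustration only.
    """
    m = len(p)
    target = maximal_palindromes(p)
    return [i for i in range(len(t) - m + 1)
            if maximal_palindromes(t[i:i + m]) == target]
```

For example, palindrome_pattern_match("xcdcdy", "abab") reports position 1: "cdcd" and "abab" share the same maximal-palindrome structure even though no characters coincide.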
Hoang defined the P4-sparse graphs as the graphs in which every set of five vertices induces at most one P4. These graphs attracted considerable attention in connection with the P4-structure of graphs and the fact that P4-sparse graphs have bounded clique-width. Fouquet and Giakoumakis generalized this class to the nicely structured semi-P4-sparse graphs, namely the (P5, co-P5, co-chair)-free graphs. We give a complete classification with respect to clique-width of all superclasses of P4-sparse graphs defined by forbidding those one-vertex extensions of the P4 that are not themselves P4-sparse, i.e. the P5, chair, P, C5, as well as their complements. It turns out that there are exactly two other inclusion-maximal classes defined by three or four forbidden P4 extensions, namely the (P5, P, co-chair)-free graphs and the (P, co-P, chair, co-chair)-free graphs, which also deserve the name semi-P4-sparse. (C) 2003 Elsevier B.V. All rights reserved.
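Hoang's defining condition is easy to state operationally. The following brute-force Python checker tests it literally by enumerating all 5-vertex subsets; it is a definition test only, far from the linear-time recognition algorithms known for P4-sparse graphs, and the adjacency-dict representation and helper names are assumptions of this sketch.

```python
from itertools import combinations, permutations

def induces_p4(adj, quad):
    """Does the 4-vertex set `quad` induce a path P4?  `adj` maps each vertex
    to a set of its neighbours."""
    for a, b, c, d in permutations(quad):
        path_edges = {frozenset(e) for e in [(a, b), (b, c), (c, d)]}
        if all((v in adj[u]) == (frozenset((u, v)) in path_edges)
               for u, v in combinations(quad, 2)):
            return True
    return False

def is_p4_sparse(adj):
    """Hoang's definition made literal: every 5-vertex set induces at most
    one P4.  Brute force over all 5-subsets -- a definition checker, not an
    efficient recognition algorithm."""
    for five in combinations(adj, 5):
        p4s = sum(1 for quad in combinations(five, 4) if induces_p4(adj, quad))
        if p4s > 1:
            return False
    return True
```

For instance, is_p4_sparse applied to a 5-cycle returns False: deleting any one vertex of C5 leaves an induced P4, so a single 5-vertex set already contains five induced P4s.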
ISBN (print): 9798331516758; 9798331516741
Expander decompositions form the basis of one of the most flexible paradigms for close-to-linear-time graph algorithms. Length-constrained expander decompositions generalize this paradigm to work better for problems with lengths, distances and costs. Roughly, an (h, s)-length φ-expander decomposition is a small collection of length increases to a graph so that nodes within distance h can route flow over paths of length hs with congestion at most 1/φ. In this work, we give a close-to-linear-time algorithm for computing length-constrained expander decompositions in graphs with general lengths and capacities. Notably, and unlike previous works, our algorithm allows one to trade off between the size of the decomposition and the length of routing paths: for any ε > 0 not too small, our algorithm computes in close-to-linear time an (h, s)-length φ-expander decomposition of size m · φ · n^ε where s = exp(poly(1/ε)). The key foundations of our algorithm are (1) a simple yet powerful structural theorem which states that the union of a sequence of sparse length-constrained cuts is itself sparse, and (2) new algorithms for efficiently computing sparse length-constrained flows.
ISBN (digital): 9783031439803
ISBN (print): 9783031439797; 9783031439803
The Lempel-Ziv (LZ) 77 factorization of a string is a widely used algorithmic tool that plays a central role in compression and indexing. For a length-n string over a linearly sortable alphabet, e.g., Σ = {1, ..., σ} with σ = n^{O(1)}, it can be computed in O(n) time. It is unknown whether this time can be achieved for the rightmost LZ parsing, where each referencing phrase points to its rightmost previous occurrence. The currently best solution takes O(n(1 + log σ / √(log n))) time (Belazzougui & Puglisi, SODA 2016). We show that this problem is much easier to solve for the LZ-End factorization (Kreft & Navarro, DCC 2010): the rightmost factorization can be obtained in O(n) time for the greedy parsing (with phrases of maximal length), and in O(n + z√(log z)) time for any LZ-End parsing of z phrases. We also make advances towards a linear-time solution for the general case. We show how to solve multiple non-trivial subsets of the phrases of any LZ-like parsing in O(n) time. As a prime example, we can find the rightmost occurrence of all phrases of length Ω(log^{6.66} n / log^2 σ) in O(n / log_σ n) time and space.
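As an illustration of the "rightmost previous occurrence" requirement discussed above, the following Python sketch computes a greedy LZ-like parsing by brute force, always recording the rightmost earlier starting position of each maximal phrase. It is quadratic at best, covers only the plain LZ77-style parsing rather than the LZ-End variant, and its names and phrase format are assumptions, not the paper's near-linear-time constructions.

```python
def rightmost_lz77(s):
    """Greedy LZ-like parsing of s: each referencing phrase is the longest
    prefix of the remaining suffix that also starts at some earlier position,
    reported together with the RIGHTMOST such earlier start.

    Brute-force and purely illustrative of the 'rightmost occurrence' notion.
    Returns a list of phrases: ('literal', ch) or ('copy', src, length).
    """
    phrases = []
    i, n = 0, len(s)
    while i < n:
        best_len, best_src = 0, -1
        for j in range(i):                      # candidate earlier start j < i
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1                           # self-referential overlap allowed
            # prefer longer phrases; on ties prefer the later (rightmost) source
            if l > best_len or (l == best_len and l > 0 and j > best_src):
                best_len, best_src = l, j
        if best_len == 0:
            phrases.append(('literal', s[i]))
            i += 1
        else:
            phrases.append(('copy', best_src, best_len))
            i += best_len
    return phrases
```

On "abababa" this produces two literals followed by a single self-referential phrase ('copy', 0, 5).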
The k-edge-connectivity augmentation problem for a specified set of vertices of a graph with degree constraints, kECA-SV-DC, is defined as follows: "Given an undirected multigraph G = (V, E), a specified set of vertices S ⊆ V and a function g : V → Z^+ ∪ {∞}, find a smallest set E' of edges such that (V, E ∪ E') has at least k edge-disjoint paths between any pair of vertices in S and such that, for any v ∈ V, E' includes at most g(v) edges incident to v, where Z^+ is the set of nonnegative integers." This paper first shows polynomial-time solvability of kECA-SV-DC and then gives a linear-time algorithm for 2ECA-SV-DC.
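The constraints of kECA-SV-DC can be checked directly with unit-capacity max-flow (Menger's theorem). The Python sketch below verifies whether a candidate edge set E' satisfies both the degree bounds g(v) and the k edge-disjoint-paths requirement between all pairs in S, using a plain Edmonds-Karp computation on the multigraph; it is a feasibility checker under assumed input conventions, not the paper's algorithm for constructing a smallest E'.

```python
from collections import defaultdict, deque
from itertools import combinations

def max_flow_at_least(edges, s, t, k):
    """Is the s-t edge connectivity of the multigraph given by `edges`
    (a list of undirected pairs, parallel edges allowed) at least k?
    Edmonds-Karp with capacities equal to edge multiplicities."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while flow < k:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return False
        # recover the augmenting path and push its bottleneck value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= delta
            cap[(v, u)] += delta
        flow += delta
    return True

def is_feasible_augmentation(V, E, S, g, k, E_new):
    """Check the kECA-SV-DC constraints for a candidate edge set E_new:
    (1) at most g(v) new edges are incident to each vertex v (g is a dict,
        missing entries treated as infinity), and
    (2) every pair of vertices in S has >= k edge-disjoint paths in
        the augmented multigraph (V, E + E_new)."""
    inc = defaultdict(int)
    for u, v in E_new:
        inc[u] += 1
        inc[v] += 1
    if any(inc[v] > g.get(v, float("inf")) for v in V):
        return False
    all_edges = list(E) + list(E_new)
    return all(max_flow_at_least(all_edges, a, b, k) for a, b in combinations(S, 2))
```

The checker stops each max-flow computation as soon as k units of flow have been routed, since only the threshold matters.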
We consider the problem of counting the number of copies of a fixed graph H within an input graph G. This is one of the most well-studied algorithmic graph problems, with many theoretical and practical applications. We focus on solving this problem when the input G has bounded degeneracy. This is a rich family of graphs, containing all graphs without a fixed minor (e.g., planar graphs), as well as graphs generated by various random processes (e.g., preferential attachment graphs). We say that H is easy if there is a linear-time algorithm for counting the number of copies of H in an input G of bounded degeneracy. A seminal result of Chiba and Nishizeki from '85 states that every H on at most 4 vertices is easy. Bera, Pashanasangi, and Seshadhri recently extended this to all H on 5 vertices and further proved that for every k > 5 there is a k-vertex H which is not easy. They left open the natural problem of characterizing all easy graphs H. Bressan has recently introduced a framework for counting subgraphs in degenerate graphs, from which one can extract a sufficient condition for a graph H to be easy. Here, we show that this sufficient condition is also necessary, thus fully answering the Bera-Pashanasangi-Seshadhri problem. We further resolve two closely related problems, namely characterizing the graphs that are easy with respect to counting induced copies, and with respect to counting homomorphisms.
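A concrete instance of what "easy" means here is the classic degeneracy-ordering argument for H = K3, the triangle, which falls under Chiba and Nishizeki's result. The Python sketch below orients each edge from the earlier to the later vertex of a degeneracy ordering, so every out-neighbourhood has size at most the degeneracy d, and then counts triangles in O(md) time. It only illustrates the flavour of the framework; it is not the characterization proved in the paper, and the helper names are ad hoc.

```python
from collections import defaultdict

def degeneracy_order(adj):
    """Order vertices by repeatedly removing a vertex of minimum remaining
    degree.  This simple bucket version rescans the buckets and is not itself
    linear-time; the standard implementation runs in O(n + m)."""
    deg = {v: len(adj[v]) for v in adj}
    buckets = defaultdict(set)
    for v, d in deg.items():
        buckets[d].add(v)
    order, removed = [], set()
    for _ in range(len(adj)):
        d = min(b for b in buckets if buckets[b])
        v = buckets[d].pop()
        order.append(v)
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                buckets[deg[u]].discard(u)
                deg[u] -= 1
                buckets[deg[u]].add(u)
    return order

def count_triangles(adj):
    """Count copies of K3 in O(m * d) time for a graph of degeneracy d:
    orient edges along a degeneracy ordering (out-degree <= d) and intersect
    out-neighbourhoods."""
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    out = {v: {u for u in adj[v] if pos[u] > pos[v]} for v in adj}
    return sum(1 for v in adj for u in out[v] for w in out[u] if w in out[v])
```

For example, on the complete graph K4 (adjacency dict with all six edges) count_triangles returns 4.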
We provide an $O(|V| + |E|)$ algorithm which, given a graph G, finds a smallest set of edges which, when added to G, produces a graph with no cutpoints.
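The standard first step of such smallest-augmentation algorithms is to locate the cutpoints (and the block-cut tree they induce) in linear time. The Python sketch below implements only that first step, via the classic Hopcroft-Tarjan lowpoint DFS; it does not construct the augmenting edge set, the adjacency-dict input is an assumption, and the recursive formulation is chosen for clarity rather than robustness on very deep graphs.

```python
import sys

def cutpoints(adj):
    """Articulation points (cutpoints) of an undirected simple graph in
    O(|V| + |E|) via the lowpoint DFS.  `adj` maps each vertex to an iterable
    of neighbours.  This is only the detection step, not the augmentation."""
    sys.setrecursionlimit(max(10000, 2 * len(adj) + 10))
    index, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(v, parent):
        index[v] = low[v] = timer[0]
        timer[0] += 1
        children = 0
        for w in adj[v]:
            if w == parent:
                continue
            if w in index:                       # back edge
                low[v] = min(low[v], index[w])
            else:                                # tree edge
                children += 1
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if parent is not None and low[w] >= index[v]:
                    cuts.add(v)                  # non-root cutpoint rule
        if parent is None and children >= 2:     # root cutpoint rule
            cuts.add(v)

    for v in adj:
        if v not in index:
            dfs(v, None)
    return cuts
```

On the path a-b-c the only cutpoint is b, and adding the single edge {a, c} already yields a graph with no cutpoints, matching the size of an optimal augmentation in that trivial case.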