For two graphs G and H, a set R of vertices of G, and a set S of vertices of H, the complementary product G(R) □ H(S) is defined as the graph with vertex set V(G) × V(H) in which two vertices (u, v) and (x, y) are adjacent exactly if either u = x, u ∈ R, and vy ∈ E(H); or u = x, u ∈ V(G) \ R, and vy ∉ E(H); or v = y, v ∈ S, and ux ∈ E(G); or v = y, v ∈ V(H) \ S, and ux ∉ E(G). We show that for a fixed connected graph H and a fixed non-empty proper subset S of its vertex set, it can be decided in polynomial time whether, for a given input graph F, there exists a graph G such that F is isomorphic to G(V(G)) □ H(S). Furthermore, if such a graph G exists, then one such graph can be found in polynomial time. Our result gives a positive answer to a question posed by Haynes, Henning, Slater, and van der Merwe [5]. (c) 2013 Elsevier B.V. All rights reserved.
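The four adjacency clauses translate directly into code. The sketch below is illustrative only (the adjacency-dict encoding and the function name are my own, not from the paper), assuming simple loopless graphs:

```python
def cp_adjacent(G, R, H, S, uv, xy):
    """Adjacency test for the complementary product G(R) □ H(S).

    G and H map each vertex to its set of neighbours; R ⊆ V(G), S ⊆ V(H).
    Illustrative translation of the definition, not code from the paper."""
    (u, v), (x, y) = uv, xy
    if (u, v) == (x, y):
        return False          # same vertex: no loops
    if u == x:                # same G-coordinate: use the H-layer at u,
        return (y in H[v]) if u in R else (y not in H[v])  # complemented if u ∉ R
    if v == y:                # same H-coordinate: use the G-layer at v,
        return (x in G[u]) if v in S else (x not in G[u])  # complemented if v ∉ S
    return False              # coordinates differ in both graphs

# Tiny example: G = H = K2, R = V(G), S = {"c"}.
G = {"a": {"b"}, "b": {"a"}}
H = {"c": {"d"}, "d": {"c"}}
print(cp_adjacent(G, {"a", "b"}, H, {"c"}, ("a", "c"), ("a", "d")))  # True: a ∈ R and cd ∈ E(H)
```

Note that the first two clauses say each "H-layer" {u} × V(H) induces a copy of H when u ∈ R and a copy of its complement otherwise, and symmetrically for the "G-layers".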
We analyze the cost incurred by a naive exhaustive search algorithm for finding a maximum independent set in random graphs under the usual G(n, p)-model, where each possible edge appears independently with the same probability p. The expected cost turns out to be of the less common asymptotic order n^{c log n}, which we explore from several different perspectives. We also collect many instances where such an order appears, from algorithmics to analysis, from probability to algebra. The limiting distribution of the cost required by the algorithm under a purely idealized random model is proved to be normal. The approach we develop is of some generality and is amenable to other graph algorithms.
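A naive exhaustive search of the kind analysed above can be sketched as follows; this is a textbook exponential-time baseline (my own formulation, not code from the paper), with graphs given as adjacency dicts:

```python
from itertools import combinations

def max_independent_set(adj):
    """Naive exhaustive search for a maximum independent set: try vertex
    subsets from largest to smallest and return the first subset that
    induces no edge.  adj maps each vertex to its set of neighbours."""
    vertices = list(adj)
    for size in range(len(vertices), -1, -1):
        for subset in combinations(vertices, size):
            chosen = set(subset)
            # independent iff no chosen vertex has a neighbour in the set
            if all(adj[v].isdisjoint(chosen) for v in chosen):
                return chosen
    return set()

# Path 0-1-2-3: a maximum independent set has two vertices, e.g. {0, 2}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(max_independent_set(path))
```

Counting the subsets this search inspects before stopping is exactly the kind of cost quantity whose expectation, over G(n, p), has the unusual n^{c log n} order.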
Graph connectivity is a graph-theoretic concept that is fundamental to the study of many applications such as network reliability and network decomposition. The 3-edge-connectivity problem has recently been shown to be useful in a variety of apparently unrelated areas, such as deciding the G-irreducibility of Feynman diagrams in physics and quantum chemistry, cluster editing and genome alignment in bioinformatics, placing monitors on the edges of a flow network, spare capacity allocation, and decomposing a social network to study its community structure. A number of linear-time algorithms for 3-edge-connectivity have thus been proposed. Of all these algorithms, the algorithm of Tsin is conceptually the simplest and was also shown to run efficiently in a recent study. In this article, we show how to simplify the implementation of a key step in the algorithm, making the algorithm much easier to implement and more efficient to run. The simplification eliminates a rather complicated linked-list structure and reduces the space requirement of that step from O(|E|) to O(|V|), where V and E are the vertex set and the edge set of the input graph, respectively. (C) 2013 Elsevier B.V. All rights reserved.
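For readers unfamiliar with the property itself, a definition-level check is easy to write: a graph is 3-edge-connected exactly when it stays connected after deleting any one or two edges. The sketch below is a polynomial brute force for simple graphs, nothing like Tsin's linear-time algorithm, and is my own illustration:

```python
from itertools import combinations

def is_connected(n, edges):
    """DFS connectivity test on vertices 0..n-1 with the given edge list."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def is_3_edge_connected(n, edges):
    """Definition-level check: connected, and still connected after
    removing any single edge or any pair of edges."""
    if n < 2 or not is_connected(n, edges):
        return False
    for k in (1, 2):
        for removed in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if not is_connected(n, kept):
                return False
    return True

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_3_edge_connected(4, k4))                                # K4: True
print(is_3_edge_connected(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # C4: False
```

This check costs O(|E|²) connectivity tests, which is what makes the linear-time algorithms discussed above worthwhile.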
We study the multiterminal cut problem, which, given an n-vertex graph whose edges are integer-weighted and a set of terminals, asks for a partition of the vertex set such that each terminal is in a distinct part and the total weight of crossing edges is at most k. Our weapons shall be two classical results known for decades: maximum-volume minimum (s, t)-cuts by Ford and Fulkerson [11] and isolating cuts by Dahlhaus et al. [9]. We sharpen these old weapons with the help of submodular functions and apply them to this problem, which enables us to design a more elaborate branching scheme for deciding whether a non-terminal vertex belongs with a terminal or not. This bounded search tree algorithm can be shown to run in 1.84^k · n^{O(1)} time, thereby breaking the 2^k · n^{O(1)} barrier. As a by-product, it gives a 1.36^k · n^{O(1)} time algorithm for 3-terminal cut. The preprocessing applied to non-terminal vertices might be of use for studying this problem from other aspects. (C) 2013 Elsevier B.V. All rights reserved.
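To make the problem statement concrete, here is an exhaustive baseline (my own sketch, not the paper's 1.84^k branching algorithm): assign every non-terminal vertex to one of the terminals and keep the assignment with the lightest crossing weight.

```python
from itertools import product

def multiterminal_cut(weights, terminals):
    """Minimum multiterminal cut weight by exhaustive assignment.

    weights: dict mapping frozenset({u, v}) -> positive edge weight.
    terminals: list of terminal vertices, each forced into its own part.
    Exponential in the number of non-terminal vertices."""
    vertices = {v for e in weights for v in e}
    free = sorted(vertices - set(terminals))
    best = None
    for choice in product(range(len(terminals)), repeat=len(free)):
        part = {t: i for i, t in enumerate(terminals)}
        part.update(zip(free, choice))
        # an edge crosses iff its two endpoints land in different parts
        cost = sum(w for e, w in weights.items()
                   if len({part[v] for v in e}) == 2)
        if best is None or cost < best:
            best = cost
    return best

# Star: centre c joined to three terminals; any assignment of c cuts two edges.
star = {frozenset({"c", "t1"}): 1,
        frozenset({"c", "t2"}): 1,
        frozenset({"c", "t3"}): 1}
print(multiterminal_cut(star, ["t1", "t2", "t3"]))  # 2
```

The branching scheme in the paper prunes exactly this assignment space, deciding vertex by vertex whether a non-terminal vertex goes with a terminal.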
For a graph G, let Z(G, λ) be the partition function of the monomer-dimer system, defined by Σ_k m_k(G) λ^k, where m_k(G) is the number of matchings of size k in G. We consider graphs of bounded degree and develop a sublinear-time algorithm for estimating log Z(G, λ) at an arbitrary value λ > 0 within additive error εn with high probability. The query complexity of our algorithm does not depend on the size of G and is polynomial in 1/ε, and we also provide a lower bound quadratic in 1/ε for this problem. This is the first analysis of a sublinear-time approximation algorithm for a #P-complete problem. Our approach is based on the correlation decay of the Gibbs distribution associated with Z(G, λ). We show that our algorithm approximates the probability for a vertex to be covered by a matching, sampled according to this Gibbs distribution, in near-optimal sublinear time. We extend our results to approximate the average size and the entropy of such a matching within an additive error with high probability, where again the query complexity is polynomial in 1/ε and the lower bound is quadratic in 1/ε. Our algorithms are simple to implement and of practical use when dealing with massive datasets. Our results extend to other systems where correlation decay is known to hold, such as the independent set problem up to the critical activity. (C) 2014 Elsevier B.V. All rights reserved.
We prove that edge contractions do not preserve the property that a set of graphs has bounded clique-width. (C) 2013 Elsevier B.V. All rights reserved.
The Maximum Common Induced Subgraph problem (MCIS) takes a pair of graphs as input and asks for a graph of maximum order that is isomorphic to an induced subgraph of each of the input graphs. The problem is NP-hard in general, and remains so on many graph classes, including graphs of bounded treewidth. In the framework of parameterized complexity, the latter assertion means that MCIS is W[1]-hard when parameterized by the treewidths of the input graphs. A classical graph parameter that has recently been used in many parameterized problems is the vertex cover number. In this paper we prove constructively that MCIS is fixed-parameter tractable when parameterized by the vertex cover numbers of the input graphs. Our algorithm is also an improved exact algorithm for the problem on instances where the minimum vertex cover is small compared to the order of the input graphs. (C) 2013 Elsevier B.V. All rights reserved.
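To pin down the problem, a brute force over vertex subsets and bijections suffices on tiny inputs; the sketch below is my own illustration (exponential in both graphs, not the vertex-cover-parameterized algorithm of the paper):

```python
from itertools import combinations, permutations

def mcis_order(adj1, adj2):
    """Order of a maximum common induced subgraph of two graphs given as
    adjacency dicts: for decreasing k, test every k-subset of each graph
    and every bijection between the subsets."""
    v1, v2 = list(adj1), list(adj2)
    for k in range(min(len(v1), len(v2)), 0, -1):
        for s1 in combinations(v1, k):
            for s2 in combinations(v2, k):
                for perm in permutations(s2):
                    f = dict(zip(s1, perm))
                    # induced isomorphism: edges AND non-edges must agree
                    if all((b in adj1[a]) == (f[b] in adj2[f[a]])
                           for a, b in combinations(s1, 2)):
                        return k
    return 0

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(mcis_order(triangle, path3))  # 2: a single edge is the largest common part
```

The "induced" requirement is the line that checks non-edges as well as edges; dropping it would give the (different) maximum common subgraph problem.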
In this paper we consider constructive control by adding and deleting candidates in Copeland and Llull voting systems from both a theoretical and an experimental point of view. We show how to characterize the optimization versions of these four control problems as special digraph problems and as binary linear programming formulations of linear size. Our digraph characterizations allow us to prove the hardness of approximation with an absolute performance guarantee for optimal constructive control by deleting candidates in Copeland and by adding candidates in Llull voting schemes, and the nonexistence of efficient approximation schemes for optimal constructive control by adding and deleting candidates in Copeland and Llull voting schemes. Our experimental study of running times using LP solvers shows that for many practical instances even the hard control problems can be solved very efficiently. (C) 2013 Elsevier B.V. All rights reserved.
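For context, Copeland-style rules score each candidate by pairwise comparisons: one point per pairwise victory, with ties credited differently across the family (Llull is usually taken as the variant crediting a full point per tie, while Copeland variants credit 0 or 1/2). The sketch below is my own illustration of the scoring, not the control machinery of the paper:

```python
def copeland_scores(profile, candidates, tie_credit=0):
    """Copeland-family scores: 1 point per pairwise win, tie_credit points
    per pairwise tie.  profile is a list of strict rankings, best first.
    tie_credit=1 gives the Llull-style scores."""
    def prefers(ranking, a, b):
        return ranking.index(a) < ranking.index(b)
    scores = {}
    for a in candidates:
        s = 0
        for b in candidates:
            if a == b:
                continue
            a_wins = sum(prefers(r, a, b) for r in profile)
            b_wins = len(profile) - a_wins
            s += 1 if a_wins > b_wins else (tie_credit if a_wins == b_wins else 0)
        scores[a] = s
    return scores

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(copeland_scores(profile, ["a", "b", "c"]))  # {'a': 2, 'b': 1, 'c': 0}
```

Constructive control by adding or deleting candidates asks whether the pool of candidates can be altered so that a preferred candidate ends up with the unique top score.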
Background: The sheer amount of biological data generated in recent years has driven the development of network analysis tools to facilitate the interpretation and representation of these data. A fundamental challenge in this domain is the reconstruction of a protein-protein subnetwork that underlies a process of interest from a genome-wide screen of associated genes. Despite intense work in this area, current algorithmic approaches are largely limited to analyzing a single screen and are thus unable to account for information on condition-specific genes or to reveal the dynamics (over time or condition) of the process in question. Results: We propose a novel formulation of the problem of network reconstruction from multiple-condition data and devise an efficient integer programming solution for it. We apply our algorithm to analyze the response to influenza infection and ER export regulation in humans. By comparison with an extant single-condition tool, we demonstrate the power of our new approach in integrating data from multiple conditions in a compact and coherent manner, capturing the dynamics of the underlying processes.
This note extends the analysis of incremental PageRank in Bahmani et al. (2010) [1]. In that work, the authors prove a running time of O((nR/ε) ln²(m)) to keep PageRank updated over m edge arrivals in a graph with n nodes, when the algorithm stores R random walks per node and the PageRank teleport probability is ε. To prove this running time, they assume that edges arrive in a random order, and they leave it to future work to extend the running time guarantees to adversarial edge arrival. In this note, we show that the random edge order assumption is necessary by exhibiting a graph and an adversarial edge arrival order in which the running time is Ω(Rn · m^{lg((3/2)(1−ε))}). More generally, for any integer d >= 2, we construct a graph and adversarial edge order in which the running time is Ω(Rn · m^{log_d(H_d(1−ε))}), where H_d is the d-th harmonic number. (C) 2013 Elsevier B.V. All rights reserved.
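The estimator whose maintenance cost is being analysed is the Monte Carlo PageRank approach: store R short random walks per node, each stopping with probability ε at every step, and read off PageRank from visit counts. The sketch below is my own illustration of the static estimator only (walk maintenance under edge arrivals is what the bounds above concern):

```python
import random

def approx_pagerank(adj, R=200, eps=0.15, seed=0):
    """Monte Carlo PageRank estimate: start R walks from every node, each
    walk stops with probability eps per step (or at a dangling node); the
    estimate for v is its share of all visits.  adj maps each node to a
    list of out-neighbours."""
    rng = random.Random(seed)
    visits = {v: 0 for v in adj}
    for start in adj:
        for _ in range(R):
            node = start
            visits[node] += 1
            while rng.random() > eps and adj[node]:
                node = rng.choice(adj[node])
                visits[node] += 1
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}

# "hub" is pointed to by both other nodes, so it collects many more visits
# than "b", which nothing points to.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
pr = approx_pagerank(graph, R=500)
print(pr["hub"] > pr["b"])  # True
```

When an edge (u, w) arrives, the stored walks through u must be partially re-routed; the cost of those re-routings over m arrivals is exactly the running time bounded above.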