The Fries number of a benzenoid is the maximum number of benzenoid hexagons over all of its Kekulé structures (perfect matchings), and a Fries canonical structure is a perfect matching that realises this maximum. A recently published algorithm claims to determine Fries canonical structures of benzenoids via iterated Hadamard products based on the adjacency matrix (Ciesielski et al. in Symmetry 2:1390-1400, 2010). This algorithm is re-examined here. Convergence is typically rapid and often yields a single candidate perfect matching, but the algorithm can give an exponential number of choices, of which only a small number are canonical. More worryingly, the algorithm is found to give incorrect results for the Fries number for some benzenoids with as few as seven hexagonal faces. We give a combinatorial reformulation of the algorithm in terms of linear combinations of perfect matchings (with weights at each stage proportional to the products of weights of the edges included in a matching). In all the cases we have examined, the algorithm converges to a maximum-weight matching (or combination of maximum-weight matchings), and where the algorithm fails, either no best Fries matching is of maximum weight, or a best Fries matching is of maximum weight but a sub-optimal matching of the same weight is chosen.
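The definitions above can be made concrete with a small brute-force sketch: enumerate all Kekulé structures (perfect matchings) of a benzenoid graph and count, for each, the hexagonal faces whose three alternating edges lie in the matching; the maximum of these counts is the Fries number. The representation below (edges as sorted vertex pairs, hexagons as sets of six such edges) is an assumption made for illustration and is unrelated to the Hadamard-product algorithm under discussion.

```python
def perfect_matchings(vertices, edges):
    """Yield all perfect matchings of (vertices, edges) as frozensets of edges.

    vertices: frozenset of vertex labels; edges: list of sorted vertex pairs."""
    if not vertices:
        yield frozenset()
        return
    v = next(iter(vertices))                       # match an arbitrary vertex v
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]
            rest_v = vertices - {v, u}
            rest_e = [f for f in edges if v not in f and u not in f]
            for m in perfect_matchings(rest_v, rest_e):
                yield m | {e}

def fries_number(vertices, edges, hexagons):
    """hexagons: list of frozensets, each holding the six edges of a hexagonal face."""
    best = 0
    for m in perfect_matchings(frozenset(vertices), edges):
        benzene = sum(1 for h in hexagons if len(h & m) == 3)  # fully alternating faces
        best = max(best, benzene)
    return best
```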
Let R be a set of vertices of a split graph G. We characterize when R allows a partition into two disjoint sets R_1 and R_2 such that the convex hulls of R_1 and R_2 with respect to the P_3-convexity of G intersect. Furthermore, we describe a linear time algorithm that decides the existence of such a partition. Our results are related to the so-called Radon number of the P_3-convexity of G and complement earlier results in this area. (c) 2012 Elsevier B.V. All rights reserved.
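For readers unfamiliar with the P_3-convexity, the following sketch shows the hull operation it induces: a set is P_3-convex if every vertex with at least two neighbours in the set already belongs to it, and the hull of a seed set is obtained by closing under that rule. The adjacency-dictionary format and the exponential brute-force partition check are illustrative assumptions only; they are not the linear-time algorithm of the paper.

```python
from itertools import combinations

def p3_hull(adj, seed):
    """P_3-convex hull: repeatedly absorb vertices with >= 2 neighbours in the set."""
    hull = set(seed)
    changed = True
    while changed:
        changed = False
        for v, neighbours in adj.items():
            if v not in hull and sum(1 for u in neighbours if u in hull) >= 2:
                hull.add(v)
                changed = True
    return hull

def has_intersecting_hull_partition(adj, R):
    """Brute-force check of the partition property characterised in the paper."""
    R = list(R)
    for k in range(1, len(R)):
        for R1 in combinations(R, k):
            R2 = [v for v in R if v not in R1]
            if p3_hull(adj, R1) & p3_hull(adj, R2):
                return True
    return False
```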
A novel approach for using the filmification of methods concept in graph algorithm representation, specification, and programming is considered. It is based on a "cyberFilm" format, where a set of multimedia frames represents algorithmic features. A brief description of the cyberFilm concept and an overview of graph algorithm features are presented. A number of cyberFilms related to Prim's and Dijkstra's algorithms have been developed and used to explain the basic ideas of the approach. Several versions of the algorithm visualization are demonstrated by corresponding examples of cyberFilm frames and icon language representations. In addition, a method for program generation from the cyberFilm specification is provided, with explanations of program templates supporting the cyberFilm frames. (c) 2006 Published by Elsevier Ltd.
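As a point of reference for the algorithmic content that the cyberFilm frames visualise step by step, here is a compact textbook version of Dijkstra's algorithm; the adjacency-list format is an assumption made purely for illustration and has no connection to the cyberFilm format itself.

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]}; returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already settled
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```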
In network communication systems, messages are frequently routed along a minimum diameter spanning tree (MDST) of the network, to minimize the maximum travel time of messages. When a transient failure disables an edge of the MDST, the network is disconnected, and a temporary replacement edge must be chosen, which should ideally minimize the diameter of the new spanning tree. Such a replacement edge is called a best swap. Preparing for the failure of any edge of the MDST, the all-best-swaps (ABS) problem asks for the best swap for every edge of the MDST. Given a 2-edge-connected weighted graph G=(V,E), where |V|=n and |E|=m, we solve the ABS problem in O(m log n) time and O(m) space, thus considerably improving upon the decade-old previously best solution, which requires O(n√m) time and O(m) space, for m = o(n^2/log^2 n).
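A brute-force baseline makes the ABS problem concrete: for every tree edge, try every other edge of G crossing the induced cut and keep the one whose substitution yields the smallest diameter. The sketch below uses networkx, is only feasible for very small graphs, and is not the O(m log n) algorithm of the paper.

```python
import networkx as nx

def weighted_diameter(T):
    """Largest shortest-path distance between any two vertices of a weighted tree."""
    lengths = dict(nx.all_pairs_dijkstra_path_length(T, weight="weight"))
    return max(d for row in lengths.values() for d in row.values())

def all_best_swaps(G, mdst):
    """For each edge e of the spanning tree `mdst`, return the swap edge of G whose
    substitution for e minimises the diameter of the resulting spanning tree."""
    best = {}
    for e in list(mdst.edges()):
        T = mdst.copy()
        T.remove_edge(*e)
        side = set(nx.node_connected_component(T, e[0]))
        candidates = [(u, v) for u, v in G.edges()
                      if {u, v} != set(e) and (u in side) != (v in side)]
        swaps = []
        for u, v in candidates:
            T.add_edge(u, v, weight=G[u][v].get("weight", 1))
            swaps.append((weighted_diameter(T), (u, v)))
            T.remove_edge(u, v)
        best[e] = min(swaps, key=lambda s: s[0])[1]
    return best
```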
The problem of determining the cutwidth of a graph is a notoriously hard problem which remains NP-complete under severe restrictions on input graphs. Until recently, nontrivial polynomial-time cutwidth algorithms were known only for subclasses of graphs of bounded treewidth. Very recently, Heggernes et al. (SIAM J. Discrete Math., 25 (2011), pp. 1418-1437) initiated the study of cutwidth on graph classes containing graphs of unbounded treewidth and showed that a greedy algorithm computes the cutwidth of threshold graphs. We continue this line of research and present the first polynomial-time algorithm for computing the cutwidth of bipartite permutation graphs. Our algorithm runs in linear time. We stress that the cutwidth problem is NP-complete on bipartite graphs and its computational complexity is open even on small subclasses of permutation graphs, such as trivially perfect graphs.
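For completeness, the quantity being computed can be stated as a brute-force procedure over all vertex orderings, which is of course only usable on tiny graphs and bears no relation to the linear-time algorithm described above.

```python
from itertools import permutations

def cutwidth(vertices, edges):
    """Minimum over all vertex orderings of the maximum number of edges crossing a gap."""
    best = float("inf")
    for order in permutations(vertices):
        position = {v: i for i, v in enumerate(order)}
        width = max(
            (sum(1 for u, v in edges
                 if min(position[u], position[v]) <= i < max(position[u], position[v]))
             for i in range(len(order) - 1)),
            default=0,
        )
        best = min(best, width)
    return best
```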
Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in Beldiceanu et al. (2005) [2]. (c) 2011 Elsevier B.V. All rights reserved.
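The two definitions translate directly into a naive delete-and-recount procedure, sketched below with networkx for illustration; the point of the paper is that both computations can instead be done in linear time.

```python
import networkx as nx

def strong_bridges(G):
    """Edges whose removal increases the number of strongly connected components."""
    base = nx.number_strongly_connected_components(G)
    result = []
    for u, v in list(G.edges()):
        H = G.copy()
        H.remove_edge(u, v)
        if nx.number_strongly_connected_components(H) > base:
            result.append((u, v))
    return result

def strong_articulation_points(G):
    """Vertices whose removal increases the number of strongly connected components."""
    base = nx.number_strongly_connected_components(G)
    result = []
    for v in list(G.nodes()):
        H = G.copy()
        H.remove_node(v)
        if nx.number_strongly_connected_components(H) > base:
            result.append(v)
    return result
```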
ISBN (print): 9781467301831
This paper outlines techniques for optimization of filter coefficients in a spectral framework for anomalous subgraph detection. Restricting the scope to the detection of a known signal in i.i.d. noise, the optimal coefficients for maximizing the signal's power are shown to be found via a rank-1 tensor approximation of the subgraph's dynamic topology. While this technique optimizes our power metric, a filter based on average degree is shown in simulation to work nearly as well in terms of power maximization and detection performance, and better separates the signal from the noise in the eigenspace.
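The rank-1 tensor approximation mentioned above can be sketched generically as an alternating power iteration on a three-way tensor; how the subgraph's dynamic topology is stacked into that tensor, and how the resulting factor is mapped to filter coefficients, follow the paper and are not reproduced here.

```python
import numpy as np

def rank1_tensor_approx(T, iters=100):
    """Rank-1 CP approximation T ~ lam * a (x) b (x) c via alternating power iterations.

    T: numpy array of shape (I, J, K)."""
    I, J, K = T.shape
    a, b, c = np.ones(I), np.ones(J), np.ones(K)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)   # scale of the best rank-1 fit
    return lam, a, b, c
```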
The paper is devoted to implementations of public key algorithms based on the simple algebraic graphs A(n, K) and D(n, K) defined over the same finite commutative ring K. If K is a finite field, both families are families of graphs with large cycle indicator. In fact, the family D(n, F_q) is a family of graphs of large girth (f.g.l.g.) with c = 1, and their connected components CD(n, F_q) form an f.g.l.g. with speed of growth 4/3. The family A(n, q), char F_q ≠ 2, is a family of connected graphs with large cycle indicator with the largest possible speed of growth. Computer simulation demonstrates the advantage (better density, i.e., the number of monomial expressions) of public rules derived from A(n, q) in comparison with the symbolic algorithm based on the graphs D(n, q).
ISBN (print): 9781450311601
In this paper, we present FlexBFS, a parallelism-aware implementation of breadth-first search on the GPU. Our implementation can dynamically adjust the computation resources according to feedback on the available parallelism. We also optimized our program in three ways: (1) a simplified two-level queue management, (2) a combined kernel strategy, and (3) a high-degree vertices specialization approach. Our experimental results show that it can achieve a 3x-20x speedup against the fastest serial version, and can outperform the TBB-based multi-threading CPU version and the previous most effective GPU version on all types of input graphs.
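The two-level queue management can be illustrated by the standard level-synchronous BFS pattern below, written serially in Python for clarity; in the actual implementation each queue swap would correspond to GPU kernel work, and the parallelism-aware resource adjustment is not shown.

```python
def bfs_levels(adj, source):
    """adj: {u: [v, ...]}; returns the BFS level of every vertex reachable from source."""
    level = {source: 0}
    frontier = [source]                # current-level queue
    depth = 0
    while frontier:
        next_frontier = []             # next-level queue, filled by all expansions
        for u in frontier:
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = depth + 1
                    next_frontier.append(v)
        frontier = next_frontier       # swap the two queues and advance one level
        depth += 1
    return level
```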
ISBN (print): 9780769548487; 9781467356381
Analysis of social networks is challenging due to the rapid changes of their members and relationships. In many cases it is impractical to recompute the metric of interest; therefore, streaming algorithms are used to reduce the total runtime following modifications to the graph. Centrality is often used for determining the relative importance of a vertex or edge in a graph. The betweenness centrality of a vertex is the fraction of shortest paths going through that vertex among all shortest paths in the graph. Vertices with a high betweenness centrality are usually key players in a social network or bottlenecks in a communication network. Evaluating the betweenness centrality for a graph G = (V, E) is computationally demanding, and the best known algorithm for unweighted graphs has an upper bound time complexity of O(V^2 + VE). Consequently, it is desirable to find a way to avoid a full re-computation of betweenness centrality when a new edge is inserted into the graph. In this work, we give a novel algorithm that reduces computation for the insertion of an edge into the graph. This is the first algorithm for the computation of betweenness centrality in a streaming graph. While the upper bound time complexity of the new algorithm is the same as the upper bound for the static graph algorithm, we show significant speedups for both synthetic and real graphs. For synthetic graphs the speedup varies depending on the type of graph and the graph size. For synthetic graphs with 16384 vertices the average speedup is between 100X and 400X. For five different real-world collaboration networks the average speedup per graph is in the range of 36X to 148X.
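For contrast, the naive baseline that the streaming algorithm is designed to avoid simply recomputes betweenness centrality from scratch (Brandes' algorithm, here via networkx) after every edge insertion; the random graph is a stand-in for a real network.

```python
import networkx as nx

G = nx.erdos_renyi_graph(200, 0.05, seed=1)   # placeholder for a social network
bc = nx.betweenness_centrality(G)             # full static computation, O(V*E) for unweighted graphs

G.add_edge(0, 100)                            # a streamed edge insertion
bc = nx.betweenness_centrality(G)             # naive approach: recompute everything from scratch
```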