Modern applications of graph algorithms often involve the use of the output sets (usually, a subset of edges or vertices of the input graph) as inputs to other algorithms. Since the input graphs of interest are large and dynamic, it is desirable for an algorithm's output to not change drastically when a few random edges are removed from the input graph, so as to prevent issues in postprocessing. Alternatively, having such a guarantee also means that one can revise the solution obtained by running the algorithm on the original graph in just a few places in order to obtain a solution for the new graph. We formalize this feature by introducing the notion of average sensitivity of graph algorithms, which is the average earth mover's distance between the output distributions of an algorithm on a graph and its subgraph obtained by removing an edge, where the average is over the edges removed and the distance between two outputs is the Hamming distance. In this work, we initiate a systematic study of average sensitivity of graph algorithms. After deriving basic properties of average sensitivity such as composition, we provide efficient approximation algorithms with low average sensitivities for concrete graph problems, including the minimum spanning forest problem, the global minimum cut problem, the minimum s-t cut problem, and the maximum matching problem. In addition, we prove that the average sensitivity of our global minimum cut algorithm is almost optimal, by showing a nearly matching lower bound. We also show that every algorithm for the 2-coloring problem has average sensitivity linear in the number of vertices. One of the main ideas involved in designing our algorithms with low average sensitivity is the following fact: if the presence of a vertex or an edge in the solution output by an algorithm can be decided locally, then the algorithm has a low average sensitivity, allowing us to reuse the analyses of known sublinear-time algorithms and local computation algorithms.
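As a point of reference, the definition sketched in this abstract can be written out as follows (the name AvgSens and the exact normalization below are a paraphrase of the abstract, not necessarily the paper's own notation): for a possibly randomized algorithm A on a graph G = (V, E),

\[
\operatorname{AvgSens}(A, G) \;=\; \frac{1}{|E|} \sum_{e \in E} d_{\mathrm{EM}}\bigl(A(G),\, A(G \setminus \{e\})\bigr),
\]

where \(d_{\mathrm{EM}}\) is the earth mover's distance between the two output distributions and the ground metric on individual outputs is the Hamming distance.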
SuiteSparse:GraphBLAS is a full implementation of the GraphBLAS standard, which defines a set of sparse matrix operations on an extended algebra of semirings using an almost unlimited variety of operators and types. When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs. GraphBLAS provides a powerful and expressive framework for creating graph algorithms based on the elegant mathematics of sparse matrix operations on a semiring. An overview of the GraphBLAS specification is given, followed by a description of the key features and performance of its implementation in the SuiteSparse:GraphBLAS package.
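To make the matrix-operations-as-graph-computations point concrete, here is a minimal sketch in plain Python (deliberately not the SuiteSparse:GraphBLAS API): a breadth-first-search level expansion is a sparse matrix-vector product over the Boolean (OR, AND) semiring, with the visited set acting as a mask. The dictionary-of-sets representation and function names are illustrative.

```python
# Minimal sketch (plain Python, not the GraphBLAS API): one BFS step viewed as
# a sparse matrix-vector product over the Boolean (OR, AND) semiring.
def bfs_level(adj, frontier, visited):
    """adj: dict vertex -> set of neighbours (the sparse adjacency 'matrix').
    frontier, visited: sets of vertices. Returns the next frontier."""
    nxt = set()
    for u in frontier:                 # nonzero entries of the frontier vector
        for v in adj.get(u, ()):       # nonzero entries of row u
            if v not in visited:       # accumulate with OR, mask out visited
                nxt.add(v)
    return nxt

def bfs(adj, source):
    visited, frontier, level = {source}, {source}, 0
    levels = {source: 0}
    while frontier:
        frontier = bfs_level(adj, frontier, visited)
        visited |= frontier
        level += 1
        levels.update({v: level for v in frontier})
    return levels

# Example: path graph 0-1-2-3
print(bfs({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}, 0))  # {0: 0, 1: 1, 2: 2, 3: 3}
```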
In this paper we present deterministic parallel algorithms for the coarse-grained multicomputer (CGM) and bulk synchronous parallel (BSP) models for solving the following well-known graph problems: (1) list ranking, (2) Euler tour construction in a tree, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition or open ear decomposition, and (7) 2-edge connectivity and biconnectivity (testing and component computation). The algorithms require O(log p) communication rounds with linear sequential work per round (p = number of processors, N = total input size). Each processor creates, during the entire algorithm, messages of total size O(log(p)(N/p)). The algorithms assume that the local memory per processor (i.e., N/p) is larger than p^ε, for some fixed ε > 0. Our results imply BSP algorithms with O(log p) supersteps, O(g log(p)(N/p)) communication time, and O(log(p)(N/p)) local computation time. It is important to observe that the number of communication rounds/supersteps obtained in this paper is independent of the problem size, and grows only logarithmically with respect to p. With growing problem size, only the sizes of the messages grow but the total number of messages remains unchanged. Due to the considerable protocol overhead associated with each message transmission, this is an important property. The result for Problem (1) is a considerable improvement over those previously reported. The algorithms for Problems (2)-(7) are the first practically relevant parallel algorithms for these standard graph problems.
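For readers unfamiliar with Problem (1), the sketch below illustrates what list ranking computes, using the classic pointer-jumping idea simulated sequentially; the CGM/BSP algorithms in the paper use different, communication-efficient techniques, so this illustrates the problem rather than their method.

```python
# Illustration only: list ranking via pointer jumping, simulated sequentially.
def list_rank(succ):
    """succ[i] is the successor of node i, or i itself at the list tail.
    Returns rank[i] = number of hops from i to the tail."""
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # bit_length(n) >= ceil(log2 n) doubling rounds suffice: each node adds
    # its successor's rank, then jumps its pointer two steps ahead.
    for _ in range(max(1, n.bit_length())):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

print(list_rank([1, 2, 3, 3]))  # chain 0->1->2->3, tail 3: [3, 2, 1, 0]
```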
Computational nanotechnology conceptualizes the basis of bottom-up approaches for constructing potential nanosystems. This paper introduces a new methodology of computational nanotechnology to calculate a set of optimal mechanical properties of the fullerene C70 nanoparticle using graph algorithms which employ real experimental data on C70's structure. The C70 nanoparticle is composed of 70 equivalent carbon atoms arranged as a hollow cage in the form of a rugby-ball, egg-shaped structure which has potential applications in nanorobotic and bio-nanorobotic systems. In this work, at first, the Wiener (W), hyper-Wiener (WW), Harary (Ha) and reciprocal Wiener (RW) indices were computed using dynamic programming and presented as: W(C70) = 17749.9, WW(C70) = 85448.4, Ha(C70) = 120.9, and RW(C70) = 435.6. The Hosoya and hyper-Hosoya polynomials of C70, which are related to C70's indices, have also been computed, and the results revealed good agreement with the mathematical equations relating the indices and polynomials. Also, a graph algorithm based on greedy methods was employed to find a set of optimal electronic aspects of C70's structure via computing the Minimum Weight Spanning Tree (MWST) of C70. The computed MWST indicated that connecting the seventy carbon atoms of C70 requires a minimum of 20 double bonds, 25 intermediate bonds, and 24 single bonds. These results also showed good agreement with the principles of the employed greedy algorithm.
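The unweighted, textbook versions of two of the indices above can be computed from all-pairs BFS distances as sketched below; the paper works from C70's experimental bond data (distinguishing single, intermediate, and double bonds), so this sketch will not reproduce the reported values.

```python
from collections import deque

# Standard unweighted definitions: Wiener = sum of pairwise distances,
# Harary = sum of reciprocal pairwise distances.
def all_pairs_distances(adj):
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

def wiener(adj):
    dist = all_pairs_distances(adj)
    return sum(dist[u][v] for u in adj for v in adj if u < v)

def harary(adj):
    dist = all_pairs_distances(adj)
    return sum(1.0 / dist[u][v] for u in adj for v in adj if u < v)

# Example: 4-cycle
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(wiener(c4), harary(c4))  # 8 5.0
```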
We present a new approach for designing external graph algorithms and use it to design simple, deterministic and randomized external algorithms for computing connected components, minimum spanning forests, bottleneck minimum spanning forests, maximal independent sets (randomized only), and maximal matchings in undirected graphs. Our I/O bounds compete with those of previous approaches. We also introduce a semi-external model, in which the vertex set but not the edge set of a graph fits in main memory. In this model we give an improved connected components algorithm, using new results for external grouping and sorting with duplicates. Unlike previous approaches, ours is purely functional, without side effects, and is thus amenable to standard checkpointing and programming language optimization techniques. This is an important practical consideration for applications that may take hours to run.
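The following sketch only illustrates the semi-external setting described above, where per-vertex state fits in memory while edges arrive as a stream; it is a plain union-find pass, not the paper's purely functional, I/O-efficient algorithm.

```python
# Semi-external connected components sketch: vertex array in memory,
# edges consumed as a stream (they need not fit in memory).
def connected_components(num_vertices, edge_stream):
    parent = list(range(num_vertices))        # in-memory per-vertex state

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    for u, v in edge_stream:                  # one pass over the edge stream
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(v) for v in range(num_vertices)]

print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # e.g. [2, 2, 2, 4, 4]
```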
We describe a parallel library written with message-passing (MPI) calls that allows algorithms to be expressed in the MapReduce paradigm. This means the calling program does not need to include explicit parallel code, but instead provides "map" and "reduce" functions that operate independently on elements of a data set distributed across processors. The library performs the needed data movement between processors. We describe how typical MapReduce functionality can be implemented in an MPI context, and also in an out-of-core manner for data sets that do not fit within the aggregate memory of a parallel machine. Our motivation for creating this library was to enable graph algorithms to be written as MapReduce operations, allowing processing of terabyte-scale data sets on traditional MPI-based clusters. We outline MapReduce versions of several such algorithms: vertex ranking via PageRank, triangle finding, connected component identification, Luby's algorithm for maximal independent sets, and single-source shortest-path calculation. To test the algorithms on arbitrarily large artificial graphs we generate randomized R-MAT matrices in parallel; a MapReduce version of this operation is also described. Performance and scalability results for the various algorithms are presented for varying size graphs on a distributed-memory cluster. For some cases, we compare the results with non-MapReduce algorithms, different machines, and different MapReduce software, namely Hadoop. Our open-source library is written in C++, is callable from C++, C, Fortran, or scripting languages such as Python, and can run on any parallel platform that supports MPI. (C) 2011 Elsevier B.V. All rights reserved.
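The map/shuffle/reduce pattern the library exposes can be mimicked in a few lines of plain Python; the sketch below shows one PageRank iteration in that style. In the actual library the key-value pairs are distributed across processors and the shuffle is performed by MPI communication; the damping factor and toy graph here are illustrative.

```python
from collections import defaultdict

# Plain-Python stand-in for a MapReduce-style PageRank iteration
# (dangling-node mass is ignored in this sketch).
def pagerank_iteration(out_edges, rank, damping=0.85):
    # Map: each vertex emits (neighbour, share-of-rank) key-value pairs.
    emitted = []
    for v, nbrs in out_edges.items():
        if nbrs:
            share = rank[v] / len(nbrs)
            emitted.extend((u, share) for u in nbrs)

    # Shuffle: group values by key (MPI communication in the real library).
    grouped = defaultdict(list)
    for key, value in emitted:
        grouped[key].append(value)

    # Reduce: combine each vertex's incoming contributions.
    n = len(out_edges)
    return {v: (1 - damping) / n + damping * sum(grouped.get(v, [0.0]))
            for v in out_edges}

g = {0: [1, 2], 1: [2], 2: [0]}
r = {v: 1 / 3 for v in g}
print(pagerank_iteration(g, r))
```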
Online graph problems are considered in models where the irrevocability requirement is relaxed. We consider the Late Accept model, where a request can be accepted at a later point, but any acceptance is irrevocable. Similarly, we consider the Late Reject model, where an accepted request can later be rejected, but any rejection is irrevocable (this is sometimes called preemption). Finally, we consider the Late Accept/Reject model, where late accepts and rejects are both allowed, but any late reject is irrevocable. We consider four classical graph problems: for Maximum Independent Set, the Late Accept/Reject model is necessary to obtain a constant competitive ratio; for Minimum Vertex Cover, the Late Accept model is sufficient; and for Minimum Spanning Forest, the Late Reject model is sufficient. The Maximum Matching problem admits constant competitive ratios in all cases. We also consider Maximum Acyclic Subgraph and Maximum Planar Subgraph, which exhibit patterns similar to Maximum Independent Set.
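As a small illustration of the baseline, fully irrevocable model for Maximum Matching, the greedy rule below accepts an arriving edge iff both endpoints are still free; the resulting matching is maximal and hence within a factor 2 of optimal, which is one reason a constant competitive ratio is attainable for matching even without late accepts or rejects.

```python
# Online greedy matching under immediate, irrevocable accept/reject decisions.
def online_greedy_matching(edge_arrivals):
    matched, matching = set(), []
    for u, v in edge_arrivals:
        if u not in matched and v not in matched:
            matching.append((u, v))          # irrevocable accept
            matched.update((u, v))
    return matching

print(online_greedy_matching([(1, 2), (2, 3), (3, 4)]))  # [(1, 2), (3, 4)]
```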
Designing architectural layouts is a complex task that has garnered significant attention in the research community. While automated site layout design and flat layout design have been extensively studied, automated building layout design has been relatively overlooked. This paper describes an approach for automatically generating building layouts using deep learning and graph algorithms. A unique building layout dataset is created to support the proposed approach. Euclidean distance, the Dice coefficient, and a force-directed graph algorithm are employed for layout selection and fine-tuning. The Input-controlled Spatial Attention U-Net model accurately segments the building region, and the resulting layout is refined through image operations, leading to comprehensive BIM models for designers. Through two generative case studies and a comparative experiment with neural networks, this paper demonstrates the effectiveness of the approach, which can assist designers during the initial stages of design and enable rapid generation of complete layouts for individual buildings.
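Of the selection criteria mentioned above, the Dice coefficient is the most self-contained; a hedged sketch is given below, assuming layouts are compared as sets of occupied grid cells (the abstract does not spell out this representation, so it is an assumption).

```python
# Dice coefficient between two layouts represented as sets of occupied cells
# (illustrative representation, not necessarily the paper's).
def dice_coefficient(cells_a, cells_b):
    if not cells_a and not cells_b:
        return 1.0
    return 2 * len(cells_a & cells_b) / (len(cells_a) + len(cells_b))

print(dice_coefficient({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)}))  # ~0.667
```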
Intensity-modulated arc therapy (IMAT) is a rotational IMRT technique. It uses a set of overlapping or nonoverlapping arcs to create a prescribed dose distribution. Despite its numerous advantages, IMAT has not gained widespread clinical application. This is mainly due to the lack of an effective IMAT leaf-sequencing algorithm that can convert the optimized intensity patterns for all beam directions into IMAT treatment arcs. To address this problem, we have developed an IMAT leaf-sequencing algorithm and software using graph algorithms from computer science. The input to our leaf-sequencing software includes (1) a set of (continuous) intensity patterns optimized by a treatment planning system at a sequence of equally spaced beam angles (typically 10 degrees apart), (2) a maximum leaf motion constraint, and (3) the number of desired arcs, k. The output is a set of treatment arcs that best approximates the set of optimized intensity patterns at all beam angles, with guaranteed smooth delivery that does not violate the maximum leaf motion constraint. The new algorithm consists of the following key steps. First, the optimized intensity patterns are segmented into intensity profiles that are aligned with individual MLC leaf pairs. Then each intensity profile is segmented into k MLC leaf openings using a k-link shortest path algorithm. The leaf openings for all beam angles are subsequently connected together to form 1D IMAT arcs under the maximum leaf motion constraint using a shortest path algorithm. Finally, the 1D IMAT arcs are combined to form IMAT treatment arcs of MLC apertures. The performance of the implemented leaf-sequencing software has been tested on four treatment sites (prostate, breast, head and neck, and lung). In all cases, our leaf-sequencing algorithm produces efficient and highly conformal IMAT plans that rival their counterpart, the tomotherapy plans, and significantly improve on the IMRT plans. Algorithm execution times range from a few seconds to 2 min.
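The k-link shortest path step can be illustrated with a generic dynamic program over edge counts, as sketched below; the paper's specific graph construction for segmenting an intensity profile into k MLC leaf openings is not reproduced here.

```python
import math

# Generic k-link (exactly k edges) shortest path by DP over edge counts.
def k_link_shortest_path(nodes, edges, s, t, k):
    """edges: dict (u, v) -> weight. Returns the minimum cost of an s-t walk
    with exactly k edges (math.inf if none exists)."""
    dist = {v: math.inf for v in nodes}
    dist[s] = 0.0
    for _ in range(k):
        new_dist = {v: math.inf for v in nodes}
        for (u, v), w in edges.items():
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        dist = new_dist
    return dist[t]

nodes = ["s", "a", "b", "t"]
edges = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 2, ("b", "t"): 0.5, ("a", "b"): 0.1}
print(k_link_shortest_path(nodes, edges, "s", "t", 2))  # 2.0 (s-a-t)
print(k_link_shortest_path(nodes, edges, "s", "t", 3))  # 1.6 (s-a-b-t)
```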
A novel approach to using the filmification of methods concept in graph algorithm representation, specification, and programming is considered. It is based on a "cyberFilm" format, where a set of multimedia frames represents algorithmic features. A brief description of the cyberFilm concept and an overview of graph algorithm features are presented. A number of cyberFilms related to Prim's and Dijkstra's algorithms have been developed and used to explain the basic ideas of the approach. Several versions of the algorithm visualization are demonstrated by corresponding examples of cyberFilm frames and icon language representations. In addition, a method for program generation from the cyberFilm specification is provided, with explanations of the program templates supporting the cyberFilm frames. (c) 2006 Published by Elsevier Ltd.
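For reference alongside the discussion of Prim's and Dijkstra's algorithms, a standard textbook Prim's algorithm is sketched below; it is not the code generated from the cyberFilm templates, just the underlying method the cyberFilms visualize.

```python
import heapq

# Textbook Prim's algorithm on an adjacency-list weighted graph.
def prim_mst(adj, start):
    """adj: dict vertex -> list of (neighbour, weight). Returns MST edge list."""
    visited = {start}
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for x, wx in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst

g = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2)], 2: [(0, 1), (1, 2)]}
print(prim_mst(g, 0))  # [(0, 2, 1), (2, 1, 2)]
```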