It has recently been shown that the iteration of Nash modification on not necessarily normal toric varieties corresponds to a purely combinatorial algorithm on the generators of the semigroup associated to the toric variety. We will show that for toric surfaces this algorithm stops for certain choices of affine charts of the Nash modification. In addition, we give a bound on the number of steps required for the algorithm to stop in the cases we consider. Let C(x_1, x_2) be the field of rational functions of a toric surface. Then our result implies that if ν : C(x_1, x_2) → Γ is any valuation centered on the toric surface and such that ν(x_1) ≠ λν(x_2) for all λ ∈ R \ Q, then a finite iteration of Nash modification gives local uniformization along ν.
Ridges are one of the key features of interest in areas such as computer vision and image processing. Even though a significant amount of research has been directed to defining and extracting ridges, some fundamental challenges remain. For example, the most popular ridge definition (height ridge) is not invariant under monotonic transformations and its global structure is typically ignored during numerical computations. Furthermore, many existing algorithms are based on numerical heuristics and are rarely guaranteed to produce consistent results. This paper reexamines a slightly different ridge definition that is consistent with all desired invariants. Nevertheless, we show that this definition results in similar structures compared to height ridges and that both formulations are equivalent for quadratic functions. Furthermore, this definition can be cast in the form of a degenerate Jacobi set, which allows insights into the global structure of ridges. In particular, we introduce the Ridge-Valley graph as the complete description of all ridges in an image. Finally, using the connection to Jacobi sets we describe a new combinatorial algorithm to extract the Ridge-Valley graph from sampled images guaranteed to produce a valid structure. (C) 2012 Elsevier B.V. All rights reserved.
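For concreteness, the height-ridge condition is easy to check directly when the Hessian is constant. A minimal sketch for the hypothetical quadratic f(x, y) = y - x^2/2, chosen so the one-dimensional ridge is the line x = 0 (this illustrates the height-ridge test only, not the Jacobi-set formulation of the paper):

```python
def grad(x, y):
    # Analytic gradient of f(x, y) = y - x**2 / 2
    return (-x, 1.0)

# The Hessian of f is the constant matrix diag(-1, 0); its eigenpairs are
# (lambda = -1, v = (1, 0)) and (lambda = 0, v = (0, 1)).
RIDGE_EIGVAL, RIDGE_EIGVEC = -1.0, (1.0, 0.0)

def on_height_ridge(x, y, tol=1e-9):
    """Height-ridge test: the gradient is orthogonal to the eigenvector of
    the most negative Hessian eigenvalue, and that eigenvalue is negative."""
    gx, gy = grad(x, y)
    dot = gx * RIDGE_EIGVEC[0] + gy * RIDGE_EIGVEC[1]
    return RIDGE_EIGVAL < 0 and abs(dot) < tol
```

Every point on the line x = 0 passes the test; points off the line fail it, matching the intuition of a ridge crest running along the y-axis.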
In 2012, Page presented a sequential combinatorial generation algorithm for generalized types of restricted weak integer compositions called second-order restricted weak integer compositions. Second-order restricted weak integer compositions cover various types of restricted weak integer compositions with n parts, such as integer compositions, bounded compositions, and part-wise integer compositions. In this paper, we present a parallel algorithm derived from our parallelization of Page's sequential algorithm, with a focus on load balancing for shared-memory machines.
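Page's second-order restrictions are not reproduced here; as a minimal sketch of the underlying objects, the following enumerates weak compositions and one simple restricted variant (the function names and the bound-based restriction are illustrative assumptions, not Page's formulation):

```python
def weak_compositions(n, parts):
    """Yield all weak compositions of n into `parts` non-negative parts."""
    if parts == 1:
        yield (n,)
        return
    for first in range(n + 1):  # the leading part takes any value 0..n
        for rest in weak_compositions(n - first, parts - 1):
            yield (first,) + rest

def bounded_compositions(n, parts, bound):
    """One simple restriction: every part is at most `bound`."""
    return [c for c in weak_compositions(n, parts) if max(c) <= bound]
```

There are C(n + parts - 1, parts - 1) weak compositions in total, e.g. 15 for n = 4 into 3 parts; a parallelization would partition this enumeration tree across threads.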
The image segmentation problem is to delineate, or segment, a salient feature in an image. As such, this is a bipartition problem with the goal of separating the foreground from the background. An NP-hard optimization problem, the Normalized Cut problem, is often used as a model for image segmentation. The common approach for solving the normalized cut problem is the spectral method, which generates heuristic solutions based upon finding the Fiedler eigenvector. Recently, Hochbaum (IEEE Trans Pattern Anal Mach Intell 32(5):889-898, 2010) presented a new relaxation of the normalized cut problem, called the normalized cut' problem, which is solvable in polynomial time by a combinatorial algorithm. We compare this new algorithm with the spectral method and present experimental evidence that the combinatorial algorithm provides solutions which better approximate the optimal normalized cut solution. In addition, the subjective visual quality of the segmentations provided by the combinatorial algorithm greatly improves upon those provided by the spectral method. Our study establishes an interesting observation about the normalized cut criterion: the segmentation which provides the subjectively best visual bipartition rarely corresponds to the segmentation which minimizes the objective function value of the normalized cut problem. We conclude that modeling the image segmentation problem with the normalized cut criterion might not be appropriate. Instead, normalized cut' not only provides better visual segmentations but is also solvable in polynomial time. Therefore, normalized cut' should be the preferred segmentation criterion, for reasons of both complexity and segmentation quality.
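The normalized cut objective itself is simple to evaluate for a given bipartition: Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V). A minimal sketch (the graph encoding and function names are assumptions; this evaluates the criterion only, it implements neither Hochbaum's combinatorial algorithm nor the spectral method):

```python
def assoc(S, weights):
    """assoc(S, V): total edge weight incident to S, counted once per
    endpoint inside S (so edges internal to S count twice)."""
    return sum(len(e & S) * w for e, w in weights.items())

def ncut(weights, A):
    """Normalized cut value of the bipartition (A, V \\ A).
    weights: dict mapping frozenset({u, v}) -> non-negative edge weight."""
    nodes = set().union(*weights)
    B = nodes - A
    cut = sum(w for e, w in weights.items() if len(e & A) == 1)
    return cut / assoc(A, weights) + cut / assoc(B, weights)
```

With this in hand, one can compare the objective values of the segmentations produced by the two methods, which is the kind of comparison the study reports.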
An even factor in a digraph is a vertex-disjoint collection of directed cycles of even length and directed paths. An even factor is called independent if it satisfies a certain matroid constraint. The problem of finding an independent even factor of maximum size is a common generalization of the nonbipartite matching and matroid intersection problems. In this paper, we present a primal-dual algorithm for the weighted independent even factor problem in odd-cycle-symmetric weighted digraphs. Cunningham and Geelen have shown that this problem is solvable via valuated matroid intersection. Their method yields a combinatorial algorithm running in O(n^3 γ + n^6 m) time, where n and m are the number of vertices and edges, respectively, and γ is the time for an independence test. In contrast, combining the weighted even factor and independent even factor algorithms, our algorithm works more directly and runs in O(n^4 γ + n^5) time. The algorithm is fully combinatorial, and thus provides a new dual integrality theorem which commonly extends the total dual integrality theorems for matching and matroid intersection.
Let G = (V, E) be a weighted undirected graph, with non-negative edge weights. We consider the problem of efficiently computing approximate distances between all pairs of vertices in G. While many efficient algorithms are known for this problem in unweighted graphs, not many results are known for this problem in weighted graphs. Zwick (J. Assoc. Comput. Mach. 49:289-317, 2002) showed that for any fixed ε > 0, stretch 1 + ε distances (a path in G between u, v ∈ V is said to be of stretch t if its length is at most t times the distance between u and v in G) between all pairs of vertices in a weighted directed graph on n vertices can be computed in Õ(n^ω) time, where ω < 2.376 is the exponent of matrix multiplication and n is the number of vertices. It is known that finding distances of stretch less than 2 between all pairs of vertices in G is at least as hard as Boolean matrix multiplication of two n × n matrices. Here we show that all-pairs stretch 2 + ε distances for any fixed ε > 0 in G can be computed in expected time O(n^(9/4)). This algorithm uses a fast rectangular matrix multiplication subroutine. We also present a combinatorial algorithm (that is, one that does not use fast matrix multiplication) with expected running time O(n^(9/4)) for computing all-pairs stretch 5/2 distances in G. This combinatorial algorithm serves as a key step in our all-pairs stretch 2 + ε distances algorithm.
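The stretch of a concrete path is straightforward to measure against exact Dijkstra distances. A small sketch (the graph encoding and names are assumptions; this only illustrates the stretch-t definition, not the algorithms of the paper):

```python
import heapq

def dijkstra(adj, src):
    """Exact single-source distances; adj[u] is a list of (v, weight)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def stretch(adj, path):
    """Length of `path` divided by the true endpoint distance (>= 1)."""
    length = sum(min(w for x, w in adj[u] if x == v)
                 for u, v in zip(path, path[1:]))
    return length / dijkstra(adj, path[0])[path[-1]]
```

A path of stretch t reported by an approximation algorithm is at most t times longer than the shortest path; the direct edge a-c below has stretch 1.5 because a shorter two-hop route exists.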
Combinatorial (or rule-based) methods for inferring haplotypes from genotypes on a pedigree have been studied extensively in the recent literature. These methods generally try to reconstruct the haplotypes of each individual so that the total number of recombinants is minimized in the pedigree. The problem is NP-hard, although it is known that the number of recombinants in a practical dataset is usually very small. In this paper, we consider the question of how to efficiently infer haplotypes on a large pedigree when the number of recombinants is bounded by a small constant, i.e. the so-called k-recombinant haplotype configuration (k-RHC) problem. We introduce a simple probabilistic model for k-RHC where the prior haplotype probability of a founder and the haplotype transmission probability from a parent to a child are all assumed to follow the uniform distribution and k random recombination events are assumed to have taken place uniformly and independently in the pedigree. We present an O(mn log^(k+1) n) time algorithm for k-RHC on tree pedigrees without mating loops, where m is the number of loci and n is the size of the input pedigree, and prove that when 90 log n < m < n^3, the algorithm can correctly find a feasible haplotype configuration that obeys the Mendelian law of inheritance and requires no more than k recombinants with probability 1 - O(k^2 log^2 n / (mn) + 1/n^2). The algorithm is efficient when k is of a moderate value and could thus be used to infer haplotypes from genotypes on large tree pedigrees efficiently in practice. We have implemented the algorithm as a C++ program named Tree-k-RHC. The implementation incorporates several ideas for dealing with missing data and data with a large number of recombinants effectively. Our experimental results on both simulated and real datasets show that Tree-k-RHC can reconstruct haplotypes with high accuracy and is much faster than the best combinatorial method in the literature.
Background: Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results: We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions: The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately.
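As a toy illustration of combining priors from different sources, one simple choice is a weighted geometric mean followed by renormalization (an assumption made purely for illustration; this is not GRISOTTO's actual scoring criterion):

```python
import math

def combine_psps(psps, weights=None):
    """Combine several position-specific priors over the same positions
    via a weighted geometric mean, then renormalize to sum to 1.
    psps: list of equal-length lists of strictly positive probabilities."""
    weights = weights or [1.0] * len(psps)
    total_w = sum(weights)
    combined = [
        math.exp(sum(w * math.log(p[i]) for p, w in zip(psps, weights)) / total_w)
        for i in range(len(psps[0]))
    ]
    z = sum(combined)
    return [c / z for c in combined]
```

A position favored by one source and neutral in another stays favored after combination, which is the behavior one wants when merging, say, conservation and nucleosome-positioning priors.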
The information carried by combinations of alleles on the same chromosome, called haplotypes, is of crucial interest in several fields of modern genetics, such as population genetics and association studies. However, this information is usually lost by sequencing and therefore needs to be recovered by inference. In this chapter, we give a brief overview of the methods able to tackle this problem and some practical concerns in applying them to real data.
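A tiny sketch of why inference is needed: a multi-locus genotype is compatible with several haplotype pairs, and phasing must pick one (the encoding and function name are illustrative assumptions):

```python
from itertools import product

def consistent_haplotype_pairs(genotype):
    """Enumerate the unordered haplotype pairs consistent with a genotype.
    genotype: list of per-locus allele pairs, e.g. [('A', 'a'), ('B', 'b')].
    Alleles are unordered within a locus; the phase (which allele sits on
    which chromosome) is exactly what inference must recover."""
    pairs = set()
    # choose, independently per locus, which allele goes to haplotype 1
    for choice in product(*[range(2) for _ in genotype]):
        h1 = tuple(locus[c] for locus, c in zip(genotype, choice))
        h2 = tuple(locus[1 - c] for locus, c in zip(genotype, choice))
        pairs.add(frozenset((h1, h2)))
    return pairs
```

With h heterozygous loci there are 2^(h-1) consistent pairs (for h >= 1), which is why extra information, such as pedigree or population structure, is needed to resolve the phase.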
Recent results showing PPAD-completeness of the problem of computing an equilibrium for Fisher's market model under additively separable, piecewise-linear, concave (PLC) utilities have dealt a serious blow to the program of obtaining efficient algorithms for computing equilibria in "traditional" market models, and have prompted a search for alternative models that are realistic as well as amenable to efficient computation. In this paper, we show that introducing perfect price discrimination into the Fisher model with PLC utilities renders its equilibrium computable in polynomial time. Moreover, its set of equilibria is captured by a convex program that generalizes the classical Eisenberg-Gale program, and always admits a rational solution. We also give a combinatorial, polynomial-time algorithm for computing an equilibrium. Next, we introduce production into our model, and again give a rational convex program that captures its equilibria. We use this program to obtain surprisingly simple proofs of both welfare theorems for this model. Finally, we also give an application of our price discrimination market model to online display advertising marketplaces.
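For reference, the classical Eisenberg-Gale convex program for Fisher markets with linear utilities maximizes budget-weighted log utilities (the PLC/price-discrimination generalization of the paper is not reproduced here):

```latex
\max \sum_i m_i \log u_i
\quad \text{s.t.} \quad
u_i \le \sum_j u_{ij} x_{ij} \ \ \forall i, \qquad
\sum_i x_{ij} \le 1 \ \ \forall j, \qquad
x_{ij} \ge 0,
```

where m_i is buyer i's budget and u_{ij} the utility buyer i derives per unit of good j; optimal solutions of this program correspond to market equilibria, with equilibrium prices arising as the optimal dual variables of the supply constraints.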