We prove that the average complexity of the pairwise ordered tree alignment algorithm of Jiang, Wang and Zhang is in O(nm), where n and m stand for the sizes of the two trees, respectively. We show that the same result holds for the average complexity of pairwise comparison of RNA secondary structures, using a set of biologically relevant operations. (C) 2010 Elsevier B.V. All rights reserved.
In this paper we prove that for the uniform distribution on complete deterministic automata, the average time complexity of Moore's state minimization algorithm is Θ(n log log n), where n is the number of states in the input automata and the number of letters in the alphabet is fixed. Then, an unusual family of implementations of Hopcroft's algorithm is characterized, for which the algorithm will be proved to be always faster than Moore's algorithm. Finally, we present experimental results on the usual implementations of Hopcroft's algorithm. (C) 2011 Elsevier B.V. All rights reserved.
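The partition-refinement idea behind Moore's algorithm can be sketched as follows. This is an illustrative Python version, not one of the implementations analysed in the paper; the function name and encoding of the DFA are our own. States are repeatedly split by the signature (current class, classes of transition targets) until the partition stabilises; each round costs O(n), and the paper's result says the expected number of rounds is Θ(log log n).

```python
def moore_minimize(n, alphabet, delta, accepting):
    """n states 0..n-1, delta[(q, a)] -> state, accepting: set of states.
    Returns the number of Myhill-Nerode classes (states of the minimal DFA)."""
    # Initial partition: accepting vs non-accepting states.
    cls = [1 if q in accepting else 0 for q in range(n)]
    while True:
        # Signature of each state: its class plus the classes it moves to.
        sig = {q: (cls[q],) + tuple(cls[delta[(q, a)]] for a in alphabet)
               for q in range(n)}
        # Renumber distinct signatures to obtain the refined partition.
        ids, new_cls = {}, []
        for q in range(n):
            new_cls.append(ids.setdefault(sig[q], len(ids)))
        if new_cls == cls:          # partition stable: equivalence reached
            return len(ids)
        cls = new_cls
```

For example, a three-state automaton in which states 1 and 2 behave identically minimizes to two states, while one in which all three states differ stays at three.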
ISBN (print): 9781450386883
We study the algorithmic complexity of computing persistent homology of a randomly chosen filtration. Specifically, we prove upper bounds for the average fill-up (number of non-zero entries) of the boundary matrix on Erdős-Rényi and Vietoris-Rips filtrations after matrix reduction. Our bounds show that, in both cases, the reduced matrix is expected to be significantly sparser than what the general worst-case predicts. Our method is based on a link between the fill-up of the boundary matrix and expected Betti numbers of random filtrations. Our bound for Vietoris-Rips complexes is asymptotically tight up to logarithmic factors. We also provide an Erdős-Rényi filtration realising the worst-case.
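The fill-up the abstract bounds is what the standard Z/2 boundary-matrix reduction accumulates. A minimal illustrative sketch (not the paper's code; columns are stored as sets of row indices, and the toy triangle filtration below is our own example):

```python
def reduce_boundary(columns):
    """columns: list of sets of row indices (a Z/2 boundary matrix, one
    column per simplex in filtration order). Returns the reduced columns
    and the persistence pairing low(j) -> j."""
    low_of = {}                                # lowest row index -> owning column
    reduced = [set(c) for c in columns]
    for j, col in enumerate(reduced):
        # While this column's lowest entry collides with an earlier
        # column's, add that column mod 2 (symmetric difference of sets).
        while col and max(col) in low_of:
            col ^= reduced[low_of[max(col)]]
        if col:
            low_of[max(col)] = j
    return reduced, low_of
```

On a filled triangle (vertices 0-2 with empty boundaries, then edges {0,1}, {1,2}, {0,2}), the last edge's column reduces to zero, exposing the 1-cycle; the number of entries surviving in `reduced` is exactly the fill-up being bounded.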
In this paper we consider the problem of computing y = Ax where A is an n × n sparse matrix with Θ(n) nonzero elements. We prove that, under reasonable assumptions, on a local memory machine with p processors this computation requires Ω((n/p) log p) time. We also study the average complexity of this problem: we prove that for an important class of algorithms the computation of y = Ax requires Ω((n/p) log p) time with probability greater than 1/2.
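For concreteness, the computation being bounded is an ordinary sparse matrix-vector product. A sequential sketch in CSR form (illustrative; a local-memory machine would split the rows across the p processors, and the Ω((n/p) log p) term accounts for routing the needed entries of x between memories, which this toy version elides):

```python
def csr_matvec(indptr, indices, data, x):
    """y[i] = sum of data[k] * x[indices[k]] over row i's slice
    indptr[i]..indptr[i+1] of the CSR arrays."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y
```

With Θ(n) nonzeros the arithmetic alone is Θ(n/p) per processor in parallel, so the lower bound says the communication cost dominates by a log p factor.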
Given a set of n coins, some of them weighing H, the others weighing h, h < H, we prove that to determine the set of heavy coins, an optimal algorithm requires an average of ((1+ρ²)/(1+ρ+ρ²))·n + O(1) comparisons, using a beam balance, where ρ denotes the ratio of the probabilities of being light and heavy. A simple quasi-optimal algorithm is described. Similar results are derived for the majority problem. (C) 1996 John Wiley & Sons, Inc.
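A naive baseline (not the paper's quasi-optimal scheme) makes the setting concrete: weighing every other coin against coin 0 identifies the heavy set in n − 1 weighings, since the beam balance reports lighter/equal/heavier. The optimal average ((1+ρ²)/(1+ρ+ρ²))·n + O(1) improves only the constant in front of n. In this sketch `cmp` stands in for the balance:

```python
def heavy_coins(weights):
    """weights[i] is h or H, with both values occurring.
    Returns the set of indices of heavy coins using n - 1 comparisons."""
    cmp = lambda i, j: (weights[i] > weights[j]) - (weights[i] < weights[j])
    n = len(weights)
    sign = [cmp(i, 0) for i in range(1, n)]        # n - 1 weighings vs coin 0
    if any(s > 0 for s in sign):                   # some coin outweighs coin 0
        return {i for i in range(1, n) if sign[i - 1] > 0}
    # otherwise coin 0 is heavy, together with every coin balancing it
    return {0} | {i for i in range(1, n) if sign[i - 1] == 0}
```

The quasi-optimal algorithm of the paper beats this by exploiting the prior probabilities (via ρ) to choose comparisons adaptively.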
ISBN (print): 9781450336178
A standard model in network synchronised distributed computing is the LOCAL model [5]. In this model, the processors work in rounds and, in the classic setting, they know the number of vertices of the network, n. Using n, they can compute the number of rounds after which they must all stop and output. It has been shown recently that for many problems, one can basically remove the assumption about the knowledge of n, without increasing the asymptotic running time [2][4]. In this case, it is assumed that different vertices can choose their final output at different rounds, but continue to transmit messages. In both models, the measure of the running time is the number of rounds before the last node outputs. In this brief announcement, the vertices do not know n, and we consider an alternative measure: the average, over the nodes, of the number of rounds before they output. We prove that the complexity of a problem can be exponentially smaller with the new measure, but that Linial's lower bound for colouring [3] still holds.
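The gap between the two measures can be seen in a toy synchronous simulator (illustrative only, not from the announcement; the problem and encoding below are our own). Each node outputs as soon as it learns its distance to a marked node, so nodes finish at different rounds, and we can compare the classic measure (round of the last output) with the average output round:

```python
def simulate(adj, marked):
    """adj: adjacency lists indexed 0..n-1; marked: set of source nodes.
    Returns (round of last output, average output round)."""
    n = len(adj)
    dist = [0 if v in marked else None for v in range(n)]
    out_round = [0 if v in marked else None for v in range(n)]
    r = 0
    while any(d is None for d in dist):
        r += 1                                     # one synchronous round
        new = list(dist)
        for v in range(n):
            known = [dist[u] for u in adj[v] if dist[u] is not None]
            if dist[v] is None and known:          # v first learns a distance
                new[v] = min(known) + 1
                out_round[v] = r
        dist = new
    return max(out_round), sum(out_round) / n
```

On a path 0-1-2-3 with node 0 marked, the last node outputs at round 3 while the average output round is only 1.5; on longer paths the two measures diverge further, which is the kind of gap the announcement quantifies.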
A new definition is given for the average growth of a function f : Σ* → N with respect to a probability measure μ on Σ*. This allows us to define meaningful average distributional complexity classes for arbitrary time bounds (previously, one could not guarantee arbitrarily good precision). It is shown that, basically, only the ranking of the inputs by decreasing probabilities is of importance. To compare the average and worst case complexity of problems, we study average complexity classes defined by a time bound and a bound on the complexity of possible distributions. Here, the complexity is measured by the time to compute the rank functions of the distributions. We obtain tight and optimal separation results between these average classes. Also, the worst case classes can be embedded into this hierarchy. They are shown to be identical to average classes with respect to distributions of exponential complexity.
We introduce classes of narrow graphs (including grid strips of fixed width), for which the graph reliability problem admits a polynomial time algorithm. Using this algorithm, we show that graph reliability is computable in polynomial time for the average complexity with respect to a Gaussian distribution. The latter is defined as follows: the vertices are numbered by integers {1, 2, ..., n}, and the probability that an edge between i and j is present is e^(−|i−j|²).
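To make the distribution concrete, here is a Monte Carlo estimate of reliability under it (illustrative only, not the paper's polynomial-time algorithm): the probability that the random graph on {1, ..., n}, with edge {i, j} present independently with probability e^(−|i−j|²), is connected. Note the edge probability decays so fast in |i − j| that sampled graphs are effectively "narrow", which is what makes the exact algorithm applicable.

```python
import math
import random

def reliability_mc(n, trials=2000, seed=0):
    """Monte Carlo estimate of P(the Gaussian random graph is connected)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample the graph: edge {i, j} present with probability e^(-(i-j)^2).
        adj = {v: [] for v in range(1, n + 1)}
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                if rng.random() < math.exp(-(i - j) ** 2):
                    adj[i].append(j)
                    adj[j].append(i)
        # Depth-first search connectivity check from vertex 1.
        seen, stack = {1}, [1]
        while stack:
            for u in adj[stack.pop()]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        hits += (len(seen) == n)
    return hits / trials
```

For n = 2 the estimate should hover around e^(−1) ≈ 0.368, the probability of the single possible edge.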
Yao proved that in the decision-tree model, the average complexity of the best deterministic algorithm is a lower bound on the complexity of randomized algorithms that solve the same problem. Here it is shown that a similar result does not always hold in the common model of distributed computation, the model in which all the processors run the same program (which may depend on the processors' input). We therefore construct a new technique that together with Yao's method enables us to show that in many cases, a similar relationship does hold in the distributed model. This relationship enables us to carry over known lower bounds on the complexity of deterministic computations to the realm of randomized computations, thus obtaining new results. The new technique can also be used for obtaining results concerning algorithms with bounded error.