The Patricia trie is a simple modification of a regular trie. By eliminating unary branching nodes, the Patricia achieves better performance than regular tries. However, the question is: how much better, on average, is the Patricia? This paper offers a thorough answer to this question by considering some statistics of the number of nodes examined in a successful search and an unsuccessful search in Patricia tries. It is shown that for a Patricia containing n records the average successful search length S(n) asymptotically becomes (1/h_1) ln n + O(1), and the variance of S(n) is either var S(n) = c ln n + O(1) for an asymmetric Patricia or var S(n) = O(1) for a symmetric Patricia, where h_1 is the entropy of the alphabet over which the Patricia is built and c is an explicit constant. Higher moments of S(n) are also assessed. The number of nodes examined in an unsuccessful search, U(n), is studied only for binary symmetric Patricia tries. We prove that the mth moment of the unsuccessful search length satisfies lim_{n→∞} E U(n)^m / log_2^m n = 1, and the variance of U(n) is var U(n) ≈ 0.87907. These results suggest that Patricia tries are very well balanced trees, in the sense that the shape of a random Patricia trie resembles the shape of a complete tree, the ultimately balanced tree.
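The path-compression idea the abstract refers to can be illustrated with a small sketch. This is a compressed (Patricia-style) binary trie with edge labels rather than the classical PATRICIA layout with skip counts; the class and function names are illustrative assumptions, not taken from the paper. The quantity returned by `search_depth` is the number of nodes examined on a successful search, i.e. the statistic S(n) discussed above.

```python
class Node:
    def __init__(self, edge=""):
        self.edge = edge          # compressed label on the incoming edge
        self.children = {}        # first bit ('0'/'1') -> child node
        self.is_key = False

def insert(root, key):
    node = root
    while True:
        if not key:
            node.is_key = True
            return
        child = node.children.get(key[0])
        if child is None:                     # no branch yet: attach a leaf
            leaf = Node(key)
            leaf.is_key = True
            node.children[key[0]] = leaf
            return
        label = child.edge
        i = 0                                 # length of the common prefix
        while i < len(label) and i < len(key) and label[i] == key[i]:
            i += 1
        if i == len(label):                   # edge fully matched: descend
            node, key = child, key[i:]
        else:                                 # split the compressed edge
            mid = Node(label[:i])
            mid.children[label[i]] = child
            child.edge = label[i:]
            node.children[key[0]] = mid
            node, key = mid, key[i:]

def search_depth(root, key):
    """Number of nodes examined on a successful search (the statistic S(n))."""
    node, depth = root, 0
    while key:
        node = node.children.get(key[0])
        if node is None or not key.startswith(node.edge):
            return None                       # unsuccessful search
        key = key[len(node.edge):]
        depth += 1
    return depth if node.is_key else None
```

Searching for "0001" among {"0000", "0001", "0110"} examines 3 nodes here, whereas an uncompressed trie would examine one node per bit, i.e. 4.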
A polynomial time graph coloring algorithm is presented with the following property: there is a constant c > 0 such that if k = k(n) is a function satisfying k ≤ √(cn / log n), then the algorithm colors optimally almost all graphs with n vertices and chromatic number at most k.
Authors:
BUCHTA, C., Inst. Anal., Tech. Math. und Versicherungsmath., Tech. Univ. Wien, Wiedner Hauptstr. 8-10, A-1040 Wien, Austria
One of n vectors in d-space is called maximal if none of the remaining vectors dominates it in every component. Assuming that the n vectors are distributed identically and that the d components of each vector are distributed independently and continuously, we determine the expected number of maximal vectors explicitly for any n and d. The asymptotic behaviour of this quantity as n tends to infinity, which was investigated by Bentley, Kung, Schkolnick, Thompson and Devroye, follows immediately from our result.
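The dominance notion used here is concrete enough for a small sketch. The quadratic-time check below (names are illustrative, not from the paper) returns exactly the set whose expected size the abstract analyzes:

```python
def maximal_vectors(points):
    """Return the vectors not dominated, in every component, by another vector."""
    def dominates(a, b):
        # a dominates b if a >= b componentwise and the vectors differ
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among (1, 2), (2, 1) and (0, 0), the first two are maximal while (0, 0) is dominated by both.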
In this paper the utility and the difficulties of probabilistic analysis for optimization algorithms are discussed. Such an analysis is expected to deliver valuable criteria, better than worst-case complexity, for the efficiency of an algorithm in practice.
We consider linear programs in which the objective function (cost) coefficients are independent non-negative random variables, and give upper bounds for the random minimum cost. One application shows that for quadratic assignment problems with such costs certain branch-and-bound algorithms usually take more than exponential time.
It has been a challenge for mathematicians to confirm theoretically the extremely good performance of simplex-type algorithms for linear programming. In this paper the average number of steps performed by a simplex algorithm, the so-called self-dual method, is analyzed. The algorithm is not started at the traditional point (1, …, 1)^T, but points of the form (1, ε, ε^2, …)^T, with ε sufficiently small, are used. The result is better, in two respects, than those of the previous analyses. First, it is shown that the expected number of steps is bounded between two quadratic functions c_1(min(m, n))^2 and c_2(min(m, n))^2 of the smaller dimension of the problem. This should be compared with the previous two major results in the field. Borgwardt proves an upper bound of O(n^4 m^{1/(n-1)}) under a model that implies that the zero vector satisfies all the constraints, and also the algorithm under his consideration solves only problems from that particular subclass. Smale analyzes the self-dual algorithm starting at (1, …, 1)^T. He shows that for any fixed m there is a constant c(m) such that the expected number of steps is less than c(m)(ln n)^{m(m+1)}; Megiddo has shown that, under Smale's model, an upper bound C(m) exists. Thus, for the first time, a polynomial upper bound with no restrictions (except for nondegeneracy) on the problem is proved, and, for the first time, a nontrivial lower bound of precisely the same order of magnitude is established. Both Borgwardt and Smale require the input vectors to be drawn from spherically symmetric distributions. In the model in this paper, invariance is required only under certain
Bentley proposed a divide-and-conquer approach to solve the planar closest pair problem. In this paper, we shall show that the average-case performance of this algorithm is proportional to the number of points being examined.
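For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of the standard divide-and-conquer closest-pair scheme: split on the median x-coordinate, recurse, then examine only the points in a strip of width d around the dividing line. This is the textbook variant, not necessarily Bentley's exact formulation.

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair distance in the plane."""
    pts = sorted(points)                       # sort once by x (then y)

    def solve(p):
        n = len(p)
        if n <= 3:                             # base case: brute force
            return min((math.dist(a, b)
                        for i, a in enumerate(p) for b in p[i + 1:]),
                       default=math.inf)
        mid = n // 2
        mid_x = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # only points within d of the dividing line can beat d
        strip = sorted((q for q in p if abs(q[0] - mid_x) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:]:
                if b[1] - a[1] >= d:           # strip points are y-sorted
                    break
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)
```

The average-case claim of the abstract concerns exactly how many points each strip pass examines.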
The Euclidean matching problem is described, and the usual Euclidean metric is used. The objective is to pair the points into m pairs so that the sum of the lengths of the lines joining the pairs is as small as possible. An algorithm is described which finds a matching of points in the unit square. It is proven that, under the assumption that the points are uniformly distributed in the square, the algorithm has a fast anticipated running time, and it yields a matching with value close to the optimum. This algorithm is almost identical to that of Supowit and Reingold (1983), except that they concentrated on a worst-case analysis, while the present algorithm uses a probabilistic analysis.
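The flavor of such grid-based matching heuristics can be sketched as follows. This is only an illustrative assumption of the general scheme, not the Supowit-Reingold algorithm itself: bucket the points of the unit square into cells, pair points within each cell, and carry any leftover point into the next cell of the sweep. The cell size and the sweep order are arbitrary choices here.

```python
import math

def grid_matching(points, cell=0.1):
    """Greedy grid heuristic for Euclidean matching in the unit square."""
    k = round(1 / cell)
    buckets = {}
    for p in points:
        key = (min(int(p[0] * k), k - 1), min(int(p[1] * k), k - 1))
        buckets.setdefault(key, []).append(p)
    pairs, leftover = [], []
    for key in sorted(buckets):        # sweep the cells in a fixed order
        pts = buckets[key] + leftover
        while len(pts) >= 2:
            pairs.append((pts.pop(), pts.pop()))
        leftover = pts                 # at most one unmatched point carries over
    return pairs

def matching_length(pairs):
    """Total edge length of a matching, the quantity being minimized."""
    return sum(math.dist(a, b) for a, b in pairs)
```

Points sharing a small cell are paired at cost O(cell), which is why, for uniformly distributed points, such heuristics come close to the optimum. A serpentine sweep order, as used in the worst-case analyses, would bound the carry-over cost more tightly than the plain lexicographic sweep above.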