The present paper shows that for certain algorithms, such as sorting, the parameters of the input distribution must also be taken into account, apart from the input size, for a more precise evaluation of the computational and time complexity (average case only) of the algorithm in question (the so-called "gold standard"). Some concrete results are presented to warrant a new and improved model for replacement sort (also called selection sort), namely y(n, p_1, ..., p_m) = b_0 + b_1 · n(n-1)/2 + b_2 · i(n, p_1, ..., p_m) + E, where the left-hand side gives the average case time complexity, n is the input size, the p_i's are the parameters of the input distribution characterizing the sorting elements, i is the average number of interchanges, which is a function of both the input size and the parameters, and the remaining terms arise from linear regression and have their usual meanings. The error term E arises because we fix only the input size n in the model while varying the specific input elements and their relative positions in the array, for a particular distribution [H. Mahmoud, Sorting: A Distribution Theory, John Wiley and Sons, 2000]. The term n(n-1)/2 represents the number of comparisons. We claim this to be an improvement over the conventional model, namely y(n) = a·n² + b·n + c + E, which stems from the O(n²) complexity of this algorithm. We argue that the new model can be a guiding factor in distinguishing this algorithm from other sorting algorithms of similar order of average complexity, such as bubble sort and insertion sort. Note carefully that the dependence of the number of interchanges on the parameters is more prominent for discrete distributions than for continuous ones, and we suspect this is because the probability of a tie is zero in the continuous case. However, the presence of ties and their relative positions in the array is crucial in discrete cases, and this is precisely where the parameters of the input distribution come into play. Those algorithms in which ties have a greater influence on some of the computations will show a correspondingly greater dependence on the parameters of the input distribution.
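A minimal sketch in Python of the kind of experiment such a model rests on (the function names and the choice of test distributions below are illustrative, not taken from the paper): selection sort is instrumented to count comparisons and interchanges, so the comparison count comes out as exactly n(n-1)/2 on every input, while the average interchange count shifts with the input distribution, notably in the presence of ties.

import random

def selection_sort_counts(items):
    # Replacement (selection) sort, instrumented to count comparisons and interchanges.
    a = list(items)
    n = len(a)
    comparisons = 0
    interchanges = 0
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1                 # every pair (i, j) with i < j is compared once
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:                     # interchange only when a strictly smaller element was found
            a[i], a[min_idx] = a[min_idx], a[i]
            interchanges += 1
    return comparisons, interchanges

def mean_interchanges(sampler, n, trials=300, seed=0):
    # Monte Carlo estimate of the average interchange count i(n, parameters).
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += selection_sort_counts([sampler(rng) for _ in range(n)])[1]
    return total / trials

n = 100
print("comparisons, always n(n-1)/2:", selection_sort_counts(list(range(n)))[0])
# A continuous input (ties have probability zero) versus a discrete input with many ties:
print("uniform(0,1) mean interchanges:", mean_interchanges(lambda r: r.random(), n))
print("randint(0,4) mean interchanges:", mean_interchanges(lambda r: r.randint(0, 4), n))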
ISBN (print): 0780385934
This paper proposes a low-complexity one-pass variable bit rate (VBR) video coding algorithm that achieves smooth quality and satisfies a constant storage-size constraint. Exploiting the correlation among frames, it uses the long-term average complexity of previously coded frames to predict the complexity of the frame being coded. Bits are allocated across the whole sequence according to the storage size and the average complexity of each frame type, and the quantization scale is then decided from this allocation. In most cases the allocated bits differ from those actually used, so the algorithm introduces a quantity similar to the virtual buffer fullness of TM5 as a feedback factor to tune the allocated bits slightly. With these feedback factors it reaches a trade-off between used and remaining bits. Simulations show that the quantization scale is tuned slightly and frequently, but overall it changes slowly. The algorithm achieves higher visual quality than TM5 and other one-pass VBR algorithms and is very effective for a constant storage size. Since it does not require the bits actually used to match the allocated bits as closely as possible, it adds little CPU load and is suitable for real-time applications.
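A rough one-pass bit-allocation sketch in Python, written under assumptions since the paper's exact formulas are not quoted in the abstract (the update rules, the 5% overshoot stand-in for the real encoder's spending, and the reaction constant are all hypothetical): an even share of the remaining budget is scaled by the current frame's complexity relative to the running average for its frame type, then damped by a TM5-like fullness feedback term.

def allocate_bits(frame_complexities, frame_types, total_budget, reaction=0.1):
    # Hypothetical one-pass allocation loop, not the paper's algorithm.
    avg_complexity = {}
    counts = {}
    fullness = 0.0                 # signed surplus of bits actually spent over bits planned
    remaining = float(total_budget)
    frames_left = len(frame_complexities)
    targets = []
    for c, t in zip(frame_complexities, frame_types):
        counts[t] = counts.get(t, 0) + 1
        prev = avg_complexity.get(t, c)
        avg_complexity[t] = prev + (c - prev) / counts[t]      # long-term average per frame type
        base = remaining / frames_left                         # even share of what is left
        target = base * (c / avg_complexity[t])                # more bits for harder frames
        target *= 1.0 - reaction * fullness / max(base, 1.0)   # feedback keeps the total on budget
        used = target * 1.05                                   # stand-in for the encoder's real spend
        fullness += used - target
        remaining -= used
        frames_left -= 1
        targets.append(target)
    return targets

# Illustrative call with made-up complexities for an I/B/B/P-style frame pattern.
print(allocate_bits([80, 30, 30, 50] * 3, ["I", "B", "B", "P"] * 3, total_budget=12000))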
ISBN (print): 3540230254
Given a point set P = {p_1, ..., p_n} in the d-dimensional unit hypercube, we give upper bounds on the maximal expected number of extreme points when each point p_i is perturbed by small random noise chosen independently for each point from the same noise distribution Δ. Our results are parametrized by the variance of the noise distribution. For large variance we essentially consider the average case for distribution Δ, while for variance 0 we consider the worst case. Hence our results give upper bounds on the number of extreme points where our input distributions range from average case to worst case. Our main contribution is a rather general lemma that can be used to obtain upper bounds on the expected number of extreme points for a large class of noise distributions. We then apply this lemma to obtain explicit bounds for random noise coming from the Gaussian normal distribution of variance σ² and from the uniform distribution in a hypercube of side length ε. For these noise distributions we show upper bounds of O((1/σ)^d · log^(3d/2 − 1) n) and O((n log n / ε)^(d/(d+1))), respectively. Besides its theoretical motivation, our model is also motivated by the observation that in many applications of convex hull algorithms the input data is inherently noisy, e.g. when the data comes from physical measurements or imprecise arithmetic is used.
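A small Monte Carlo sketch in Python (using NumPy and SciPy; an experiment one could run on this model, not the paper's analysis): estimate the expected number of extreme points of a fixed planar point set after independent Gaussian perturbation with standard deviation sigma.

import numpy as np
from scipy.spatial import ConvexHull

def expected_extreme_points(points, sigma, trials=100, seed=0):
    # Monte Carlo estimate of the expected number of extreme (convex hull)
    # points when every input point gets independent Gaussian noise of
    # standard deviation sigma.
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        perturbed = points + rng.normal(scale=sigma, size=points.shape)
        total += len(ConvexHull(perturbed).vertices)
    return total / trials

rng = np.random.default_rng(1)
pts = rng.random((500, 2))          # a demo point set in the unit square (d = 2);
                                    # an adversary would place the points differently
print("no noise  :", len(ConvexHull(pts).vertices))
for sigma in (0.01, 0.05, 0.2):
    print("sigma", sigma, ":", expected_extreme_points(pts, sigma))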
Based on the surrogate data hypothesis testing method, this paper presents an improved nonlinear algorithm for analyzing deterministic chaotic signals, which can be applied to the analysis of abnormal-rhythm electrocardiosignals. Ventricular tachycardia (VT) and ventricular fibrillation (VF) can be treated as nonlinear chaotic signals that differ from stochastic signals. On the basis of qualitative analysis theory in nonlinear dynamics, the authors put forward a new definition of complexity rate and developed several related detection methods for the quantitative analysis of VT and VF. The results indicate that both the qualitative and the quantitative analysis of VT and VF are objective and credible.
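The paper's complexity rate is not reproduced here; the Python sketch below only illustrates the surrogate-data step it builds on, using phase-randomized surrogates and a deliberately simple placeholder statistic in place of the authors' measure.

import numpy as np

def phase_randomized_surrogate(x, rng):
    # Classic surrogate: keep the amplitude spectrum, randomize the phases,
    # which preserves the linear (autocorrelation) structure of the signal.
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
    phases[0] = 0.0                      # keep the mean untouched
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

def surrogate_test(x, statistic, n_surrogates=99, seed=0):
    # One-sided rank test: is the statistic of the original signal extreme
    # compared with its distribution over surrogates?
    rng = np.random.default_rng(seed)
    original = statistic(x)
    surrogate_values = [statistic(phase_randomized_surrogate(x, rng))
                        for _ in range(n_surrogates)]
    rank = sum(v >= original for v in surrogate_values)
    return original, (rank + 1) / (n_surrogates + 1)

def sign_change_rate(x):
    # Placeholder statistic (NOT the paper's complexity rate).
    d = np.diff(x)
    return np.mean(np.sign(d[1:]) != np.sign(d[:-1]))

# Demo on a deterministic chaotic series (logistic map) versus its surrogates.
x = np.empty(2000)
x[0] = 0.4
for k in range(1, len(x)):
    x[k] = 3.9 * x[k - 1] * (1 - x[k - 1])
print(surrogate_test(x, sign_change_rate))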
For every fixed k ≥ 3 we describe an algorithm for deciding k-colorability whose expected running time is polynomial over the probability space G(n, p) of random graphs, as long as the edge probability p = p(n) satisfies p(n) ≥ C/n, with C = C(k) a sufficiently large constant. (C) 2002 Elsevier Science B.V. All rights reserved.
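The paper's algorithm is not reproduced here; as a baseline to experiment with on G(n, p), the Python sketch below generates a sparse random graph with p = C/n and decides k-colorability by plain backtracking (which, unlike the algorithm above, carries no polynomial expected-time guarantee).

import random
from itertools import combinations

def random_graph(n, p, seed=0):
    # Adjacency sets of G(n, p): every possible edge is present independently.
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def k_colorable(adj, k):
    # Plain backtracking: colour vertices in order, trying every colour that
    # does not clash with an already-coloured neighbour.
    n = len(adj)
    colour = [-1] * n

    def extend(v):
        if v == n:
            return True
        used = {colour[u] for u in adj[v] if colour[u] != -1}
        for c in range(k):
            if c not in used:
                colour[v] = c
                if extend(v + 1):
                    return True
        colour[v] = -1
        return False

    return extend(0)

adj = random_graph(25, 4.0 / 25)      # p = C/n with C = 4, i.e. the sparse regime
print("3-colorable:", k_colorable(adj, 3))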
Author: Enge, A., Univ Augsburg, Lehrstuhl Diskrete Math Optimierung & Operat Res, D-86135 Augsburg, Germany
After generalising two reduction algorithms to characteristic 2, we analyse the average complexity of the arithmetic in hyperelliptic Jacobians over any finite field. To this end we determine the exact average number of field operations for computing the greatest common divisor of polynomials over a finite field by the extended Euclidean algorithm.
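A plain Python sketch (prime fields GF(p) only; the characteristic-2 reduction algorithms themselves are not touched) of the extended Euclidean algorithm on polynomials, instrumented to count field multiplications and inversions, the operations whose exact average the paper determines.

P = 7                       # a small prime modulus for illustration
ops = {"mul": 0, "inv": 0}  # global operation counters

def trim(a):
    # Drop trailing zero coefficients (lists store lowest degree first).
    while a and a[-1] == 0:
        a.pop()
    return a

def inv_mod(x):
    ops["inv"] += 1
    return pow(x, P - 2, P)

def poly_divmod(a, b):
    # Polynomial long division of a by b (b nonzero) over GF(P).
    a = trim(a[:])
    q = [0] * max(len(a) - len(b) + 1, 1)
    if len(a) >= len(b):
        lead_inv = inv_mod(b[-1])
    while len(a) >= len(b):
        shift = len(a) - len(b)
        coeff = a[-1] * lead_inv % P
        ops["mul"] += 1
        q[shift] = coeff
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - coeff * bi) % P
            ops["mul"] += 1
        trim(a)
    return trim(q), a

def poly_add(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % P
                 for i in range(n)])

def poly_mul(a, b):
    if not a or not b:
        return []
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
            ops["mul"] += 1
    return trim(res)

def ext_euclid(a, b):
    # Returns (g, s, t) with s*a + t*b = g = gcd(a, b) in GF(P)[x].
    r0, r1 = trim(a[:]), trim(b[:])
    s0, s1, t0, t1 = [1], [], [], [1]
    while r1:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, poly_add(s0, [(-c) % P for c in poly_mul(q, s1)])
        t0, t1 = t1, poly_add(t0, [(-c) % P for c in poly_mul(q, t1)])
    return r0, s0, t0

g, s, t = ext_euclid([1, 0, 3, 1], [2, 1, 1])   # gcd(x^3 + 3x^2 + 1, x^2 + x + 2)
print("gcd:", g, "  field operations:", ops)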
In this paper we propose a random CSP model, called Model GB, which is a natural generalization of the standard Model B. We consider Model GB in the case where each constraint is easy to satisfy; in this case Model GB exhibits non-trivial behaviour (it is neither trivially satisfiable nor trivially unsatisfiable) as the number of variables approaches infinity. A detailed analysis is also presented to obtain an asymptotic estimate (accurate to a factor of 1 + o(1)) of the average number of nodes in a search tree used by the backtracking algorithm on Model GB. It is shown that the average number of nodes required for finding all solutions or proving that no solution exists grows exponentially with the number of variables. So this model might be an interesting distribution for studying the nature of hard instances and evaluating the performance of CSP algorithms. In addition, we further investigate the behaviour of the average number of nodes as r (the ratio of constraints to variables) varies. The results indicate that as r increases, random CSP instances become easier and easier to solve, and the base of the exponential (in n) growth of the average number of nodes tends to 1 as r approaches infinity. Therefore, although the average number of nodes used by the backtracking algorithm on random CSPs is exponential, many CSP instances will be very easy to solve when r is sufficiently large.
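Model GB's exact parameterization is not reproduced in the abstract, so the Python sketch below uses a generic Model B-like generator for random binary CSPs (the parameter names and the tightness value are illustrative) and counts the nodes of a backtracking search that enumerates all solutions, the quantity whose average the paper estimates.

import random
from itertools import combinations, product

def random_csp(n, d, r, q, seed=0):
    # Illustrative generator: n variables with domain size d, m = r*n binary
    # constraints on random variable pairs, each forbidding a fraction q of
    # the d*d value pairs.
    rng = random.Random(seed)
    scopes = rng.sample(list(combinations(range(n), 2)), round(r * n))
    all_pairs = list(product(range(d), repeat=2))
    constraints = {s: set(rng.sample(all_pairs, round(q * d * d))) for s in scopes}
    return n, d, constraints

def backtrack_count_all(csp):
    # Counts search-tree nodes while enumerating all solutions.
    n, d, constraints = csp
    nodes = 0
    assignment = {}

    def consistent(var, val):
        for (i, j), forbidden in constraints.items():
            if j == var and i in assignment and (assignment[i], val) in forbidden:
                return False
            if i == var and j in assignment and (val, assignment[j]) in forbidden:
                return False
        return True

    def search(var):
        nonlocal nodes
        nodes += 1
        if var == n:
            return 1
        found = 0
        for val in range(d):
            if consistent(var, val):
                assignment[var] = val
                found += search(var + 1)
                del assignment[var]
        return found

    solutions = search(0)
    return nodes, solutions

print(backtrack_count_all(random_csp(n=12, d=3, r=2.0, q=0.2)))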
We introduce classes of narrow graphs (including grid strips of fixed width) for which the graph reliability problem admits a polynomial time algorithm. Using this algorithm, we show that graph reliability is computable in polynomial time for the average complexity with respect to a Gaussian distribution. The latter is defined as follows: the vertices are numbered by the integers {1, 2, ..., n}, and the probability that an edge between i and j is present is e^(-|i-j|²).
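The polynomial-time algorithm for narrow graphs is not reproduced here; the Python sketch below merely instantiates the Gaussian input model from the abstract, drawing each edge {i, j} with probability e^(-|i-j|²), and estimates by Monte Carlo one common reading of reliability, namely the probability that the sampled graph is connected.

import math
import random

def sample_connected(n, rng):
    # Draw one graph with edge {i, j} present with probability exp(-|i-j|^2),
    # then check connectivity with a small union-find structure.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < math.exp(-(j - i) ** 2):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    components -= 1
    return components == 1

def reliability_estimate(n, trials=50000, seed=0):
    # Monte Carlo estimate of the probability that the sampled graph is connected.
    rng = random.Random(seed)
    return sum(sample_connected(n, rng) for _ in range(trials)) / trials

print(reliability_estimate(6))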
In contrast to machine models like Turing machines or random access machines, circuits are a static computational model. The internal information flow of a computation is fixed in advance, independent of the actual input. Therefore, size and depth are natural and simple measures for circuits and provide a worst-case analysis. We consider a new model in which an internal gate is evaluated as soon as its result has been determined by a partial assignment of its inputs. This way, a dynamic notion of delay is obtained which gives rise to an average case measure for the time complexity of circuits. In a previous paper we have obtained tight upper and lower bounds for the average case complexity of several basic Boolean functions. This paper examines the asymptotic average case complexity for the set of all n-ary Boolean functions. In contrast to the worst-case analysis, a simple counting argument does not work. We prove that with respect to the uniform probability distribution almost all Boolean functions require at least n - log n - log log n expected time. On the other hand, there is a significantly large subset of functions that can be computed with a constant average delay. Finally, for an arbitrary Boolean function we compare its worst case and average case complexity. It is shown that for each function that requires circuit depth d, i.e. of worst case complexity d, the expected time complexity will be at least d - log n - log d with respect to an explicitly defined probability distribution. In addition, a nontrivial upper bound on the complexity of such a distribution will be obtained. (C) 1999 Academic Press.
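A toy Python simulation of one plausible, simplified formalization of the dynamic delay measure (not the paper's constructions or bounds): each input is available at time 0, every gate adds one time unit, and an AND or OR gate fires as soon as a single arriving input already determines its output. On a balanced alternating AND/OR tree the worst-case delay equals the depth, while the observed average delay under uniform random inputs is smaller.

import random

def and_gate(inputs):
    # inputs is a list of (value, time) pairs; an AND fires on its first arriving 0.
    zeros = [t for v, t in inputs if v == 0]
    if zeros:
        return 0, 1 + min(zeros)
    return 1, 1 + max(t for _, t in inputs)

def or_gate(inputs):
    # An OR fires on its first arriving 1.
    ones = [t for v, t in inputs if v == 1]
    if ones:
        return 1, 1 + min(ones)
    return 0, 1 + max(t for _, t in inputs)

def alternating_tree(depth, bits, use_and=True):
    # Evaluate a balanced tree of alternating AND/OR layers over the input bits.
    if depth == 0:
        return bits[0], 0
    half = len(bits) // 2
    left = alternating_tree(depth - 1, bits[:half], not use_and)
    right = alternating_tree(depth - 1, bits[half:], not use_and)
    return (and_gate if use_and else or_gate)([left, right])

def average_delay(depth, trials=2000, seed=0):
    rng = random.Random(seed)
    n = 2 ** depth
    total = 0
    for _ in range(trials):
        _, delay = alternating_tree(depth, [rng.randint(0, 1) for _ in range(n)])
        total += delay
    return total / trials

for depth in (4, 6, 8):
    # The worst-case delay equals the circuit depth; the average is noticeably smaller.
    print("depth", depth, "average delay", average_delay(depth))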