It is shown that the problem of maximising the total reward of online tasks can be solved by finding the minimum of the maximum derivatives of the reward functions. Based on the modified approach and a close observation of task arrival characteristics, a heuristic algorithm with average complexity close to O(N) is presented.
Given a set of n coins, some of them weighing H, the others weighing h, h < H, we prove that to determine the set of heavy coins, an optimal algorithm requires an average of ((1 + ρ²)/(1 + ρ + ρ²))·n + O(1) comparisons, using a beam balance, where ρ denotes the ratio of the probabilities of being light and heavy. A simple quasi-optimal algorithm is described. Similar results are derived for the majority problem. (C) 1996 John Wiley & Sons, Inc.
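The leading term of the stated bound can be evaluated directly; a minimal sketch (the function name is ours, not the paper's):

```python
def avg_comparisons_leading_term(n, rho):
    """Leading term of the average number of beam-balance comparisons,
    ((1 + rho**2) / (1 + rho + rho**2)) * n, ignoring the O(1) remainder.
    rho is the ratio of the probabilities of a coin being light vs. heavy."""
    return (1 + rho**2) / (1 + rho + rho**2) * n

# When light and heavy are equally likely (rho = 1), the coefficient
# is 2/3, so roughly 2n/3 comparisons on average.
print(avg_comparisons_leading_term(300, 1.0))  # 200.0
```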
A new definition is given for the average growth of a function f : Σ* → N with respect to a probability measure μ on Σ*. This allows us to define meaningful average distributional complexity classes for arbitrary time bounds (previously, one could not guarantee arbitrarily good precision). It is shown that, basically, only the ranking of the inputs by decreasing probabilities is of importance. To compare the average and worst case complexity of problems, we study average complexity classes defined by a time bound and a bound on the complexity of possible distributions. Here, the complexity is measured by the time to compute the rank functions of the distributions. We obtain tight and optimal separation results between these average classes. Also, the worst case classes can be embedded into this hierarchy. They are shown to be identical to average classes with respect to distributions of exponential complexity.
Yao proved that in the decision-tree model, the average complexity of the best deterministic algorithm is a lower bound on the complexity of randomized algorithms that solve the same problem. Here it is shown that a similar result does not always hold in the common model of distributed computation, the model in which all the processors run the same program (which may depend on the processors' input). We therefore construct a new technique that, together with Yao's method, enables us to show that in many cases, a similar relationship does hold in the distributed model. This relationship enables us to carry over known lower bounds on the complexity of deterministic computations to the realm of randomized computations, thus obtaining new results. The new technique can also be used for obtaining results concerning algorithms with bounded error.
This paper is a study of the error in approximating the global maximum of a Brownian motion on the unit interval by observing the value at randomly chosen points. One point of view is to look at the error from random sampling for a given fixed Brownian sample path; another is to look at the error with both the path and observations random. In the first case we show that for almost all Brownian paths the error, normalized by multiplying by the square root of the number of observations, does not converge in distribution, while in the second case the normalized error does converge in distribution. We derive the limiting distribution of the normalized error averaged over all paths.
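The setup above can be illustrated with a small Monte Carlo sketch (an assumption for illustration only; the grid size, sample counts, and function name are ours, and a discrete grid only approximates a true Brownian path):

```python
import math
import random

def brownian_max_error(n_grid=2048, n_obs=64, seed=0):
    """Simulate one Brownian path on a fine grid over [0, 1], estimate its
    maximum from n_obs uniformly chosen observation points, and return the
    error scaled by sqrt(n_obs), as in the normalization described above."""
    rng = random.Random(seed)
    dt = 1.0 / n_grid
    w, path = 0.0, [0.0]
    for _ in range(n_grid):
        w += rng.gauss(0.0, math.sqrt(dt))  # independent Gaussian increments
        path.append(w)
    true_max = max(path)
    # observe the path at n_obs random grid points and take the best value
    sampled = max(path[rng.randrange(len(path))] for _ in range(n_obs))
    return math.sqrt(n_obs) * (true_max - sampled)
```

Repeating this with fresh seeds for both the path and the observations corresponds to the second ("both random") point of view.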
ISBN: (Print) 0819424358
The Discrete Cosine Transform (DCT) is widely used in all transform-based image and video compression standards due to its well-known decorrelation and energy compaction properties for typical images. Many fast algorithms available for the DCT optimize various parameters such as additions and multiplications but they are input independent and thus require the same number of operations for any inputs. In this paper we study the benefits of input-dependent algorithms for the DCT which are aimed at minimizing the average computation time by taking advantage of the sparseness of the input data. Here, we concentrate on the inverse DCT (IDCT) part since typical input blocks will contain a substantial number of zeros. We show how to construct an IDCT algorithm based on the statistics of the input data, which are used to optimize the algorithm for the average case. We show how, for a given input and a correct model of the complexity of the various operations, we can achieve the fastest average performance.
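The input-dependent idea can be sketched for a 1-D inverse DCT: accumulate only the terms whose coefficients are nonzero, so cost tracks the sparsity of the block rather than its size. This is an illustrative sketch, not the paper's actual algorithm:

```python
import math

def idct_1d_sparse(coeffs):
    """Illustrative 1-D inverse DCT (DCT-III, orthonormal scaling) that
    skips zero coefficients, so work is proportional to the number of
    nonzeros rather than the block length."""
    n = len(coeffs)
    out = [0.0] * n
    for k, c in enumerate(coeffs):
        if c == 0.0:
            continue  # the sparsity test: zero terms contribute nothing
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        for i in range(n):
            out[i] += scale * c * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
    return out
```

For a DC-only block, the inner loop runs once in total instead of once per coefficient, which is exactly the situation typical compressed blocks approach.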
We describe a class of adaptive algorithms for approximating the global minimum of a continuous function on the unit interval. The limiting distribution of the error is derived under the assumption of Wiener measure on the objective functions. For any δ > 0, we construct an algorithm which has error converging to zero at rate n^(-(1-δ)) in the number of function evaluations n. This convergence rate contrasts with the n^(-1/2) rate of previously studied nonadaptive methods.
In this paper we consider the problem of computing y = Ax where A is an n × n sparse matrix with Θ(n) nonzero elements. We prove that, under reasonable assumptions, on a local memory machine with p processors this computation requires Ω((n/p) log p) time. We also study the average complexity of this problem: we prove that for an important class of algorithms the computation of y = Ax requires Ω((n/p) log p) time with probability greater than 1/2.
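For reference, the sequential computation being distributed can be sketched as a compressed-sparse-row (CSR) matrix-vector product; the representation and function name are our choice, not the paper's:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Sequential y = A x for a sparse matrix stored in CSR form.
    Work is proportional to the number of nonzeros, i.e. Theta(n) here;
    the Omega((n/p) log p) bound above concerns spreading this work
    over p local-memory processors."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        # row r's nonzeros occupy values[row_ptr[r]:row_ptr[r+1]]
        for j in range(row_ptr[r], row_ptr[r + 1]):
            s += values[j] * x[col_idx[j]]
        y.append(s)
    return y
```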
A fast nearest-neighbor algorithm is presented. It works in general spaces where the known cell (bucketing) techniques cannot be implemented for various reasons, such as the absence of coordinate structure and/or high dimensionality. The central idea has already appeared several times in the literature with extensive computer simulation results. This paper provides an exact probabilistic analysis of this family of algorithms, proving its O(1) asymptotic average complexity measured in the number of dissimilarity calculations.
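One common realization of this cell-free idea uses only a dissimilarity function and the triangle inequality to eliminate candidates; a hypothetical sketch, not the paper's exact procedure (in practice, as in AESA-style methods, the inter-point distances used for elimination are precomputed, whereas here they are recomputed for brevity):

```python
def nn_triangle_elim(points, dist, query):
    """Nearest-neighbour search over a metric space given only a
    dissimilarity function `dist`. After measuring a pivot p at
    distance d from the query, any q with |dist(q, p) - d| greater
    than the best distance so far cannot be the nearest neighbour,
    by the triangle inequality, and is eliminated unmeasured."""
    candidates = list(points)
    best, best_d = None, float("inf")
    while candidates:
        p = candidates.pop(0)       # pick a pivot and measure it
        d = dist(p, query)
        if d < best_d:
            best, best_d = p, d
        candidates = [q for q in candidates
                      if abs(dist(q, p) - d) <= best_d]
    return best, best_d
```

Note the only requirement is that `dist` be a metric; no coordinates are used, which is what lets the approach work where bucketing cannot.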
The average-case analysis of algorithms for binary search trees yields very different results from those obtained under the uniform distribution. The analysis itself is more complex and replaces algebraic equations by integral equations. In this work this analysis is carried out for the computation of the average size of the intersection of two binary trees. The development of this analysis involves Bessel functions that appear in the solutions of partial differential equations, and the resulting average size is O(n^(2√2-2)/√(log n)), contrasting with the O(1) size obtained when considering a uniform distribution.