The present paper argues that it suffices for an algorithmic time complexity measure to be system invariant rather than system independent (which means predicting from the desk). (C) 2007 Elsevier Inc. All rights reserved.
We investigate the average-case state and transition complexity of deterministic and nondeterministic finite automata, when choosing a finite language of a certain "size" n uniformly at random from all finite languages of that particular size. Here size means that all words of the language are either of length n, or of length at most n. It is shown that almost all deterministic finite automata accepting finite languages over a binary input alphabet have state complexity Θ(2^n / n), while nondeterministic finite automata are shown to perform better, namely the nondeterministic state complexity is in Θ(√(2^n)). Interestingly, in both cases the aforementioned bounds are asymptotically the same as in the worst case. However, the nondeterministic transition complexity is shown to be again Θ(2^n / n). The case of unary finite languages is also considered. Moreover, we develop a framework that allows us to investigate the average-case complexity of operations such as union, intersection, complementation, and reversal on finite languages in this setup. (C) 2007 Published by Elsevier B.V.
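To get a concrete feel for the Θ(2^n / n) bound, one can compute minimal-DFA sizes for random finite languages directly. The sketch below is an illustration of the setting, not the paper's framework: it samples a language of words of length exactly n by including each word of {0,1}^n independently with probability 1/2 (the uniform distribution over such languages) and counts the distinct left quotients, which by Myhill-Nerode are exactly the states of the minimal complete DFA.

```python
import random

def minimal_dfa_states(language):
    """Number of states of the minimal complete DFA of a finite language
    (given as a set of equal-length binary strings).

    By Myhill-Nerode, the states are the distinct left quotients u^{-1}L;
    the empty quotient is the dead state."""
    residuals = set()
    frontier = {frozenset(language)}
    while frontier:
        residuals |= frontier
        nxt = set()
        for r in frontier:
            for c in "01":
                # Quotient of the residual r by the letter c.
                nxt.add(frozenset(w[1:] for w in r if w and w[0] == c))
        frontier = nxt - residuals
    return len(residuals)

rng = random.Random(0)
for n in (6, 8, 10, 12):
    lang = {format(w, f"0{n}b") for w in range(2 ** n) if rng.random() < 0.5}
    print(f"n={n:2d}  minimal DFA states: {minimal_dfa_states(lang):4d}"
          f"   2^n/n = {2 ** n / n:6.1f}")
```

The constant hidden in Θ(2^n / n) need not be 1, so the two columns should track each other in growth rate rather than agree numerically.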
Using the cavity equations of Mézard, Parisi, and Zecchina [Science 297 (2002), 812; Mézard and Zecchina, Phys Rev E 66 (2002), 056126] we derive the various threshold values for the number of clauses per variable of the random K-satisfiability problem, generalizing the previous results to K ≥ 4. We also give an analytic solution of the equations, and some closed expressions for these thresholds, in an expansion around large K. The stability of the solution is also computed. For any K, the satisfiability threshold is found to be in the stable region of the solution, which adds further credit to the conjecture that this computation gives the exact satisfiability threshold. (c) 2005 Wiley Periodicals, Inc.
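The cavity computation itself does not reduce to a few lines, but the elementary first-moment (annealed) argument gives an upper bound that the cavity thresholds must lie below, and it already exhibits the 2^K ln 2 scaling that appears in the large-K expansion. The snippet below is a sketch of that standard bound, not of the paper's method: E[#solutions] = 2^n (1 − 2^(−K))^(αn) vanishes once α exceeds ln 2 / (−ln(1 − 2^(−K))).

```python
import math

def first_moment_upper_bound(K):
    """Annealed (first-moment) upper bound on the K-SAT threshold:
    alpha_s(K) <= ln 2 / -ln(1 - 2^-K)  ~  2^K ln 2 for large K."""
    return math.log(2) / -math.log(1.0 - 2.0 ** -K)

for K in range(3, 11):
    print(f"K={K:2d}   alpha_s <= {first_moment_upper_bound(K):9.2f}"
          f"   2^K ln 2 = {2 ** K * math.log(2):9.2f}")
```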
Many applications require approximate values of path integrals. A typical approach is to approximate the path integral by a high dimensional integral and apply a Monte Carlo (randomized) algorithm. However, a Monte Carlo algorithm requires roughly ε^(-2) integrand evaluations to provide an ε-approximation. Moreover, the error bound of ε is guaranteed only in a stochastic sense. Do we really need to use randomized algorithms for path integrals? Perhaps we can find a deterministic algorithm that is more effective even in the worst case setting. To answer this question, we study the worst-case complexity of path integration, which, roughly speaking, is defined as the minimal number of integrand evaluations needed to compute an approximation with error at most ε. We consider path integration with respect to a Gaussian measure, and for various classes of integrands. Tractability of path integration means that the complexity depends polynomially on 1/ε. We show that for the class of r times Fréchet differentiable integrands, tractability of path integration holds iff the covariance operator of the Gaussian measure has finite rank. Hence, if the Gaussian measure is supported on an infinite dimensional space then path integration is intractable. In this case, there exists no effective deterministic algorithm, and the use of randomized algorithms is justified. In fact, for this class of integrands, the classical Monte Carlo algorithm is (almost) optimal and the complexity in the randomized setting is proportional to ε^(-2). On the other hand, for a particular class of entire integrands, the worst-case complexity of path integration is at most of order ε^(-p) with p depending on the Gaussian measure. Hence, path integration is now tractable. Furthermore, for any Gaussian measure, the exponent p is less than or equal to 2. For the Wiener measure, p = 2/3. For this class, we provide effective deterministic algorithms which solve the path integration problem.
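As a concrete illustration of the ε^(-2) behavior (a toy example, not the paper's algorithms): the snippet below estimates the Wiener-measure path integral E[∫_0^1 W(t)^2 dt] = 1/2 by plain Monte Carlo on discretized Brownian paths; halving the error requires roughly four times as many sample paths.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_path_integral(n_samples, n_steps=64):
    """Monte Carlo estimate of E[ integral_0^1 W(t)^2 dt ] = 1/2,
    with W a standard Brownian motion discretized into n_steps steps."""
    dt = 1.0 / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_steps))
    paths = np.cumsum(increments, axis=1)          # W at t = dt, 2dt, ..., 1
    return ((paths ** 2).sum(axis=1) * dt).mean()  # Riemann sum per path

# Statistical error shrinks like n^(-1/2), i.e., epsilon^(-2) samples;
# a small residual offset of about 1/(2*n_steps) is discretization bias.
for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    est = mc_path_integral(n)
    print(f"n={n:>7}   estimate = {est:.4f}   |error| = {abs(est - 0.5):.4f}")
```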
The theory of average case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than their worst-case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial, and do not establish the hardness of the problem under "interesting" or "natural" distributions. This paper studies the possibility of proving generic hardness results (i.e., for a wide class of NP-complete problems), under "natural" distributions. Since it is not clear how to define a class of "natural" distributions for general NP-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention of this constraint being to force the distributions to "look natural". Levin, in his seminal paper on average case complexity from 1984, defined such a class of distributions, which he called P-computable distributions. He then showed that the NP-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional NP problems (i.e., NP with P-computable distributions). However, since then very few NP-complete problems (coupled with P-computable distributions), and in particular "natural" problems, were shown to be hard in this sense. In this paper we show that all natural NP-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional NP.
This paper considers the problem of approximating the minimum of a continuous function using a fixed number of sequentially selected function evaluations. A lower bound on the complexity is established by analyzing the average case for the Brownian bridge. (C) 2003 Elsevier Inc. All rights reserved.
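For readers who want to see the object being analyzed (an illustration only; the paper's lower-bound argument is analytic): the sketch below samples Brownian bridge paths and measures the average error of the naive nonadaptive algorithm that evaluates the path on an equispaced grid of n points and returns the best value seen.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(n_grid):
    """One Brownian bridge sample path on n_grid+1 equispaced points,
    via B(t) = W(t) - t * W(1)."""
    dt = 1.0 / n_grid
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_grid))))
    t = np.linspace(0.0, 1.0, n_grid + 1)
    return w - t * w[-1]

n_grid, trials = 4096, 2000
for n in (4, 16, 64, 256):
    errs = []
    for _ in range(trials):
        b = brownian_bridge(n_grid)
        seen = b[:: n_grid // n]            # evaluations on an equispaced grid
        errs.append(seen.min() - b.min())   # nonnegative by construction
    print(f"n = {n:3d}   average error = {np.mean(errs):.4f}")
```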
A fundamental question of complexity theory is the direct product question. A famous example is Yao's XOR-lemma, in which one assumes that some function f is hard on average for small circuits (meaning that every circuit of some fixed size s which attempts to compute f is wrong on a non-negligible fraction of the inputs) and concludes that every circuit of size s' only has a small advantage over guessing randomly when computing f^{⊕k}(x_1, ..., x_k) = f(x_1) ⊕ ... ⊕ f(x_k) on independently chosen x_1, ..., x_k. All known proofs of this lemma have the property that s' < s. In words, the circuit which attempts to compute f^{⊕k} is smaller than the circuit which attempts to compute f on a single input! This paper addresses the issue of proving strong direct product assertions, that is, ones in which s' ≈ ks and is in particular larger than s. We study the question of proving strong direct product theorems for decision trees and communication protocols.
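The quantitative intuition behind the XOR construction is that, for independent attempts, advantages multiply. The toy check below (an illustration of this heuristic only, not of the lemma's circuit-complexity content) takes k independent predictors that are each correct with probability 1/2 + δ and confirms that their XOR is correct with probability 1/2 + (2δ)^k / 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def xor_advantage(delta, k, trials=200_000):
    """Empirical advantage over 1/2 of XOR-ing k independent guesses,
    each individually correct with probability 1/2 + delta."""
    correct = rng.random((trials, k)) < 0.5 + delta
    # The XOR of the guesses is correct iff an even number of them are wrong.
    even_wrong = (~correct).sum(axis=1) % 2 == 0
    return even_wrong.mean() - 0.5

delta = 0.2
for k in (1, 2, 4, 8):
    print(f"k={k}:  empirical {xor_advantage(delta, k):+.4f}"
          f"   predicted {(2 * delta) ** k / 2:+.4f}")
```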
Many efficient string matching algorithms make use of q-grams and process the text in windows which are read backward. In this paper we provide a framework for analyzing the average case complexity of these algorithms taking into account the statistical dependencies between overlapping q-grams. We apply this to the q-gram Boyer-Moore-Horspool algorithm adapted to various string matching problems and show that the algorithm is optimal on average. (C) 2012 Elsevier B.V. All rights reserved.
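As a concrete instance of the algorithm family analyzed (a sketch of q-gram Horspool matching, not necessarily the paper's exact variant): the shift is determined by the window's last q-gram, which is aligned with its rightmost occurrence in the pattern other than the final one.

```python
def qgram_horspool(pattern, text, q=2):
    """Horspool string matching with q-gram shifts: a sketch of the
    backward-window, q-gram based family of algorithms."""
    m, n = len(pattern), len(text)
    if m < q or n < m:
        return []
    # shift[g]: distance from the rightmost non-final occurrence of
    # q-gram g in the pattern to the position of the final q-gram.
    shift = {}
    for i in range(m - q):                 # exclude the final q-gram
        shift[pattern[i:i + q]] = m - q - i
    default = m - q + 1                    # q-gram absent from the pattern
    occurrences, s = [], 0
    while s <= n - m:
        if text[s:s + m] == pattern:       # verify the current window
            occurrences.append(s)
        s += shift.get(text[s + m - q:s + m], default)
    return occurrences

print(qgram_horspool("abcab", "xabcabyabcab", q=2))   # -> [1, 7]
```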
This paper analyzes the concept of malignness, which is the property of probability ensembles making the average-case running time equal to the worst-case running time for a class of algorithms. The author derives lower and upper bounds on the complexity of malign ensembles, which are tight for exponential time algorithms, and which show that no polynomial time computable malign ensemble exists for the class of polynomial time algorithms. Furthermore, it is shown that no polynomial time samplable malign ensemble exists for any class of superlinear algorithms, unless every language in P has an expected polynomial time constructor.
An optimal lower bound on the average time required by any algorithm that merges two sorted lists on Valiant's parallel computation tree model is proven.