The consequences of the worst-case assumption NP = P are very well understood. On the other hand, we know only a few consequences of the analogous average-case assumption "NP is easy on average." In this paper we establish several new results on the worst-case complexity of Arthur-Merlin games (the class AM) under the average-case assumption "NP is easy on average." We first consider a stronger notion, namely that NP is easy on average with respect to distributions that are computable by polynomial-size circuit families. Under this assumption we show that AM can be derandomized to nondeterministic subexponential time. Under the assumption that NP is easy on average with respect to polynomial-time computable distributions, we show: (a) AME = E, where AME is the exponential version of AM. This improves the earlier known result that if NP is easy on average then NE = E. (b) For every c > 0, AM ⊆ [io-pseudo(NTIME(n^c))]-NP. Roughly speaking, this means that for every language L in AM there is a language L' in NP such that it is computationally infeasible to distinguish L from L'. We use recent results from the area of derandomization to establish our results.
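For concreteness, one way to transcribe the abstract's statements as containments (a reading of the claims, not a quotation; the io-pseudo notation follows the abstract itself):

```latex
% Under "NP is easy on average" w.r.t. distributions computable by poly-size circuits:
\mathrm{AM} \;\subseteq\; \bigcap_{\varepsilon > 0} \mathrm{NTIME}\!\bigl(2^{n^{\varepsilon}}\bigr).
% Under "NP is easy on average" w.r.t. polynomial-time computable distributions:
\mathrm{AME} = \mathrm{E}, \qquad
\forall c > 0:\ \ \mathrm{AM} \;\subseteq\; \bigl[\text{io-pseudo}\bigl(\mathrm{NTIME}(n^{c})\bigr)\bigr]\text{-}\mathrm{NP}.
```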
We investigate the average-case complexity of decision problems for finitely generated groups, in particular the word and membership problems. Using our recent results on "generic-case complexity", we show that if a finitely generated group G has word problem solvable in subexponential time and has a subgroup of finite index which possesses a non-elementary word-hyperbolic quotient group, then the average-case complexity of the word problem of G is linear time, uniformly with respect to the collection of all length-invariant measures on G. This result applies to many of the groups usually studied in geometric group theory: for example, all braid groups B_n, all groups of hyperbolic knots, many Coxeter groups and all Artin groups of extra-large type.
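The fast word-problem behavior for hyperbolic (quotient) groups ultimately rests on Dehn's algorithm, which decides whether a word represents the identity by repeatedly replacing a subword that is more than half of a defining relator with its shorter complement. Below is a minimal, illustrative sketch for a genus-2 surface group presentation, chosen here as a standard hyperbolic example; it is not taken from the paper (in particular, braid groups themselves are not hyperbolic, and the paper's argument passes through a hyperbolic quotient).

```python
def inverse(w):
    """Formal inverse of a word: reverse it and swap case (A is the inverse of a)."""
    return w[::-1].swapcase()

def free_reduce(w):
    """Cancel adjacent inverse pairs such as 'aA' or 'Bb'."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return ''.join(out)

def dehn_word_problem(w, relator="abABcdCD"):
    """Decide whether w = 1 in <a,b,c,d | [a,b][c,d]> (genus-2 surface group),
    an illustrative small-cancellation (Dehn) presentation."""
    variants = {r[i:] + r[:i]                      # all cyclic shifts of the relator
                for r in (relator, inverse(relator))
                for i in range(len(relator))}      # ... and of its inverse
    half = len(relator) // 2
    w = free_reduce(w)
    progress = True
    while w and progress:
        progress = False
        for r in variants:
            for k in range(len(r), half, -1):      # a piece longer than half a relator
                i = w.find(r[:k])
                if i >= 0:                         # r[:k]*r[k:] = 1 in G, so r[:k] = r[k:]^{-1}
                    w = free_reduce(w[:i] + inverse(r[k:]) + w[i + k:])
                    progress = True
                    break
            if progress:
                break
    return w == ""

print(dehn_word_problem("c" + "abABcdCD" + "C"))   # a conjugate of the relator: True
print(dehn_word_problem("abab"))                   # not the identity: False
```

Each rewriting step strictly shortens the word, so the number of steps is linear in the word length (this naive string-search implementation is not itself literally linear time, but the underlying procedure can be made so), which is the kind of fast algorithm that drives the generic- and average-case linear-time results above.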
ISBN (print): 9781605580470
Hardness amplification is the fundamental task of converting a delta-hard function f : {0,1}^n -> {0,1} into a (1/2 - epsilon)-hard function Amp(f), where f is gamma-hard if small circuits fail to compute f on at least a gamma fraction of the inputs. Typically, epsilon and delta are small (and delta = 2^{-k} captures the case where f is worst-case hard). Achieving epsilon = 1/n^{omega(1)} is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + epsilon fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 - delta fraction of the inputs. We focus on the case where every D ∈ D makes non-adaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results: (1) the circuits in D "can be used" to compute the majority function on 1/epsilon bits; in particular, these circuits have large depth when epsilon <= 1/polylog(n); (2) the circuits in D must make Omega(log(1/delta)/epsilon^2) oracle queries. Both our bound on the depth and our bound on the number of queries are tight up to constant factors. Our results explain why hardness amplification techniques have failed to transform known lower bounds against constant-depth circuit classes into strong average-case lower bounds. When coupled with the celebrated "Natural Proofs" result by Razborov and Rudich (J. CSS '97) and the pseudorandom functions of Naor and Reingold (J. ACM '04), our results show that standard techniques for hardness amplification can only be applied to those circuit classes for which standard techniques cannot prove circuit lower bounds. Our results reveal a contrast between Yao's XOR Lemma (Amp(f) := f(x_1) XOR ... XOR f(x_t) ∈ {0,1}) and the Direct-Product lemma, whose Amp(f) := (f(x_1), ..., f(x_t)) is non-Boolean.
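For reference, the map in Yao's XOR Lemma that the final sentence contrasts with the Direct-Product lemma is simply the XOR of independent copies of f. A minimal sketch with illustrative parameters (this is the amplification map itself, not the black-box decoding circuits D that the paper lower-bounds):

```python
from functools import reduce
from operator import xor

def amp_xor(f, t, n):
    """Yao-style amplification: Amp(f)(x_1, ..., x_t) = f(x_1) XOR ... XOR f(x_t),
    where the input is a single bit-tuple of length t*n split into n-bit blocks."""
    def amplified(bits):
        assert len(bits) == t * n
        blocks = (bits[i * n:(i + 1) * n] for i in range(t))
        return reduce(xor, (f(b) for b in blocks))
    return amplified

# Toy usage: amplify a 4-bit Boolean function with t = 3 copies.
f = lambda b: b[0] ^ b[1]                      # a stand-in for a delta-hard function
Amp_f = amp_xor(f, t=3, n=4)
print(Amp_f((1, 0, 1, 1,   0, 0, 1, 0,   1, 1, 0, 1)))   # XOR of f on the three blocks -> 1
```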
This paper is a study of the complexity of optimization of continuous univariate functions using a fixed number of sequentially selected function evaluations. The complexity is studied in the average case under a conditioned Wiener measure. We show that to obtain an error of at most epsilon, on the order of log log(1/epsilon) · log(1/epsilon) function evaluations are required.
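To make the average-case error criterion concrete, here is a small Monte Carlo sketch: it samples paths of a Brownian bridge (used only as a stand-in for the conditioned Wiener measure), runs a deliberately naive equispaced-evaluation optimizer, and averages the optimization error. The strategy and all parameters are illustrative; the sequential algorithms analyzed in the paper achieve far smaller error for the same number of evaluations.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(num_grid=4096):
    """Discretized Brownian path on [0, 1] pinned to 0 at both endpoints."""
    t = np.linspace(0.0, 1.0, num_grid)
    steps = rng.normal(scale=np.sqrt(1.0 / (num_grid - 1)), size=num_grid - 1)
    w = np.concatenate(([0.0], np.cumsum(steps)))   # standard Brownian motion
    return w - t * w[-1]                            # subtract the linear drift to pin the end

def average_error(n_evals, trials=2000, num_grid=4096):
    """Mean gap between the true maximum and the best of n_evals equispaced evaluations."""
    errs = []
    for _ in range(trials):
        x = brownian_bridge(num_grid)
        idx = np.linspace(0, num_grid - 1, n_evals).astype(int)
        errs.append(x.max() - x[idx].max())
    return float(np.mean(errs))

for n in (4, 16, 64, 256):
    print(n, round(average_error(n), 4))            # error shrinks slowly for this naive scheme
```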
We prove that if NP is not contained in BPP, i.e., if SAT is worst-case hard, then for every probabilistic polynomial-time algorithm trying to decide SAT, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for one-way functions and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulae (of increasing lengths) such that the algorithm errs on at least one of them.
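A schematic way to write the main statement (my reading of the abstract; "often errs" is rendered here as a 1/poly(n) error probability, and the precise bound and infinitely-often qualifiers are those of the paper):

```latex
\mathrm{NP} \not\subseteq \mathrm{BPP}
\;\Longrightarrow\;
\forall\ \text{PPT algorithm } A\ \ \exists\ \text{samplable ensemble } \{D_n\}:\quad
\Pr_{x \sim D_n}\bigl[\,A(x) \neq \mathrm{SAT}(x)\,\bigr] \;\ge\; \frac{1}{\mathrm{poly}(n)}
\quad \text{for infinitely many } n.
```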
We investigate the average-case complexity of a generalization of the compact knapsack problem to arbitrary rings: given m (random) ring elements a_1, ..., a_m ∈ R and a (random) target value b ∈ R, find coefficients x_1, ..., x_m ∈ S (where S is an appropriately chosen subset of R) such that Sigma_i a_i · x_i = b. We consider compact versions of the generalized knapsack where the set S is large and the number of weights m is small. Most variants of this problem considered in the past (e.g., when R = Z is the ring of the integers) can be easily solved in polynomial time even in the worst case. We propose a new choice of the ring R and subset S that yields generalized compact knapsacks that are seemingly very hard to solve on average, even for very small values of m. Namely, we prove that for any unbounded function m = omega(1) with arbitrarily slow growth rate, solving our generalized compact knapsack problems on average is at least as hard as the worst-case instance of various approximation problems over cyclic lattices. Specific worst-case lattice problems considered in this paper are the shortest independent vector problem SIVP and the guaranteed distance decoding problem GDD (a variant of the closest vector problem, CVP) for approximation factors n^{1+epsilon} almost linear in the dimension of the lattice. Our results yield very efficient and provably secure one-way functions (based on worst-case complexity assumptions) with key size and time complexity almost linear in the security parameter n. Previous constructions with similar security guarantees required quadratic key size and computation time. Our results can also be formulated as a connection between the worst-case and average-case complexity of various lattice problems over cyclic and quasi-cyclic lattices.
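To make the object concrete, here is a sketch of the generalized knapsack function over one natural instantiation, R = Z_q[x]/(x^n - 1), where ring multiplication is cyclic convolution of coefficient vectors (this matches the cyclic-lattice setting; the toy parameters and the 0/1 choice of S below are illustrative only and far too small to be secure, and they are not the paper's exact parameter regime):

```python
import numpy as np

# Generalized compact knapsack over R = Z_q[x]/(x^n - 1): ring multiplication is
# cyclic (circular) convolution of coefficient vectors modulo q.
n, q, m = 8, 257, 4          # toy dimension, modulus, and number of weights (illustrative)
rng = np.random.default_rng(1)

def ring_mul(a, x):
    """Multiply two elements of Z_q[x]/(x^n - 1) given as coefficient vectors."""
    out = np.zeros(n, dtype=np.int64)
    for i in range(n):
        out = (out + a[i] * np.roll(x, i)) % q
    return out

def knapsack(a_list, x_list):
    """f_a(x_1, ..., x_m) = sum_i a_i * x_i in R -- the compact knapsack function."""
    acc = np.zeros(n, dtype=np.int64)
    for a, x in zip(a_list, x_list):
        acc = (acc + ring_mul(a, x)) % q
    return acc

a_list = [rng.integers(0, q, n) for _ in range(m)]   # random public weights in R
x_list = [rng.integers(0, 2, n) for _ in range(m)]   # "small" inputs from S (0/1 coefficients)
b = knapsack(a_list, x_list)
print(b)   # inverting (a_list, b) -> x_list is the average-case problem
```

Evaluating f_a costs only a few convolutions, which is the source of the almost-linear key size and running time; the hard direction is recovering small x_1, ..., x_m from (a_1, ..., a_m, b).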
ISBN (print): 3540201033
The average-case complexity of partial Boolean functions is considered. For almost all functions it is shown that, up to a multiplicative constant, the average-case complexity does not depend on the size of the function's domain but depends only on the number of tuples on which the function is equal to unity.
We show that if an NP-complete problem has a nonadaptive self-corrector with respect to any samplable distribution, then coNP is contained in NP/poly and the polynomial hierarchy collapses to the third level. Feigenbaum and Fortnow [SIAM J. Comput., 22 (1993), pp. 994-1005] show the same conclusion under the stronger assumption that an NP-complete problem has a nonadaptive random self-reduction. A self-corrector for a language L with respect to a distribution D is a worst-case to average-case reduction that transforms any given algorithm that correctly decides L on most inputs (with respect to D) into an algorithm of comparable efficiency that decides L correctly on every input. A random self-reduction is a special case of a self-corrector, where the reduction, given an input x, is restricted to only making oracle queries that are distributed according to D. The result of Feigenbaum and Fortnow depends essentially on the property that the distribution of each query in a random self-reduction is independent of the input of the reduction. Our result implies that the average-case hardness of a problem in NP or the security of a one-way function cannot be based on the worst-case complexity of an NP-complete problem via nonadaptive reductions (unless the polynomial hierarchy collapses).
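The paper is about NP languages, but the notions are easiest to see on the textbook example of a self-corrector for a linear function modulo a prime (a classic illustration, not taken from the paper). Note that each individual oracle query, (x + r) mod p and r, is uniformly distributed regardless of x, which is exactly the input-independence property of random self-reductions that the Feigenbaum-Fortnow argument exploits.

```python
import random

p = 1_000_003          # a prime modulus (illustrative)
c = 424_242            # the "true" linear function is f(x) = c*x mod p

def buggy_f(x):
    """An oracle that computes f correctly on most, but not all, inputs (errs on ~10%)."""
    return (c * x + 17) % p if x % 10 == 0 else (c * x) % p

def self_correct(oracle, x, trials=20):
    """Self-corrector for linear functions mod p: f(x) = f(x + r) - f(r) for every r,
    and both queries (x + r) mod p and r are uniformly distributed, so with high
    probability the oracle answers both correctly; take a majority vote over trials."""
    votes = {}
    for _ in range(trials):
        r = random.randrange(p)
        guess = (oracle((x + r) % p) - oracle(r)) % p
        votes[guess] = votes.get(guess, 0) + 1
    return max(votes, key=votes.get)

x = 1_230                                        # an input where the oracle is wrong
print(buggy_f(x) == (c * x) % p)                 # False: the oracle errs here
print(self_correct(buggy_f, x) == (c * x) % p)   # True with overwhelming probability
```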
We study the maximal rate of convergence (mrc) of algorithms for (multivariate) integration and approximation of d-variate functions from reproducing kernel Hilbert spaces H(K_d). Here K_d is an arbitrary kernel all of whose partial derivatives up to order r satisfy a Hölder-type condition with exponent 2 beta. Algorithms use n function values, and we analyze their rate of convergence as n tends to infinity. We focus on universal algorithms, which depend on d, r, and beta but not on the specific kernel K_d, and nonuniversal algorithms, which may depend additionally on K_d. For universal algorithms the mrc is (r + beta)/d for both integration and approximation, and for nonuniversal algorithms it is 1/2 + (r + beta)/d for integration and a + (r + beta)/d with a ∈ [1/(4 + 4(r + beta)/d), 1/2] for approximation. Hence, the mrc for universal algorithms suffers from the curse of dimensionality if d is large relative to r + beta, whereas the mrc for nonuniversal algorithms does not, since it is always at least 1/2 for integration and 1/4 for approximation. This is the price we have to pay for using universal algorithms. On the other hand, if r + beta is large relative to d, then the mrc for universal and nonuniversal algorithms is approximately the same. We also consider the case when we have the additional knowledge that the kernel K_d has product structure, K_{d,r,beta} = K_{r_1,beta_1} ⊗ ... ⊗ K_{r_d,beta_d}. Here the K_{r_j,beta_j} are univariate kernels all of whose derivatives up to order r_j satisfy a Hölder-type condition with exponent 2 beta_j. Then the mrc for universal algorithms is q := min_{j=1,...,d} (r_j + beta_j) for both integration and approximation, and for nonuniversal algorithms it is 1/2 + q for integration and a + q with a ∈ [1/(4 + 4q), 1/2] for approximation. If r_j = 1 or beta_j >= beta > 0 for all j, then the mrc is at least min(1, beta), and the curse of dimensionality is not present. Hence, the product form of reproducing kernels breaks the curse of dimensionality.
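Collected in one place, the rates stated above read as follows (a transcription of the abstract's formulas, not additional results):

```latex
\text{universal:}\quad
\mathrm{mrc}_{\mathrm{int}} = \mathrm{mrc}_{\mathrm{app}} = \frac{r+\beta}{d},
\qquad
\text{nonuniversal:}\quad
\mathrm{mrc}_{\mathrm{int}} = \tfrac12 + \frac{r+\beta}{d},\quad
\mathrm{mrc}_{\mathrm{app}} = a + \frac{r+\beta}{d},\ \
a \in \Bigl[\tfrac{1}{4 + 4(r+\beta)/d},\, \tfrac12\Bigr].
\\[1ex]
\text{product kernels } K_{d,r,\beta} = \bigotimes_{j=1}^{d} K_{r_j,\beta_j}:\quad
q := \min_{1 \le j \le d} (r_j + \beta_j),\qquad
\mathrm{mrc}^{\mathrm{univ}} = q,\quad
\mathrm{mrc}^{\mathrm{nonuniv}}_{\mathrm{int}} = \tfrac12 + q,\quad
\mathrm{mrc}^{\mathrm{nonuniv}}_{\mathrm{app}} = a + q,\ \ a \in \Bigl[\tfrac{1}{4+4q},\, \tfrac12\Bigr].
```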
We revisit the problem of hardness amplification in NP, as recently studied by O'Donnell [J. Comput. System Sci., 69 (2004), pp. 68-94]. We prove that if NP has a balanced function f such that any circuit of size s(n) fails to compute f on a 1/poly(n) fraction of inputs, then NP has a function f' such that any circuit of size s'(n) = s(sqrt(n))^{Omega(1)} fails to compute f' on a 1/2 - 1/s'(n) fraction of inputs. In particular: (1) if s(n) = n^{omega(1)}, we amplify to hardness 1/2 - 1/n^{omega(1)}; (2) if s(n) = 2^{n^{Omega(1)}}, we amplify to hardness 1/2 - 1/2^{n^{Omega(1)}}; (3) if s(n) = 2^{Omega(n)}, we amplify to hardness 1/2 - 1/2^{Omega(sqrt(n))}. Our results improve those of O'Donnell, which amplify only to 1/2 - 1/sqrt(n). O'Donnell also proved that no construction of a certain general form could amplify beyond 1/2 - 1/n. We bypass this barrier by using both derandomization and nondeterminism in the construction of f'. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that f is balanced are necessary for "black-box" hardness amplification procedures (such as ours).
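Written out, the main amplification statement above is (a transcription of the abstract, with x uniform over the relevant input length):

```latex
\Bigl(\exists\, f \in \mathrm{NP}\ \text{balanced}:\ \ \forall\ \text{circuits } C \text{ of size } s(n):\
\Pr_x\bigl[C(x) \neq f(x)\bigr] \ge \tfrac{1}{\mathrm{poly}(n)}\Bigr)
\;\Longrightarrow\;
\Bigl(\exists\, f' \in \mathrm{NP}:\ \ \forall\ \text{circuits } C' \text{ of size } s'(n) = s(\sqrt{n})^{\Omega(1)}:\
\Pr_x\bigl[C'(x) \neq f'(x)\bigr] \ge \tfrac12 - \tfrac{1}{s'(n)}\Bigr).
```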