In this paper, the problem of minimizing a function f(x) subject to a constraint φ(x) = 0 is considered, where f is a scalar, x an n-vector, and φ a q-vector, with q < n. Two classes of algorithms are analyzed; these algorithms are composed of the alternate succession of conjugate gradient phases and restoration phases. In the conjugate gradient phase, one tries to improve the value of the function while avoiding excessive constraint violation. In the restoration phase, one tries to reduce the constraint error, while avoiding excessive change in the value of the function. Concerning the conjugate gradient phase, two classes of algorithms are considered: for algorithms of Class I, the multiplier λ is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier λ is determined so that the constraint is satisfied to first order. Concerning the restoration phase, two topics are investigated: (a) restoration type, that is, complete restoration vs incomplete restoration, and (b) restoration frequency, that is, frequent restoration vs infrequent restoration. Depending on the combination of type and frequency of restoration, four algorithms are generated within Class I and within Class II, respectively: algorithm (α) is characterized by complete and frequent restoration; algorithm (β) is characterized by incomplete and frequent restoration; algorithm (γ) is characterized by complete and infrequent restoration; and algorithm (δ) is characterized by incomplete and infrequent restoration. If the function f(x) is quadratic and the constraint φ(x) is linear, all of the previous algorithms are identical, that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n - q if the starting point xs is such that φ(xs) = 0, and at most N* = 1 + n - q if the starting point xs is such that φ(xs) ≠ 0.
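The alternation of the two phases can be illustrated with a minimal Python sketch. The example problem (minimize f(x1, x2) = x1² + x2² subject to x1 + x2 − 1 = 0), the step size, and the iteration count are illustrative assumptions, not taken from the paper; the least-squares multiplier echoes the Class I idea, and the single-step restoration corresponds to complete restoration on a linear constraint.

```python
# Toy sketch of alternating gradient and restoration phases.
# Example problem (chosen for illustration, not from the paper):
#   minimize f(x1, x2) = x1^2 + x2^2  subject to  phi(x) = x1 + x2 - 1 = 0.
# The constraint Jacobian is J = [1, 1].

def gradient_phase(x1, x2, alpha=0.1):
    """Step along the gradient projected onto the constraint tangent space.
    The multiplier lam is the least-squares choice, echoing the Class I idea
    of minimizing the error in the optimality condition for fixed x."""
    g1, g2 = 2.0 * x1, 2.0 * x2          # gradient of f
    lam = (g1 + g2) / 2.0                # least-squares multiplier for J = [1, 1]
    return x1 - alpha * (g1 - lam), x2 - alpha * (g2 - lam)

def restoration_phase(x1, x2):
    """Newton step toward phi(x) = 0; exact here because phi is linear,
    so a single step gives complete restoration."""
    e = x1 + x2 - 1.0                    # constraint error
    return x1 - e / 2.0, x2 - e / 2.0

x1, x2 = 2.0, -0.5                       # infeasible starting point
for _ in range(200):
    x1, x2 = gradient_phase(x1, x2)
    x1, x2 = restoration_phase(x1, x2)
# converges to the constrained minimum (0.5, 0.5)
```

Because the toy constraint is linear, restoration is exact after one Newton step; for a nonlinear φ, the restoration phase would iterate until the constraint error is within tolerance.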
ISBN (print): 9781611972511
We develop polynomial-time online algorithms for learning disjunctions while trading off between the number of mistakes and the number of "I don't know" answers. In this model, we are given an online adversarial sequence of inputs for an unknown function of the form f(x_1, x_2, ..., x_n) = ⋁_{i∈S} x_i, and for each such input, we must guess "true", "false", or "I don't know", after which we find out the correct output for that input. On the algorithm side, we show how to make at most εn mistakes while answering "I don't know" at most (1/ε)^{2^{O(1/ε)}} n times, which is linear for any constant ε > 0 and polynomial for some ε = c/lg lg n. Furthermore, we show how to make O(n log log n / log n) mistakes while answering "I don't know" O(n² log log n) times. On the lower bound side, we show that any algorithm making o(n/log n) mistakes must answer "I don't know" a superpolynomial number of times. By contrast, no previous lower bounds were known, and the best previous algorithms (by Sayedi et al., who introduced the model) either make at most n/3 mistakes while answering "I don't know" O(n) times with linear running time per answer, or make O(n/log n) mistakes while answering "I don't know" O(n²) times with exponential running time per answer.
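To make the protocol concrete, here is a toy Python sketch of one extreme of the trade-off: a version-space learner that never guesses "true" and therefore makes zero mistakes, at the cost of answering "I don't know" on every input that sets a still-candidate variable. The class name, the target set {0, 2}, and the input sequence are illustrative assumptions; this is not the paper's algorithm, which achieves a much finer trade-off.

```python
class DisjunctionLearner:
    """Toy online learner in the mistakes-vs-"I don't know" model.
    Invariant: the candidate set S always contains the target's variables."""

    def __init__(self, n):
        self.n = n
        self.S = set(range(n))           # every variable starts as a candidate

    def predict(self, x):
        # If no candidate variable is set, every disjunction consistent with
        # the history outputs false, so answering "false" is always safe.
        if not any(x[i] for i in self.S):
            return "false"
        # Otherwise refuse to guess: this extreme of the trade-off makes
        # zero mistakes while answering "I don't know" many times.
        return "I don't know"

    def observe(self, x, truth):
        # A false example eliminates every variable it sets: none of those
        # variables can belong to the target disjunction.
        if not truth:
            self.S -= {i for i in range(self.n) if x[i]}


# Hypothetical target disjunction over indices {0, 2}, i.e. f(x) = x0 or x2.
target = {0, 2}
f = lambda x: any(x[i] for i in target)

learner = DisjunctionLearner(4)
inputs = [(0, 1, 0, 1), (0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 1)]
mistakes = idk = 0
for x in inputs:
    ans, truth = learner.predict(x), f(x)
    if ans == "I don't know":
        idk += 1
    elif (ans == "true") != truth:
        mistakes += 1
    learner.observe(x, truth)
# mistakes == 0; the first false example eliminates variables 1 and 3,
# after which the repeated input (0, 1, 0, 1) is answered "false" safely.
```

Answering "true" immediately instead of "I don't know" gives the other extreme (the classic elimination learner): at most n mistakes and no "I don't know" answers. The paper's contribution lies in the intermediate regimes between these two endpoints.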