A technique is presented for constructing real-valued locally Lipschitz functions defined on separable Banach spaces. Using the construction, many known examples of pathological locally Lipschitz functions are recreated. The variables used in the equations are defined. The results show that on any separable Banach space it is impossible to construct a locally Lipschitz function whose Michel-Penot subgradient is identically equal to a polytope.
This paper deals with two kinds of one-dimensional global optimization problems over a closed finite interval: (i) the objective function f(x) satisfies the Lipschitz condition with a constant L; (ii) the first derivative of f(x) satisfies the Lipschitz condition with a constant M. In the paper, six algorithms are presented for case (i) and six algorithms for case (ii). In both cases, auxiliary functions are constructed and adaptively improved during the search: in case (i) piecewise linear functions are constructed, and in case (ii) smooth piecewise quadratic functions are used. The constants L and M are either taken as values known a priori or estimated dynamically during the search. A recent technique that adaptively estimates the local Lipschitz constants over different zones of the search region is used to accelerate the search. A new technique, called local improvement, is introduced in order to accelerate the search in both cases (i) and (ii). The algorithms are described within a unified framework, their properties are studied from a general viewpoint, and convergence conditions of the proposed algorithms are given. Numerical experiments executed on 120 test problems taken from the literature show quite promising performance of the new acceleration techniques.
This paper deals with global stochastic optimization where the decision variable belongs to a compact subset X of R. The objective function is the mathematical expectation of a partially bivariate Lipschitz function f(x, Θ) depending on a decision variable x and a random variable Θ, whose probability distribution depends on x. In the first part of the present paper, we propose a branch-and-bound algorithm based on tangent minorants that provides a global minimum. In the second part, we consider the case where the function f is discontinuous. We show how to correct that discontinuity without modifying the global minimum of E(f(x, Θ)). We also illustrate how to extend this framework to multidimensional stochastic optimization problems by using the Alienor method. We then validate the proposed method by applying it to some test functions and comparing it to known algorithms. (C) 2019 Elsevier B.V. All rights reserved.
Rotund norms, Clarke subdifferentials and extensions of Lipschitz functions were analyzed to study their applicability as optimization tools. Lipschitz-separated Banach spaces were characterized in terms of a rotundity property intermediate between rotundity and weak uniform rotundity. Every uniformly rotund Banach space was found to have the maximal subdifferential extension property.
A function f(x_1, ..., x_d), where each input is an integer from 1 to n and the output is a real number, is Lipschitz if changing one of the inputs by 1 changes the output by at most 1. In other words, Lipschitz functions are not very sensitive to small changes in the input. Our main result is an efficient tester for the Lipschitz property of functions f: [n]^d → δZ, where δ ∈ (0, 1] and δZ is the set of integer multiples of δ. A property tester is given oracle access to a function f and a proximity parameter ε, and it has to distinguish, with high probability, functions that have the property from functions that differ on at least an ε fraction of values from every function with the property. The Lipschitz property was first studied by Jha and Raskhodnikova (FOCS '11), who motivated it by applications to data privacy and program verification. They presented efficient testers for the Lipschitz property of functions on the domains {0, 1}^d and [n]. Our tester for functions on the more general domain [n]^d runs in time O(d^1.5 n log n) for constant ε and δ. The main tool in the analysis of our tester is a smoothing procedure that makes a function Lipschitz by modifying it at a few points. Its analysis is already nontrivial for the 1-dimensional version, which we call Bubble Smooth, in analogy to Bubble Sort. In one step, Bubble Smooth modifies two values that violate the Lipschitz property, namely, differ by more than 1, by transferring δ units from the larger to the smaller. We define a transfer graph to keep track of the transfers, and use it to show that the ℓ1 distance between f and BubbleSmooth(f) is at most twice the ℓ1 distance from f to the nearest Lipschitz function. Bubble Smooth has several other important properties that allow us to obtain a dimension reduction, i.e., a reduction from testing functions on multidimensional domains to testing functions on one-dimensional domains.
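The one-step transfer rule described above can be sketched directly. The toy version below (hypothetical names; a naive repeat-until-fixed loop rather than the paper's transfer-graph analysis) shows how repeated δ-transfers between adjacent violating values restore the Lipschitz property on a 1-dimensional array with values in δZ:

```python
def bubble_smooth(vals, delta=1.0):
    """Toy 1-D smoothing: while some adjacent pair violates |f[i+1]-f[i]| <= 1,
    transfer delta units from the larger value to the smaller one."""
    f = [float(v) for v in vals]
    changed = True
    while changed:
        changed = False
        for i in range(len(f) - 1):
            if abs(f[i + 1] - f[i]) > 1.0:
                if f[i] > f[i + 1]:       # transfer from larger to smaller
                    f[i] -= delta
                    f[i + 1] += delta
                else:
                    f[i] += delta
                    f[i + 1] -= delta
                changed = True
    return f
```

For example, `bubble_smooth([0, 3, 0])` with δ = 1 first moves one unit left across the first pair and one unit right across the second, yielding the Lipschitz array `[1.0, 1.0, 1.0]`; the transfers preserve the total sum, which is one of the properties the paper's ℓ1-distance analysis relies on.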
We are given an unknown univariate Lipschitz continuous function that we wish to estimate by evaluating the function sequentially at distinct points. We provide a procedure for recursively selecting this sequence of points so that the worst-case error between the estimating and actual functions, averaged over points in the domain, is minimized. Upper and lower bounds on these errors are also provided.
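For a known Lipschitz constant L, the tightest pointwise bounds compatible with a set of samples are the standard upper and lower Lipschitz envelopes; their midpoint is the minimax estimate and their half-gap is the worst-case error at that point. A minimal sketch under that known-L assumption (the paper's actual point-selection procedure is not reproduced here):

```python
def lipschitz_envelopes(samples, L, x):
    """Tightest bounds at x on an L-Lipschitz function through the given
    samples [(x_i, f(x_i)), ...]: any such function satisfies
    lower <= f(x) <= upper, and the midpoint (lower + upper) / 2 minimizes
    the worst-case error, which equals (upper - lower) / 2."""
    upper = min(fi + L * abs(x - xi) for xi, fi in samples)
    lower = max(fi - L * abs(x - xi) for xi, fi in samples)
    return lower, upper
```

A sequential procedure of the kind the abstract describes would repeatedly query the point where the gap `upper - lower` (or its average over the domain) is largest, shrinking the worst-case error with each evaluation.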
We discuss the relationship between Lipschitz functions and convex functions. Using these relations, we give a sufficient condition for the set of points where a Lipschitz function on a Hilbert space is Fréchet differentiable to be residual.
Considered here is the problem of learning a nonlinear mapping with uncountable domain and range. The learning model used is that of piecewise linear interpolation on random samples from the domain. More specifically, a network learns a function by approximating its value, typically within some small ε, when presented an arbitrary element of the domain. For reliable learning, the network should accurately return the function's value with high probability, typically higher than 1 − δ for some small δ. The focal results of this article are the derivations of bounds showing that, given ε and δ and an arbitrary Lipschitz function f: [0, 1]^k → R, m ≥ (3M√k)^k · (1/ε)^k · (k ln(3M√k) + k ln(1/ε) + ln(1/δ)) samples from the uniform distribution on [0, 1]^k are sufficient to reliably learn f, and that m ≥ ((M√k)/2)^k · (1/ε)^k · ln(1/δ) samples are necessary for reliable learning. The Delaunay triangulation technique, which necessarily keeps simplex sizes small, is exploited.
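The sufficiency bound can be evaluated numerically. The helper below is my own transcription of the bound as stated, with M the Lipschitz constant of f; the function name is illustrative:

```python
import math

def sufficient_samples(M, k, eps, delta):
    """Sample count sufficient for reliable learning, per the stated bound:
    m >= (3M*sqrt(k))^k * (1/eps)^k * (k*ln(3M*sqrt(k)) + k*ln(1/eps) + ln(1/delta))
    """
    c = 3.0 * M * math.sqrt(k)  # the recurring factor 3M*sqrt(k)
    return (c / eps) ** k * (
        k * math.log(c) + k * math.log(1.0 / eps) + math.log(1.0 / delta)
    )
```

As expected from the (1/ε)^k factor, tightening the accuracy ε or raising the dimension k drives the required sample count up sharply, which is the curse of dimensionality the bound makes explicit.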
In this paper, some properties of the set-valued mapping D_α f(·), connected with the new approximation method of a function f(·) defined in the first part of the article, are given. Continuity and Lipschitz properties of D_α f(·) are formulated. A continuous extension of the Clarke subdifferential of any function representable as a difference of two convex functions is given. For the convex case, the set-valued mapping D_α f(·) is similar to the ε-subdifferential mapping. (c) 2007 Published by Elsevier Ltd.
In this paper we extend classical Titchmarsh theorems on the Fourier transform of Hölder-Lipschitz functions to the setting of compact homogeneous manifolds. As an application, we derive a Fourier multiplier theorem for L2 Hölder-Lipschitz spaces on compact Lie groups. We also derive conditions and a characterisation for Dini-Lipschitz classes on compact homogeneous manifolds in terms of the behaviour of their Fourier coefficients.