We consider a dynamical system approach to solve finite-dimensional smooth optimization problems with a compact and connected feasible set. By the well-known technique of converting inequality constraints into equality constraints using quadratic slack variables, we transform a general optimization problem into an associated problem without inequality constraints in a higher-dimensional space. We compute the projected gradient for the latter problem and consider its projection onto the feasible set in the original, lower-dimensional space. In this way, we obtain an ordinary differential equation in the original variables, which is specially adapted to treat inequality constraints (for the idea, see Jongen and Stein, Frontiers in Global Optimization, pp. 223-236, Kluwer Academic, Dordrecht, 2003). The article shows that the derived ordinary differential equation possesses the basic properties which make it appropriate for solving the underlying optimization problem: the long-time behavior of its trajectories becomes stationary, all singularities are critical points, and the stable singularities are exactly the local minima. Finally, we sketch two numerical methods based on our approach.
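For concreteness, the following is a minimal numerical sketch of the slack-variable idea, not the authors' ODE: the problem min f(x) s.t. g(x) >= 0 is lifted to the equality constraint g(x) - s*s = 0 in the variables (x, s), and an explicit Euler scheme follows the negative gradient of f projected onto the tangent space of that constraint. All function names and step parameters are illustrative.

```python
# Hedged sketch of a slack-variable projected gradient flow (illustrative only).
import numpy as np

def projected_gradient_flow(f_grad, g, g_jac, x0, step=1e-3, n_steps=20000):
    x = np.asarray(x0, dtype=float)
    s = np.sqrt(np.maximum(g(x), 0.0))          # slack variables: g(x) = s^2
    for _ in range(n_steps):
        # Jacobian of h(x, s) = g(x) - s^2 with respect to (x, s).
        J = np.hstack([g_jac(x), -2.0 * np.diag(s)])
        grad = np.concatenate([f_grad(x), np.zeros_like(s)])
        # Project -grad onto the null space of J (tangent space of h = 0);
        # assumes J has full row rank along the trajectory.
        v = -grad + J.T @ np.linalg.solve(J @ J.T, J @ grad)
        x += step * v[: x.size]
        s += step * v[x.size :]
    return x

# Example: minimize f(x) = x1 + x2 on the unit disk g(x) = 1 - |x|^2 >= 0.
if __name__ == "__main__":
    f_grad = lambda x: np.array([1.0, 1.0])
    g = lambda x: np.array([1.0 - x @ x])
    g_jac = lambda x: (-2.0 * x)[None, :]
    print(projected_gradient_flow(f_grad, g, g_jac, np.array([0.2, -0.3])))
```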
In this paper, we discuss the convergence of the Broyden algorithms with revised search direction. Under some inexact line searches, we prove that the algorithms are globally convergent for continuously differentiable functions and that the rate of local convergence of the algorithms is one-step superlinear and n-step second-order for uniformly convex objective functions.
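As background, here is a hedged sketch of a generic Broyden-family iteration with an inexact (Armijo backtracking) line search; the parameter phi interpolates between BFGS (phi = 0) and DFP (phi = 1). The revised search directions analysed in the paper are not reproduced; everything below is a standard textbook scheme with illustrative names.

```python
# Broyden-family quasi-Newton sketch with an inexact line search.
import numpy as np

def broyden_family(f, grad, x0, phi=0.5, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                      # inverse-Hessian approximation
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                          # quasi-Newton search direction
        t = 1.0                             # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        y = grad(x + s) - g
        if s @ y > 1e-12:                   # curvature condition keeps H SPD
            rho = 1.0 / (s @ y)
            V = np.eye(x.size) - rho * np.outer(s, y)
            H_bfgs = V @ H @ V.T + rho * np.outer(s, s)
            Hy = H @ y
            H_dfp = H - np.outer(Hy, Hy) / (y @ Hy) + rho * np.outer(s, s)
            H = (1.0 - phi) * H_bfgs + phi * H_dfp
        x = x + s
    return x
```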
In this paper, we discuss the convergence of the Broyden algorithms with revised search direction. Under some inexact line searches, we prove that the algorithms are globally convergent for continuously differentiable functions and that the rate of convergence of the algorithms is one-step superlinear and n-step second-order for uniformly convex objective functions.
Let the DFP algorithm for unconstrained optimization be applied to an objective function that has continuous second derivatives and bounded level sets, where each line search finds the first local minimum. It is proved that the calculated gradients are not bounded away from zero if there are only two variables. The new feature of this work is that there is no need for the objective function to be convex.
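For reference, the DFP update of the inverse-Hessian approximation H_k mentioned in the abstract is the standard formula, with step s_k and gradient change y_k:

```latex
H_{k+1} = H_k - \frac{H_k y_k y_k^{\mathsf T} H_k}{y_k^{\mathsf T} H_k y_k}
              + \frac{s_k s_k^{\mathsf T}}{y_k^{\mathsf T} s_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
```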
In this paper, we discuss the convergence of Broyden algorithms for functions that are not twice differentiable but have a Lipschitz continuous (LC) gradient. We prove that the rate of convergence of the algorithms is linear for uniformly convex functions. We also demonstrate that under some mild conditions the algorithms are superlinearly convergent.
In this paper, we discuss the convergence of the Broyden algorithms with revised search direction. We prove that the algorithms are globally convergent for continuously differentiable functions and that the rate of convergence of the algorithms is one-step superlinear and n-step second-order for uniformly convex objective functions.
This paper introduces an algorithm for convex minimization which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm. The method uses the Hessian of a certain outer function which depends on the Jacobian of a proximal point mapping which, in turn, depends on the preconditioner matrix and on a Lagrangian Hessian relative to a certain tangent space. Convergence is proved under boundedness assumptions on the preconditioner sequence.
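The outer iteration underlying such methods is the classical proximal point step. The sketch below fixes the preconditioner M and solves the inner problem by plain gradient descent, omitting the paper's quasi-Newton updates and bundle subalgorithm; all names are illustrative.

```python
# Proximal point outer loop with a fixed symmetric positive definite
# preconditioner M (a simplification, not the paper's algorithm).
import numpy as np

def proximal_point(grad, x0, M, n_outer=50, n_inner=200, step=1e-2):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        # Inner problem: minimize f(y) + 0.5 (y - x)^T M (y - x),
        # solved here approximately by plain gradient descent.
        y = x.copy()
        for _ in range(n_inner):
            y -= step * (grad(y) + M @ (y - x))
        x = y                               # proximal step
    return x
```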
Let B be a positive definite symmetric approximation to the second derivative matrix of the objective function of a minimization calculation. We study modifications of the BFGS method that apply a scaling technique to the columns of a conjugate direction matrix Z satisfying Z^T B Z = I. For a simple scaling technique similar to the ones considered by Powell (1987, 1989), we show that, due to a two-iteration cycle, linear convergence can occur when the method is applied to a quadratic function and Wolfe's line search is employed, although quadratic termination can be proved for exact line searches. We then suggest a different scaling technique that prevents certain columns, thought to contain important curvature information, from being scaled. For this algorithm we prove global and superlinear convergence and demonstrate the superiority of our method over the BFGS formula on a range of ill-conditioned optimization problems. Moreover, we present an implementation of our algorithm that requires only 3n^2 + O(n) multiplications per iteration.
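The factorization at the heart of the abstract can be illustrated as follows: if the inverse Hessian is kept as H = Z Z^T, then Z^T B Z = I holds automatically, and multiplying columns of Z by scale factors rescales the curvature estimates along the corresponding conjugate directions. The scale rule below is a deliberately trivial placeholder, not the scaling technique analysed in the paper.

```python
# Illustrative BFGS update expressed through a conjugate direction factor Z.
import numpy as np

def scaled_factor_update(H, s, y, scale):
    # Assumes the curvature condition s @ y > 0, so H stays positive definite.
    rho = 1.0 / (s @ y)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    H = V @ H @ V.T + rho * np.outer(s, s)      # standard BFGS update of H
    Z = np.linalg.cholesky(H)                   # H = Z Z^T, so Z^T B Z = I
    Z = Z * scale(Z)                            # rescale selected columns of Z
    return Z @ Z.T                              # reassembled inverse Hessian

# Placeholder rule: leave every column unchanged (scale factor 1).
identity_scale = lambda Z: np.ones(Z.shape[1])
```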
The development of software for minimization problems is often based on a line search method. We consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the problem of determining a point that satisfies these two conditions in terms of finding a point in a set T(μ). We describe a search algorithm for this problem that produces a sequence of iterates that converge to a point in T(μ) and that, except for pathological cases, terminates in a finite number of steps. Numerical results for an implementation of the search algorithm on a set of test functions show that the algorithm terminates within a small number of iterations.
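To illustrate the two conditions, here is a compact bisection-style line search that terminates once both the sufficient decrease (Armijo) condition and the weak curvature condition hold. The actual algorithm of the paper uses far more careful interpolation and safeguarding; parameter names below are conventional, not taken from the paper.

```python
# Bisection-style search for a step satisfying the weak Wolfe conditions.
import numpy as np

def wolfe_search(phi, dphi, c1=1e-4, c2=0.9, t=1.0, max_iter=50):
    """phi(t) = f(x + t*d), dphi(t) = phi'(t); requires dphi(0) < 0."""
    lo, hi = 0.0, np.inf
    phi0, dphi0 = phi(0.0), dphi(0.0)
    for _ in range(max_iter):
        if phi(t) > phi0 + c1 * t * dphi0:
            hi = t                           # sufficient decrease fails: shrink
        elif dphi(t) < c2 * dphi0:
            lo = t                           # curvature condition fails: grow
        else:
            return t                         # both conditions hold
        t = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return t
```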