In this paper we describe a Newton-type algorithm model for solving smooth constrained optimization problems with a nonlinear objective function, general linear constraints and bounded variables. The algorithm model is based on the definition of a continuously differentiable exact merit function that follows an exact penalty approach for the box constraints and an exact augmented Lagrangian approach for the general linear constraints. Under very mild assumptions, and without requiring the strict complementarity assumption, the algorithm model produces a sequence of pairs {x^(k), λ^(k)} converging quadratically to a pair (x̄, λ̄), where x̄ satisfies the first order necessary conditions and λ̄ is a KKT multiplier vector associated with the linear constraints. As regards the behaviour of the sequence {x^(k)} alone, it is guaranteed to converge at least superlinearly. At each iteration, the algorithm requires only the solution of a linear system, which can be performed by means of conjugate gradient methods. Numerical experiments and comparisons are reported.
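As a concrete illustration of the only linear-algebra kernel the abstract mentions, the sketch below is a generic conjugate gradient solver for a symmetric positive definite system, in pure Python. The 2x2 test system is an invented example; the actual per-iteration systems (and any preconditioning) of the algorithm model are not specified here.

```python
# Minimal conjugate gradient (CG) solver for A x = b with A symmetric
# positive definite -- the kind of inner linear solve performed at every
# iteration of the algorithm model.  A is a list of rows (pure Python).

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x, since x starts at 0
    p = r[:]                 # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)          # exact line search step
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# invented SPD example: exact solution is (1/11, 7/11)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic CG terminates in at most n steps (here two), which is why it is a natural inner solver for large sparse systems.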
Wolfe [J. Soc. Indust. Appl. Math., 11 (1963), pp. 205-211] describes a novel and very useful method for resolving degeneracy in the Simplex method for Linear Programming (LP). The simplicity and reliability of the method make it an excellent choice in this author's opinion. The method is described here in the context of an active set method (ASM) format for LP. The method solves recursively generated subproblems that are smaller than the original, which contributes to efficiency. Data structures are described that enable the recursive aspect of the method to be implemented very easily with minimal storage overhead. The method is extended to solve a general form of linearly constrained optimization problem that includes quadratic programming (QP) and allows simple bounds on the variables as well as both equality and inequality general linear constraints. Issues of round-off error are discussed, and heuristics are described that have proved very reliable in practice. The method is further extended to QP problems in which general linear constraints are handled as L1 terms in the objective function.
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation and the whole process is combined with a nonmonotone line search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method for linear equality constrained problems. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated. In ordinary inner iterations, only the right-hand side of the KKT system changes. Therefore, suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented.
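A minimal sketch of the spectral (Barzilai-Borwein) steplength that the abstract embeds into its Hessian approximation, applied to an unconstrained quadratic f(x) = (1/2)x'Ax - b'x. The nonmonotone line search safeguard, the preconditioning, and the equality-constraint handling are all omitted; the test quadratic and the choice of the BB1 formula are illustrative assumptions, not the paper's setup.

```python
# Spectral (Barzilai-Borwein) gradient iteration on f(x) = 0.5 x'Ax - b'x.
# The steplength alpha = s's / s'y is the scalar quasi-Newton approximation
# satisfying the weak secant equation mentioned in the abstract.

def grad(A, b, x):
    # gradient of f: A x - b
    return [sum(aij * xj for aij, xj in zip(row, x)) - bi
            for row, bi in zip(A, b)]

def spectral_gradient(A, b, x0, max_iter=200, tol=1e-10):
    x = x0[:]
    g = grad(A, b, x)
    alpha = 1.0                                  # initial steplength
    for _ in range(max_iter):
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break                                # stationary point reached
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(A, b, x_new)
        s = [xn - xi for xn, xi in zip(x_new, x)]    # step
        y = [gn - gi for gn, gi in zip(g_new, g)]    # gradient change
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy <= 0.0:
            break                                # safeguard (s'y > 0 if SPD)
        alpha = sum(si * si for si in s) / sy    # BB1 spectral steplength
        x, g = x_new, g_new
    return x

# invented SPD example: minimizer is (1/11, 7/11)
x = spectral_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

On strictly convex quadratics this iteration converges even without a line search; for general objectives the nonmonotone safeguard of the paper is what restores global convergence.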
Chen and Zhang [Sci. China, Ser. A, 45, 1390-1397 (2002)] introduced an affine scaling trust region algorithm for linearly constrained optimization and analyzed its global convergence. In this paper, we derive a new affine scaling trust region algorithm with a dwindling filter for linearly constrained optimization. Differently from Chen and Zhang's work, the trial points generated by the new algorithm are accepted if they improve either the objective function or the first order necessary optimality conditions. Under mild conditions, we discuss both the global and local convergence of the new algorithm. Preliminary numerical results are reported.
We consider the algebraic issues concerning the solution of general, large-scale, linearly constrained nonlinear optimization problems. Particular attention is given to suitable methods for solving the linear systems that occur at each iteration of such methods. The main issue addressed is how to ensure that a quadratic model of the objective function is positive definite in the null space of the constraints while neither adversely affecting the convergence of Newton's method nor incurring a significant computational overhead. Numerical evidence to support the theoretical developments is provided.
Many practical problems require the solution of large-scale constrained optimization problems for which preserving feasibility is a key issue and the evaluation of the objective function is very expensive. In these cases it is mandatory to start with a feasible approximation of the solution, which should be obtained without objective function evaluations. The need to solve problems of this type motivated us to revisit the classical barrier approach for nonlinear optimization, providing a careful implementation of a modern version of this method. This is the main objective of the present paper. For completeness, we provide global convergence results and comparative numerical experiments with one of the state-of-the-art interior-point solvers for continuous optimization.
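The classical barrier approach the abstract revisits can be sketched in one variable: each outer iteration minimizes f(x) - mu*log(x) from a strictly feasible point and then shrinks mu, so every iterate stays feasible and the objective is never evaluated at an infeasible point. The example problem, the Newton inner solver, and the halving schedule below are illustrative assumptions, not the paper's implementation.

```python
# Log-barrier sketch: minimize f(x) = (x + 1)^2 subject to x >= 0
# (constrained solution x = 0).  The barrier subproblem is
#   phi(x) = (x + 1)^2 - mu * log(x),  minimized over x > 0.

def newton_inner(mu, x, steps=50):
    # Newton's method on the strictly convex barrier function phi
    for _ in range(steps):
        g = 2.0 * (x + 1.0) - mu / x       # phi'(x)
        h = 2.0 + mu / (x * x)             # phi''(x) > 0
        x_new = x - g / h
        if x_new <= 0.0:                   # safeguard: stay strictly feasible
            x_new = 0.5 * x
        x = x_new
        if abs(g) < 1e-12:
            break
    return x

x, mu = 1.0, 1.0                 # strictly feasible start, initial barrier
for _ in range(30):              # outer iterations: solve, then shrink mu
    x = newton_inner(mu, x)
    mu *= 0.5
# x is now a strictly feasible point very close to the solution x = 0
```

Each outer solve is warm-started from the previous one, which is what makes the decreasing-mu path cheap in practice.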
A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and discussed. Software supporting this paper is available through the Tango Project web page: http://***/~egbirgin/tango/.
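The core Spectral Projected Gradient move is a gradient step followed by a projection onto the feasible set. For a simple box the projection is componentwise clipping, sketched below; the paper's method generalizes this to a general polytope and intercalates iterations restricted to its faces. The fixed steplength and the quadratic example stand in for the spectral steplength and nonmonotone search, purely for brevity.

```python
# Projected gradient step on a box {l <= x <= u}: the projection is
# componentwise clipping, so each SPG iteration costs one gradient and
# one clip.

def project_box(x, lower, upper):
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, lower, upper)]

def spg_step(x, g, alpha, lower, upper):
    # trial point x - alpha * g, projected back onto the box
    trial = [xi - alpha * gi for xi, gi in zip(x, g)]
    return project_box(trial, lower, upper)

# minimize f(x) = (x0 - 3)^2 + (x1 + 2)^2 over the box [0,1] x [0,1];
# the constrained minimizer is the corner (1, 0)
x = [0.5, 0.5]
for _ in range(20):
    g = [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 2.0)]
    x = spg_step(x, g, 0.25, [0.0, 0.0], [1.0, 1.0])
```

Note how the active bounds are identified by the projection itself; the paper's face iterations then refine the solution within the identified face.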
Fixed-point algorithms for computing equilibria in economies with production, or stationary points in constrained optimization, generally use point-to-set mappings and therefore converge slowly. An alternative implementation uses continuous functions with a higher dimensionality corresponding to the inclusion of activity levels or dual variables. Here we develop algorithms that only increase the dimensionality implicitly. The solution path is piecewise-linear, as in other algorithms. However, when viewed in the low-dimensional space, the path within each simplex can be piecewise-linear rather than linear. Asymptotically, these paths are linear and quadratic convergence is attained.
This paper describes an algorithm for optimization of a smooth function subject to general linear constraints. An algorithm of the gradient projection class is used, with the important feature that the "projection" at each iteration is performed by using a primal-dual interior-point method for convex quadratic programming. Convergence properties can be maintained even if the projection is done inexactly in a well-defined way. Higher-order derivative information on the manifold defined by the apparently active constraints can be used to increase the rate of local convergence.
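The "projection" mentioned above is itself a convex QP: min over z of (1/2)||z - y||^2 subject to the linear constraints. For a single halfspace {x : a'x <= b} this QP has the closed form sketched below; the paper handles the general polyhedral case with a primal-dual interior-point QP solver, possibly inexactly. The example point and constraint are invented for illustration.

```python
# Euclidean projection of y onto the halfspace {x : a'x <= b}.
# This is the simplest instance of the projection QP
#   min_z 0.5 * ||z - y||^2   s.t.   a'z <= b.

def project_halfspace(y, a, b):
    ay = sum(ai * yi for ai, yi in zip(a, y))
    if ay <= b:                      # y is already feasible
        return y[:]
    aa = sum(ai * ai for ai in a)    # ||a||^2
    t = (ay - b) / aa                # Lagrange multiplier of a'z <= b
    return [yi - t * ai for yi, ai in zip(y, a)]

# project (2, 2) onto {x : x0 + x1 <= 2}; the result is (1, 1)
z = project_halfspace([2.0, 2.0], [1.0, 1.0], 2.0)
```

For one constraint the multiplier t is available in closed form; with many constraints the same KKT structure is what the interior-point inner solver resolves numerically.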
We consider the problem of planar grid generation. This problem can easily be reformulated as an unconstrained optimization problem via the usual variational method. This is a very simple technique for producing grids on planar domains, but it cannot be used in automatic grid generation codes: its formulation depends on some parameters whose appropriate values can differ considerably among domains. A simple modification of the variational method is proposed. This modification is mainly based on a linearly constrained optimization problem and gives a significant increase in the robustness of the method. (c) 2005 IMACS. Published by Elsevier B.V. All rights reserved.