We study a class of methods for solving convex programs, which are based on non-quadratic augmented Lagrangians for which the penalty parameters are functions of the multipliers. This gives rise to Lagrangians which are nonlinear in the multipliers. Each augmented Lagrangian is specified by a choice of a penalty function φ and a penalty-updating function π. The requirements on π are mild and allow for the inclusion of most of the previously suggested augmented Lagrangians. More importantly, a new type of penalty/barrier function (having a logarithmic branch glued to a quadratic branch) is introduced and used to construct an efficient algorithm. Convergence of the algorithms is proved for the case of π being a sublinear function of the dual multipliers. The algorithms are tested on large-scale quadratically constrained problems arising in structural optimization.
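One standard way to glue a logarithmic branch to a quadratic branch is to continue the log barrier past a cut point by the quadratic that matches its value, slope, and curvature there. The sketch below does exactly that; the cut point tau = -0.5 and this particular matching are illustrative assumptions, not the paper's exact φ.

```python
import math

def log_quad_penalty(t, tau=-0.5):
    """Logarithmic barrier -log(-t) for t <= tau, continued for t > tau by
    the quadratic whose value, slope, and curvature match at tau, so the
    glued function is finite and C^2 everywhere.  The cut point tau and
    this matching rule are illustrative assumptions."""
    if t <= tau:
        return -math.log(-t)
    a = 1.0 / (2.0 * tau * tau)   # matches phi''(tau) = 1 / tau^2
    b = -2.0 / tau                # matches phi'(tau)  = -1 / tau
    c = -math.log(-tau) + 1.5     # matches phi(tau)   = -log(-tau)
    return a * t * t + b * t + c
```

The glued function behaves like the log barrier on the feasible side yet stays finite for infeasible arguments, which is what makes such penalty/barrier functions usable inside an augmented Lagrangian.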
We consider the volumetric cutting plane method for finding a point in a convex set C ⊂ R^n that is characterized by a separation oracle. We prove polynomiality of the algorithm with each added cut placed directly through the current point and show that this "central cut" version of the method can be implemented using no more than 25n constraints at any time.
For the correction of a convex programming problem with potentially inconsistent constraint system (an improper problem), we apply the residual method, which is a standard regularization procedure for ill-posed optimization models. A problem statement typical for the residual method is reduced to a minimization problem for an appropriate penalty function. We apply two classical penalty functions: the quadratic penalty function and the exact Eremin-Zangwill penalty function. For each of the approaches, we establish convergence conditions and bounds for the approximation error.
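The quadratic-penalty route can be sketched on a deliberately improper toy problem; the inconsistent constraints and the penalty weight C below are illustrative assumptions, not data from the paper.

```python
def penalty_obj(x, C):
    """Quadratic-penalty objective for an improper toy problem (an
    illustrative assumption):
        minimize x^2   subject to   x >= 1  and  x <= 0,
    whose constraint system is inconsistent."""
    viol_lower = max(0.0, 1.0 - x)   # violation of x >= 1
    viol_upper = max(0.0, x)         # violation of x <= 0
    return x * x + C * (viol_lower ** 2 + viol_upper ** 2)

def minimize_1d(f, lo=-5.0, hi=5.0, iters=200):
    """Ternary search for the minimizer of a unimodal function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

For this toy problem the penalty minimizer is C/(1 + 2C), which tends to 1/2 as C grows: the point with the smallest squared constraint residual, i.e. the correction that a residual-type method selects.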
We present a greedy algorithm for solving a special class of convex programming problems and establish a connection with polymatroid theory which yields a theoretical explanation and verification of the algorithm via some recent results of S. Fujishige.
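The polymatroid connection typically rests on Edmonds' greedy rule, which can be sketched as follows; the truncated-cardinality rank function in the example is an illustrative assumption, not the class of problems treated in the paper.

```python
def polymatroid_greedy(ground, rank, weights):
    """Edmonds' greedy rule: visit elements in order of decreasing weight
    and give each element its marginal rank gain.  For a monotone
    submodular rank function this maximizes a nonnegative linear
    objective over the associated polymatroid."""
    order = sorted(ground, key=lambda e: -weights[e])
    x, chosen = {}, set()
    for e in order:
        x[e] = rank(chosen | {e}) - rank(chosen)
        chosen.add(e)
    return x

def unit_cap_rank(S):
    """Illustrative submodular rank: at most 2 units in total."""
    return min(len(S), 2)
```

Greedy optimality over polymatroids is the structural fact that lets a simple pass in weight order certify optimality for the convex programs of the abstract.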
The performance of linear dynamic systems with respect to a non-linear increasing value function that aggregates convex functionals is investigated within a convex programming framework. The methodology developed in this paper combines fundamental properties of convex sets in order to decompose the multicriterion control problem into a two-level structure. The upper level of this structure comprises only decision-making aspects of the problem, and involves the solution of a relaxed and static multicriterion problem. The lower level of the structure comprises, in turn, only optimal control aspects of the problem. The overall procedure is easily implemented. The solution of the aggregated problem is obtained as the limit of a sequence of optimal control problems that preserve their original solution properties. Relationships between the approach adopted here and existing solution techniques are discussed. The solutions of the classical minimax and Salukvadze problems are obtained as special cases of the proposed approach.
Applied mathematical programming problems are often approximations of larger, more detailed problems. One criterion to evaluate an approximating program is the magnitude of the difference between the optimal objective values of the original and the approximating program. The approximation we consider is variable aggregation in a convex program. Bounds are derived on the difference between the two optimal objective values. Previous results of Geoffrion and Zipkin are obtained by specializing our results to linear programming. Also, we apply our bounds to a convex transportation problem.
The limits of a class of primal and dual solution trajectories associated with the Sequential Unconstrained Minimization Technique (SUMT) are investigated for convex programming problems with non-unique optima. Logarithmic barrier terms are assumed. For linear programming problems, such limits of both primal and dual trajectories are strongly optimal, strictly complementary, and can be characterized as analytic centers of, loosely speaking, optimality regions. Examples are given which show that those results do not hold in general for convex programming problems. If the latter are weakly analytic (Bank et al. [3]), primal trajectory limits can be characterized in analogy to the linear programming case and without assuming differentiability. That class of programming problems contains faithfully convex, linear, and convex quadratic programming problems as strict subsets. In the differentiable case, dual trajectory limits can be characterized similarly, albeit under different conditions, one of which suffices for strict complementarity.
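The linear programming behavior, the logarithmic-barrier trajectory converging to the analytic center of the optimal face, can be observed on a toy LP; the specific problem and the coordinatewise solver below are illustrative assumptions.

```python
import math

def barrier_min(mu, iters=300):
    """Logarithmic-barrier subproblem for the toy LP (an illustrative
    assumption):  min x1  s.t.  x1 >= 0, 0 <= x2 <= 1,
    whose optimal set is the whole segment {0} x [0, 1].
    Minimizes  x1 - mu*(log x1 + log x2 + log(1 - x2))  by
    coordinatewise ternary search (the objective is separable)."""
    def t_search(f, lo, hi):
        for _ in range(iters):
            m1 = lo + (hi - lo) / 3.0
            m2 = hi - (hi - lo) / 3.0
            if f(m1) < f(m2):
                hi = m2
            else:
                lo = m1
        return 0.5 * (lo + hi)
    x1 = t_search(lambda t: t - mu * math.log(t), 1e-12, 10.0)
    x2 = t_search(lambda t: -mu * (math.log(t) + math.log(1.0 - t)),
                  1e-12, 1.0 - 1e-12)
    return x1, x2
```

For every μ the barrier minimizer has x2 = 1/2 exactly and x1 = μ, so as μ → 0 the trajectory converges to (0, 1/2): the analytic center of the optimal segment.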
In this paper we study two-stage stochastic convex programming problems, which are hard to solve. An L-shaped method is given for them. The algorithm is simple to implement and requires little computational work. Computational results show that the algorithm is effective.
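An L-shaped (Benders-type) iteration alternates between a master problem and subproblems that return optimality cuts. A minimal sketch on a toy two-stage problem follows; the scenario data, the first-stage cost x, and the brute-force grid master are illustrative assumptions, not the paper's algorithm.

```python
def expected_recourse(x):
    """Expected second-stage cost: scenarios xi in {2, 4} with prob 1/2
    and a shortfall cost of 3 per unit (all data illustrative)."""
    return sum(0.5 * 3.0 * max(0.0, xi - x) for xi in (2.0, 4.0))

def recourse_cut(x):
    """Optimality cut  theta >= a + g * x  from the value and a
    subgradient of the expected recourse at the current decision x."""
    val = expected_recourse(x)
    g = sum(0.5 * 3.0 * (-1.0 if xi > x else 0.0) for xi in (2.0, 4.0))
    return val - g * x, g

def l_shaped(max_iter=20, grid=2001):
    """L-shaped loop: alternately solve the master (here by brute-force
    grid search over x in [0, 10], with theta >= 0) and add one cut."""
    cuts, x = [], 0.0
    xs = [i * 10.0 / (grid - 1) for i in range(grid)]
    for _ in range(max_iter):
        cuts.append(recourse_cut(x))
        def master(xp):
            return xp + max([0.0] + [a + g * xp for a, g in cuts])
        x = min(xs, key=master)
        if abs(master(x) - (x + expected_recourse(x))) < 1e-9:
            break   # master lower bound meets the true cost: optimal
    return x
```

On this instance the loop converges after two cuts to x = 4, where first-stage cost plus expected recourse is minimal.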
The generalized equation 0 ∈ T(z), for a maximal monotone operator T : H ⇉ H, covers many important optimization problems. The solution trajectory u_λ(t) of the classical Yosida-regularization-based differential equation system, together with numerical approximation trajectories u_λ^α(t) for finding solutions to the generalized equation, is investigated in this paper. The uniform convergence of the approximate trajectories u_λ^α(t) → u_λ(t) on the interval [0, +∞) is proved as α tends to 0. Importantly, it is proved that the solution trajectory u_λ(t) converges at an exponential rate under upper Lipschitz continuity of T⁻¹ at the origin. Moreover, the proposed differential-equation approach is applied to the Karush-Kuhn-Tucker system of a smooth convex optimization problem. A numerical algorithm for computing approximate solution trajectories of the Yosida-regularization-based differential equation for nonlinear programming is presented, and an illustrative example is solved with it to verify the theory developed.
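The regularized flow u'(t) = -T_λ(u(t)), with T_λ = (I - (I + λT)⁻¹)/λ, can be sketched for the scalar operator T(z) = z (an illustrative assumption, not the paper's example); its unique zero is z* = 0 and the flow decays to it exponentially, consistent with the rate statement.

```python
def yosida_flow(u0=1.0, lam=0.5, h=0.01, steps=1000):
    """Forward Euler on  u'(t) = -T_lam(u(t))  for the scalar maximal
    monotone operator T(z) = z (an illustrative assumption), whose
    unique zero is z* = 0.  Here the resolvent is
    J_lam(u) = u / (1 + lam), so the Yosida regularization
    T_lam(u) = (u - J_lam(u)) / lam  simplifies to  u / (1 + lam)."""
    u = u0
    for _ in range(steps):
        u -= h * u / (1.0 + lam)
    return u
```

The exact trajectory is u_λ(t) = e^{-t/(1+λ)} u0, so the flow approaches the zero of T at an exponential rate; the Euler iterate after steps·h = 10 time units already sits close to that value.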
We extend Clarkson's randomized algorithm for linear programming to a general scheme for solving convex optimization problems. The scheme can be used to speed up existing algorithms on problems which have many more constraints than variables. In particular, we give a randomized algorithm for solving convex quadratic and linear programs, which uses that scheme together with a variant of Karmarkar's interior point method. For problems with n constraints, d variables, and input length L, if n = Ω(d²), the expected total number of major Karmarkar iterations is O(d²(log n)L), compared to the best known deterministic bound of O(√n L). We also present several other results which follow from the general scheme.
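Clarkson's sample-and-recurse idea can be sketched on a one-dimensional LP; the toy problem and the simplified violator handling below are illustrative assumptions (the full scheme reweights constraints rather than just accumulating violators).

```python
import random

def clarkson_1d(a, r=3, seed=0):
    """Clarkson-style sampling sketch for the 1-D LP
        min x   s.t.   x >= a_i   (solution: max a_i).
    Solve on a small random sample plus every constraint seen violated
    so far; stop when no constraint is violated.  (Illustrative toy,
    not the paper's reweighting scheme.)"""
    rng = random.Random(seed)
    forced = set()
    while True:
        sample = set(rng.sample(range(len(a)), min(r, len(a)))) | forced
        x = max(a[i] for i in sample)                    # solve subproblem
        violators = {i for i in range(len(a)) if a[i] > x}
        if not violators:
            return x
        forced |= violators                              # recurse on them
```

The point of the scheme is that each subproblem touches only a small subset of the constraints, which is where the speedup for n ≫ d comes from.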