We present a primal-dual row-action method for the minimization of a convex function subject to general convex constraints. Constraints are used one at a time, no changes are made in the constraint functions and their Jacobian matrix (thus the row-action nature of the algorithm), and at each iteration a subproblem is solved, consisting of minimization of the objective function subject to one or two linear equations. The algorithm generates two sequences: one of them, called primal, converges to the solution of the problem; the other one, called dual, approximates a vector of optimal KKT multipliers for the problem. We prove convergence of the primal sequence for general convex constraints. In the case of linear constraints, we prove that the primal sequence converges at least linearly and obtain as a consequence the convergence of the dual sequence.
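The row-action idea — using one constraint row at a time, never touching the full Jacobian — is seen in its simplest form in the Kaczmarz method for linear systems. The sketch below is illustrative only (a special case with a linear system, not the paper's primal-dual algorithm); the matrix and right-hand side are made up:

```python
import numpy as np

# Illustrative system; rows are used one at a time, as in row-action schemes.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
for sweep in range(200):
    for i in range(len(b)):              # one constraint (row) per inner step
        a_i = A[i]
        # Project the current iterate onto the hyperplane a_i . x = b_i.
        x = x + (b[i] - a_i @ x) / (a_i @ a_i) * a_i

# x approaches the solution (1, 1) of A x = b.
```

Each inner step touches exactly one row of A, which is what makes such schemes attractive for large, sparse, or streamed constraint sets.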
The paper presents a logarithmic barrier cutting plane algorithm for convex (possibly non-smooth, semi-infinite) programming. Most cutting plane methods, like those of Kelley, and Cheney and Goldstein, solve a linear approximation (localization) of the problem and then generate an additional cut to remove the linear program's optimal point. Other methods, like the "central cutting plane" methods of Elzinga-Moore and Goffin-Vial, calculate a center of the linear approximation and then adjust the level of the objective, or separate the current center from the feasible set. In contrast to these existing techniques, we develop a method which does not solve the linear relaxations to optimality, but rather stays in the interior of the feasible set. The iterates follow the central path of a linear relaxation, until the current iterate either leaves the feasible set or is too close to the boundary. When this occurs, a new cut is generated and the algorithm iterates. We use the tools developed by den Hertog, Roos and Terlaky to analyze the effect of adding and deleting constraints in long-step logarithmic barrier methods for linear programming. Finally, implementation issues and computational results are presented. The test problems come from the class of numerically difficult convex geometric and semi-infinite programming problems.
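The notion of "following the central path" can be illustrated on a one-dimensional toy problem. The sketch below is not the paper's algorithm — it is a minimal log-barrier example (minimize x subject to x >= 1, with made-up starting point and schedule) showing how minimizers of the barrier function trace a path into the constrained optimum as the barrier parameter mu shrinks:

```python
def barrier_min(mu, x0=2.0, iters=60):
    """Newton's method on phi(x) = x - mu*log(x - 1), the log-barrier
    function for the toy problem: minimize x subject to x >= 1."""
    x = x0
    for _ in range(iters):
        g = 1.0 - mu / (x - 1.0)         # phi'(x)
        h = mu / (x - 1.0) ** 2          # phi''(x)
        x_new = x - g / h
        # Safeguard: stay strictly interior to the feasible region.
        x = x_new if x_new > 1.0 else (x + 1.0) / 2.0
    return x

# Central-path points x(mu) = 1 + mu approach the solution x* = 1 as mu -> 0.
path = [barrier_min(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

The safeguard step mirrors the concern in the abstract: an iterate that leaves the feasible set (or gets too close to the boundary) must be handled before continuing.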
The performance of linear dynamic systems with respect to a non-linear increasing value function that aggregates convex functionals is investigated within a convex programming framework. The methodology developed in this paper combines fundamental properties of convex sets in order to decompose the multicriterion control problem into a two-level structure. The upper level of this structure comprises only decision-making aspects of the problem, and involves the solution of a relaxed and static multicriterion problem. The lower level of the structure comprises, in turn, only optimal control aspects of the problem. The overall procedure is easily implemented. The solution of the aggregated problem is obtained as the limit of a sequence of optimal control problems that preserve their original solution properties. Relationships between the approach adopted here and existing solution techniques are discussed. The solutions of the classical minimax and Salukvadze problems are obtained as special cases of the proposed approach.
Distributed Memory Multicomputers (DMMs), such as the IBM SP-2, the Intel Paragon, and the Thinking Machines CM-5, offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, the utilization of all the available computational power in these machines involves a tremendous programming effort on the part of users, which creates a need for sophisticated compiler and run-time support for distributed memory machines. In this paper, we explore a new compiler optimization for regular scientific applications: the simultaneous exploitation of task and data parallelism. Our optimization is implemented as part of the PARADIGM HPF compiler framework we have developed. The intuitive idea behind the optimization is the use of task parallelism to control the degree of data parallelism of individual tasks. The reason this provides increased performance is that data parallelism provides diminishing returns as the number of processors used is increased. By controlling the number of processors used for each data parallel task in an application and by concurrently executing these tasks, we make program execution more efficient and, therefore, faster. A practical implementation of a task and data parallel scheme of execution for an application on a distributed memory multicomputer also involves data redistribution. This data redistribution causes an overhead. However, as our experimental results show, this overhead is not a problem; execution of a program using task and data parallelism together can be significantly faster than its execution using data parallelism alone. This makes our proposed optimization practical and extremely useful.
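The diminishing-returns argument can be made concrete with a toy cost model. The model and all numbers below are made up for illustration (they are not from the paper's experiments); the point is only that when per-task speedup saturates, running two tasks concurrently on half the processors each can beat running them one after the other on all processors:

```python
import math

def task_time(work, procs, comm=4.0):
    # Illustrative cost model: perfectly divided work plus a
    # communication/synchronization term that grows with processor count.
    return work / procs + comm * math.log2(procs)

P, WORK = 16, 64.0

# Pure data parallelism: run the two tasks one after the other on all P.
pure_data = 2 * task_time(WORK, P)

# Task + data parallelism: run both tasks concurrently on P/2 each.
task_plus_data = task_time(WORK, P // 2)

# With these (made-up) numbers the mixed scheme wins: 20.0 vs 40.0.
```

Any real compiler, of course, must also account for the data-redistribution overhead the abstract mentions, which this model omits.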
In this paper we study nonlinear semidefinite programming problems. Convexity, duality and first-order optimality conditions for such problems are presented. A second-order analysis is also given, and second-order necessary and sufficient optimality conditions are derived. Finally, sensitivity analysis of such programs is discussed. (C) 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
In this paper we discuss the main concepts of structural optimization, a field of nonlinear programming, which was formed by the intensive development of modern interior-point schemes. (C) 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
We prove that the existence of a polynomial time rho-approximation algorithm (where rho < 1 is a fixed constant) for a class of independent set problems leads to a polynomial time approximation algorithm with approximation ratio strictly smaller than 2 for vertex covering, while the non-existence of such an algorithm induces a lower bound on the ratio of every polynomial time approximation algorithm for vertex covering. We also prove a similar result for a (maximization) convex programming problem including quadratic programming as a subproblem.
Authors: M.K.H. Fan (Associate Professor) and Y. Gong (Graduate Student), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia
A semidefinite programming problem is a mathematical program in which the objective function is linear in the unknowns and the constraint set is defined by a linear matrix inequality. This problem is nonlinear, nondifferentiable, but convex. It covers several standard problems (such as linear and quadratic programming) and has many applications in engineering. Typically, the optimal eigenvalue multiplicity associated with a linear matrix inequality is larger than one. Algorithms based on prior knowledge of the optimal eigenvalue multiplicity for solving the underlying problem have been shown to be efficient. In this paper, we propose a scheme to estimate the optimal eigenvalue multiplicity from points close to the solution. With some mild assumptions, it is shown that there exists an open neighborhood around the minimizer so that our scheme applied to any point in the neighborhood will always give the correct optimal eigenvalue multiplicity. We then show how to incorporate this result into a generalization of an existing local method for solving the semidefinite programming problem. Finally, a numerical example is included to illustrate the results.
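A minimal sketch of the idea of estimating eigenvalue multiplicity from nearby points: cluster the eigenvalues that fall within a tolerance of the largest one. This simple counting heuristic is an illustration, not the paper's exact scheme, and the tolerance and test matrix below are made up:

```python
import numpy as np

def estimate_multiplicity(A, tol=1e-6):
    """Estimate the multiplicity of the largest eigenvalue of a symmetric
    matrix A by counting eigenvalues within tol of the maximum (a simple
    clustering heuristic, not the paper's exact scheme)."""
    w = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
    return int(np.sum(w > w[-1] - tol))

# A point "close to the solution": the top eigenvalue 2 has multiplicity 3,
# perturbed at the 1e-9 level.
A = np.diag([1.0, 2.0, 2.0 - 1e-9, 2.0 + 1e-9])
m = estimate_multiplicity(A)   # -> 3
```

The abstract's neighborhood result says precisely that, sufficiently close to the minimizer, such a cluster is well separated from the remaining eigenvalues, so a tolerance-based count gives the correct answer.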
We compute constrained equilibria satisfying an optimality condition. Important examples include convex programming, saddle problems, noncooperative games, and variational inequalities. Under a monotonicity hypothesis we show that equilibrium solutions can be found via iterative convex minimization. In the main algorithm each stage of computation requires two proximal steps, possibly using Bregman functions. One step serves to predict the next point; the other helps to correct the new prediction. To enhance practical applicability we tolerate numerical errors. (C) 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
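The two-step predict/correct pattern can be illustrated by its best-known explicit, Euclidean analogue: the extragradient method for a monotone operator. The sketch below (operator, step size, and iteration count are all illustrative choices, not the paper's algorithm) solves the saddle problem min_x max_y x*y, where a plain one-step gradient iteration would spiral outward:

```python
import numpy as np

def F(z):
    # Monotone operator of the saddle problem min_x max_y x*y:
    # F(x, y) = (dL/dx, -dL/dy) = (y, -x).
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
tau = 0.1
for _ in range(4000):
    z_pred = z - tau * F(z)        # prediction step
    z = z - tau * F(z_pred)        # correction step

# z approaches the unique equilibrium (0, 0).
```

The prediction evaluates the operator at the current point; the correction re-evaluates it at the predicted point, which is what stabilizes the rotation that defeats the one-step scheme.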
We consider methods for minimizing a convex function f that generate a sequence {x_k} by taking x_{k+1} to be an approximate minimizer of f(x) + D_h(x, x_k)/c_k, where c_k > 0 and D_h is the D-function of a Bregman function h. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs.
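For one classical choice of h — the negative entropy, whose D-function is the Kullback-Leibler divergence — the Bregman-proximal subproblem has a closed form when f is linear and x is restricted to the probability simplex: the update is x_{k+1} proportional to x_k * exp(-c_k * g), an entropic mirror-descent step. The sketch below is a minimal illustration of that special case (the cost vector g and step size c are made up):

```python
import numpy as np

# Minimize the linear function f(x) = g . x over the probability simplex
# via exact Bregman-proximal steps with h(x) = sum_i x_i log x_i.
g = np.array([3.0, 1.0, 2.0])
x = np.full(3, 1.0 / 3.0)      # start at the simplex center
c = 1.0                        # constant step size c_k = c
for _ in range(30):
    x = x * np.exp(-c * g)     # closed-form subproblem solution...
    x /= x.sum()               # ...renormalized onto the simplex

# Mass concentrates on the smallest component of g, here index 1.
```

The multiplicative form of the update shows why entropic D-functions are natural for problems with nonnegativity constraints: iterates stay strictly positive without any projection.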