Consider the problem of minimizing a strictly convex (possibly nondifferentiable and nonseparable) cost subject to linear constraints. We propose a dual coordinate ascent method for this problem that uses inexact line search and either essentially cyclic or Gauss-Southwell order of coordinate relaxation. We show, under very weak conditions, that this method generates a sequence of primal vectors converging to the optimal primal solution. Under an additional regularity assumption, and assuming that the effective domain of the cost function is polyhedral, we show that a related sequence of dual vectors converges in cost to the optimal cost. If the constraint set has an interior point in the effective domain of the cost function, then this sequence of dual vectors is bounded and each of its limit points is an optimal dual solution. When the cost function is strongly convex, we show that the order of coordinate relaxation can become progressively more chaotic. These results significantly improve upon those obtained previously.
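To make the dual coordinate ascent idea concrete, here is a minimal sketch (not the paper's method, which handles general nondifferentiable costs and inexact line search) for the special case of a positive definite quadratic cost, where cyclic coordinate relaxation on the dual reduces to Hildreth-style updates. All function and variable names are illustrative.

```python
import numpy as np

def dual_coordinate_ascent(Q, c, E, b, sweeps=500):
    """Cyclic dual coordinate ascent sketch for
       min (1/2) x^T Q x - c^T x  s.t.  E x >= b,
    with Q positive definite. One dual variable per constraint."""
    Qinv = np.linalg.inv(Q)
    m = E.shape[0]
    lam = np.zeros(m)                       # dual variables, lam >= 0
    for _ in range(sweeps):
        for i in range(m):                  # essentially cyclic relaxation order
            x = Qinv @ (c + E.T @ lam)      # primal vector induced by lam
            g = E[i] @ x - b[i]             # negative dual partial derivative
            step = g / (E[i] @ Qinv @ E[i])
            lam[i] = max(0.0, lam[i] - step)  # exact 1-D maximization + projection
    return Qinv @ (c + E.T @ lam), lam
```

For instance, projecting the origin onto the half-space x1 + x2 >= 2 (Q = I, c = 0) yields the primal solution (1, 1) with multiplier 1 on the single constraint.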
Consider problems of the form \[({\text{P}})\qquad \min \{ f(x) \mid Ex \geqq b\} ,\] where f is a strictly convex (possibly nondifferentiable) function and E and b are, respectively, a matrix and a vector. A popular method for solving special cases of (P) (e.g., network flow, entropy maximization, quadratic program) is to dualize the constraints $Ex \geqq b$ to obtain a differentiable maximization problem and then apply an iterative ascent method to solve it. This method is simple and can exploit sparsity, thus making it ideal for large-scale optimization and, in certain cases, for parallel computation. Despite its simplicity, however, convergence of this method has been shown only under certain very restrictive conditions and only for certain special cases of (P). In this paper a block coordinate ascent method is presented for solving (P) that contains as special cases both dual coordinate ascent methods and dual gradient methods. It is shown, under certain mild assumptions on f and (P), that this method converges. Also the line searches are allowed to be inexact and, when f is separable, can be done in parallel.
A parallel algorithm is proposed in this paper for solving the problem $\min \{ q(x)|x \in C_1 \cap \cdots \cap C_m \} $ where q is a uniformly convex function and the $C_i$ are closed convex sets in $R^n$. In each iteration of the method, we solve in parallel m independent subproblems, each minimizing a definite quadratic function over an individual set $C_i$. The method has attractive convergence properties and can be implemented as parallel algorithms for tackling definite quadratic programs, linear programs, systems of linear equations and systems of generalized nonlinear inequalities.
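The structure of the iteration can be illustrated with a simpler, Cimmino-style averaged-projection sketch for finding a point in $C_1 \cap \cdots \cap C_m$: each subproblem is a projection onto one $C_i$, the m projections are independent (hence parallelizable), and their results are averaged. This is only a sketch of the parallel-subproblem structure, not the paper's algorithm, which additionally handles the uniformly convex objective q; the names below are illustrative.

```python
import numpy as np

def parallel_projection(x0, projections, iters=2000):
    """Averaged-projection sketch: find a point in the intersection
    of convex sets, each given by its projection map. In each round
    the m projections are independent and could run in parallel."""
    x = np.array(x0, dtype=float)
    m = len(projections)
    for _ in range(iters):
        x = sum(P(x) for P in projections) / m   # average the m subproblem solutions
    return x
```

For two half-spaces $\{x_1 \geqq 1\}$ and $\{x_2 \geqq 1\}$ in $R^2$, starting from the origin, the iterates converge to the corner point (1, 1) of the intersection.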
Mathematical models are considered as input-output systems. The input is data (technological coefficients, available energy, prices) and the output is the feasible set, the set of optimal solutions, and the optimal value. We study when the output is a continuous function of the input and identify optimal (minimal) realizations of mathematical models. These are states of the model having the property that every stable perturbation of the input results in a locally worse (higher) value of the optimal value function. In input optimization we "optimize" the mathematical model rather than a specific mathematical program.
This paper presents a new class of outer approximation methods for solving general convex programs. The methods solve at each iteration a subproblem whose constraints contain the feasible set of the original problem. Moreover, the methods employ quadratic objective functions in the subproblems by adding a simple quadratic term to the objective function of the original problem, while other outer approximation methods usually use the original objective function itself throughout the iterations. By this modification, convergence of the methods can be proved under mild conditions. Furthermore, it is shown that generalized versions of the cut construction schemes in Kelley-Cheney-Goldstein's cutting plane method and Veinott's supporting hyperplane method can be incorporated with the present methods and a cut generated at each iteration need not be retained in the succeeding iterations.
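The flavor of combining cutting planes with a quadratic term can be sketched in one dimension. The code below is loosely in the spirit of regularized outer approximation, not the paper's scheme (which outer-approximates the feasible set of a general convex program): it builds tangent cuts on a convex f, adds a simple quadratic term centered at a fixed point, and minimizes the resulting convex piecewise subproblem exactly by enumerating cut stationary points and intersections. All names are illustrative.

```python
def proximal_cutting_plane(f, fprime, x0, lb, ub, sigma=1.0, iters=50):
    """1-D sketch: minimize  max_j cut_j(x) + (sigma/2)(x - x0)^2  over [lb, ub],
    where cut_j(x) = f(x_j) + f'(x_j)(x - x_j) underestimates the convex f."""
    cuts = []                                   # list of (slope, intercept) pairs
    x = x0
    for _ in range(iters):
        a = fprime(x)
        cuts.append((a, f(x) - a * x))          # tangent cut at the current iterate
        # candidate minimizers of the convex 1-D subproblem:
        cands = {lb, ub}
        for (aj, bj) in cuts:                   # stationary point when cut j is active
            cands.add(min(max(x0 - aj / sigma, lb), ub))
        for (aj, bj) in cuts:                   # kinks where two cuts intersect
            for (ak, bk) in cuts:
                if aj != ak:
                    cands.add(min(max((bk - bj) / (aj - ak), lb), ub))
        phi = lambda z: (max(aj * z + bj for (aj, bj) in cuts)
                         + 0.5 * sigma * (z - x0) ** 2)
        x = min(cands, key=phi)                 # exact subproblem solution
    return x
```

With f(x) = x^2, x0 = 3 and sigma = 2, the regularized problem x^2 + (x - 3)^2 has minimizer 1.5, which the sketch reaches after a few cuts.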
A piecewise convex program is a convex program such that the constraint set can be decomposed into a finite number of closed convex sets, called the cells of the decomposition, and such that on each of these cells the objective function can be described by a continuously differentiable convex function. In a first part, a cutting hyperplane method is proposed, which successively considers the various cells of the decomposition, checks whether the cell contains an optimal solution to the problem, and, if not, imposes a convexity cut which rejects the whole cell from the feasibility region. This elimination, which is basically a dual decomposition method but with an efficient use of the specific structure of the problem, is shown to be finitely convergent. The second part of this paper is devoted to the study of some special cases of piecewise convex programs, and in particular the piecewise quadratic program having a polyhedral constraint set. Such a program arises naturally in stochastic quadratic programming with recourse, which is the subject of the last section.
When all the functions that define a convex program are positively homogeneous, then a dual convex program can be constructed which is defined in terms of the primal data only (the primal variables do not appear). Furthermore, the dualizing process, carried out on the dual program, yields the primal. Several well-known examples of convex programs with explicit duals are shown to be special cases.
The solution of nonconvex nonlinear programs with sums of r-convex functions is considered. An algorithm consisting of a sequence of approximating convex programs which converges to a Kuhn-Tucker point is described.
This paper is concerned with a pair of naturally symmetric problems related by duality. Self-duality has been investigated for the class of non-differentiable convex programs.