In this paper, we consider the positive semi-definite space tensor cone constrained convex program, its structure and algorithms. We study defining functions, defining sequences and polyhedral outer approximations for this positive semidefinite space tensor cone, give an error bound for the polyhedral outer approximation approach, and thus establish convergence of three polyhedral outer approximation algorithms for solving this problem. We then study some other approaches for solving this structured convex program. These include the conic linear programming approach, the nonsmooth convex program approach and the bi-level program approach. Some numerical examples are presented.
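For reference (a generic sketch in our own notation, not necessarily the paper's definitions): a space tensor is an m-th order three-dimensional symmetric tensor, it is positive semidefinite when its homogeneous form is nonnegative, and a polyhedral outer approximation replaces the quantifier over all directions by a finite sample:

\[
K \;=\; \Bigl\{\mathcal{A}\ \text{symmetric, order } m \;:\; \mathcal{A}x^{m} := \sum_{i_1,\dots,i_m=1}^{3} a_{i_1\cdots i_m}\,x_{i_1}\cdots x_{i_m} \;\ge\; 0 \ \ \forall\, x\in\mathbb{R}^{3}\Bigr\},
\qquad
K \;\subseteq\; K_N := \bigl\{\mathcal{A} : \mathcal{A}\,(x^{(k)})^{m} \ge 0,\ k=1,\dots,N\bigr\}
\]

for any finite set of directions x^(1),...,x^(N); each constraint defining K_N is linear in the entries of \mathcal{A}, so K_N is a polyhedral cone containing K.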
Recent results showing PPAD-completeness of the problem of computing an equilibrium for Fisher's market model under additively separable, piecewise-linear, concave (PLC) utilities have dealt a serious blow to the program of obtaining efficient algorithms for computing equilibria in "traditional" market models, and have prompted a search for alternative models that are realistic as well as amenable to efficient computation. In this paper, we show that introducing perfect price discrimination into the Fisher model with PLC utilities renders its equilibrium polynomial time computable. Moreover, its set of equilibria is captured by a convex program that generalizes the classical Eisenberg-Gale program, and always admits a rational solution. We also give a combinatorial, polynomial time algorithm for computing an equilibrium. Next, we introduce production into our model, and again give a rational convex program that captures its equilibria. We use this program to obtain surprisingly simple proofs of both welfare theorems for this model. Finally, we also give an application of our price discrimination market model to online display advertising marketplaces.
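For context, the classical Eisenberg-Gale convex program that this formulation generalizes (stated here for the linear-utilities Fisher market, in generic notation) maximizes budget-weighted log utilities over feasible allocations:

\[
\max \;\sum_i m_i \log u_i
\quad\text{s.t.}\quad
u_i \le \sum_j u_{ij}\,x_{ij}\ \ \forall i,
\qquad
\sum_i x_{ij} \le 1\ \ \forall j,
\qquad
x \ge 0,
\]

where m_i is buyer i's budget, u_{ij} the utility buyer i derives per unit of good j, and x_{ij} the allocation; equilibrium prices arise as the optimal dual variables of the supply constraints.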
Dynamic stochastic programs are prototypical for optimization problems with an inherent tree structure inducing characteristic sparsity patterns in the KKT systems of interior methods. We propose an integrated modeling and solution approach for such tree-sparse programs. Three closely related natural formulations are theoretically analyzed from a control-theoretic perspective and compared to each other. Associated KKT system solution algorithms with linear complexity are developed and comparisons to other interior approaches and related problem formulations are discussed.
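As a minimal illustration of the sparsity pattern referred to above (a two-stage special case; the paper treats general multistage trees), a stochastic program

\[
\min_{x,\,y_1,\dots,y_S}\; f_0(x) + \sum_{s=1}^{S} p_s f_s(y_s)
\quad\text{s.t.}\quad
A_0 x = b_0,\qquad T_s x + W_s y_s = h_s,\ \ s=1,\dots,S,
\]

has the dual block-angular constraint Jacobian

\[
\begin{pmatrix}
A_0 &     &     &        &    \\
T_1 & W_1 &     &        &    \\
T_2 &     & W_2 &        &    \\
\vdots &  &     & \ddots &    \\
T_S &     &     &        & W_S
\end{pmatrix},
\]

and the KKT systems of an interior method inherit this pattern, so the scenario (leaf) blocks can be eliminated independently before the root block; nesting this recursion over a scenario tree yields factorizations whose cost is linear in the number of nodes.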
A bound is obtained in this note for the distance between the integer and real solutions to convex quadratic programs. This bound is a function of the condition number of the Hessian matrix. We further extend this proximity result to convex programs and mixed-integer convex programs. We also show that this bound is achievable in certain situations and that the distance between the integer and continuous minimizers may tend to infinity.
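A simple two-dimensional example (ours, not the paper's) illustrates why such a bound must involve the conditioning and why the distance can grow without bound: for an integer M >= 1, consider

\[
\min_{x\in\mathbb{R}^2}\; q(x) = (x_1 - M x_2)^2 + \bigl(x_2 - \tfrac12\bigr)^2,
\qquad
\nabla^2 q = \begin{pmatrix} 2 & -2M \\ -2M & 2M^2+2 \end{pmatrix}.
\]

The continuous minimizer is (M/2, 1/2), while the integer minimizers are (0,0) and (M,1), so the distance between them is \sqrt{M^2/4 + 1/4}; both this distance and the condition number of the Hessian tend to infinity as M grows.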
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
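The paper's analysis covers a family of convex relaxations; a minimal numerical sketch of one common instance, ℓ1-penalized least squares solved by proximal gradient (ISTA), is given below. All names and parameter values here are illustrative, not taken from the paper.

import numpy as np

def ista(A, b, lam, n_iter=1000):
    """Proximal-gradient (ISTA) iterations for the convex relaxation
    min_x 0.5*||A x - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the least-squares term
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding (prox of the scaled l1 term)
    return x

# Toy demo: a short linear combination of columns of A observed in additive noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[3, 40, 100]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, b, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])    # indices of the (approximately) recovered components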
This note derives bounds on the length of the primal-dual affine scaling directions associated with a linearly constrained convex program satisfying the following conditions: 1) the problem has a solution satisfying strict complementarity, and 2) the Hessian of the objective function satisfies a certain invariance property. We illustrate the usefulness of these bounds by establishing the superlinear convergence of the algorithm presented in Wright and Ralph [22] for solving the optimality conditions associated with a linearly constrained convex program satisfying the above conditions.
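In standard notation (which may differ from the note's), for a linearly constrained convex program min f(x) s.t. Ax = b, x >= 0 with dual variables (y, s), the primal-dual affine scaling direction is the Newton step for the KKT system with zero centering:

\[
\begin{aligned}
\nabla^2 f(x)\,\Delta x - A^{\top}\Delta y - \Delta s &= -\bigl(\nabla f(x) - A^{\top}y - s\bigr),\\
A\,\Delta x &= -(Ax - b),\\
S\,\Delta x + X\,\Delta s &= -XSe,
\end{aligned}
\]

where X = diag(x), S = diag(s) and e is the all-ones vector; the bounds in the note concern the length of (\Delta x, \Delta y, \Delta s).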
The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266-273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87-99] and by Bertsekas, Nedic, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a "weak sharp minimum" is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the ℓ1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of ℓ1 regularization in finding sparse solutions.
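In the notation suggested by the abstract (a sketch, not the paper's exact statement): for an original problem min{f(x) : x in C} with optimal value p* and a convex regularizer φ, the regularized problem and the associated selection problem are

\[
\min_{x\in C}\; f(x) + \delta\,\phi(x)
\qquad\text{and}\qquad
\min_{x\in C}\;\bigl\{\phi(x) \;:\; f(x)\le p^{*}\bigr\},
\]

and exactness for all sufficiently small δ > 0 corresponds to the selection problem admitting a Lagrange multiplier for the constraint f(x) ≤ p*, with the admissible threshold on δ inversely related to that multiplier.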
This paper gives several equivalent conditions which guarantee the existence of the weighted central paths for a given convex programming problem satisfying some mild conditions. When the objective and constraint functions of the problem are analytic, we also characterize the limiting behavior of these paths as they approach the set of optimal solutions. A duality relationship between a certain pair of logarithmic barrier problems is also discussed.
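One standard way to define these objects (generic notation, possibly differing from the paper's): for min f(x) s.t. g_i(x) <= 0, i = 1,...,m, fix positive weights w and consider the weighted logarithmic barrier problems

\[
x_w(\mu) \;\in\; \arg\min_x\; f(x) \;-\; \mu \sum_{i=1}^{m} w_i \log\bigl(-g_i(x)\bigr), \qquad \mu > 0;
\]

the weighted central path is the trajectory \mu \mapsto x_w(\mu), and the existence and limiting-behavior questions in the abstract concern this trajectory as \mu \downarrow 0.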
We consider convex constrained optimization problems, and we enhance the classical Fritz John optimality conditions to assert the existence of multipliers with special sensitivity properties. In particular, we prove the existence of Fritz John multipliers that are informative in the sense that they identify constraints whose relaxation, at rates proportional to the multipliers, strictly improves the primal optimal value. Moreover, we show that if the set of geometric multipliers is nonempty, then the minimum-norm vector of this set is informative and defines the optimal rate of cost improvement per unit constraint violation. Our assumptions are very general and, in particular, allow for the presence of a duality gap and the nonexistence of optimal solutions. In particular, for the case where there is a duality gap, we establish enhanced Fritz John conditions involving the dual optimal value and dual optimal solutions.
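For orientation, the classical (unenhanced) Fritz John conditions in the smooth case read as follows; the paper's enhanced version strengthens the multipliers with the sensitivity information described above. For min f(x) s.t. g_j(x) <= 0, j = 1,...,r, at a minimizer x* there exist scalars \mu_0, \mu_1, ..., \mu_r >= 0, not all zero, with

\[
\mu_0 \nabla f(x^{*}) + \sum_{j=1}^{r} \mu_j \nabla g_j(x^{*}) = 0,
\qquad
\mu_j\, g_j(x^{*}) = 0,\ \ j = 1,\dots,r.
\]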
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.
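A generic projection-type network of the kind described (a hedged sketch for simple bound constraints, where the projection is explicit; the paper's network handles general linear constraints and its dynamics may differ) can be simulated as follows.

import numpy as np

def project(x, l, u):
    # Projection onto the box [l, u]; general linear constraints would need a QP solve here.
    return np.clip(x, l, u)

def neural_net_flow(grad_f, x0, l, u, alpha=0.5, dt=0.05, n_steps=2000):
    # Explicit-Euler integration of the projected dynamical system
    #   dx/dt = P_Omega(x - alpha * grad_f(x)) - x,
    # whose equilibria are exactly the minimizers of a convex f over Omega = [l, u].
    x = x0.astype(float)
    for _ in range(n_steps):
        x = x + dt * (project(x - alpha * grad_f(x), l, u) - x)
    return x

# Toy convex objective f(x) = 0.5 * ||x - c||^2 with c outside the box.
c = np.array([2.0, -3.0])
grad_f = lambda x: x - c
x_star = neural_net_flow(grad_f, x0=np.zeros(2),
                         l=np.array([-1.0, -1.0]), u=np.array([1.0, 1.0]))
print(x_star)   # approaches the projection of c onto the box, here (1, -1)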