In this paper, we study the relaxed smoothing problems with general closed convex constraints. It is pointed out that such problems can be converted to a convex quadratic minimization problem for which there are good programs in software libraries.
Fletcher & Powell (1974, Math. Comput., 28, 1067-1087) proposed a numerically stable method for updating the LDLT factorization of a symmetric positive-definite matrix when a symmetric low-rank term is added to it. In Goldfarb & Scheinberg (2004, Math. Program., 99, 1-34) we proposed a product-form version of the method of Fletcher and Powell for use in interior point methods for linear programming and studied its numerical stability. In this paper we extend these results to convex quadratic programming where the Hessian matrix of the objective function is, or can be approximated by, a diagonal matrix plus a matrix of low rank. Our new results are based on showing that the elements of the unit lower triangular matrix in the LDLT factorizations that arise in this context are uniformly bounded as the duality gap is driven to zero. Practicable versions of our approach are described for structured quadratic programs that arise in support vector machines and portfolio optimization.
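For orientation, a minimal NumPy sketch of the classical rank-one LDLT update that this line of work builds on (the dense algorithm of Gill, Golub, Murray and Saunders; not the product-form variant studied in the paper) might look as follows, assuming a positive update that preserves definiteness:

    import numpy as np

    def ldl_rank1_update(L, d, sigma, z):
        # Update A = L diag(d) L^T to A + sigma*z*z^T (sigma > 0 assumed,
        # so positive definiteness is preserved). Returns updated copies.
        L, d, w = L.copy(), d.copy(), z.astype(float).copy()
        for j in range(len(d)):
            p = w[j]
            d_new = d[j] + sigma * p * p          # new j-th pivot
            beta = sigma * p / d_new
            sigma = sigma * d[j] / d_new          # deflated update weight
            d[j] = d_new
            w[j+1:] -= p * L[j+1:, j]             # eliminate component j of the update vector
            L[j+1:, j] += beta * w[j+1:]          # fold the correction into column j of L
        return L, d

In the interior-point setting of the paper such updates are repeated as the duality gap is driven to zero, which is why uniform bounds on the entries of L matter for numerical stability.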
Infeasible interior point methods have been very popular and effective. In this paper, we propose a predictor-corrector infeasible interior point algorithm for convex quadratic programming, and we prove its convergence and analyze its complexity. The algorithm has polynomial complexity, with an O(nL) iteration bound.
The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper, we propose some generalized CG (GCG) methods for solving the l1-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first identify a face of an orthant and then either perform an exact line search along the direction of the negative projected minimum-norm subgradient of the objective function or execute a CG subroutine that conducts a sequence of CG iterations until a CG iterate crosses the boundary of this face or an approximate minimizer of the objective function over this face or a subface is found. We determine which type of step should be taken by comparing the magnitude of some components of the minimum-norm subgradient of the objective function to that of its remaining components. Our analysis of the finite convergence of these methods makes use of an error bound result and some key properties of the aforementioned exact line search and the CG subroutine. We also show that the proposed methods are capable of finding an approximate solution of the problem by allowing some inexactness in the execution of the CG subroutine. The overall arithmetic operation cost of our GCG methods for finding an epsilon-optimal solution depends on epsilon as O(log(1/epsilon)), which is superior to the accelerated proximal gradient method (Beck and Teboulle [Beck A, Teboulle M (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1):183-202], Nesterov [Nesterov Yu (2013) Gradient methods for minimizing composite functions. Math. Program. 140(1):125-161]), whose cost depends on epsilon as O(1/sqrt(epsilon)). In addition, our GCG methods can be extended straightforwardly to solve box-constrained convex QP with finite convergence. Numerical results demonstrate that our methods are very favorable for solving ill-conditioned problems.
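As a rough illustration of the central quantity (a hypothetical helper, not the authors' code), the minimum-norm subgradient of f(x) = 0.5 x'Qx + c'x + lam*||x||_1, whose components the methods compare, can be computed as:

    import numpy as np

    def min_norm_subgradient(Q, c, lam, x):
        # Minimum-norm element of the subdifferential of
        # f(x) = 0.5*x'Qx + c'x + lam*||x||_1.
        g = Q @ x + c                              # gradient of the smooth part
        sub = np.empty_like(g)
        nz = x != 0
        sub[nz] = g[nz] + lam * np.sign(x[nz])     # f is differentiable at nonzero components
        z = ~nz
        # for zero components the subdifferential is g_i + lam*[-1, 1];
        # its smallest-magnitude element is the soft-thresholded gradient
        sub[z] = np.sign(g[z]) * np.maximum(np.abs(g[z]) - lam, 0.0)
        return sub

The point x is optimal exactly when this vector vanishes, which is why such methods can use it both to select a face of an orthant and to test optimality.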
Range-space methods for convex quadratic programming improve in efficiency as the number of constraints active at the solution decreases. In this paper we describe a range-space method based upon updating a weighted Gram-Schmidt factorization of the constraints in the active set. The updating methods described are applicable to both primal and dual quadratic programming algorithms that use an active-set strategy. The quadratic programming problems considered include simple bounds on all the variables as well as general linear constraints. A feature of the proposed method is that it is able to exploit the structure of simple bound constraints. This allows the method to retain efficiency when the number of general constraints active at the solution is small. Furthermore, the efficiency of the method improves as the number of active bound constraints increases.
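A generic sketch of one weighted modified Gram-Schmidt step of the kind such an update maintains (illustrative only; the paper's scheme also keeps a triangular factor and supports constraint deletions), assuming the weighting inner product <u, v> = u' Hinv v:

    import numpy as np

    def add_active_constraint(Q_cols, a, Hinv):
        # Append constraint normal `a` to a list of vectors kept
        # orthonormal in the Hinv-weighted inner product.
        v = a.astype(float).copy()
        for q in Q_cols:                           # modified Gram-Schmidt sweep
            v -= (q @ Hinv @ v) * q                # remove weighted projection onto q
        v /= np.sqrt(v @ Hinv @ v)                 # weighted normalization
        return Q_cols + [v]

Updating the factorization incrementally as constraints enter or leave the active set is what keeps the per-iteration cost low when few general constraints are active.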
This article proposes a class of infeasible interior point algorithms for convex quadratic programming and analyzes its complexity. It is shown that the algorithm has polynomial complexity; its best complexity bound is O(nL).
In this paper we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by the use of a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strict convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well-tuned and do not depend on the problem being solved. Furthermore, we study the behavior of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems. The numerical results demonstrate the benefits of using regularization in IPMs as well as the reliability of the method.
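A compact sketch of the kind of primal-dual regularized Newton step such a method takes (standard IPM sign conventions, not the paper's exact IP-PMM formulation), for min 0.5 x'Qx + c'x s.t. Ax = b, x >= 0 with PMM-style regularization weights rho, delta and proximal estimates xk, yk:

    import numpy as np

    def regularized_newton_step(Q, A, b, c, x, y, z, xk, yk, rho, delta, sigma):
        # One Newton step on the regularized, perturbed KKT conditions.
        mu = x @ z / len(x)
        r_d = Q @ x + c - A.T @ y - z + rho * (x - xk)   # dual residual (primal regularization)
        r_p = A @ x - b + delta * (y - yk)               # primal residual (dual regularization)
        r_c = x * z - sigma * mu                         # centrality residual
        # eliminate dz, then solve the regularized augmented system,
        # which is nonsingular thanks to the rho and delta terms
        H = Q + rho * np.eye(len(x)) + np.diag(z / x)
        K = np.block([[H, -A.T], [A, delta * np.eye(len(y))]])
        rhs = np.concatenate([-r_d - r_c / x, -r_p])
        step = np.linalg.solve(K, rhs)
        dx, dy = step[:len(x)], step[len(x):]
        dz = -(r_c + z * dx) / x
        return dx, dy, dz

Keeping the linear systems nonsingular even for rank-deficient constraint matrices is one practical benefit of regularization in IPMs.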
We present a primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach. This approach generates a sequence of problems, each of which is approximately solved by taking a single Newton step. It is shown that the method requires O(sqrt(n) L) iterations and O(n^3.5 L) arithmetic operations. By using modified Newton steps the number of arithmetic operations required by the algorithm can be reduced to O(n^3 L).
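For intuition, a hedged sketch of a single Newton step of such a logarithmic-barrier scheme for min 0.5 x'Qx + c'x - mu*sum(log x) subject to Ax = b, assuming the current iterate is strictly positive and feasible:

    import numpy as np

    def barrier_newton_step(Q, A, c, x, mu):
        # Equality-constrained Newton direction for the log-barrier subproblem
        # at barrier parameter mu, assuming x > 0 and Ax = b already hold.
        n, m = len(x), A.shape[0]
        grad = Q @ x + c - mu / x                       # gradient of the barrier objective
        hess = Q + mu * np.diag(1.0 / x**2)             # its Hessian
        kkt = np.block([[hess, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-grad, np.zeros(m)])
        dx = np.linalg.solve(kkt, rhs)[:n]              # Newton direction in x
        return dx

Decreasing mu between such steps traces the central path, which is what yields the iteration bound stated above.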
In this paper an exterior point polynomial time algorithm for convex quadratic programming problems is proposed. We convert a convex quadratic program into an unconstrained convex programming problem with a self-concordant objective function. We show that, using duality alone, the path-following method remains valid. A computational complexity analysis of the algorithm is given.
We propose IPRQP, an enhanced primal-dual interior-point relaxation method (IPRM), for solving convex quadratic programming. This method is based on a smoothing barrier augmented Lagrangian function for convex quadratic programming. IPRQP inherits the advantages of IPRM, including not requiring iterative points to be interior points, which makes IPRQP suitable for the warm-starting of combinatorial optimization problems. Compared to IPRM, the customized starting points allow the line search of IPRQP to contain only vector operations. In addition, IPRQP improves the updating scheme of the barrier parameter and provides a certificate of infeasibility. Some results on global convergence are presented. We implement the algorithm on convex quadratic programming problems from Maros-Meszaros and the benchmark problem sets NETLIB and Kennington, which contain feasible and infeasible linear programming problems. The numerical results show that our algorithm is reliable for feasible problems and efficient for detecting the infeasibility of infeasible problems.