We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^5.5 L^1.5). The average-case complexity appears to be closer to O(m^4 L^1.5). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
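To make the truncated inner-solve idea concrete, here is a minimal sketch (not the authors' implementation; the system, tolerance, and iteration cap are illustrative assumptions) of a conjugate-gradient solve that is deliberately stopped before convergence, in the spirit of the inexact least-squares steps the abstract describes.

```python
# Sketch: conjugate gradients with early termination on a generic SPD system.
import numpy as np

def truncated_cg(A, b, tol=1e-6, max_iter=20):
    """Approximately solve A x = b (A symmetric positive definite),
    stopping after max_iter steps or once the residual is small."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break            # early termination: an inexact step suffices
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example on a small, well-conditioned SPD system
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = truncated_cg(A, b, max_iter=10)
print(np.linalg.norm(A @ x - b))   # residual after the truncated solve
```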
We study an upper bound on the max-cut problem defined via a relaxation of the discrete problem to a continuous nonlinear convex problem, which can be solved efficiently. We demonstrate how far the approach can be pushed using advanced techniques from numerical linear algebra and nonsmooth optimization. Various classes of graphs with up to 50 000 nodes and up to four million edges are considered. Since the theoretical bound can be computed only with a certain precision in practice, we use duality between node- and edge-oriented relaxations to estimate the difference between the theoretical and the computed bounds.
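For context, a minimal sketch of a bound in this general family: the classical spectral relaxation mc(G) <= (n/4) * lambda_max(L), where L is the graph Laplacian. The relaxation studied in the paper is a strengthened, dual-optimized variant, so this is illustrative only; the function and variable names are assumptions.

```python
# Sketch: the n/4 * lambda_max(L) spectral upper bound on max-cut.
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def maxcut_spectral_bound(n, edges):
    """Upper bound mc(G) <= (n/4) * lambda_max(L), L the graph Laplacian."""
    rows, cols = zip(*edges)
    A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
    A = A + A.T                                  # symmetric adjacency matrix
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = diags(deg) - A                           # sparse graph Laplacian
    lam_max = eigsh(L, k=1, which='LA', return_eigenvectors=False)[0]
    return n * lam_max / 4.0

# Example: a 5-cycle, whose true max-cut value is 4 (bound gives ~4.52)
print(maxcut_spectral_bound(5, [(i, (i + 1) % 5) for i in range(5)]))
```

Using a sparse matrix plus an iterative extreme-eigenvalue routine is what keeps such bounds computable on graphs with tens of thousands of nodes and millions of edges.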
HDSDP is numerical software for solving semidefinite programming problems. The main framework of HDSDP resembles the dual-scaling interior-point solver DSDP [Benson and Ye, 2008], and several new features, including a dual method based on the simplified homogeneous self-dual embedding, have been implemented. The embedding technique enhances the stability of the dual method, and several new heuristics and computational techniques are designed to accelerate its convergence. HDSDP aims to show how the dual-scaling algorithm benefits from the self-dual embedding, and it is developed in parallel to DSDP5.8. Numerical experiments on several classical benchmark datasets exhibit its robustness and efficiency, particularly its advantages on SDP instances featuring low-rank structure and sparsity. HDSDP is open-sourced under an MIT license and available at https://***/Gwzwpxz/HDSDP.
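To make the dual-scaling idea concrete, the following is a minimal sketch (dense linear algebra and all names are illustrative assumptions, not HDSDP's actual code) of one Newton step on the dual log-det barrier problem max b^T y + mu * log det(C - sum_i y_i A_i) that solvers in the DSDP/HDSDP family revolve around.

```python
# Sketch: one dual-scaling Newton step for a standard-form SDP dual.
import numpy as np

def dual_scaling_step(C, A_list, b, y, mu):
    """Newton direction for the dual barrier objective, assuming the
    current y is strictly dual feasible (S = C - sum_i y_i A_i > 0)."""
    m = len(A_list)
    S = C - sum(yi * Ai for yi, Ai in zip(y, A_list))
    S_inv = np.linalg.inv(S)
    # Gradient: g_i = b_i - mu * <A_i, S^{-1}>
    g = np.array([b[i] - mu * np.trace(A_list[i] @ S_inv) for i in range(m)])
    # Schur complement matrix: M_ij = mu * <S^{-1} A_i S^{-1}, A_j>
    M = np.empty((m, m))
    for i in range(m):
        T = S_inv @ A_list[i] @ S_inv
        for j in range(m):
            M[i, j] = mu * np.trace(T @ A_list[j])
    return np.linalg.solve(M, g)   # Newton direction dy

# Tiny example: one constraint, 2x2 matrices
C = np.eye(2)
A_list = [np.eye(2)]
b = np.array([1.0])
print(dual_scaling_step(C, A_list, b, y=np.array([0.5]), mu=1.0))
```

Forming and factoring the Schur complement matrix M dominates the per-iteration cost, which is why the low-rank and sparsity structure mentioned in the abstract pays off.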