SDP (SemiDefinite Programming) is one of the most attractive optimization models. It has many applications in fields such as control theory, combinatorial and robust optimization, and quantum chemistry. The SDPA (SemiDefinite Programming Algorithm) is a software package for solving general SDPs based on primal-dual interior-point methods with the HRVW/KSH/M search direction. It is written in C++ and uses LAPACK for dense numerical linear algebra. The purpose of this paper is to present a brief description of the latest version of the SDPA and to demonstrate its high performance on large-scale problems through numerical experiments and comparisons with other major software packages for general SDPs.
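The "general SDP" model that SDPA targets can be written in standard form as: maximize tr(C X) subject to tr(A_i X) = b_i and X positive semidefinite. A minimal numpy sketch of checking a candidate solution against this model (the matrices C, A1, b1 and the candidate X below are illustrative placeholders, not data from the paper):

```python
import numpy as np

# Standard-form SDP data: maximize tr(C X) s.t. tr(A_i X) = b_i, X PSD.
# Illustrative placeholders, not from the SDPA paper.
C = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.eye(2)          # single constraint: tr(X) = 1
b1 = 1.0

# Candidate solution: all weight on the eigenvector of C's largest eigenvalue.
X = np.array([[0.0, 0.0], [0.0, 1.0]])

feasible = abs(np.trace(A1 @ X) - b1) < 1e-9     # linear constraint holds
psd = np.all(np.linalg.eigvalsh(X) >= -1e-9)     # X is positive semidefinite
objective = np.trace(C @ X)
print(feasible, psd, objective)  # True True 2.0
```

Under the trace-one constraint, the optimal objective equals the largest eigenvalue of C, which is what this candidate attains.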
The problem of multiple sequence alignment is recast as an optimization problem using Markov decision theory. One seeks to minimize the expected or average cost of alignment subject to data-derived constraints. In this setting, the problem is equivalent to a linear program that can be solved efficiently using modern interior-point methods. We present numerical results from an implementation of the algorithm for protein sequence alignment. (C) 2003 Elsevier Science Ltd. All rights reserved.
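The shape of such a problem can be sketched with a toy linear program in the spirit of "minimize expected cost subject to linear constraints"; the cost vector and constraint below are illustrative, not the paper's alignment model. SciPy's `linprog` with the HiGHS backend includes an interior-point variant (`method="highs-ipm"`):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize c.x subject to A_eq x = b_eq, x >= 0.
# Illustrative data, not the paper's alignment formulation.
c = np.array([2.0, 3.0, 1.0])        # per-action expected costs
A_eq = np.array([[1.0, 1.0, 1.0]])   # e.g. action probabilities sum to 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)  # all mass on the cheapest action: [0. 0. 1.], cost 1.0
```

The LP optimum puts all probability on the minimum-cost action, as expected for an unconstrained-beyond-normalization decision problem.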
Condition numbers based on the "distance to ill-posedness" rho(d) have been shown to play a crucial role in the theoretical complexity of solving convex optimization models. In this paper, we present two algorithms and the corresponding complexity analysis for computing estimates of rho(d) for a finite-dimensional convex feasibility problem P(d) in standard primal form: find x satisfying Ax = b, x ∈ C_X, where d = (A, b) is the data for the problem P(d). Under one choice of norms for the m- and n-dimensional spaces, the problem of estimating rho(d) is hard (co-NP-complete even when C_X = R^n_+). However, when the norms are suitably chosen, the problem becomes much easier: we can estimate rho(d) to within a constant factor of its true value with complexity bounds that are linear in ln(C(d)) (where C(d) is the condition number of the data d for P(d)), plus other quantities that arise naturally in consideration of the problem P(d). The first algorithm is an interior-point algorithm, and the second is a variant of the ellipsoid algorithm. The main conclusion of this work is that, when the norms are suitably chosen, computing an estimate of the condition measures of P(d) is essentially not much harder than computing a solution of P(d) itself.
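For intuition only: in the special case of a square linear system Ax = b with no cone constraint, the distance (in spectral norm) from A to the nearest singular, i.e. ill-posed, matrix is its smallest singular value (Eckart–Young), and the classical condition number is sigma_max / sigma_min. This is merely an analogy to rho(d) and C(d) for the general conic problem P(d), not the paper's algorithm:

```python
import numpy as np

# For a square nonsingular A, the spectral-norm distance to the nearest
# singular matrix is sigma_min(A) (Eckart-Young), and the classical
# condition number is sigma_max / sigma_min. Analogy only; rho(d) for
# the conic problem P(d) also depends on b and the cone C_X.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
dist_to_singular = s[-1]                 # 1.0
cond = s[0] / s[-1]                      # 3.0
print(dist_to_singular, cond)
```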
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the conic format (CPd): min{c^T x : Ax − b ∈ C_Y, x ∈ C_X}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory, the (Renegar) condition measure C(d) for (CPd) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CPd), from sizes of optimal solutions to the complexity of algorithms for solving (CPd). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern interior-point method (IPM) solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the postprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the postprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the postprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the postprocessed problem instances.
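The "42% of the variation is accounted for" statement is the R² of a simple linear regression of IPM iteration counts on log C(d). A sketch of that computation on synthetic, hypothetical data (not the NETLIB results):

```python
import numpy as np

# Hypothetical illustration of the paper's regression: IPM iteration
# counts against log C(d). Synthetic data, not the NETLIB results.
log_cond = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
iters = np.array([12.0, 15.0, 19.0, 22.0, 27.0])  # roughly linear

slope, intercept = np.polyfit(log_cond, iters, 1)  # least-squares line
pred = slope * log_cond + intercept
ss_res = np.sum((iters - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((iters - iters.mean()) ** 2)       # total sum of squares
r_squared = 1.0 - ss_res / ss_tot                  # fraction of variation explained
print(round(slope, 3), round(r_squared, 3))
```

An R² of 0.42, as reported for the NETLIB instances, would correspond to a noisier scatter than this toy example, but the computation is the same.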
We introduce a novel optimization method based on semidefinite programming relaxations to the field of computer vision and apply it to the combinatorial problem of minimizing quadratic functionals in binary decision variables subject to linear constraints. The approach is (tuning) parameter-free and computes high-quality combinatorial solutions using interior-point methods (convex programming) and a randomized hyperplane technique. Apart from a symmetry condition, no assumptions (such as metric pairwise interactions) are made with respect to the objective criterion. As a consequence, the approach can be applied to a wide range of problems. Applications to unsupervised partitioning, figure-ground discrimination, and binary restoration are presented along with extensive ground-truth experiments. From the viewpoint of relaxation of the underlying combinatorial problem, we show the superiority of our approach to relaxations based on spectral graph theory and prove performance bounds.
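The randomized hyperplane step (in the style of Goemans–Williamson rounding) turns the SDP solution X, a Gram matrix of unit vectors, back into ±1 decisions: factor X = V Vᵀ, draw a random normal vector r, and take the sign of each row's projection. A minimal sketch with a hand-made rank-1 X (illustrative, not the paper's vision objectives):

```python
import numpy as np

# Randomized-hyperplane rounding of an SDP-relaxation solution X over
# {-1,+1} variables. X is PSD with unit diagonal; here a rank-1 example.
rng = np.random.default_rng(0)

v = np.array([1.0, 1.0, -1.0, -1.0])
X = np.outer(v, v)                       # Gram matrix of the relaxed solution

# Factor X = V V^T via eigendecomposition (clip tiny negative eigenvalues).
w, U = np.linalg.eigh(X)
V = U * np.sqrt(np.clip(w, 0.0, None))

# Round: sign of each row's projection onto a random hyperplane normal.
r = rng.standard_normal(X.shape[0])
x = np.sign(V @ r)
x[x == 0] = 1.0
print(x)  # equals v up to a global sign flip
```

Because quadratic objectives in ±1 variables are invariant under a global sign flip, recovering v or −v is equally good.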
We consider primal-dual algorithms for certain types of infinite-dimensional optimization problems. Our approach is based on generalizing the technique of finite-dimensional Euclidean Jordan algebras to the case of infinite-dimensional JB-algebras of finite rank. This generalization enables us to develop polynomial-time primal-dual algorithms for "infinite-dimensional second-order cone programs." As an example, we consider a long-step primal-dual algorithm based on the Nesterov-Todd direction. It is shown that this algorithm, along with its complexity estimates, can be generalized to the infinite-dimensional situation under consideration. An application is given to an important problem of control theory: multi-criteria analytic design of the linear regulator. Calculating the Nesterov-Todd direction in this case requires solving one matrix differential Riccati equation plus a finite-dimensional system of linear algebraic equations at each iteration. The number of equations and unknowns in this algebraic system is m+1, where m is the number of quadratic performance criteria.
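As a loosely related, concrete illustration of the Riccati-equation ingredient: the steady-state (algebraic) counterpart of the matrix differential Riccati equation appears in linear-quadratic regulator design, AᵀP + PA − PBR⁻¹BᵀP + Q = 0, and SciPy can solve it directly. The system matrices below are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Algebraic Riccati equation A^T P + P A - P B R^-1 B^T P + Q = 0,
# the steady state of the differential Riccati equation used in LQR
# design. Illustrative system (a double integrator), not the paper's.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
print(np.max(np.abs(residual)) < 1e-8)   # P satisfies the ARE
```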
ISBN (digital): 9781400825134
ISBN (print): 9780691091921
Research on interior-point methods (IPMs) has dominated the field of mathematical programming for the last two decades. Two contrasting approaches in the analysis and implementation of IPMs are the so-called small-update and large-update methods, although, until now, there has been a notorious gap between the theory and practical performance of these two strategies. This book comes close to bridging that gap, presenting a new framework for the theory of primal-dual IPMs based on the notion of the self-regularity of a function. The authors deal with linear optimization, nonlinear complementarity problems, semidefinite optimization, and second-order conic optimization problems. The framework also covers large classes of linear complementarity problems and convex optimization. The algorithm considered can be interpreted as a path-following method or a potential reduction method. Starting from a primal-dual strictly feasible point, the algorithm chooses a search direction defined by a Newton-type system derived from the self-regular proximity. The iterate is then updated, with the iterates staying in a certain neighborhood of the central path until an approximate solution to the problem is found. By extensively exploring some intriguing properties of self-regular functions, the authors establish that the complexity of large-update IPMs can come arbitrarily close to the best known iteration bounds of IPMs. Researchers and postgraduate students in all areas of linear and nonlinear optimization will find this book an important and invaluable aid to their work.
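The proximity measures in such primal-dual IPMs are built from a kernel function psi with psi(1) = 0 and a minimum at t = 1, so that the measure penalizes deviation of the scaled iterate from the central path. A sketch of the classical log-barrier kernel, the simplest such function; the book's self-regular kernels generalize it with growth parameters p and q:

```python
import math

# Classical log-barrier kernel: psi(1) = 0, minimized at t = 1, and
# growing on both sides (t -> 0 and t -> infinity). The self-regular
# kernels of Peng, Roos, and Terlaky generalize this prototype.
def psi(t: float) -> float:
    return (t * t - 1.0) / 2.0 - math.log(t)

print(psi(1.0))                    # 0.0 exactly on the central path
print(psi(0.5) > 0, psi(2.0) > 0)  # positive away from it, on both sides
```

Summing psi over the coordinates of the scaled iterate gives a proximity value that is zero exactly on the central path, which is what the neighborhood condition in the algorithm controls.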
We have developed a new algorithm based on the mathematical theory of linear programming (LP) and implemented it in our program RAPTOR. Our new approach provides an elegant formulation of the protein-threading problem, overcomes, in practice, the intractability of protein threading, and allows us to use existing powerful linear programming software to obtain optimal protein threading solutions. CASP5 and CAFASP3 gave us the first chance to test RAPTOR in an unbiased way. RAPTOR was ranked as the top individual (automatic) server for fold recognition by the CAFASP3 organizers. In this short article, we describe RAPTOR's LP formulation, assess RAPTOR's performance in CAFASP3/CASP5, explain why it has superseded other existing automatic individual methods, and point out its strengths, limitations, extensions, and prospects for improvement. (C) 2003 Wiley-Liss, Inc.
ISBN (print): 0819450782
The support vector machine (SVM) is a supervised learning algorithm used in a variety of applications, including robust target classification. The SVM training problem can be formulated as a dense quadratic program (QP). In practice, this QP is solved in batch mode, using general-purpose interior-point solvers. Although quite efficient, these implementations are not well suited to situations where the training vectors are made available sequentially. In this paper we discuss a recursive algorithm for SVM training. The algorithm is based on efficient updates of approximate solutions on the dual central path of the QP and can be analyzed using the convergence theory recently developed for interior-point methods. The idea is related to cutting-plane methods for large-scale optimization and sequential analytic centering techniques used successfully in set-membership estimation methods in signal processing.
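The batch dual QP being referred to is: maximize Σᵢαᵢ − ½ ΣᵢΣⱼ αᵢαⱼyᵢyⱼ xᵢ·xⱼ subject to αᵢ ≥ 0 and Σᵢαᵢyᵢ = 0. A sketch of one such batch solve on a tiny hand-made 1-D dataset with a general-purpose solver (illustrative; the paper's point is that redoing this from scratch per new sample wastes work):

```python
import numpy as np
from scipy.optimize import minimize

# Hard-margin SVM dual QP solved in batch mode. Tiny illustrative data.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

K = (y[:, None] * X) @ (y[:, None] * X).T   # Q_ij = y_i y_j x_i . x_j

def neg_dual(a):  # maximize sum(a) - 0.5 a^T Q a  ==  minimize its negation
    return 0.5 * a @ K @ a - a.sum()

cons = {"type": "eq", "fun": lambda a: a @ y}   # sum_i alpha_i y_i = 0
res = minimize(neg_dual, np.zeros(4), bounds=[(0, None)] * 4,
               constraints=cons)

alpha = res.x
w = (alpha * y) @ X                  # primal weights via KKT conditions
sv = alpha > 1e-4                    # support vectors carry alpha > 0
b = np.mean(y[sv] - X[sv] @ w)       # offset from the support vectors
print(np.sign(X @ w + b))            # recovers the training labels
```

Here only the two points at x = ±1 become support vectors; the recursive scheme in the paper aims to update such a solution cheaply as new (x, y) pairs arrive, rather than re-solving the dense QP each time.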