Recently, researchers have been interested in studying the semidefinite programming (SDP) relaxation model, where the matrix is both positive semidefinite and entry-wise nonnegative, for quadratically constrained quadratic programming (QCQP). Compared to the basic SDP relaxation, this doubly-positive SDP model possesses O(n^2) additional constraints, which makes its solution complexity substantially higher than that of the basic model with O(n) constraints. In this paper, we prove that the doubly-positive SDP model is equivalent to the basic one augmented with a set of valid quadratic cuts. When the QCQP is symmetric and homogeneous (which covers many classical combinatorial and non-convex optimization problems), the doubly-positive SDP model is equivalent to the basic SDP even without any valid cut. On the other hand, the doubly-positive SDP model can tighten the bound by up to 36%, but no more. Finally, we extend some of these results to quartic models.
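To make the comparison concrete, here is a minimal sketch (assuming cvxpy with the SCS solver; the random problem data is hypothetical) of the basic Shor relaxation of a small homogeneous QCQP next to its doubly-positive strengthening, which simply adds the O(n^2) entry-wise sign constraints:

```python
import cvxpy as cp
import numpy as np

n = 6
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2   # objective x'Cx
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # constraint x'Ax <= 1

def bound(doubly_positive):
    """Shor relaxation of min x'Cx s.t. x'Ax <= 1, x'x = 1, via X ~ xx'."""
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, cp.trace(A @ X) <= 1, cp.trace(X) == 1]
    if doubly_positive:
        cons.append(X >= 0)        # the O(n^2) entry-wise sign constraints
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons)
    prob.solve(solver=cp.SCS)
    return prob.value

print("basic SDP lower bound          :", bound(False))
print("doubly-positive SDP lower bound:", bound(True))   # never weaker than the basic bound
```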
As an important nonparametric regression method, support vector regression achieves nonlinear capability via the kernel trick. This paper discusses multivariate support vector regression when its regression function is restricted to be convex. We approximate this convex shape restriction with a series of linear matrix inequality constraints and transform the training problem into a semidefinite programming problem, which is computationally tractable. Extensions to the multivariate concave case, l2-norm regularization, and l1- and l2-norm loss functions are also studied. Experimental results on both toy data sets and a real data set clearly show that, by exploiting this prior shape knowledge, the method can achieve better performance than classical support vector regression.
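As a toy illustration of the LMI idea (a simplification, not the paper's kernelized formulation: cvxpy, an explicit quadratic model, and synthetic data are all assumptions here), convexity of f(x) = x'Qx + b'x + c is exactly the single constraint that Q be positive semidefinite, so an epsilon-insensitive fit becomes a small SDP:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 2))                        # toy inputs
y = np.sum(X**2, axis=1) + 0.05 * rng.standard_normal(50)   # convex target + noise

Q = cp.Variable((2, 2), PSD=True)     # PSD quadratic part => fitted f is convex
b = cp.Variable(2)
c = cp.Variable()
preds = cp.hstack([cp.quad_form(x, Q) + b @ x + c for x in X])

eps = 0.05                            # epsilon-insensitive (SVR-style) l1 loss
loss = cp.sum(cp.pos(cp.abs(preds - y) - eps))
prob = cp.Problem(cp.Minimize(loss + 0.1 * cp.sum_squares(b)))
prob.solve(solver=cp.SCS)
print("fitted Q (true Hessian is 2I, so Q should be near I):\n", Q.value)
```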
Kernel-based learning algorithms work by embedding the data into a Euclidean space and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding for the unlabeled part as well. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
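A minimal transductive sketch in the spirit of this approach (cvxpy and the two candidate kernels are assumptions, and this is the alignment-style variant rather than the full soft-margin SDP): learn a linear combination of fixed kernels over training and test points, keep the combined matrix PSD, and maximize agreement with the training labels on the training block.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 2))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(30))   # noisy labels
ntr = 20                                               # first 20 points are labeled

K1 = X @ X.T                                           # linear kernel on all 30 points
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K2 = np.exp(-D)                                        # Gaussian kernel on all 30 points

mu = cp.Variable(2)              # weights may be negative, so K >> 0 is a true LMI
K = cp.Variable((30, 30), symmetric=True)
cons = [K == mu[0] * K1 + mu[1] * K2, K >> 0, cp.trace(K) == 1]
ytr = y[:ntr]
align = cp.trace(K[:ntr, :ntr] @ np.outer(ytr, ytr))   # alignment on training block
prob = cp.Problem(cp.Maximize(align), cons)
prob.solve(solver=cp.SCS)
print("learned kernel weights:", mu.value)   # the learned K also fixes test-test similarities
```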
In this paper we consider covariance structural models with which we associate semidefinite programming problems. We discuss statistical properties of estimates of the respective optimal value and optimal solutions when the 'true' covariance matrix is estimated by its sample counterpart. The analysis is based on perturbation theory of semidefinite programming. As an example we consider asymptotics of the so-called minimum trace factor analysis. We also discuss the minimum rank matrix completion problem and its SDP counterparts.
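For concreteness, the minimum trace factor analysis subproblem mentioned above can be written as an SDP (a sketch assuming cvxpy and synthetic rank-2 factor data): split the sample covariance S into a PSD common part S - diag(d) and nonnegative unique variances d, minimizing the trace of the common part. The optimal value computed from S is exactly the kind of plug-in estimate whose asymptotics the paper analyzes.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
F = rng.standard_normal((p, 2))                        # rank-2 factor loadings
data = rng.standard_normal((n, 2)) @ F.T + 0.3 * rng.standard_normal((n, p))
S = np.cov(data, rowvar=False)                         # sample covariance
S = (S + S.T) / 2                                      # enforce exact symmetry

d = cp.Variable(p, nonneg=True)                        # unique variances
# Maximizing sum(d) minimizes the trace of the PSD common part S - diag(d).
prob = cp.Problem(cp.Maximize(cp.sum(d)), [S - cp.diag(d) >> 0])
prob.solve(solver=cp.SCS)
print("minimum trace of the common part:", np.trace(S) - prob.value)
```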
We use rank one Gaussian perturbations to derive a smooth stochastic approximation of the maximum eigenvalue function. We then combine this smoothing result with an optimal smooth stochastic optimization algorithm to produce an efficient method for solving maximum eigenvalue minimization problems, and detail a variant of this stochastic algorithm with monotonic line search. Overall, compared to classical smooth algorithms, this method runs a larger number of significantly cheaper iterations and, in certain precision/dimension regimes, its total complexity is lower than that of deterministic smoothing algorithms.
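A rough numpy sketch of the flavor of this construction (the paper's exact estimator may differ; the perturbation scale mu and the sample count are assumptions): average lambda_max over rank-one Gaussian perturbations of the input matrix, which also yields a gradient estimate from the leading eigenvectors, since each perturbed matrix generically has a simple top eigenvalue.

```python
import numpy as np

def smoothed_lmax(X, mu=0.1, samples=200, seed=4):
    """Monte Carlo estimate of E[lambda_max(X + mu * g g^T)], g ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals, grads = [], []
    for _ in range(samples):
        g = rng.standard_normal(n)
        w, V = np.linalg.eigh(X + mu * np.outer(g, g))
        vals.append(w[-1])                    # largest eigenvalue
        v = V[:, -1]                          # its eigenvector: gradient of lambda_max
        grads.append(np.outer(v, v))
    return np.mean(vals), np.mean(grads, axis=0)

A = np.diag([1.0, 1.0, 0.5])                  # tied top eigenvalues: a kink for lambda_max
fval, grad = smoothed_lmax(A)
print(fval)                                   # slightly above 1.0: the kink is smoothed out
```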
We consider semidefinite optimization in a saddle point formulation where the primal solution is in the spectrahedron and the dual solution is a distribution over affine functions. We present an approximation algorithm for this problem that runs in sublinear time in the size of the data. To the best of our knowledge, this is the first algorithm to achieve this. Our algorithm is also guaranteed to produce low-rank solutions. We further prove lower bounds on the running time of any algorithm for this problem, showing that certain terms in the running time of our algorithm cannot be further improved. Finally, we consider a non-affine version of the saddle point problem and give an algorithm that under certain assumptions runs in sublinear time.
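For reference, the saddle point problem itself can be written as an ordinary SDP (a cvxpy sketch of the baseline that a sublinear algorithm approximates, not of that algorithm; the random data is hypothetical): since the inner optimum over distributions is attained at a single affine function, the problem is to maximize, over the spectrahedron, the worst affine value.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 4
A, b = [], rng.standard_normal(m)
for _ in range(m):                     # m affine functions X -> <A_i, X> + b_i
    M = rng.standard_normal((n, n))
    A.append((M + M.T) / 2)

X = cp.Variable((n, n), symmetric=True)
t = cp.Variable()
cons = [X >> 0, cp.trace(X) == 1]      # X ranges over the spectrahedron
cons += [cp.trace(A[i] @ X) + b[i] >= t for i in range(m)]
prob = cp.Problem(cp.Maximize(t), cons)
prob.solve(solver=cp.SCS)
print("saddle value:", prob.value)
```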
In this paper, we study the epsilon-proper efficiency of multiobjective semidefinite programming with set-valued functions. Firstly, we obtain scalarization theorems under the condition of generalized cone-subconvexlikeness. Then, we establish an alternative theorem involving matrices and vectors, epsilon-Lagrange multiplier theorems, and epsilon-proper saddle point theorems for the primal programming under suitable conditions.
The Lasserre hierarchy of semidefinite programming approximations to convex polynomial optimization problems is known to converge finitely under some assumptions [J. B. Lasserre, Convexity in semialgebraic geometry and polynomial optimization, SIAM J. Optim., 19 (2009), pp. 1995-2014]. We give a new proof of the finite convergence property under weaker assumptions than were previously known. In addition, we show that, under the assumptions for finite convergence, the number of steps needed for convergence depends on more than the input size of the problem.
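A small worked instance of the phenomenon (a cvxpy sketch; the choice of toy problem is an assumption for illustration): for the convex problem min x^2 + y^2 s.t. x + y >= 1, the first level of the hierarchy, a single PSD moment matrix in the monomials (1, x, y), already returns the exact optimum 0.5.

```python
import cvxpy as cp

# Moment matrix of the monomials (1, x, y); M[1,1] relaxes x^2, M[2,2] relaxes y^2.
M = cp.Variable((3, 3), symmetric=True)
x, y = M[0, 1], M[0, 2]                       # first-order moments
cons = [M >> 0, M[0, 0] == 1, x + y >= 1]     # x + y >= 1 is the order-0 localizing constraint
prob = cp.Problem(cp.Minimize(M[1, 1] + M[2, 2]), cons)
prob.solve(solver=cp.SCS)
print(prob.value)                             # ~0.5, the true minimum at x = y = 0.5
```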
We propose a wide neighborhood primal-dual interior-point algorithm with arc-search for semidefinite programming. In every iteration, the algorithm constructs an ellipse and searches for an epsilon-approximate solution of the problem along the ellipsoidal approximation of the central path. Assuming a strictly feasible starting point is available, we show that the algorithm has the iteration complexity bound O(n^(3/4) log((X^0 · S^0)/epsilon)) for the Nesterov-Todd direction, which is similar to that of the corresponding algorithm for linear programming. The numerical results show that our algorithm is efficient and promising.
We propose a classification approach exploiting relationships between ellipsoidal separation and the Support Vector Machine (SVM) with quadratic kernel. By adding a semidefinite programming (SDP) constraint to the SVM model, we ensure that the chosen hyperplane in feature space represents a non-degenerate ellipsoid in input space. This allows us to exploit SDP techniques within Support Vector Regression (SVR) approaches, yielding better results when ellipsoid-shaped separators are appropriate for the classification task. We compare our approach with spherical separation and SVM on some classification problems.
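A minimal sketch of the idea (assumptions: cvxpy, hinge loss, and an explicit quadratic map instead of the kernelized SVM used in the paper): require the quadratic part W of f(x) = x'Wx + w'x + c to dominate a small multiple of the identity in the semidefinite order, so the decision boundary f(x) = 0 is a non-degenerate ellipsoid in input space.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
inner = 0.5 * rng.standard_normal((25, 2))                   # class -1: inside
outer = rng.standard_normal((25, 2))
outer *= 2.0 / np.linalg.norm(outer, axis=1, keepdims=True)  # class +1: a ring outside
Xd = np.vstack([inner, outer])
y = np.array([-1.0] * 25 + [1.0] * 25)

W = cp.Variable((2, 2), symmetric=True)                      # quadratic part of f
w = cp.Variable(2); c = cp.Variable()
f = cp.hstack([cp.quad_form(x, W) + w @ x + c for x in Xd])
hinge = cp.sum(cp.pos(1 - cp.multiply(y, f)))
cons = [W >> 0.1 * np.eye(2)]            # the SDP constraint: non-degenerate ellipsoid
prob = cp.Problem(cp.Minimize(hinge + 0.05 * cp.sum_squares(w)), cons)
prob.solve(solver=cp.SCS)
print("W (positive definite):\n", W.value)
```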