This paper presents a predictor-corrector interior-point algorithm for solving the weighted linear complementarity problem with P_*(κ)-matrices, a variant of the weighted complementarity problem with wide applications in science, engineering, and economics. We first apply the algebraic equivalent transformation technique and then use the identity function to determine the new search directions. Under suitable conditions, the feasibility and convergence of the algorithm are established. Moreover, we show that the proposed algorithm has polynomial-time complexity. As far as we know, this is the first predictor-corrector interior-point algorithm for the P_*(κ)-weighted linear complementarity problem based on the above-mentioned search directions. Preliminary numerical results demonstrate that our algorithm performs efficiently on the test problems.
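As background, one ingredient of such interior-point methods can be sketched in a few lines: damped Newton steps toward the weighted central path of a small weighted LCP, i.e. find x, s > 0 with s = Mx + q and x_i·s_i = w_i. The matrix M, vector q, weights w, and starting point below are made-up illustration data, and this is a plain damped Newton iteration, not the paper's predictor-corrector algorithm.

```python
import numpy as np

# Toy weighted LCP:  s = M x + q,  x > 0,  s > 0,  x_i * s_i = w_i.
# All data here is made up for illustration.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite, hence P_*(0)
q = np.array([-4.0, -5.0])
w = np.array([0.1, 0.2])                 # positive weight vector

x = np.array([3.0, 3.0])                 # interior starting point
s = M @ x + q                            # feasible: s = (5, 4) > 0
for _ in range(50):
    # Newton system for F(x, s) = (Mx + q - s, x*s - w) = 0
    J = np.block([[M, -np.eye(2)],
                  [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(2), w - x * s])
    dx, ds = np.split(np.linalg.solve(J, rhs), 2)
    alpha = 1.0                          # fraction-to-boundary damping
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    x, s = x + alpha * dx, s + alpha * ds

print(np.max(np.abs(x * s - w)))         # complementarity gap, near 0
```

The fraction-to-boundary rule (factor 0.9) is the standard safeguard that keeps the iterates strictly positive; predictor-corrector methods refine exactly this scheme by alternating an affine-scaling step with a centering step.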
Due to the non-convex nature of training Deep Neural Network (DNN) models, their effectiveness relies on non-convex optimization heuristics. Traditional methods for training DNNs often require costly empirical tuning to produce successful models and lack a clear theoretical foundation. In this study, we examine the use of convex optimization theory and sparse recovery models to refine the training process of neural networks and to better interpret their optimal weights. We focus on training two-layer neural networks with piecewise linear activations and demonstrate that they can be formulated as a finite-dimensional convex program. These programs include a sparsity-promoting regularization term that constitutes a variant of the group Lasso. We first utilize semi-infinite programming theory to prove strong duality for finite-width neural networks, and then we express these architectures equivalently as high-dimensional convex sparse recovery models. Remarkably, the worst-case complexity of solving the convex program is polynomial in the number of samples and the number of neurons when the rank of the data matrix is bounded, which is the case in convolutional networks. To extend our method to training data of arbitrary rank, we develop a novel polynomial-time approximation scheme based on zonotope subsampling that comes with a guaranteed approximation ratio. We also show that all stationary points of the non-convex training objective can be characterized as the global optimum of a subsampled convex program. Unlike non-convex methods, our convex models can be trained using standard convex solvers without resorting to heuristics or extensive hyper-parameter tuning. Due to the convexity, optimizer hyper-parameters such as initialization, batch size, and step-size schedule have no effect on the final model. Through extensive numerical experiments, we show that convex models can outperform traditional non-convex methods and are not sensitive to optimizer hyper-parameters.
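The finiteness behind the convex reformulation can be illustrated directly: on fixed data X, a ReLU neuron only "sees" finitely many activation patterns sign(Xu ≥ 0), one per region of the hyperplane arrangement {u : x_i·u = 0}. The sketch below (made-up random data, not the paper's training procedure) samples random hidden-neuron directions and counts the distinct patterns.

```python
import numpy as np

# Count distinct ReLU activation patterns 1[X u >= 0] over random
# directions u -- the finite index set behind the convex program.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))          # n = 5 samples, d = 2 features
patterns = set()
for _ in range(2000):                    # sample random hidden neurons
    u = rng.standard_normal(2)
    patterns.add(tuple((X @ u >= 0).astype(int)))
# n generic hyperplanes through the origin in R^2 cut the plane into
# 2n sectors, so at most 10 distinct patterns can appear here.
print(len(patterns))
```

Enumerating one group-Lasso block per pattern is what turns the non-convex training problem into a finite-dimensional convex one; the count grows polynomially when the rank of X is bounded.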
In this paper, we present a polynomial-time barrier algorithm for solving multi-stage stochastic convex semidefinite optimization, based on the Lagrangian dual method, which relaxes the nonanticipativity constraints. We show that the barrier Lagrangian dual functions for our setting form self-concordant families with respect to the barrier parameters. We also use the barrier function method to improve the smoothness of the dual objective function so that Newton search directions can be exploited. Finally, we implement the proposed algorithm on different test problems to show its effectiveness and efficiency.
An interior point algorithm is proposed for linearly constrained convex programming following a parameterized central path, which is a generalization of the central path and requires weaker convergence conditions. The convergence and polynomial-time complexity of the proposed algorithm are proved under the assumption that the Hessian of the objective function is locally Lipschitz continuous. In addition, an initialization strategy is proposed and some numerical results are provided to show the efficiency and attractiveness of the proposed algorithm.
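The central-path idea the abstract generalizes can be shown on a one-variable toy problem (a hedged illustration of the plain central path, not the paper's parameterized variant): minimize (x − 2)² over x ≥ 0 via the barrier subproblem min (x − 2)² − μ·log(x). Setting the derivative to zero gives 2x² − 4x − μ = 0, so the path point has the closed form x(μ) = 1 + √(1 + μ/2).

```python
import math

# Follow the central path x(mu) of  min (x - 2)^2 - mu * log(x), x > 0.
# As mu -> 0 the path points converge to the constrained minimizer x = 2.
for mu in (1.0, 0.1, 0.01, 1e-4):
    x_mu = 1 + math.sqrt(1 + mu / 2)     # root of 2x^2 - 4x - mu = 0
    print(mu, x_mu)
```

Path-following methods never solve each barrier subproblem exactly; they take a few Newton steps and then shrink μ, which is where the Lipschitz-Hessian assumption in the abstract enters the complexity analysis.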
We present a deterministic algorithm which computes the multilinear factors of multivariate lacunary polynomials over number fields. Its complexity is polynomial in l^n, where l is the lacunary size of the input polynomial and n its number of variables; in particular, it is polynomial in the logarithm of the degree. We also provide a randomized algorithm for the same problem whose complexity is polynomial in l and n. Over other fields of characteristic zero and finite fields of large characteristic, our algorithms compute the multilinear factors having at least three monomials of multivariate polynomials. Lower bounds are provided to explain the limitations of our algorithm. As a by-product, we also design polynomial-time deterministic polynomial identity tests for families of polynomials which were not known to admit any. Our results are based on so-called Gap Theorems, which reduce high-degree factorization to repeated low-degree factorizations. While previous algorithms used Gap Theorems expressed in terms of the heights of the coefficients, our Gap Theorems depend only on the exponents of the polynomials. This makes our algorithms more elementary and general, and faster in most cases.
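For context on the polynomial identity tests mentioned above, here is a minimal sketch of the classical randomized (Schwartz-Zippel) test, not the deterministic tests constructed in the paper: a nonzero polynomial of degree d vanishes at a uniformly random point of a size-S evaluation set with probability at most d/S, so repeated random evaluation distinguishes the zero polynomial with high confidence. The function name and modulus below are illustrative choices.

```python
import random

def probably_zero(p, nvars, trials=20, modulus=10**9 + 7):
    """Schwartz-Zippel test: True if p vanishes at `trials` random
    points modulo a large prime, i.e. p is very likely identically 0."""
    return all(
        p(*[random.randrange(modulus) for _ in range(nvars)]) % modulus == 0
        for _ in range(trials)
    )

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; x*y - 1 is not.
assert probably_zero(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2), 2)
assert not probably_zero(lambda x, y: x * y - 1, 2)
```

Derandomizing such tests for specific polynomial families is exactly the kind of result the abstract reports as a by-product.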
ISBN: (Print) 9781509019496
Tools for the description of bounds on word-variable values are offered. These descriptions yield conditions under which a problem belongs to the class P or to the class NP. NP-completeness of a unification problem is proved. The proved bounds may make it possible to create effective algorithms for the unification of word terms, as used in programming languages such as Refal.
Authors:
Nikolai K. Kosovskii, SPbSU
Computer Science Chair of St. Petersburg State University, St. Petersburg, Russia
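The role of length bounds can be illustrated with a toy word equation (made-up example, not from the paper): once word-variable lengths are bounded, solving reduces to a finite search over candidate words, which is why such bounds place unification problems in NP.

```python
from itertools import product

# Word equation over the alphabet {a, b}:  X·"ab" = "a"·X·"b",
# searched over all candidate values of X with |X| <= 3.
def solves(x):
    return x + "ab" == "a" + x + "b"

solutions = ["".join(p) for n in range(4)
             for p in product("ab", repeat=n) if solves("".join(p))]
print(solutions)   # every power of "a" up to length 3 solves it
```

Without a bound on |X| the search space is infinite (here, all powers of "a" solve the equation), which is exactly the gap between decidability arguments and effective bounded-search algorithms.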
The choice of self-concordant functions is key to efficient algorithms for linear and quadratic convex optimization, providing methods that solve such problems in a polynomial number of iterations. The parameters of a self-concordant barrier function can be used to compute the complexity bound of the proposed algorithm. In this paper, it is proved that the finite barrier function is a locally self-concordant barrier function. By deriving the local values of the parameters of this barrier function, the desired complexity bound of an interior-point algorithm for linear optimization based on this locally self-concordant function is obtained. The bound matches the best known bound for small-update methods.
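The defining inequality of self-concordance, |f'''(x)| ≤ 2·f''(x)^{3/2}, can be checked on the classical log barrier (a hedged illustration with the standard textbook example, not the paper's finite barrier function): for f(x) = −ln(x) one has f'' = 1/x² and f''' = −2/x³, so the bound holds with equality for every x > 0.

```python
# Verify |f'''(x)| <= 2 * f''(x)**1.5 for f(x) = -ln(x) at sample points.
checks = []
for x in (0.5, 1.0, 4.0):
    f2 = 1.0 / x**2                      # second derivative of -ln(x)
    f3 = -2.0 / x**3                     # third derivative of -ln(x)
    checks.append(abs(f3) <= 2 * f2**1.5 + 1e-12)
print(all(checks))
```

It is this inequality that makes Newton's method behave predictably inside the domain, and the barrier parameters appearing in it are what enter the complexity bound quoted in the abstract.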
Based on the idea of the Dikin-type primal-dual affine-scaling method for linear programming, we describe a high-order Dikin-type algorithm for the P_*(κ)-matrix linear complementarity problem in a wide neighborhood of the central path, and its polynomial-time complexity bound is obtained. Finally, two numerical experiments are provided to show the effectiveness of the proposed algorithms.