In many practical applications including remote sensing, multi-task learning, and multi-spectrum imaging, data are described as a set of matrices sharing a common column space. We consider the joint estimation of such matrices from their noisy linear measurements. We study a convex estimator regularized by a pair of matrix norms. The measurement model corresponds to block-wise sensing, and reconstruction is possible only when the total energy is well distributed over the blocks. The first norm, the maximum-block-Frobenius norm, favors such solutions; this condition is analogous to the notion of low-spikiness in matrix completion or column-wise sensing. The second norm, a tensor norm on a pair of suitable Banach spaces, together with the first norm induces low-rankness in the solution. We demonstrate that the joint estimation provides a significant gain over the individual recovery of each matrix when the number of matrices sharing a column space and the ambient dimension of the shared column space are large relative to the number of columns in each matrix. The convex estimator is cast as a semidefinite program, and an efficient ADMM algorithm is derived. The empirical behavior of the convex estimator is illustrated using Monte Carlo simulations, and its recovery performance is compared to existing methods in the literature.
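A loose numerical sketch of this kind of estimator, not the paper's exact formulation: the nuclear norm of the stacked blocks stands in for the low-rank-inducing tensor norm, a penalty on the largest block Frobenius norm plays the role of the maximum-block-Frobenius term, and a compressive left-sketch of each block is used as the block-wise sensing model. All dimensions, weights, and the noise level are illustrative assumptions.

```python
# Hedged sketch: joint recovery of matrices sharing a column space from block-wise
# (left-sketch) measurements, via nuclear norm + max-block-Frobenius penalty.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n, L, r, m = 20, 5, 8, 2, 12      # ambient dim, columns per block, blocks, shared rank, rows sensed per block

U = np.linalg.qr(rng.standard_normal((d, r)))[0]            # shared column space
X_true = [U @ rng.standard_normal((r, n)) for _ in range(L)]

# Block-wise sensing: each block gets its own compressive operator plus noise.
A = [rng.standard_normal((m, d)) / np.sqrt(m) for _ in range(L)]
Y = [A[l] @ X_true[l] + 0.01 * rng.standard_normal((m, n)) for l in range(L)]

X = [cp.Variable((d, n)) for _ in range(L)]
data_fit = sum(cp.sum_squares(A[l] @ X[l] - Y[l]) for l in range(L))
low_rank = cp.norm(cp.hstack(X), 'nuc')                     # favors a shared low-dimensional column space
spread = cp.max(cp.hstack([cp.norm(X[l], 'fro') for l in range(L)]))  # max-block-Frobenius term

cp.Problem(cp.Minimize(data_fit + 0.1 * low_rank + 0.1 * spread)).solve()

rel_err = (sum(np.linalg.norm(X[l].value - X_true[l], 'fro')**2 for l in range(L)) /
           sum(np.linalg.norm(X_true[l], 'fro')**2 for l in range(L))) ** 0.5
print("relative joint recovery error:", rel_err)
```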
In this paper, we propose and analyze an inexact version of the symmetric proximal alternating direction method of multipliers (ADMM) for solving linearly constrained optimization problems. Basically, the method allows its first subproblem to be solved inexactly, in such a way that a relative approximation criterion is satisfied. In terms of the iteration number k, we establish global O(1/√k) pointwise and O(1/k) ergodic convergence rates of the method for a domain of the acceleration parameters, which is consistent with the largest known one in the exact case. Since the symmetric proximal ADMM can be seen as a class of ADMM variants, the new algorithm, as well as its convergence rates, generalizes, in particular, many others in the literature. Numerical experiments illustrating the practical advantages of the method are reported. To the best of our knowledge, this work is the first to study an inexact version of the symmetric proximal ADMM.
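As a hedged illustration of the symmetric structure (two dual updates with parameters tau and theta) with an inexactly solved first subproblem, the sketch below runs such a scheme on a nonnegative least-squares splitting. The capped inner gradient loop is a simplified stand-in for the paper's relative approximation criterion, and the step parameters are assumed to lie in an admissible range.

```python
# Hedged sketch: symmetric proximal ADMM with an inexact first subproblem, applied to
#   min 0.5*||Ax - b||^2 + indicator(z >= 0)   s.t.   x - z = 0.
import numpy as np

rng = np.random.default_rng(1)
m, n, beta = 80, 60, 1.0
tau, theta = 0.9, 0.9                          # acceleration (dual step) parameters, assumed admissible
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ np.abs(rng.standard_normal(n)) + 0.05 * rng.standard_normal(m)

AtA, Atb = A.T @ A, A.T @ b
step = 1.0 / (np.linalg.eigvalsh(AtA).max() + beta)   # safe step for the inner solver
x = np.zeros(n); z = np.zeros(n); lam = np.zeros(n)

for k in range(300):
    # inexact x-subproblem: a few gradient steps on the strongly convex quadratic
    #   0.5*||Ax - b||^2 - lam'x + 0.5*beta*||x - z||^2
    for _ in range(10):
        x = x - step * (AtA @ x - Atb - lam + beta * (x - z))
    lam = lam - tau * beta * (x - z)           # first (intermediate) dual update
    z = np.maximum(x - lam / beta, 0.0)        # exact z-subproblem: projection onto z >= 0
    lam = lam - theta * beta * (x - z)         # second dual update

print("primal residual ||x - z||:", np.linalg.norm(x - z))
print("min entry of z:", z.min())
```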
In this work, we study discrete-time Markov decision processes (MDPs) under constraints, with Borel state and action spaces, where all the performance functions have the same form of the expected total reward (ETR) criterion over the infinite time horizon. One of our objectives is to propose a convex programming formulation for this type of MDP. It is shown that the values of the constrained control problem and of the associated convex program coincide. Moreover, if there exists an optimal solution to the convex program, then there exists a stationary randomized policy that is optimal for the MDP. It is also shown that, in the framework of constrained control problems, the supremum of the ETRs over the set of randomized policies equals the supremum of the ETRs over the set of stationary randomized policies. We consider standard hypotheses such as the so-called continuity-compactness conditions and a Slater-type condition. Our assumptions are weak enough to cover cases that have not yet been addressed in the literature. Examples are presented to illustrate our results.
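For a finite-state, finite-action stand-in of this setting, the convex (here linear) programming formulation over occupation measures can be written down directly; the sketch below only makes concrete the statement that the two values coincide and that a stationary randomized policy can be read off the optimal solution. Termination is modelled through a stopping probability, and all transition probabilities, rewards, costs, and the budget are invented numbers.

```python
# Hedged finite illustration: a small constrained MDP with an expected-total-reward
# criterion solved as a linear program over occupation measures.
import numpy as np
from scipy.optimize import linprog

S, A, gamma = 3, 2, 0.9                        # states, actions, continuation probability
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(S), size=(S, A))     # P[s, a, s']: transition kernel
r = rng.uniform(0.0, 1.0, size=(S, A))         # reward to maximize
c = rng.uniform(0.0, 1.0, size=(S, A))         # cost constrained by a budget
c[:, 0] = 0.0                                  # action 0 is free, so the program is feasible
mu0 = np.full(S, 1.0 / S)                      # initial distribution
budget = 2.0

# Occupation-measure variables y[s, a] >= 0, flattened to length S*A.
# Flow constraints:  sum_a y(s, a) - gamma * sum_{s', a'} P(s | s', a') y(s', a') = mu0(s).
A_eq = np.zeros((S, S * A))
for s in range(S):
    for sp in range(S):
        for a in range(A):
            A_eq[s, sp * A + a] = (1.0 if sp == s else 0.0) - gamma * P[sp, a, s]

res = linprog(c=-r.ravel(),                     # maximize expected total reward
              A_ub=[c.ravel()], b_ub=[budget],  # expected total cost <= budget
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
y = res.x.reshape(S, A)
policy = y / y.sum(axis=1, keepdims=True)       # stationary randomized policy
print("value of the constrained problem:", -res.fun)
print("stationary randomized policy:\n", np.round(policy, 3))
```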
The Douglas-Rachford splitting method is a classical and powerful method for minimizing the sum of two convex functions. In this paper, we introduce two dynamical systems based on this method for minimizing the sum of a strongly convex function and a weakly convex function. Under mild conditions, it is shown that the proposed dynamical systems converge globally to the fixed point sets of the corresponding Douglas-Rachford operators, and are globally asymptotically stable when the corresponding fixed point sets are singletons. Furthermore, global exponential convergence of the proposed dynamical systems is established under some regularity conditions. A numerical example is reported to illustrate the effectiveness of the dynamical splitting method.
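A minimal sketch of such a dynamical system, under illustrative choices: an explicit Euler discretization of x'(t) = T(x(t)) - x(t), where T is the Douglas-Rachford operator built from a strongly convex quadratic f and a weakly convex sparsity-type g with closed-form proximal maps. The specific f, g, prox parameter, and step size are assumptions, not the paper's examples.

```python
# Discretized Douglas-Rachford dynamical system for a strongly + weakly convex sum.
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ rng.standard_normal(n)
sigma, lam, rho = 1.0, 0.2, 0.4               # strong convexity, l1 weight, weak-convexity modulus
gam, h = 0.5, 0.5                              # prox parameter (gam*rho < 1) and Euler step

# f(x) = 0.5*||Ax - b||^2 + 0.5*sigma*||x||^2  (strongly convex, quadratic prox)
M = gam * (A.T @ A) + (gam * sigma + 1.0) * np.eye(n)
Atb = A.T @ b
prox_f = lambda v: np.linalg.solve(M, gam * Atb + v)

# g(x) = lam*||x||_1 - 0.5*rho*||x||^2  (weakly convex), closed-form prox for gam*rho < 1
def prox_g(v):
    return np.sign(v) * np.maximum(np.abs(v) - gam * lam, 0.0) / (1.0 - gam * rho)

x = np.zeros(n)
for _ in range(500):
    p = prox_f(x)
    Tx = x + prox_g(2 * p - x) - p             # Douglas-Rachford operator
    x = x + h * (Tx - x)                       # explicit Euler step of the dynamical system

x_star = prox_f(x)                             # approximate minimizer read off the fixed point
obj = (0.5 * np.linalg.norm(A @ x_star - b)**2 + 0.5 * sigma * np.linalg.norm(x_star)**2
       + lam * np.abs(x_star).sum() - 0.5 * rho * np.linalg.norm(x_star)**2)
print("objective at the estimated fixed point:", obj)
```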
Motivated by the convergence of mirror-descent algorithms to market equilibria in linear Fisher markets, it is natural to consider designing dynamics (specifically, iterative algorithms) by which agents arrive at linear Arrow-Debreu market equilibria. Jain (SIAM J. Comput. 37(1), 303-318, 2007) reduced equilibrium computation in linear Arrow-Debreu markets to equilibrium computation in bijective markets, where everyone is a seller of only one good and a buyer of a bundle of goods. In this paper, we design an algorithm for computing linear bijective market equilibria, based on solving the rational convex program formulated by Devanur et al. The algorithm repeatedly alternates between a gradient-descent-like update step and a distributed optimization step that exploits the properties of this convex program. Convergence is ensured by a new analysis that differs from the one used for linear Fisher market equilibria.
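For context only, the sketch below implements the motivating result mentioned in the first sentence: proportional response dynamics (a mirror-descent instance) converging to equilibrium in a linear Fisher market. It is not the paper's algorithm for bijective Arrow-Debreu markets; utilities and budgets are randomly generated for illustration.

```python
# Proportional response dynamics for a linear Fisher market (motivating example only).
import numpy as np

rng = np.random.default_rng(4)
n_buyers, n_goods = 5, 4
u = rng.uniform(0.1, 1.0, size=(n_buyers, n_goods))    # linear utilities u[i, j]
B = rng.uniform(0.5, 1.5, size=n_buyers)                # budgets
b = np.outer(B, np.full(n_goods, 1.0 / n_goods))        # initial bids: spend uniformly

for t in range(2000):
    p = b.sum(axis=0)                                   # prices: total bid on each good
    x = b / p                                           # allocation x[i, j] = b[i, j] / p[j]
    util = (u * x).sum(axis=1, keepdims=True)           # each buyer's realized utility
    b = B[:, None] * (u * x) / util                     # proportional response bid update

p = b.sum(axis=0)
x = b / p
b_next = B[:, None] * (u * x) / (u * x).sum(axis=1, keepdims=True)
print("approximate equilibrium prices:", np.round(p, 4))
print("max price change after one more step:", np.abs(b_next.sum(axis=0) - p).max())
```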
Binary non-linear programs belong to the class of combinatorial problems that are computationally hard even to approximate. This paper explores conditions on the problem structure under which the resulting problem can be well approximated. In particular, we consider a setting in which both the objective function and the constraint are low-rank functions, meaning that they depend only on a few linear combinations of the input variables, and we provide polynomial time approximation schemes. Our result generalizes and unifies several existing results in the literature.
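To make the low-rank structure concrete, the toy sketch below builds a binary program whose objective and constraint depend on x only through k = 2 linear combinations y = Ax, so the whole problem lives on a low-dimensional image; the brute-force search is only for illustration on a tiny instance and is not the approximation scheme of the paper.

```python
# Illustration of a low-rank binary non-linear program: objective g(Ax), constraint h(Ax) <= 0.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, k = 12, 2
A = rng.uniform(0, 1, size=(k, n))                  # the "few linear combinations"
g = lambda y: y[0] ** 2 - 3.0 * np.cos(y[1])        # non-linear, but only k-dimensional
h = lambda y: y[0] + y[1] - 6.0                     # low-rank constraint h(Ax) <= 0

best_val, best_x = np.inf, None
for bits in itertools.product((0, 1), repeat=n):    # 2^n points, but only y = Ax matters
    x = np.array(bits)
    y = A @ x
    if h(y) <= 0 and g(y) < best_val:
        best_val, best_x = g(y), x

print("optimal value:", best_val)
print("image point y = Ax at the optimum:", A @ best_x)
```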
In this paper, we consider the positive semi-definite space tensor cone constrained convex program, its structure and algorithms. We study defining functions, defining sequences and polyhedral outer approximations for this positive semi-definite space tensor cone, give an error bound for the polyhedral outer approximation approach, and thus establish convergence of three polyhedral outer approximation algorithms for solving this problem. We then study some other approaches for solving this structured convex program. These include the conic linear programming approach, the nonsmooth convex program approach and the bi-level program approach. Some numerical examples are presented.
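The sketch below illustrates only the polyhedral outer approximation idea in numbers, not one of the paper's algorithms: positive semi-definiteness of an order-4 space tensor T means T x^4 >= 0 for every direction x, and restricting this to finitely many sampled directions yields linear, hence polyhedral, constraints that are necessary but not sufficient. The tensor and the sample sizes are arbitrary choices.

```python
# Sampled-direction (polyhedral) outer approximation of the PSD space tensor cone.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(6)

def symmetrize(T):
    """Average an order-4 tensor over all 24 index permutations."""
    return sum(np.transpose(T, p) for p in permutations(range(4))) / 24.0

# A symmetric order-4 space tensor (dimension 3), shifted toward the PSD cone.
T = symmetrize(rng.standard_normal((3, 3, 3, 3)))
T += 2.0 * symmetrize(np.einsum('ij,kl->ijkl', np.eye(3), np.eye(3)))   # adds 2*||x||^4

def tensor_value(T, x):
    """Evaluate T x^4."""
    return np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)

def sphere_samples(N):
    X = rng.standard_normal((N, 3))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

dirs = sphere_samples(20000)
coarse = min(tensor_value(T, x) for x in dirs[:50])     # polyhedral outer approximation
fine = min(tensor_value(T, x) for x in dirs)            # much finer necessary test
print("min of T x^4 over   50 sampled directions:", round(coarse, 4))
print("min of T x^4 over 20000 sampled directions:", round(fine, 4))
# coarse >= fine: satisfying the 50 sampled inequalities is necessary, not sufficient,
# for membership in the positive semi-definite space tensor cone.
```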
We consider the problem of minimizing the sum of a convex-concave function and a convex function over a convex set (SFC). It can be reformulated as a univariate minimization problem, where the objective function is evaluated by solving convex optimization. The optimal Lagrangian multipliers of the convex subproblems are used to construct sawtooth curve lower bounds, which play a key role in developing the branch-and-bound algorithm for globally solving (SFC). In this paper, we improve the existing sawtooth-curve bounds to new wave-curve bounds, which are used to develop a more efficient branch-and-bound algorithm. Moreover, we can show that the new algorithm finds an ε-approximate optimal solution in at most O(1/ε) iterations. Numerical results demonstrate the efficiency of our algorithm.
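The sketch below shows only the core mechanism behind sawtooth-style bounds, not the paper's (SFC) reformulation or its wave-curve improvement: the Lagrange multiplier of a parametric convex subproblem supplies a subgradient of the value function v(t), the resulting affine minorants give interval lower bounds, and an elementary branch-and-bound prunes with them. The toy subproblem, the concave term q, and the tolerance are assumptions.

```python
# Interval branch-and-bound with multiplier-based (sawtooth-style) lower bounds.
import heapq
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
n = 10
c = rng.standard_normal(n)

t_par = cp.Parameter(nonneg=True)
x = cp.Variable(n)
budget = cp.norm1(x) <= t_par
sub = cp.Problem(cp.Minimize(cp.sum_squares(x - c)), [budget])

def v_and_subgrad(t):
    """v(t) = min{||x - c||^2 : ||x||_1 <= t}; minus the multiplier is a subgradient of v."""
    t_par.value = t
    sub.solve()
    return sub.value, -budget.dual_value

q = lambda t: -0.3 * (t - 1.0) ** 2            # concave term making the 1-D problem nonconvex

def lower_bound(a, b):
    m = 0.5 * (a + b)
    vm, s = v_and_subgrad(m)                   # affine minorant of v built at the midpoint
    ell = lambda t: vm + s * (t - m)
    return min(ell(a), ell(b)) + min(q(a), q(b))   # valid: min(sum) >= sum of interval mins

eps, best, heap = 1e-3, np.inf, []
heapq.heappush(heap, (lower_bound(0.0, 4.0), 0.0, 4.0))
while heap:
    lb, a, b = heapq.heappop(heap)
    if lb > best - eps:
        continue                               # prune: cannot improve by more than eps
    m = 0.5 * (a + b)
    best = min(best, v_and_subgrad(m)[0] + q(m))   # incumbent from the midpoint
    for lo, hi in ((a, m), (m, b)):
        child_lb = lower_bound(lo, hi)
        if child_lb < best - eps:
            heapq.heappush(heap, (child_lb, lo, hi))

print("epsilon-approximate optimal value:", best)
```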
In this paper we discuss recovering two signals from their convolution in 3 dimensions. One of the signals is assumed to lie in a known subspace, and the other is assumed to be sparse. Various applications such as super resolution, radar imaging, and direction-of-arrival estimation can be described in this framework. We introduce a method to estimate the parameters of a signal in a low-dimensional subspace that is convolved with another signal composed of a few impulses in the time domain. We transform the problem into a convex optimization in the form of a positive semi-definite program using lifting and the atomic norm. We demonstrate that the unknown parameters can be recovered from lowpass observations. Numerical simulations show excellent performance of the proposed method.
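A heavily simplified illustration of the lifting step alone: bilinear measurements y_k = <b_k, h><c_k, s>, which is the form convolution takes after a Fourier-type diagonalization, become linear in the rank-one matrix Z = h s^T, and a nuclear-norm program over Z recovers the pair up to scaling. The paper instead combines lifting with the atomic norm to exploit sparsity of the impulses; that part is omitted here, and all dimensions are illustrative.

```python
# Lifting a bilinear (convolution-type) inverse problem to a convex rank-one recovery.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(8)
d1, d2, K = 8, 8, 55
h = rng.standard_normal(d1)                   # signal in a known low-dimensional subspace
s = rng.standard_normal(d2)                   # second unknown signal
B = rng.standard_normal((K, d1))
C = rng.standard_normal((K, d2))
y = (B @ h) * (C @ s)                         # bilinear measurements y_k = <b_k, h><c_k, s>

Z = cp.Variable((d1, d2))                     # lifted variable standing in for h s^T
constraints = [cp.sum(cp.multiply(np.outer(B[k], C[k]), Z)) == y[k] for k in range(K)]
cp.Problem(cp.Minimize(cp.norm(Z, 'nuc')), constraints).solve()

U, S, Vt = np.linalg.svd(Z.value)
h_hat, s_hat = U[:, 0] * np.sqrt(S[0]), Vt[0] * np.sqrt(S[0])
# Compare up to the inherent sign/scale ambiguity; values near 1 indicate successful recovery.
print("correlation with true h:", abs(h_hat @ h) / (np.linalg.norm(h_hat) * np.linalg.norm(h)))
print("correlation with true s:", abs(s_hat @ s) / (np.linalg.norm(s_hat) * np.linalg.norm(s)))
```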
This paper proposes and analyzes an inexact variant of the proximal generalized alternating direction method of multipliers (ADMM) for solving separable linearly constrained convex optimization problems. In this variant, the first subproblem is approximately solved using a relative error condition, whereas the second one is assumed to be easy to solve. In many ADMM applications, one of the subproblems has a closed-form solution; for instance, in ℓ1-regularized convex composite optimization problems. The proposed method possesses iteration-complexity bounds similar to its exact version. More specifically, it is shown that, for a given tolerance ρ > 0, an approximate solution of the Lagrangian system associated with the problem under consideration is obtained in at most O(1/ρ²) (resp. O(1/ρ) in the ergodic case) iterations. Numerical experiments are presented to illustrate the performance of the proposed scheme.
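A hedged sketch of a generalized ADMM iteration on an ℓ1-regularized least-squares instance, in the spirit of the variant described above: the first (quadratic) subproblem is solved approximately by an inner gradient loop whose stopping test is a simplified stand-in for the paper's relative error condition, the second subproblem is the closed-form soft-thresholding step, and the relaxation parameter alpha is an assumed value in (0, 2).

```python
# Generalized (relaxed) ADMM with an inexact first subproblem for
#   min 0.5*||Ax - b||^2 + mu*||z||_1   s.t.   x - z = 0.
import numpy as np

rng = np.random.default_rng(9)
m, n, mu, beta, alpha = 60, 150, 0.1, 1.0, 1.5         # alpha in (0, 2): generalized ADMM
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ rng.standard_normal(n)
step = 1.0 / (np.linalg.eigvalsh(A.T @ A).max() + beta)  # safe gradient step for the inner loop

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)       # u is the scaled dual variable

for k in range(300):
    # inexact first subproblem: gradient steps on 0.5*||Ax - b||^2 + 0.5*beta*||x - z + u||^2
    for _ in range(50):
        grad = A.T @ (A @ x - b) + beta * (x - z + u)
        if np.linalg.norm(grad) <= 0.1 * max(np.linalg.norm(x - z), 1e-6):
            break                                        # simplified relative acceptance test
        x = x - step * grad
    x_rel = alpha * x + (1.0 - alpha) * z                # generalized (relaxed) step
    z = soft(x_rel + u, mu / beta)                       # closed-form second subproblem
    u = u + x_rel - z                                    # dual update

print("primal residual ||x - z||:", np.linalg.norm(x - z))
```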