This study proposes a novel method for solving nonlinear fractional programming (NFP) problems, which occur frequently in engineering design and management. Fractional terms composed of signomial functions are first decomposed into convex and concave terms by convexification strategies. The piecewise linearization technique is then used to approximate the concave terms, converting the NFP problem into a convex program. A global optimum of the fractional program can finally be found within a tolerable error. Compared with most current fractional programming methods, which can treat only problems with linear functions or a single quotient term, the proposed method can solve a more general fractional program with nonlinear functions and multiple quotient terms. Numerical examples are presented to demonstrate the usefulness of the proposed method.
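As a rough illustration of the piecewise linearization step described above (not the paper's full convexification scheme), the sketch below approximates a single concave univariate term by the chords through sampled breakpoints. The function name and the choice of √x as the concave term are illustrative assumptions only.

```python
import numpy as np

def piecewise_linearize(f, a, b, n_breakpoints):
    """Approximate a concave univariate term f on [a, b] by the
    piecewise-linear interpolant through n_breakpoints sample points.
    For concave f the chords lie below the graph, so the gap
    f(x) - approx(x) is one-sided and shrinks as breakpoints are added."""
    xs = np.linspace(a, b, n_breakpoints)
    ys = f(xs)
    def approx(x):
        return np.interp(x, xs, ys)
    return approx

# Hypothetical concave term f(x) = sqrt(x) on [1, 4]
f = np.sqrt
approx = piecewise_linearize(f, 1.0, 4.0, 9)
grid = np.linspace(1.0, 4.0, 101)
max_err = np.max(f(grid) - approx(grid))   # one-sided approximation gap
```

Refining the breakpoint grid drives `max_err` toward zero, which is how a tolerable-error guarantee of the kind mentioned in the abstract can be enforced.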
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
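The gradient-projection subproblem mentioned above requires projecting onto an explicit one-norm ball. A minimal sketch of that projection follows, using the standard sort-based soft-thresholding construction; the function name and example values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau},
    computed by soft-thresholding with the exact threshold
    found from the sorted absolute values."""
    if np.abs(v).sum() <= tau:
        return v.copy()                     # already feasible
    u = np.sort(np.abs(v))[::-1]            # sorted magnitudes, descending
    cssv = np.cumsum(u)
    # largest k with u_k * k > (sum of top k) - tau
    k = np.nonzero(u * np.arange(1, len(v) + 1) > cssv - tau)[0][-1]
    theta = (cssv[k] - tau) / (k + 1.0)     # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

x = project_l1_ball(np.array([3.0, -2.0, 1.0]), 2.0)
```

The projected point lies exactly on the one-norm ball's boundary whenever the input is infeasible, which is what the constrained least-squares iterations exploit.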
Consider problems of the form \[(\text{P})\qquad \min \{\, f(x) \mid Ex \geq b \,\},\] where f is a strictly convex (possibly nondifferentiable) function and E and b are, respectively, a matrix and a vector. A popular method for solving special cases of (P) (e.g., network flow, entropy maximization, quadratic program) is to dualize the constraints $Ex \geq b$ to obtain a differentiable maximization problem and then apply an iterative ascent method to solve it. This method is simple and can exploit sparsity, thus making it ideal for large-scale optimization and, in certain cases, for parallel computation. Despite its simplicity, however, convergence of this method has been shown only under certain very restrictive conditions and only for certain special cases of (P). In this paper a block coordinate ascent method is presented for solving (P) that contains as special cases both dual coordinate ascent methods and dual gradient methods. It is shown, under certain mild assumptions on f and (P), that this method converges. Also the line searches are allowed to be inexact and, when f is separable, can be done in parallel.
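To make the dualization concrete, here is a toy sketch for the special case f(x) = ½‖x‖² (an assumption; the paper treats general strictly convex, possibly nondifferentiable f). Dualizing Ex ≥ b gives the differentiable concave dual g(λ) = bᵀλ − ½‖Eᵀλ‖² over λ ≥ 0, which is maximized below by cyclic coordinate ascent:

```python
import numpy as np

def dual_coordinate_ascent(E, b, sweeps=100):
    """Solve min 1/2||x||^2 s.t. Ex >= b via cyclic coordinate
    ascent on the dual g(lam) = b'lam - 1/2||E'lam||^2, lam >= 0.
    The primal iterate is recovered as x = E' lam."""
    m, n = E.shape
    lam = np.zeros(m)
    x = np.zeros(n)                      # maintains x = E.T @ lam
    row_sq = (E * E).sum(axis=1)         # ||e_i||^2 for each constraint row
    for _ in range(sweeps):
        for i in range(m):
            # exact maximization of g over lam_i alone, others fixed
            step = (b[i] - E[i] @ x) / row_sq[i]
            new = max(0.0, lam[i] + step)
            x += (new - lam[i]) * E[i]
            lam[i] = new
    return x, lam

E = np.array([[1.0, 1.0]])
b = np.array([2.0])
x, lam = dual_coordinate_ascent(E, b)
```

Each coordinate update touches only one row of E, which is the sparsity-exploiting property the abstract highlights; for separable f the updates can also run in parallel.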
The use of convex optimization for the recovery of sparse signals from incomplete or compressed data is now common practice. Motivated by the success of basis pursuit in recovering sparse vectors, new formulations have been proposed that take advantage of different types of sparsity. In this paper we propose an efficient algorithm for solving a general class of sparsifying formulations. For several common types of sparsity we provide applications, along with details on how to apply the algorithm, and experimental results.
This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new variant is better than the corresponding one for the standard ADMM method and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity of the latter method. Our analysis is based on first presenting and establishing the pointwise iteration-complexity of a regularized non-Euclidean hybrid proximal extragradient framework whose error condition at each iteration includes both a relative error and a summable error. It is then shown that the new ADMM variant is a special instance of the latter framework where the sequence of summable errors is identically zero when the ADMM stepsize is less than one or a nontrivial sequence when the stepsize is in the interval [1, (1 + √5)/2).
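For context, a minimal sketch of the standard (unregularized, exact) ADMM that the variant above refines, applied to a lasso instance min ½‖Ax−b‖² + μ‖z‖₁ subject to x = z. The problem instance, function name, and parameter values are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def admm_lasso(A, b, mu, rho=1.0, iters=300):
    """Baseline ADMM for min 1/2||Ax-b||^2 + mu||z||_1 s.t. x = z,
    in scaled dual form (u = dual variable / rho)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))        # cached once
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))               # x-update: ridge solve
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - mu / rho, 0.0)  # soft-threshold
        u += x - z                                  # dual (multiplier) update
    return z

A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
z = admm_lasso(A, b, mu=1.0)
```

For A = I the lasso solution is soft-thresholding of b, so the iterates above converge to (2, 0, −1); the regularized variant in the paper modifies the subproblems to improve the pointwise complexity of exactly this kind of scheme.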
In this paper, we identify the local rate function governing the sample path large deviation principle for a rescaled process n^{-1}Q(nt), where Q(t) represents the joint number of clients at time t in a polling system with N nodes, one server and Markovian routing. Along the way, the large deviation principle is proved and the rate function is shown to have the form conjectured by Dupuis and Ellis. We introduce a so-called empirical generator consisting of Q(t) and of two empirical measures associated with S_t, the position of the server at time t. One of the main steps is to derive large deviations bounds for a localized version of the empirical generator. The analysis relies on a suitable change of measure and on a representation of fluid limits for polling systems. Finally, the rate function is the solution of a meaningful convex program. The method seems to have a wide range of applications, including the well-known Jackson networks, as shown at the end of this study. An example illustrates how this technique can be used to estimate stationary probability decay rates.
With the objective of generating "shape-preserving" smooth interpolating curves that represent data with abrupt changes in magnitude and/or knot spacing, we study a class of first-derivative-based C^1-smooth univariate cubic L^1 splines. An L^1 spline minimizes the L^1 norm of the difference between the first-order derivative of the spline and the local divided difference of the data. Calculating the coefficients of an L^1 spline is a nonsmooth nonlinear convex program. Via Fenchel's conjugate transformation, the geometric dual program is a smooth convex program with a linear objective function and convex cubic constraints. The dual-to-primal transformation is accomplished by solving a linear program.
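One plausible formalization of the objective stated verbally in the abstract (the exact functional in the paper may differ) is: over C^1 cubic splines s interpolating the data (t_i, y_i),

```latex
\min_{s}\;\sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}}
  \left| \, s'(t) \;-\; \frac{y_{i+1}-y_i}{t_{i+1}-t_i} \, \right| \, dt,
\qquad s(t_i) = y_i,\quad i = 0,\dots,n.
```

The absolute value makes the program nonsmooth but convex in the spline coefficients, which is why the Fenchel dual described above, a smooth program with cubic constraints, is the computationally convenient route.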
This paper proposes a partially inexact proximal alternating direction method of multipliers for computing approximate solutions of a linearly constrained convex optimization problem. This method allows its first subproblem to be solved inexactly using a relative approximate criterion, whereas a proximal term is added to its second subproblem in order to simplify it. A stepsize parameter is included in the updating rule of the Lagrangian multiplier to improve its computational performance. Pointwise and ergodic iteration-complexity bounds for the proposed method are established. To the best of our knowledge, this is the first time that complexity results for an inexact alternating direction method of multipliers with relative error criteria have been analyzed. Some preliminary numerical experiments are reported to illustrate the advantages of the new method.
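To illustrate what an inexact first subproblem can look like in practice, the sketch below solves the x-update of an ADMM lasso instance by conjugate gradients stopped under a generic relative residual test. The stopping rule, problem instance, and names here are stand-in assumptions, not the paper's precise relative approximate criterion.

```python
import numpy as np

def cg(matvec, rhs, x0, rel_tol, max_iter=50):
    """Conjugate gradients for matvec(x) = rhs, stopped when the
    residual norm falls below rel_tol * ||rhs|| (relative test)."""
    x = x0.copy()
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    tol = rel_tol * np.linalg.norm(rhs)
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_admm_lasso(A, b, mu, rho=1.0, iters=200, rel_tol=1e-4):
    """ADMM for min 1/2||Ax-b||^2 + mu||z||_1 s.t. x = z, with the
    x-subproblem solved only approximately by warm-started CG."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    matvec = lambda v: A.T @ (A @ v) + rho * v
    Atb = A.T @ b
    for _ in range(iters):
        x = cg(matvec, Atb + rho * (z - u), x, rel_tol)   # inexact x-update
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - mu / rho, 0.0)
        u += x - z
    return z

A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
z = inexact_admm_lasso(A, b, mu=1.0)
```

The point of a relative (rather than absolute or summable) criterion is that the subproblem accuracy adapts to the current iterate, avoiding wasted inner iterations early on while still supporting the pointwise and ergodic complexity bounds the abstract establishes.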
In this paper, we propose and analyze an inexact version of the symmetric proximal alternating direction method of multipliers (ADMM) for solving linearly constrained optimization problems. Basically, the method allows its first subproblem to be solved inexactly in such a way that a relative approximate criterion is satisfied. In terms of the iteration number k, we establish global O(1/√k) pointwise and O(1/k) ergodic convergence rates of the method for a domain of the acceleration parameters, which is consistent with the largest known one in the exact case. Since the symmetric proximal ADMM can be seen as a class of ADMM variants, the new algorithm as well as its convergence rates generalize, in particular, many others in the literature. Numerical experiments illustrating the practical advantages of the method are reported. To the best of our knowledge, this work is the first to study an inexact version of the symmetric proximal ADMM.
When all the functions that define a convex program are positively homogeneous, then a dual convex program can be constructed which is defined in terms of the primal data only (the primal variables do not appear). Furthermore, the dualizing process, carried out on the dual program, yields the primal. Several well-known examples of convex programs with explicit duals are shown to be special cases.