A penalty method for convex functions which cannot necessarily be extended outside their effective domains by an everywhere finite convex function is proposed and combined with the proximal method. Proofs of convergence rely on variational convergence theory.
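To make the idea concrete, here is a minimal sketch (an illustration under assumptions, not the paper's algorithm): the logarithmic objective below is finite only on its effective domain, so no everywhere finite convex extension exists, and a growing quadratic penalty for the constraint is combined with a proximal term that stabilizes the iterates.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: f is finite only on its effective domain x > 0, so an exterior
# penalty cannot be built from a finite convex extension; instead a quadratic
# penalty for x1 + x2 = 1 is coupled with a proximal term around the iterate.
def f(x):
    return -np.log(x[0]) - np.log(x[1]) if np.all(x > 0) else np.inf

def prox_penalty_step(xk, rho, c):
    # one outer iteration: penalty weight rho grows, proximal weight 1/(2c) fixed
    obj = lambda x: f(x) + rho * (x[0] + x[1] - 1.0) ** 2 \
        + (0.5 / c) * np.sum((x - xk) ** 2)
    return minimize(obj, xk, method="Nelder-Mead").x

x = np.array([0.8, 0.8])
for k in range(30):
    x = prox_penalty_step(x, rho=10.0 * (k + 1), c=1.0)
print(x)  # tends to (0.5, 0.5), the constrained minimizer
```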
In this study, a fuzzy c-means clustering based method is proposed for solving a capacitated multi-facility location problem with known demand points served from capacitated supply centers. It involves the integrated use of fuzzy c-means and convex programming. In fuzzy c-means, data points are allowed to belong to several clusters with different degrees of membership. This feature is used here to split demands between supply centers. The number of clusters is determined by an incremental method that starts with two clusters and stops when the capacity of each cluster is sufficient for its demand. Each cluster is then treated as a single-facility location problem, and each of these subproblems is solved by convex programming that minimizes transportation cost and fine-tunes the facility location. The proposed method is applied to several facility location problems from the OR library (Osman & Christofides, 1994) and compared with center-of-gravity and particle swarm optimization based algorithms. Numerical results on an asphalt producer's real-world data in Turkey are also reported. The results show that the proposed approach outperforms both the original fuzzy c-means and the integrated use of fuzzy c-means and the center-of-gravity method in terms of transportation costs.
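A hedged sketch of the two main ingredients follows (illustrative code, not the paper's exact procedure; the incremental cluster count and capacity checks are omitted): fuzzy c-means memberships split demand among clusters, and a weighted Weiszfeld iteration fine-tunes each single-facility location starting from the center of gravity.

```python
import numpy as np

def fcm(points, c, m=2.0, iters=100):
    """Standard fuzzy c-means: returns memberships U (n x c) and centers."""
    U = np.random.default_rng(0).dirichlet(np.ones(c), size=len(points))
    for _ in range(iters):
        w = U ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # membership update
    return U, centers

def weiszfeld(points, weights, iters=200):
    """Weighted single-facility (Weber) location by the Weiszfeld iteration."""
    x = np.average(points, axis=0, weights=weights)  # start at center of gravity
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1) + 1e-12
        x = (weights / d) @ points / (weights / d).sum()
    return x

rng = np.random.default_rng(1)
demand_pts, demand = rng.random((40, 2)), rng.uniform(1, 5, size=40)
U, _ = fcm(demand_pts, c=3)
# membership degrees split each demand among the facilities
facilities = [weiszfeld(demand_pts, demand * U[:, j]) for j in range(3)]
```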
The augmented Lagrangian method (ALM) is a benchmark for solving convex minimization models with linear constraints. We consider the special case where the objective is the sum of m functions without coupled variables. For solving this separable convex minimization model, it is usually required to decompose the ALM subproblem at each iteration into m smaller subproblems, each of which involves only one function in the original objective. Easier subproblems, each able to take full advantage of its function's individual properties, can thus be generated. In this paper, we focus on the case where full Jacobian decomposition is applied to the ALM subproblems, i.e., all the decomposed subproblems are eligible for parallel computation at each iteration. For the first time, we show, by an example, that the ALM with full Jacobian decomposition can be divergent. To guarantee convergence, we suggest combining a relaxation step with the output of the ALM with full Jacobian decomposition. A novel analysis is presented to illustrate how to choose refined step sizes for this relaxation step. Accordingly, a new splitting version of the ALM with full Jacobian decomposition is proposed. We derive the worst-case O(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and nonergodic senses for the new algorithm. Finally, some numerical results are reported to show the efficiency of the new algorithm.
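For intuition, a minimal sketch of the scheme with m = 2 quadratic blocks is given below (the relaxation factor alpha is illustrative; the paper derives refined step sizes):

```python
import numpy as np

# Sketch: min ||x1 - c1||^2 + ||x2 - c2||^2  s.t.  A1 x1 + A2 x2 = b,
# solved by ALM with full Jacobian decomposition plus a relaxation step.
rng = np.random.default_rng(1)
A1, A2 = rng.random((3, 2)), rng.random((3, 2))
c1, c2, b = rng.random(2), rng.random(2), rng.random(3)
beta, alpha = 1.0, 0.5
x1, x2, lam = np.zeros(2), np.zeros(2), np.zeros(3)

for _ in range(500):
    # full Jacobian decomposition: both blocks solve their ALM subproblem
    # in parallel, each with the other block frozen at the current iterate
    t1 = np.linalg.solve(2 * np.eye(2) + beta * A1.T @ A1,
                         2 * c1 + A1.T @ (lam - beta * (A2 @ x2 - b)))
    t2 = np.linalg.solve(2 * np.eye(2) + beta * A2.T @ A2,
                         2 * c2 + A2.T @ (lam - beta * (A1 @ x1 - b)))
    tlam = lam - beta * (A1 @ t1 + A2 @ t2 - b)
    # relaxation step: move from the current point toward the decomposed output
    x1 += alpha * (t1 - x1)
    x2 += alpha * (t2 - x2)
    lam += alpha * (tlam - lam)
print(np.linalg.norm(A1 @ x1 + A2 @ x2 - b))  # feasibility residual -> 0
```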
In this paper, we present two parallel multiplicative algorithms for convex programming. If the objective function has compact level sets and a locally Lipschitz continuous gradient, we discuss convergence of the algorithms. The proofs are essentially based on the results for the sequential methods shown by Eggermont [1].
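As an example of the multiplicative style of iteration (a sketch; Eggermont's actual algorithms differ in their details), the ISRA-type update below minimizes a convex least-squares objective over the nonnegative orthant; each coordinate is updated independently, hence the iteration parallelizes naturally.

```python
import numpy as np

# Illustrative multiplicative update for the convex problem
# min ||Ax - b||^2 over x >= 0, with A, b entrywise nonnegative.
rng = np.random.default_rng(0)
A = rng.random((30, 10))
b = A @ rng.random(10)          # consistent nonnegative system
x = np.ones(10)                 # strictly positive starting point
for _ in range(2000):
    x *= (A.T @ b) / (A.T @ (A @ x))   # multiplicative step keeps x >= 0
print(np.linalg.norm(A @ x - b))       # residual shrinks toward 0
```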
We describe a simplified and strengthened version of Vaidya's volumetric cutting plane method for finding a point in a convex set C ⊂ R^n. At each step the algorithm has a system of linear inequality constraints which defines a polyhedron P ⊇ C, and an interior point x ∈ P. The algorithm then either drops one constraint, or calls an oracle to check whether x ∈ C and, if not, to obtain a new constraint that separates x from C. Following the addition or deletion of a constraint, the algorithm takes a small number of Newton steps for the volumetric barrier V(·). Progress of the algorithm is measured in terms of changes in V(·). The algorithm terminates either when it is discovered that x ∈ C, or when V(·) becomes large enough to demonstrate that the volume of C must be below some prescribed amount. The complexity of the algorithm compares favorably with that of the ellipsoid method, especially in terms of the number of calls to the separation oracle. Compared to Vaidya's original analysis, we decrease the total number of Newton steps required for termination by a factor of about 1.3 million, while at the same time decreasing the maximum number of constraints used to define P from 10^7 n to 200n.
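For reference, the volumetric barrier that measures progress can be evaluated as follows (a minimal sketch; the oracle calls, constraint add/drop rules, and Newton steps are omitted):

```python
import numpy as np

# The volumetric barrier for P = {x : Ax <= b} is
#   V(x) = (1/2) * log det( A^T S(x)^{-2} A ),  with S(x) = diag(b - Ax).
def volumetric_barrier(A, b, x):
    s = b - A @ x                        # slacks, must be positive
    assert np.all(s > 0), "x must be interior to P"
    H = A.T @ np.diag(s ** -2.0) @ A     # Hessian of the log-barrier
    return 0.5 * np.linalg.slogdet(H)[1]

# unit box [-1, 1]^2; V is minimized at the volumetric center
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(volumetric_barrier(A, b, np.zeros(2)))
```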
We consider the multivariate max-linear regression problem where the model parameters β_1, …, β_k ∈ R^p need to be estimated from n independent samples of the (noisy) observations y = max_{1≤j≤k} β_j^T x + noise. The max-linear model vastly generalizes the conventional linear model, and it can approximate any convex function to arbitrary accuracy when the number of linear models k is large enough. However, the inherent nonlinearity of the max-linear model renders the estimation of the regression parameters computationally challenging. In particular, no estimator based on convex programming was previously known in the literature. We formulate and analyze a scalable convex program given by anchored regression (AR) as the estimator for the max-linear regression problem. Under the standard Gaussian observation setting, we present a non-asymptotic performance guarantee showing that the convex program recovers the parameters with high probability. When the k linear components are equally likely to achieve the maximum, our result shows that a sufficient number of noise-free observations for exact recovery scales as k^4 p up to a logarithmic factor. This sample complexity coincides with that of alternating minimization (Ghosh et al., 2021). Moreover, the same sample complexity applies when the observations are corrupted with arbitrary deterministic noise. We provide empirical results showing that our method performs as the theory predicts and is competitive with the alternating minimization algorithm, particularly in the presence of multiplicative Bernoulli noise. Furthermore, we show empirically that a recursive application of AR can significantly improve the estimation accuracy.
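A hedged sketch of the convex program follows (the formulation and the anchor choice are illustrative reconstructions, not necessarily the paper's exact estimator): since max_j β_j^T x is jointly convex in the parameters, noise-free observations yield convex constraints, and AR maximizes a linear anchor functional over the resulting feasible set.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, k = 300, 5, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, k))
y = (X @ B_true).max(axis=1)            # noise-free max-linear observations

B = cp.Variable((p, k))
# illustrative anchor: a rough initial guess (in practice from initialization)
anchor = B_true + 0.3 * rng.standard_normal((p, k))
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(anchor, B))),
                  [cp.max(X @ B, axis=1) <= y])   # convex (in fact linear) constraints
prob.solve()
print(np.linalg.norm(B.value - B_true))  # estimation error (small when recovery succeeds)
```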
Hiriart-Urruty gave formulas for the first-order and second-order ε-directional derivatives of the marginal function of a convex programming problem with linear equality constraints, that is, the image of a function under a linear mapping (Ref. 1). In this paper, we extend his results to a problem with linear inequality constraints. The formula for the first-order derivative is obtained with the help of a duality theorem, and a lower estimate for the second-order ε-directional derivative is given.
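For orientation, the standard objects involved are sketched below (textbook definitions; the paper's formulas are more refined):

```latex
% The marginal function under linear inequality constraints:
\[
  h(y) \;=\; \inf\,\{\, f(x) \;:\; Ax \le y \,\},
\]
% and, for a closed proper convex h, the first-order
% \varepsilon-directional derivative, which is the support
% function of the \varepsilon-subdifferential:
\[
  h'_{\varepsilon}(y; d)
  \;=\; \inf_{t>0} \frac{h(y + t d) - h(y) + \varepsilon}{t}
  \;=\; \sup\,\{\, \langle s, d \rangle \;:\; s \in \partial_{\varepsilon} h(y) \,\}.
\]
```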
The strictly contractive Peaceman-Rachford splitting method is an effective method for solving separable convex optimization problems, and the inertial proximal Peaceman-Rachford splitting method is one of its important variants. It is known that convergence of the inertial proximal Peaceman-Rachford splitting method can be ensured if the relaxation factor in the Lagrangian multiplier updates is underdetermined, which means that the steps for the Lagrangian multiplier updates are shrunk conservatively. Although small steps play an important role in ensuring convergence, they should be avoided in practice. In this article, we propose a relaxed inertial proximal Peaceman-Rachford splitting method, which has a larger feasible set for the relaxation factor and thus admits larger steps in the Lagrangian multiplier updates. We establish global convergence of the proposed algorithm under the same conditions as the inertial proximal Peaceman-Rachford splitting method. Numerical results on a sparse signal recovery problem in compressive sensing and a total variation based image denoising problem demonstrate the effectiveness of our method.
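A minimal sketch of the iteration on a toy l1-regularized problem follows (the inertial and relaxation parameters below are illustrative choices, not the admissible ranges derived in the paper):

```python
import numpy as np

# Sketch: min 0.5*||x - c||^2 + tau*||z||_1  s.t.  x = z, solved by an
# inertial proximal Peaceman-Rachford iteration with two relaxed dual steps.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
c = rng.standard_normal(50)
tau, beta, alpha, r, s = 0.3, 1.0, 0.2, 0.9, 0.9
z = zp = np.zeros(50)
lam = lamp = np.zeros(50)
for _ in range(300):
    zb, lamb = z + alpha * (z - zp), lam + alpha * (lam - lamp)  # inertial step
    zp, lamp = z, lam
    x = (c + lamb + beta * zb) / (1 + beta)    # x-subproblem (closed form)
    lam_h = lamb - r * beta * (x - zb)         # first relaxed multiplier update
    z = soft(x - lam_h / beta, tau / beta)     # z-subproblem (soft-thresholding)
    lam = lam_h - s * beta * (x - z)           # second relaxed multiplier update
print(np.linalg.norm(x - z))  # primal feasibility residual -> 0
```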
The convergence of central paths has been a focal point of research on interior point methods. Quite detailed analyses have been made for the linear case. However, when it comes to the convex case, even if the constraints remain linear, the problem is unsettled. In [Math. Program., 103 (2005), pp. 63-94], Gilbert, Gonzaga, and Karas presented some examples in convex optimization, where the central path fails to converge. In this paper, we aim at finding some continuous trajectories which can converge for all linearly constrained convex optimization problems under some mild assumptions. We design and analyze a class of continuous trajectories, which are the solutions of certain ordinary differential equation (ODE) systems for solving linearly constrained smooth convex programming. The solutions of these ODE systems are named generalized central paths. By only assuming the existence of a finite optimal solution, we are able to show that, starting from any interior feasible point, (i) all of the generalized central paths are convergent, and (ii) the limit point(s) are indeed the optimal solution(s) of the original optimization problem. Furthermore, we illustrate that for the key example of Gilbert, Gonzaga, and Karas, our generalized central paths converge to the optimal solutions.
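In the same spirit (an illustrative affine-scaling-type ODE, not the paper's specific systems), the sketch below integrates a continuous trajectory that stays feasible and interior and approaches the optimum of a linearly constrained convex program:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy problem: min 0.5*||x - c||^2  s.t.  sum(x) = 1, x >= 0.
c = np.array([0.9, 0.1, -0.3])
A = np.ones((1, 3))
grad = lambda x: x - c

def field(t, x):
    X2 = np.diag(x ** 2)          # interior-point scaling, vanishes at boundary
    g = grad(x)
    # scaled negative gradient, projected so that A x stays constant
    return -(X2 @ g - X2 @ A.T @ np.linalg.solve(A @ X2 @ A.T, A @ X2 @ g))

x0 = np.array([1/3, 1/3, 1/3])    # interior feasible starting point
sol = solve_ivp(field, (0.0, 50.0), x0, rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])               # approaches the optimum (0.9, 0.1, 0)
```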
In this paper, existence and characterization of solutions and duality aspects of infinite-dimensional convex programming problems are examined. Applications of the results to constrained approximation problems are considered. Various duality properties for constrained interpolation problems over convex sets are established under general regularity conditions. The regularity conditions are shown to hold for many constrained interpolation problems. Characterizations of local proximinal sets and the set of best approximations are also given in normed linear spaces.
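A finite-dimensional illustration of the duality at work follows (a sketch under a regularity assumption of the kind discussed; the data and helper names are hypothetical): the best approximation from the intersection of a cone and an affine interpolation set reduces to a low-dimensional dual equation.

```python
import numpy as np
from scipy.optimize import fsolve

# Project x0 onto C = {x >= 0 : Ax = b} by solving the dual optimality
# condition A P_K(x0 + A^T lam) = b, where P_K is projection onto the
# nonnegative cone; then x* = P_K(x0 + A^T lam). This perturbation-style
# characterization holds under suitable regularity conditions.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 6))
b = A @ np.abs(rng.standard_normal(6))      # guarantees C is nonempty
x0 = rng.standard_normal(6)

P_K = lambda v: np.maximum(v, 0.0)                    # projection onto the cone
residual = lambda lam: A @ P_K(x0 + A.T @ lam) - b    # dual optimality condition
lam = fsolve(residual, np.zeros(2))                   # 2-dimensional dual problem
x_star = P_K(x0 + A.T @ lam)                          # best approximation to x0
print(A @ x_star - b, np.min(x_star))                 # feasibility checks
```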