In this paper we consider the minimization of a continuous function that is potentially not differentiable or not twice differentiable on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to classical ones when the derivative on the boundary is available. For this type of problem, existing necessary conditions often rely on the notion of subdifferential or become non-trivially weaker than the KKT conditions of the (twice-)differentiable counterpart problems. In contrast, this paper presents a new set of first- and second-order necessary conditions that are derived without the use of subdifferentials and reduce exactly to the KKT conditions when (twice-)differentiability holds. As a result, these conditions are stronger than some existing ones considered for the discussed minimization problem when only non-negativity constraints are present. To achieve these optimality conditions in the special but important case of linearly constrained problems, we present two novel interior point trust-region algorithms and show that their worst-case computational efficiency in achieving the potentially stronger optimality conditions matches the best known complexity bounds. Since this work considers a more general problem than those in the literature, our results also indicate that the best known existing complexity bounds actually hold for a wider class of nonlinear programming problems. This development is significant because optimality conditions play a fundamental role in computational optimization, and an increasing number of nonlinear and nonconvex problems need to be solved in practice.
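The interior-point idea can be illustrated on a one-dimensional toy problem (a hedged sketch of the general barrier principle only, not the paper's algorithm; all names are invented): minimize f(x) = sqrt(x) over x >= 0, where f is continuous but not differentiable at the boundary minimizer x* = 0. A log barrier keeps iterates strictly interior, where f is smooth, and driving the barrier weight mu to zero recovers the boundary solution; here the barrier subproblem min sqrt(x) - mu*log(x) has the closed-form solution x(mu) = 4*mu^2.

```python
import math

def barrier_minimizer(mu, lo=1e-12, hi=10.0, iters=200):
    # Bisection on the stationarity condition of the barrier subproblem
    #   d/dx [ sqrt(x) - mu*log(x) ] = 1/(2*sqrt(x)) - mu/x = 0,
    # which has the unique positive root x = 4*mu**2.
    g = lambda x: 1.0 / (2.0 * math.sqrt(x)) - mu / x
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Drive the barrier weight to zero: every iterate stays interior (x > 0),
# where f is differentiable, yet the path approaches the nonsmooth
# boundary minimizer x* = 0.
path = [barrier_minimizer(mu) for mu in (1e-1, 1e-2, 1e-3, 1e-4)]
```

The computed path matches the closed form 4*mu^2 and decreases monotonically toward the boundary, which is the mechanism that lets interior-point conditions avoid derivatives on the boundary altogether.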
Cross-manifold clustering is an extremely challenging learning problem. Since the low-density hypothesis is not satisfied in cross-manifold problems, many traditional clustering methods fail to discover cross-manifold structures. In this article, we propose multiple flat projections clustering (MFPC) for cross-manifold clustering. In our MFPC, the given samples are projected into multiple localized flats to discover the global structures of implicit manifolds. Thus, the intersecting clusters are distinguished in the various projection flats. In MFPC, a series of nonconvex matrix optimization problems is solved by a proposed recursive algorithm. Furthermore, a nonlinear version of MFPC is derived via kernel tricks to deal with more complex cross-manifold learning situations. Synthetic tests show that our MFPC handles cross-manifold structures well. Moreover, experimental results on benchmark datasets and object-tracking videos show the excellent performance of our MFPC compared with some state-of-the-art manifold clustering methods.
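MFPC itself solves nonconvex matrix optimization problems, but the reason flat projections help with intersecting clusters can be seen in a much-simplified toy: two lines crossing at the origin violate the low-density hypothesis (the clusters touch), so centroid- or density-based methods mix them, while assigning each point to its nearest fitted line separates them. The following pure-Python k-lines alternation is an invented sketch for illustration, not the MFPC algorithm:

```python
import math

def fit_direction(points):
    # Principal direction of points for a line through the origin: the
    # top eigenvector of the 2x2 scatter matrix, in closed form.
    sxx = sum(x * x for x, y in points)
    syy = sum(y * y for x, y in points)
    sxy = sum(x * y for x, y in points)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (math.cos(theta), math.sin(theta))

def dist_to_flat(p, d):
    # Squared distance from p to the line {t*d}: |p|^2 - (p.d)^2.
    proj = p[0] * d[0] + p[1] * d[1]
    return p[0] ** 2 + p[1] ** 2 - proj ** 2

def k_flats(points, dirs, rounds=10):
    # Alternate: assign each point to its nearest flat, then refit each
    # flat to its assigned points.
    for _ in range(rounds):
        labels = [min(range(len(dirs)), key=lambda k: dist_to_flat(p, dirs[k]))
                  for p in points]
        dirs = [fit_direction([p for p, l in zip(points, labels) if l == k])
                for k in range(len(dirs))]
    return labels, dirs

# Two intersecting clusters: points on y = x and on y = -x.
ts = [-2.0, -1.0, 1.0, 2.0]
pts = [(t, t) for t in ts] + [(t, -t) for t in ts]
labels, dirs = k_flats(pts, dirs=[(0.96, 0.28), (0.96, -0.28)])
```

On this data the alternation recovers the two lines exactly, even though their clusters intersect at the origin.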
We discuss the L-p (0 <= p < 1) minimization problem arising from sparse solution construction and compressed sensing. For any fixed 0 < p < 1, we prove that finding the global minimal value of the problem is strongly NP-hard, but computing a local minimizer of the problem can be done in polynomial time. We also develop an interior-point potential reduction algorithm with a provable complexity bound and present preliminary computational results demonstrating the effectiveness of the algorithm.
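The role of 0 < p < 1 can be seen from the quasi-norm itself: among vectors with the same L1 mass, the value sum_i |x_i|^p is smaller for sparser vectors, which is why L-p minimization promotes sparsity more aggressively than L1. A small sanity check (illustrative only, not the paper's algorithm):

```python
def lp_power(x, p):
    # The L-p quasi-norm raised to the p-th power: sum_i |x_i|^p.
    return sum(abs(v) ** p for v in x)

sparse = [1.0, 0.0, 0.0, 0.0]       # one nonzero entry
dense = [0.25, 0.25, 0.25, 0.25]    # same L1 norm, mass spread out

# For p = 1 both vectors score 1.0; for p < 1 the sparse vector scores
# strictly lower, and as p -> 0 the value approaches the nonzero count
# (the "L0 norm"), which is what compressed sensing really targets.
scores = {p: (lp_power(sparse, p), lp_power(dense, p)) for p in (1.0, 0.5, 0.1)}
```

This gap between the smooth-looking objective and the combinatorial L0 limit is exactly where the NP-hardness of the global problem comes from.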
In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex) and which we call variationally coherent. Since the standard technique of "ergodic averaging" offers no tangible benefits beyond convex programming, we focus directly on the algorithm's last generated sample (its "last iterate"), and we show that it converges with probability 1 if the underlying problem is coherent. We further consider a localized version of variational coherence which ensures local convergence of stochastic mirror descent (SMD) with high probability. These results contribute to the landscape of nonconvex stochastic optimization by showing that (quasi-)convexity is not essential for convergence to a global minimum: rather, variational coherence, a much weaker requirement, suffices. Finally, building on the above, we reveal an interesting insight regarding the convergence speed of SMD: in problems with sharp minima (such as generic linear programs or concave minimization problems), SMD reaches a minimum point in a finite number of steps (a.s.), even in the presence of persistent gradient noise. This result is to be contrasted with existing black-box convergence rate estimates that are only asymptotic.
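On the probability simplex with the entropic regularizer, the SMD update is the exponentiated-gradient rule x_i <- x_i * exp(-eta * g_i) / Z. The following pure-Python sketch is hedged: the linear objective, noise level, and step size are invented for illustration. It shows the last iterate concentrating on the optimal vertex of min <c, x>, a problem with a sharp minimum, despite persistent gradient noise:

```python
import math
import random

random.seed(0)
c = [1.0, 2.0, 3.0]      # minimize <c, x> over the simplex; the sharp
                         # minimum is the vertex e_1 (index 0)
x = [1.0 / 3.0] * 3      # start at the barycenter
eta = 0.1
for _ in range(2000):
    # Persistent (non-vanishing) Gaussian gradient noise.
    g = [ci + random.gauss(0.0, 0.1) for ci in c]
    # Entropic mirror step = multiplicative weights update.
    w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
    z = sum(w)
    x = [wi / z for wi in w]
```

The last iterate, not any ergodic average, ends up essentially at the minimizing vertex, consistent with the sharp-minimum behavior described above.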
Hyperspectral image (HSI) and multispectral image (MSI) fusion aims at producing a super-resolution image (SRI). In this paper, we establish a nonconvex optimization model for the image fusion problem through low-rank tensor triple decomposition. Using the limited-memory BFGS (L-BFGS) approach, we develop a first-order optimization algorithm (TTDSR) for obtaining the desired super-resolution image. Furthermore, two detailed methods are provided for calculating the gradient of the objective function. With the aid of the Kurdyka-Łojasiewicz property, the iterative sequence is proved to converge to a stationary point. Finally, experimental results on different datasets show the effectiveness of our proposed approach.
Mathematical programming problems dealing with functions, each of which can be represented as a difference of two convex functions, are called DC programming problems. The purpose of this overview is to discuss the main theoretical results, some applications, and solution methods for this interesting and important class of programming problems. Some modifications and new results on the optimality conditions and the development of algorithms are also presented.
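The workhorse method for DC programs, DCA, linearizes the concave part: writing f = g - h with g and h convex, it iterates x_{k+1} = argmin_x g(x) - <∇h(x_k), x>, so each step solves a convex subproblem and the objective decreases monotonically. A one-dimensional hedged sketch (example chosen for illustration): f(x) = x^4 - x^2 with g(x) = x^4 and h(x) = x^2, for which the convex subproblem has the closed form x_{k+1} = (x_k/2)^{1/3}.

```python
def dca_step(x):
    # Linearize h(x) = x**2 at the current point: grad_h = 2*x.
    # Convex subproblem: argmin_y y**4 - (2*x)*y, i.e. 4*y**3 = 2*x,
    # so y = (x/2)**(1/3) (valid for x > 0, which holds along this run).
    return (x / 2.0) ** (1.0 / 3.0)

def f(x):
    return x ** 4 - x ** 2   # DC objective g(x) - h(x)

x = 1.0                      # start in the basin of the minimizer 1/sqrt(2)
trace = [x]
for _ in range(60):
    x = dca_step(x)
    trace.append(x)
```

f has global minimizers at +/- 1/sqrt(2); from x = 1 the iteration converges linearly (with rate 1/3 at the fixed point) to +1/sqrt(2), and the objective values along the trace are nonincreasing, as DC theory guarantees.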
The problem (P) of optimizing a linear function over the efficient set of a multiple-objective linear program serves many useful purposes in multiple-criteria decision making. Mathematically, problem (P) can be classified as a global optimization problem. Such problems are much more difficult to solve than convex programming problems. In this paper, a nonadjacent extreme-point search algorithm is presented for finding a globally optimal solution for problem (P). The algorithm finds an exact extreme-point optimal solution for the problem after a finite number of iterations. It can be implemented using only linear programming methods. Convergence of the algorithm is proven, and a discussion is included of its main advantages and disadvantages.
For solving a broad class of nonconvex programming problems on an unbounded constraint set, we provide a self-adaptive step-size strategy that does not involve line-search techniques, and we establish the convergence of a generic approach under mild assumptions. Specifically, the objective function need not satisfy the convexity condition. Unlike descent line-search algorithms, the method does not need a known Lipschitz constant to determine the size of the first step. The crucial feature of this process is the steady reduction of the step size until a certain condition is fulfilled. In particular, it yields a new gradient projection approach for optimization problems with an unbounded constraint set. To demonstrate the effectiveness of the proposed technique on large-scale problems, we apply it to experiments in machine learning, such as supervised feature selection, multivariable logistic regression, and neural networks for classification.
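A hedged caricature of the step-size idea (the shrink rule and the test problem are invented and much simpler than the paper's strategy): start with an optimistic step and, whenever a trial projected-gradient step fails a simple decrease condition, halve the step size and keep the current point. Unlike a backtracking line search, the step size is never increased again, and no Lipschitz constant is assumed up front. The feasible set here is the nonnegative orthant, an unbounded constraint set.

```python
def solve(grad, f, x0, lam=1.0, sigma=0.1, iters=100):
    # Projected gradient on the nonnegative orthant with a self-adaptive
    # step size lam that only ever shrinks.
    x = x0[:]
    for _ in range(iters):
        g = grad(x)
        trial = [max(0.0, xi - lam * gi) for xi, gi in zip(x, g)]
        step2 = sum((t - xi) ** 2 for t, xi in zip(trial, x))
        if f(trial) <= f(x) - sigma * step2:
            x = trial        # sufficient decrease: accept the trial point
        else:
            lam *= 0.5       # otherwise shrink the step and keep x
    return x

# min (x0 - 2)^2 + (x1 + 1)^2 subject to x >= 0; the unconstrained
# minimizer (2, -1) is infeasible, so the solution (2, 0) lies on the
# boundary of the feasible set.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
sol = solve(grad, f, [0.0, 0.0])
```

From (0, 0) the first trial with lam = 1 overshoots and is rejected, the step halves, and the next trial lands on the constrained solution (2, 0).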
We consider the problem of designing a suboptimal H-2/H-infinity feedback control law for a linear time-invariant control system when a complete set of state variables is not available. This problem can naturally be restated as a nonconvex optimization problem with a bilinear, multiobjective functional under suitably chosen linear matrix inequality (LMI) constraints. To solve such a problem, we propose an LMI-based procedure which is a sequential linearization programming approach. The properties and the convergence of the algorithm are discussed in detail. Finally, several numerical examples for static H-2/H-infinity output feedback problems demonstrate the applicability of the considered algorithm and also verify the theoretical results numerically.
A method of constructing test problems with known global solution for a class of reverse convex programs or linear programs with an additional reverse convex constraint is presented. The initial polyhedron is assumed to be a hypercube. The method then systematically generates cuts that slice the cube in such a way that a prespecified global solution on its edge remains intact. The proposed method does not require the solution of linear programs or systems of linear equations as is often required by existing techniques.