We study quasi-convex optimization problems in which only a subset of the constraints can be sampled, and yet one would like a probabilistic guarantee on the obtained solution with respect to the initial (unknown) optimization problem. Even though our results are partly applicable to general quasi-convex problems, in this work we introduce and study a particular subclass, which we call "quasi-linear problems", and we provide optimality conditions for these problems. Building on this, we extend the approach of chance-constrained convex optimization to quasi-linear optimization problems. Finally, we show that this approach is useful for the stability analysis of black-box switched linear systems from a finite set of sampled trajectories: it allows us to compute probabilistic upper bounds on the joint spectral radius (JSR) of a large class of switched linear systems.
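To make the sampling idea concrete, here is a minimal Python sketch of the empirical ingredient of such a bound: querying one-step transitions of a black-box switched system and recording the largest observed growth ratio. The function names are hypothetical, and the sketch deliberately omits the scenario-based correction factor that turns this empirical maximum into a probabilistic upper bound on the JSR.

```python
import numpy as np

def sampled_growth(step, n, num_samples, seed=None):
    """Largest one-step growth ratio observed over sampled unit states.

    `step` is assumed to return A_sigma @ x for a hidden mode sigma chosen
    by the black-box system.  This is only the empirical part of a bound:
    a scenario-based inflation factor (omitted here) is what yields a
    probabilistic guarantee with respect to the unsampled constraints.
    """
    rng = np.random.default_rng(seed)
    gamma = 0.0
    for _ in range(num_samples):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)        # sample a state on the unit sphere
        gamma = max(gamma, np.linalg.norm(step(x)))
    return gamma

# Toy two-mode system; the modes stay hidden from the estimator.
modes = [np.array([[0.6, 0.3], [0.0, 0.5]]),
         np.array([[0.4, -0.2], [0.3, 0.6]])]
sys_rng = np.random.default_rng(0)
print(sampled_growth(lambda x: modes[sys_rng.integers(2)] @ x,
                     n=2, num_samples=500, seed=1))
```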
Quasi-convex optimization plays a pivotal role in many fields, including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the quantitative convergence theory, including the iteration complexity and convergence rates, of various subgradient methods for solving quasi-convex optimization problems in a unified framework. In particular, we consider a sequence satisfying a general (inexact) basic inequality, and investigate the global convergence theorem and the iteration complexity when using the constant, diminishing or dynamic stepsize rules. More importantly, we establish the linear (or sublinear) convergence rates of the sequence under an additional assumption of weak sharp minima of Hölderian order and upper bounded noise. These convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard, inexact and conditional subgradient methods, for solving quasi-convex optimization problems under the assumptions of the Hölder condition and/or weak sharp minima of Hölderian order.
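As a concrete illustration of the methods analyzed here, the following Python sketch implements the standard quasi-subgradient projection method with the diminishing stepsize rule v_k = c/sqrt(k). For a differentiable quasi-convex function, the normalized gradient serves as a unit quasi-subgradient (a normal to the sublevel set); the stepsize constant and the test problem are our own choices, not taken from the paper.

```python
import numpy as np

def quasi_subgradient_method(grad, project, x0, steps=1000, c=1.0):
    """Standard quasi-subgradient method, diminishing stepsizes v_k = c/sqrt(k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng == 0:                       # stationary point reached
            break
        x = project(x - (c / np.sqrt(k)) * g / ng)  # unit quasi-subgradient step
    return x

# Example: minimize the quasi-convex f(x) = ||x - a||^(1/2) over the box [0, 1]^2.
a = np.array([0.7, 0.2])
grad_f = lambda x: (x - a) / (2 * np.linalg.norm(x - a) ** 1.5 + 1e-12)
proj_box = lambda x: np.clip(x, 0.0, 1.0)
print(quasi_subgradient_method(grad_f, proj_box, x0=np.array([0.0, 1.0])))
```

The test function satisfies f(x) - f* = dist(x, X*)^(1/2), i.e. weak sharp minima of Hölderian order 1/2, which is the kind of condition under which the rates above apply.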
The sum of ratios problem has a variety of important applications in economics and management science, but it is difficult to solve globally. In this paper, we consider the problem of minimizing the sum of a number of nondifferentiable quasi-convex component functions over a closed and convex set. The sum of quasi-convex component functions is not necessarily quasi-convex, so this study goes beyond quasi-convex optimization. Exploiting the structure of the sum-minimization problem, we propose a new incremental quasi-subgradient method for this problem and investigate its convergence to a global optimal value/solution when using the constant, diminishing or dynamic stepsize rules, under a homogeneity assumption and the Hölder condition. To economize on the computational cost of subgradients of a large number of component functions, we further propose a randomized incremental quasi-subgradient method, in which only one component function is randomly selected to construct the subgradient direction at each iteration; its convergence properties are obtained in terms of function values and iterates with probability 1. The proposed incremental quasi-subgradient methods are applied to solve the quasi-convex feasibility problem and the sum of ratios problem, as well as the multiple Cobb-Douglas production efficiency problem, and the numerical results show that the proposed methods are efficient for solving the large-scale sum of ratios problem.
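A minimal Python sketch of the randomized variant described above: at each iteration one component is drawn uniformly at random and a unit quasi-subgradient step is taken along it with a diminishing stepsize. The toy components and stepsize constant are hypothetical choices for illustration; each component is quasi-convex and their minimizer sets intersect, mirroring the quasi-convex feasibility setting.

```python
import numpy as np

def randomized_incremental_qsg(qsubgrads, project, x0, steps=5000, c=0.5, seed=0):
    """Randomized incremental quasi-subgradient sketch for min_x sum_i f_i(x).

    Only one randomly selected component contributes a (normalized)
    quasi-subgradient per iteration, so each step costs one component
    evaluation instead of all m of them.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = qsubgrads[rng.integers(len(qsubgrads))](x)   # sample one component
        ng = np.linalg.norm(g)
        if ng > 0:
            x = project(x - (c / np.sqrt(k)) * g / ng)   # diminishing stepsize
    return x

# Unit quasi-subgradients of f1(x) = sqrt(|x1 - 1|) and f2(x) = sqrt(|x2 - 2|);
# both components are quasi-convex and share the minimizer (1, 2).
qsubgrads = [lambda x: np.array([np.sign(x[0] - 1.0), 0.0]),
             lambda x: np.array([0.0, np.sign(x[1] - 2.0)])]
proj = lambda x: np.clip(x, 0.0, 5.0)
print(randomized_incremental_qsg(qsubgrads, proj, x0=np.array([4.0, 0.5])))
```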
Quasi-convex optimization is fundamental to the modelling of many practical problems in various fields such as economics, finance and industrial organization. Subgradient methods are practical iterative algorithms for solving large-scale quasi-convex optimization problems. In the present paper, focusing on quasi-convex optimization, we develop an abstract convergence theorem for a class of sequences that satisfy a general basic inequality, under suitable assumptions on the parameters. The convergence properties, in terms of both function values and distances of iterates from the optimal solution set, are discussed. The abstract convergence theorem covers relevant results for many types of subgradient methods studied in the literature, for either convex or quasi-convex optimization. Furthermore, we propose a new subgradient method, in which a perturbation of the successive direction is employed at each iteration. As an application of the abstract convergence theorem, we obtain convergence results for the proposed subgradient method under the assumption of the Hölder condition of order p, using the constant, diminishing or dynamic stepsize rules, respectively. A preliminary numerical study shows that the proposed method outperforms the standard, stochastic and primal-dual subgradient methods in solving the Cobb-Douglas production efficiency problem.
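The abstract does not spell out the perturbation scheme, so the following Python sketch uses one natural instance, a momentum-style perturbation d_k = g_k/||g_k|| + beta * d_{k-1} of the successive direction; the paper's exact scheme may differ, and beta and c are hypothetical parameters.

```python
import numpy as np

def perturbed_quasi_subgradient(grad, project, x0, steps=2000, c=1.0, beta=0.5):
    """Quasi-subgradient iteration with a perturbed successive direction."""
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)
    for k in range(1, steps + 1):
        g = grad(x)
        ng = np.linalg.norm(g)
        if ng == 0:
            break
        d = g / ng + beta * d             # perturb the unit quasi-subgradient
        x = project(x - (c / np.sqrt(k)) * d / max(np.linalg.norm(d), 1e-12))
    return x
```

The perturbation reuses information from previous directions, as a heavy-ball-type method does in the convex setting; whether this matches the paper's perturbation is an assumption on our part.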
We consider two classes of proximal-like algorithms for minimizing a proper lower semicontinuous quasi-convex function f(x) subject to the non-negativity constraint x >= 0. The algorithms are based on an entropy-like, second-order homogeneous distance function. Under the assumption that the set of global minimizers is nonempty and bounded, we prove the full convergence of the sequence generated by the algorithms and, furthermore, obtain two important convergence results by imposing certain conditions on the proximal parameters. One is that the generated sequence converges to a stationary point if the proximal parameters are bounded and the problem is continuously differentiable; the other is that the generated sequence converges to a solution of the problem if the proximal parameters approach zero. Numerical experiments are carried out for a class of quasi-convex optimization problems where f(x) is the composition of a convex quadratic function from R^n to R with a continuously differentiable increasing function from R to R, and the computational results indicate that these algorithms are very promising in finding a global optimal solution to these quasi-convex problems.
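For illustration, here is a Python sketch of such a proximal-like iteration. The distance d(x, y) = sum_i y_i^2 * phi(x_i/y_i) with phi(t) = (mu/2)(t-1)^2 + nu(t log t - t + 1) is one common entropy-like, second-order homogeneous choice; the paper's exact phi may differ, and the inner subproblem is solved numerically here rather than in closed form.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_prox_method(f, x0, proximal_params, mu=1.0, nu=1.0):
    """Proximal-like method: x_{k+1} = argmin_{x >= 0} f(x) + mu_k * d(x, x_k)."""
    def dist(x, y):
        t = x / y
        phi = 0.5 * mu * (t - 1.0) ** 2 + nu * (t * np.log(t) - t + 1.0)
        return np.sum(y ** 2 * phi)      # second-order homogeneous in (x, y)

    x = np.asarray(x0, dtype=float)
    for p in proximal_params:
        obj = lambda z, xc=x.copy(), p=p: f(z) + p * dist(z, xc)
        x = minimize(obj, x, bounds=[(1e-10, None)] * x.size).x
    return x

# Quasi-convex test function: an increasing log composed with a convex
# quadratic, matching the class used in the paper's experiments.  The
# proximal parameters approach zero, as in the second convergence result.
a = np.array([2.0, 0.5])
f = lambda x: np.log1p(np.sum((x - a) ** 2))
print(entropy_prox_method(f, x0=np.ones(2),
                          proximal_params=[1.0 / k for k in range(1, 21)]))
```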
A new notion of "adjusted sublevel set" of a function is introduced and studied. These sets lie between the sublevel and strict sublevel sets of the function. In contrast to the normal operators to sublevel or strict sublevel sets studied in the literature so far, the normal operator to the adjusted sublevel sets is both quasi-monotone and, in the case of quasi-convex functions, cone upper-semicontinuous. This makes the new notion appropriate for all kinds of quasi-convex functions and, in particular, for quasi-convex functions whose graph presents a "flat part". An application to quasi-convex optimization is given through the study of an associated variational inequality problem.
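For reference, the construction as it is usually stated in the literature (our notation, recalled from memory, with S_λ = {y : f(y) ≤ λ} and S^<_λ = {y : f(y) < λ}) can be written as follows.

```latex
% Adjusted sublevel set of f at x (when S^<_{f(x)} is nonempty):
S_f^{a}(x) \;=\; S_{f(x)} \,\cap\, \overline{B}\bigl(S^{<}_{f(x)},\,\rho_x\bigr),
\qquad
\rho_x \;=\; \operatorname{dist}\bigl(x,\;S^{<}_{f(x)}\bigr),
% with S_f^{a}(x) = S_{f(x)} when S^<_{f(x)} is empty.  The associated
% normal operator is the cone-valued map
N_f^{a}(x) \;=\; \bigl\{\, x^{*} : \langle x^{*},\,y-x\rangle \le 0
              \ \text{for all } y \in S_f^{a}(x) \,\bigr\}.
```

Since S^<_{f(x)} ⊆ S_f^a(x) ⊆ S_{f(x)}, the adjusted set indeed lies between the strict sublevel and sublevel sets, which is what makes the normal operator well behaved on the "flat parts" where the two differ.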