This paper establishes enhanced convergence rates for the conditional gradient method in multiobjective optimization under assumptions weaker than uniform convexity of the objective functions. Structural conditions are imposed on the constraint set, including uniformly convex sets and polytopes. We analyze both adaptive step sizes and diminishing step sizes of the form l/(k+l) with l >= 1, improving upon the previously used 4/(k+4). The results show convergence rates faster than the standard O(1/k). Numerical experiments verify the theoretical results.
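To fix ideas, here is a minimal single-objective sketch of the conditional gradient iteration with the l/(k+l) schedule; the function names, the simplex constraint, and the oracle are illustrative, and the multiobjective method of the paper replaces the single gradient with a direction-finding subproblem over all objectives.

```python
import numpy as np

def cond_grad(grad, lmo, x0, ell=2.0, iters=100):
    """Conditional gradient with the diminishing step size ell/(k+ell), ell >= 1.

    grad: gradient of a single smooth objective (the multiobjective method
          replaces this with a subproblem over the gradients of all objectives).
    lmo : linear minimization oracle returning argmin_{s in C} <g, s>.
    """
    x = x0
    for k in range(iters):
        s = lmo(grad(x))                 # solve the linear subproblem
        gamma = ell / (k + ell)          # diminishing step size ell/(k+ell)
        x = (1 - gamma) * x + gamma * s  # convex combination keeps x feasible
    return x

# illustrative use: minimize ||x - b||^2 over the unit simplex
b = np.array([0.2, 0.9, -0.3])
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # simplex vertex minimizing <g, .>
x_star = cond_grad(lambda x: 2 * (x - b), lmo, np.ones(3) / 3)
```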
This article deals with multiobjective composite optimization problems that consist of simultaneously minimizing several objective functions, each of which combines smooth and non-smooth functions. To tackle these problems, we propose a generalized version of the conditional gradient method, also known as the Frank-Wolfe method. The method is analysed with three step size strategies: Armijo-type, adaptive, and diminishing. We establish asymptotic convergence properties and iteration-complexity bounds, with and without convexity assumptions on the objective functions. Numerical experiments illustrating the practical behaviour of the methods are presented.
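The abstract does not reproduce the subproblem, so the following is a hedged sketch of how the smooth multiobjective case of such a method looks with the Armijo-type rule; `direction_oracle`, the tolerances, and the Armijo parameters are illustrative, and the composite method additionally carries the non-smooth terms into the subproblem.

```python
import numpy as np

def mo_cond_grad_armijo(fs, grads, direction_oracle, x0,
                        beta=0.5, sigma=1e-4, tol=1e-8, iters=50):
    """Sketch of a multiobjective conditional gradient method with an
    Armijo-type step size (smooth case; names and tolerances illustrative).

    direction_oracle(x) should return argmin_{u in C} max_i <grads[i](x), u - x>;
    the composite version of the subproblem also adds the non-smooth terms.
    """
    x = x0
    for _ in range(iters):
        p = direction_oracle(x)
        theta = max(g(x) @ (p - x) for g in grads)  # gap function value, <= 0
        if theta >= -tol:          # x is (approximately) Pareto critical
            break
        t = 1.0
        # backtrack until every objective passes the sufficient-decrease test
        while any(f(x + t * (p - x)) > f(x) + sigma * t * theta for f in fs):
            t *= beta
        x = x + t * (p - x)
    return x
```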
In this paper, we propose a generalized conditional gradient method for multiobjective optimization, where the objective function is the sum of a smooth function and a possibly nonsmooth function. The proposed method ...
We analyze the conditional gradient method, also known as the Frank-Wolfe method, for constrained multiobjective optimization. The constraint set is assumed to be convex and compact, and the objective functions are assumed to be continuously differentiable. The method is considered with different strategies for obtaining the step sizes. Asymptotic convergence properties and iteration-complexity bounds, with and without convexity assumptions on the objective functions, are established. Numerical experiments are provided to illustrate the effectiveness of the method and to certify the obtained theoretical results.
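For context, a common "adaptive" (line-search-free) step size in conditional gradient methods, assuming a known Lipschitz constant L of the gradients, minimizes the quadratic upper model over [0, 1]; this is a standard example, not necessarily the exact rule analyzed in the paper.

```python
import numpy as np

def adaptive_step(theta, p, x, L):
    """Line-search-free step: minimize the quadratic upper bound
    f(x + t(p - x)) <= f(x) + t*theta + (L/2) * t**2 * ||p - x||**2
    over t in [0, 1], where theta = max_i <grad f_i(x), p - x> <= 0."""
    d2 = np.linalg.norm(p - x) ** 2
    return 1.0 if d2 == 0.0 else min(1.0, -theta / (L * d2))
```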
In this paper, we propose a conditional gradient method for solving constrained vector optimization problems with respect to a partial order induced by a closed, convex and pointed cone with nonempty interior. When the partial order under consideration is the one induced by the non-negative orthant, we recover the method for multiobjective optimization recently proposed by Assuncao et al. (Comput Optim Appl 78(3):741-768, 2021). In our method, the construction of the auxiliary subproblem is based on the well-known oriented distance function. Three different types of step size strategies (Armijo, adaptive and nonmonotone) are considered. Without convexity assumptions on the objective function, we obtain the stationarity of accumulation points of the sequences produced by the proposed method equipped with the Armijo or the nonmonotone step size rule. To obtain the convergence result of the method with the adaptive step size strategy, we introduce a useful cone convexity condition which allows us to circumvent the intricate question of the Lipschitz continuity of the Jacobian of the objective function. This condition helps us to generalize the classical descent lemma to the vector optimization case. Under convexity assumptions on the objective function, we prove that all accumulation points of any sequence generated by our method are weakly efficient solutions. Numerical experiments illustrating the practical behavior of the method are presented.
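For reference, the oriented distance function on which the auxiliary subproblem is based is the standard signed distance of Hiriart-Urruty: for a nonempty set A in R^m, Delta_A(y) := d(y, A) - d(y, R^m \ A), where d(y, S) = inf_{s in S} ||y - s||. It is negative in the interior of A, zero on the boundary of A, and positive outside A.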
The conditional gradient method is generalized to nonconvex sets of constraints representing the set-theoretic intersection of a convex smooth surface and a convex compact set. Necessary optimality conditions are studied, and the convergence of the method is analyzed.
We consider the problem of optimizing the ratio of two convex functions over a closed and convex set in the space of matrices. This problem appears in several applications and can be classified as a double-convex fractional programming problem. In general, the objective function is nonconvex but, nevertheless, the problem has some special features. Taking advantage of these features, a conditional gradient method is proposed and analyzed, which is suitable for matrix problems. The proposed scheme is applied to two different specific problems, including the well-known trace ratio optimization problem which arises in many engineering and data processing applications. Preliminary numerical experiments are presented to illustrate the properties of the proposed scheme.
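The abstract does not spell out the scheme; one ingredient any conditional gradient step needs here is the gradient of the ratio objective, which follows from the quotient rule. A hedged sketch with illustrative names, assuming the denominator is positive on the feasible set:

```python
def ratio_gradient(X, f, g, grad_f, grad_g):
    """Gradient of h = f/g at the matrix X by the quotient rule
    (assumes g(X) > 0 on the feasible set). A conditional gradient step
    then linearizes h at X and minimizes <ratio_gradient(X), S> over the
    constraint set via a linear minimization oracle."""
    return (g(X) * grad_f(X) - f(X) * grad_g(X)) / g(X) ** 2
```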
We propose a simple rule for the step-size choice in the conditional gradient method, which does not require any line-search procedure. It takes into account the current behavior of the method. Its convergence is established under the same assumptions as those for the previously known methods.
The classical convex feasibility problem in a finite-dimensional Euclidean space consists of finding a point in the intersection of two convex sets. In the present paper we are interested in two particular instances of this problem. First, we assume that we know how to compute an exact projection onto one of the sets involved, while the other set is compact, so that the conditional gradient (CondG) method can be used to efficiently compute an inexact projection onto it. Second, we assume that both sets involved are compact, so that the CondG method can be used to efficiently compute inexact projections onto both of them. We combine the alternating projection method with the CondG method to design a new method, which can be seen as an inexact feasible version of the alternating projection method. The proposed method generates two sequences, one in each of the sets involved, which converge to a point in their intersection whenever it is nonempty. If the intersection is empty, the sequences converge to points in the respective sets whose distance equals the distance between the two sets. Numerical experiments are provided to illustrate the practical behavior of the method.
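As an illustrative sketch (not the paper's exact scheme), the inexact projection can be computed by running conditional gradient steps on the squared distance, and the outer loop alternates it with the exact projection; `lmo_B` is an assumed linear minimization oracle for the compact set B, and the starting point is assumed feasible for B.

```python
import numpy as np

def inexact_projection(z, lmo, x0, tol=1e-6, iters=200):
    """Approximate projection of z onto a compact convex set via conditional
    gradient steps on f(x) = 0.5*||x - z||^2, stopping on the duality gap."""
    x = x0
    for k in range(iters):
        g = x - z                        # gradient of 0.5*||x - z||^2
        s = lmo(g)
        if g @ (x - s) <= tol:           # Frank-Wolfe gap small enough
            break
        x = x + (2.0 / (k + 2)) * (s - x)
    return x

def alternating_projections(proj_A, lmo_B, x0, outer=50):
    """Alternate an exact projection onto A with a CondG-based inexact
    projection onto B (an illustrative sketch)."""
    x = x0
    for _ in range(outer):
        y = proj_A(x)                            # exact projection onto A
        x = inexact_projection(y, lmo_B, x)      # inexact projection onto B
    return y, x                # points in A and (approximately) B
```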
This article combines techniques from two fields of applied mathematics: optimization theory and inverse problems. We investigate a generalized conditional gradient method and its connection to an iterative shrinkage method, which has recently been proposed for solving inverse problems. The iterative shrinkage method aims at the solution of non-quadratic minimization problems where the solution is expected to have a sparse representation in a known basis. We show that it can be interpreted as a generalized conditional gradient method. We prove the convergence of this generalized method for a general class of functionals, which includes non-convex functionals. This also gives a deeper understanding of the iterative shrinkage method.
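As a concrete instance (illustrative, with an assumed quadratic data-fidelity term and an l1 penalty), one iterative shrinkage step is a gradient step on the smooth part followed by componentwise soft thresholding; for convergence the step size should stay below the reciprocal of the largest squared singular value of A.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise shrinkage: argmin_x 0.5*(x - v)**2 + tau*|x|."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def iterative_shrinkage(A, y, tau, step, iters=500):
    """Sketch of iterative shrinkage for min_x 0.5*||Ax - y||^2 + tau*||x||_1
    (a typical sparse-reconstruction instance; names illustrative). The paper
    shows such iterations can be read as generalized conditional gradient steps."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * tau)
    return x
```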