In this article, we propose a new method for multiobjective optimization problems in which the objective functions are expressed as expectations of random functions. The present method is based on an extension of the classical stochastic gradient algorithm and a deterministic multiobjective algorithm, the multiple gradient descent algorithm (MGDA). In MGDA, a descent direction common to all specified objective functions is identified through a result of convex geometry. Incorporating this common descent vector and the Pareto stationarity definition into the stochastic gradient algorithm enables it to solve multiobjective problems. The mean-square and almost-sure convergence of this new algorithm are proven under the classical stochastic gradient algorithm hypotheses. The algorithm's efficiency is illustrated on a set of benchmarks of varying complexity and assessed in comparison with two classical algorithms (NSGA-II, DMS) coupled with a Monte Carlo expectation estimator. (C) 2018 Elsevier B.V. All rights reserved.
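To make the key ingredient concrete, the following is a minimal sketch of the common descent vector for the two-objective case: the minimum-norm element of the convex hull of the gradients, for which a closed-form convex combination is known. This is an illustrative toy implementation, not the authors' code; the function names are hypothetical.

```python
import numpy as np

def common_descent_vector(g1, g2):
    """Minimum-norm point of the convex hull of {g1, g2}.

    For two gradients the min-norm element of conv{g1, g2} is
    w = alpha*g1 + (1 - alpha)*g2 with alpha clipped to [0, 1].
    -w is then a descent direction for both objectives; w = 0
    signals Pareto stationarity.
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients: hull is a point
        return g1.copy()
    # Unconstrained minimizer of ||alpha*g1 + (1-alpha)*g2||^2,
    # projected onto [0, 1] to stay inside the convex hull.
    alpha = np.clip(-(g2 @ diff) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

# Orthogonal gradients: the common descent vector is their midpoint.
w = common_descent_vector(np.array([1.0, 0.0]), np.array([0.0, 1.0]))

# Opposing gradients whose hull contains the origin: w = 0,
# i.e. the current point is Pareto-stationary.
w0 = common_descent_vector(np.array([1.0, 0.0]), np.array([-2.0, 0.0]))
```

In the stochastic extension described in the abstract, `g1` and `g2` would be noisy gradient samples and the step along `-w` would use a decreasing step-size sequence, as in the classical stochastic gradient algorithm.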