Sparse large-scale multiobjective optimization problems (sparse LSMOPs) contain numerous decision variables, and the decision variables of their Pareto optimal solutions are very sparse (i.e., the majority of these variables are zero-valued). This poses grand challenges for an algorithm in converging to the Pareto set. Numerous evolutionary algorithms (EAs) tailored to sparse LSMOPs have been proposed in recent years. However, the final populations generated by these EAs are not sparse enough, because the nonzero decision variables are difficult to locate accurately and there is insufficient interaction between the process of locating the nonzero decision variables and the process of optimizing them. To address this issue, we propose a dynamic sparse grouping evolutionary algorithm (DSGEA) that dynamically groups decision variables in the population according to a comparable number of nonzero decision variables. Improved evolutionary operators are introduced to optimize the decision variables in groups. As a result, the population obtained by DSGEA evolves stably towards the sparser Pareto optimal set with precisely located nonzero decision variables. The proposed algorithm outperforms existing state-of-the-art EAs for sparse LSMOPs in experiments on three real-world problems and eight benchmark problems.
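The grouping idea described above can be illustrated with a minimal sketch: solutions are ranked by how many nonzero decision variables they carry, then partitioned into groups of comparable sparsity, which can be re-formed each generation as the population evolves. The function name, group count, and partitioning rule below are illustrative assumptions, not the exact operators of DSGEA.

```python
import numpy as np

def group_by_sparsity(population, n_groups=3):
    """Partition a population into groups whose members have a
    comparable number of nonzero decision variables.

    population: (N, D) array of real-valued decision vectors.
    Returns a list of index arrays, one per group.
    NOTE: illustrative sketch only; DSGEA's actual grouping differs.
    """
    nnz = np.count_nonzero(population, axis=1)  # sparsity level of each solution
    order = np.argsort(nnz)                     # sort solutions by nonzero count
    return np.array_split(order, n_groups)      # contiguous groups of similar sparsity

# Toy population: 12 solutions, 100 variables, roughly 90% zeros.
rng = np.random.default_rng(0)
pop = rng.random((12, 100)) * (rng.random((12, 100)) < 0.1)
groups = group_by_sparsity(pop, n_groups=3)
```

Re-running the grouping every generation is what makes the scheme dynamic: as offspring become sparser, group boundaries shift and variation operators keep acting on solutions of similar structure.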
ISBN: (Print) 9781728169293
State-of-the-art multi-objective evolutionary algorithms (MOEAs) are designed solely to deal with the number of objective functions in a multi-objective optimization problem (MOP) and treat the decision variables of a MOP as a whole. However, when dealing with MOPs with a large number of decision variables (more than 100), their efficacy decreases as the number of decision variables increases. On the other hand, problem decomposition in terms of decision variables has been found to be extremely efficient and effective for solving large-scale optimization problems. Nevertheless, most currently available approaches for large-scale optimization rely on models based on cooperative coevolution or linkage learning, which use multiple subpopulations or preliminary analysis, respectively, and are computationally expensive (in terms of function evaluations) when used within MOEAs. In this work, we study the effect of what we call operational decomposition, a novel framework based on coevolutionary concepts that applies a MOEA's crossover operator without adding any extra cost. We investigate the improvements that NSGA-III can achieve when combined with our proposed coevolutionary operators. This new scheme is capable of improving the efficiency of a MOEA when dealing with large-scale MOPs having from 200 up to 1200 decision variables.
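The core contrast with cooperative coevolution is that decomposition here happens inside the variation operator itself, on a single population. A minimal sketch of such a group-wise crossover follows; the contiguous grouping and the coin-flip inheritance rule are illustrative assumptions, not the exact operators proposed in the cited work.

```python
import numpy as np

def groupwise_crossover(p1, p2, n_groups=4, rng=None):
    """Illustrative group-wise crossover: the decision vector is split into
    contiguous variable groups, and each group is inherited as a unit from
    one parent or the other. No subpopulations or preliminary linkage
    analysis are needed, so no extra function evaluations are spent.
    NOTE: sketch only; the cited work's operators may differ.
    """
    rng = rng or np.random.default_rng()
    c1, c2 = p1.copy(), p2.copy()
    for idx in np.array_split(np.arange(len(p1)), n_groups):
        if rng.random() < 0.5:  # swap this whole group between the offspring
            c1[idx], c2[idx] = p2[idx].copy(), p1[idx].copy()
    return c1, c2

# Toy example with easily distinguishable parents.
parent1 = np.zeros(8)
parent2 = np.ones(8)
child1, child2 = groupwise_crossover(parent1, parent2,
                                     n_groups=4,
                                     rng=np.random.default_rng(1))
```

Because groups are exchanged whole, the two offspring jointly preserve all parental material; the operator only redistributes variable blocks, keeping the cost identical to ordinary crossover.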