Large-scale sparse multiobjective optimization problems (SMOPs) widely exist in academic research and engineering applications. The curse of dimensionality, combined with the fact that most decision variables take zero values, makes optimization very difficult. Sparse features are common to many practical complex problems, and exploiting sparsity as a breakthrough point enables many large-scale complex problems to be solved. We propose an efficient evolutionary algorithm based on deep reinforcement learning to solve large-scale SMOPs. Deep reinforcement learning networks are used to mine the sparse variables and reduce the problem dimensionality, which is a key challenge in large-scale multiobjective optimization. The three-way decision concept is then used to optimize the decision variables: the emphasis is on optimizing deterministic nonzero variables while continuously mining uncertain decision variables. Experimental results on sparse benchmark problems and real-world application problems show that the proposed algorithm performs well on SMOPs while being highly efficient.
Authors:
Tian, Ye; Lu, Chang; Zhang, Xingyi; Cheng, Fan; Jin, Yaochu
Anhui Univ, Inst Phys Sci, Minist Educ Key Lab Intelligent Comp & Signal Proc, Hefei 230601, Peoples R China
Anhui Univ, Inst Informat Technol, Minist Educ Key Lab Intelligent Comp & Signal Proc, Hefei 230601, Peoples R China
Anhui Univ, Sch Comp Sci & Technol, Minist Educ Key Lab Intelligent Comp & Signal Proc, Hefei 230601, Peoples R China
Univ Surrey, Dept Comp Sci, Guildford GU2 7XH, Surrey, England
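The three-way decision step described in the abstract above can be sketched as follows. This is a minimal illustrative reading, not the paper's implementation: the function name `three_way_classify`, the use of nonzero frequency across a population of binary masks, and the thresholds 0.1/0.9 are all assumptions made here for illustration.

```python
import numpy as np

def three_way_classify(masks, lower=0.1, upper=0.9):
    """Split decision variables into three regions by how often they are
    nonzero across a population of binary masks (rows = solutions).

    Returns index arrays: (nonzero, uncertain, zero)."""
    freq = masks.mean(axis=0)                # fraction of solutions with x_i != 0
    nonzero = np.where(freq >= upper)[0]     # confidently keep and optimize
    zero = np.where(freq <= lower)[0]        # confidently fix to zero
    uncertain = np.where((freq > lower) & (freq < upper))[0]  # keep mining
    return nonzero, uncertain, zero

# Toy population of 3 solutions over 5 decision variables
masks = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 0, 0, 1],
                  [1, 1, 0, 0, 0]])
nz, unc, z = three_way_classify(masks)
```

Variables in the "nonzero" region would then be optimized directly, while the "uncertain" region keeps being mined, matching the emphasis described in the abstract.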
In real-world applications, there exist many multiobjective optimization problems whose Pareto-optimal solutions are sparse, that is, most variables of these solutions are zero. Many such sparse multiobjective optimization problems (SMOPs) contain a large number of variables, which poses grand challenges for evolutionary algorithms in finding the optimal solutions efficiently. To address the curse of dimensionality, this article proposes an evolutionary algorithm for solving large-scale SMOPs, which mines the sparse distribution of the Pareto-optimal solutions and thus considerably reduces the search space. More specifically, the proposed algorithm suggests an evolutionary pattern mining approach to detect the maximum and minimum candidate sets of the nonzero variables in the Pareto-optimal solutions, and uses them to limit the dimensions involved in generating offspring solutions. For further performance enhancement, a binary crossover operator and a binary mutation operator are designed to ensure the sparsity of solutions. According to the results on eight benchmark problems and four real-world problems, the proposed algorithm is superior to existing evolutionary algorithms in solving large-scale SMOPs.
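Sparsity-preserving binary operators of the kind mentioned above can be sketched as follows. This is a simplified sketch, not the operators from the paper: the rule "change at most one bit per operation" is an assumption chosen here to illustrate how crossover and mutation can avoid destroying sparsity.

```python
import random

def binary_crossover(p1, p2, rng=random):
    """Offspring starts as a copy of p1; where the parents disagree, copy one
    randomly chosen bit from p2, so sparsity changes by at most 1 per crossover."""
    child = list(p1)
    diff = [i for i in range(len(p1)) if p1[i] != p2[i]]
    if diff:
        i = rng.choice(diff)
        child[i] = p2[i]
    return child

def binary_mutation(mask, rng=random):
    """Flip exactly one bit: with probability 0.5 set a random 1 to 0
    (increasing sparsity), otherwise set a random 0 to 1."""
    child = list(mask)
    ones = [i for i, v in enumerate(child) if v == 1]
    zeros = [i for i, v in enumerate(child) if v == 0]
    if rng.random() < 0.5 and ones:
        child[rng.choice(ones)] = 0
    elif zeros:
        child[rng.choice(zeros)] = 1
    return child
```

Because each operation moves at most one bit, offspring masks stay close in sparsity to their parents, which is the property such operators are designed to guarantee.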
Sparse large-scale multiobjective optimization problems (sparse LSMOPs) contain numerous decision variables, and the decision variables of their Pareto-optimal solutions are very sparse (i.e., the majority of them are zero-valued). This poses grand challenges to an algorithm in converging to the Pareto set. Numerous evolutionary algorithms (EAs) tailored for sparse LSMOPs have been proposed in recent years. However, the final population generated by these EAs is not sparse enough, because the locations of the nonzero decision variables are difficult to identify accurately and there is insufficient interaction between locating and optimizing the nonzero decision variables. To address this issue, we propose a dynamic sparse grouping evolutionary algorithm (DSGEA) that dynamically groups decision variables in the population that have a comparable number of nonzero decision variables. Improved evolutionary operators are introduced to optimize the decision variables in groups. As a result, the population obtained by DSGEA can stably evolve towards the sparser Pareto-optimal set with accurately located nonzero decision variables. The proposed algorithm outperforms existing up-to-date EAs for sparse LSMOPs in experiments on three real-world problems and eight benchmark problems.
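One plausible reading of the grouping step above can be sketched as follows; this is an illustrative assumption, not DSGEA's actual grouping scheme. Here decision-variable indices are grouped by how often they are nonzero across the population's binary masks, so that variables with comparable nonzero frequency are optimized together.

```python
def group_variables(masks, n_groups=2):
    """Partition decision-variable indices into n_groups, sorted so that
    variables with similar nonzero frequency across the population land in
    the same group (illustrative sketch, not the DSGEA grouping rule)."""
    D = len(masks[0])
    freq = [sum(row[j] for row in masks) for j in range(D)]   # nonzero counts
    order = sorted(range(D), key=lambda j: freq[j], reverse=True)
    size = -(-D // n_groups)                                  # ceiling division
    return [order[k:k + size] for k in range(0, len(order), size)]

# Toy population: 3 solutions over 4 decision variables
masks = [[1, 1, 0, 0],
         [1, 0, 0, 1],
         [1, 1, 0, 0]]
groups = group_variables(masks, n_groups=2)
```

Re-running the grouping each generation lets it track the population's current sparsity pattern, which is the "dynamic" aspect the abstract refers to.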