Multifactorial evolutionary deep reinforcement learning for multitask node combinatorial optimization in complex networks

Authors: Ma, Lijia; Xu, Long; Fan, Xiaoqing; Li, Lingjie; Lin, Qiuzhen; Li, Jianqiang; Gong, Maoguo

Affiliations: Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China; Shenzhen Technol Univ, Coll Big Data & Internet, Shenzhen 518118, Peoples R China; Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen 518060, Peoples R China; Xidian Univ, Sch Elect Engn, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Peoples R China

Publication: INFORMATION SCIENCES (Inf Sci)

Year/Volume: 2025, Vol. 702


Subject classification: 12 [Management Science]; 1201 [Management Science and Engineering (degrees in Management or Engineering)]; 08 [Engineering]; 0812 [Computer Science and Technology (degrees in Engineering or Science)]

Funding: National Natural Science Foundation of China; Shenzhen Natural Science Foundation [JCYJ20240813141416022]

Keywords: Multifactorial learning; Evolutionary algorithm; Reinforcement learning; Complex networks; Combinatorial optimization

Abstract: Node combinatorial optimization (NCO) tasks in complex networks aim to activate a set of influential nodes that maximally affect network performance under certain influence models, including influence maximization, robustness optimization, minimum node coverage, minimum dominating set, and maximum independent set; these tasks are usually nondeterministic polynomial (NP)-hard. Existing works mainly solve these tasks separately, and none of them can effectively solve all tasks, owing to differences in their influence models and to their NP-hardness. To tackle this issue, in this article we first theoretically demonstrate the similarity among these NCO tasks and model them as a multitask NCO problem. We then transform this multitask NCO problem into the weight optimization of a multi-head deep Q network (multi-head DQN), which models the activation of influential nodes and uses shared layers and unshared, task-specific output layers to capture the similarity and difference among tasks, respectively. Finally, we propose Multifactorial Evolutionary Deep Reinforcement Learning (MF-EDRL) for solving the multitask NCO problem under the multi-head DQN optimization framework, which promotes implicit knowledge transfer between similar tasks. Extensive experiments on both benchmark and real-world networks show the clear advantages of the proposed MF-EDRL over state-of-the-art methods on all NCO tasks. Most notably, the results also reflect the effectiveness of information transfer between tasks in accelerating optimization and improving performance.
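The abstract's central architectural idea — a shared trunk that captures cross-task similarity plus one unshared output head per NCO task — can be illustrated with a minimal sketch. This is not the authors' code: the layer sizes, the use of a single ReLU trunk layer, and the function names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 8   # nodes in the network (candidate actions: which node to activate)
HIDDEN = 16   # width of the shared trunk (assumed size)
N_TASKS = 5   # e.g. influence maximization, robustness optimization, ...

# Shared trunk: a single weight matrix used by every task,
# intended to capture what the NCO tasks have in common.
W_shared = rng.standard_normal((N_NODES, HIDDEN)) * 0.1

# Unshared heads: one output layer per task, mapping the shared
# features to a Q-value for activating each node under that task.
W_heads = [rng.standard_normal((HIDDEN, N_NODES)) * 0.1 for _ in range(N_TASKS)]

def q_values(state, task):
    """Q-value per candidate node for the given task.

    state: binary vector marking already-activated nodes.
    """
    h = np.maximum(0.0, state @ W_shared)  # shared representation (ReLU)
    return h @ W_heads[task]               # task-specific output head

state = np.zeros(N_NODES)
state[2] = 1.0  # suppose node 2 is already activated

for task in range(N_TASKS):
    q = q_values(state, task)
    greedy = int(np.argmax(q))  # node a greedy agent would activate next
    print(f"task {task}: greedy node {greedy}")
```

In the paper's framework the combined weights (here `W_shared` and `W_heads`) would be the individual optimized by the multifactorial evolutionary algorithm, so that improvements to the shared trunk found on one task transfer implicitly to the others.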
