
Mind the Gap: Towards Generalizable Autonomous Penetration Testing via Domain Randomization and Meta-Reinforcement Learning

Authors: Zhou, Shicheng; Liu, Jingju; Lu, Yuliang; Yang, Jiahai; Zhang, Yue; Chen, Jie

Author affiliations: College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Anhui Province, Hefei 230037, China; Institute for Network Sciences and Cyberspace, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China

Publication: arXiv

Year: 2024

Subject: Reinforcement learning

Abstract: With increasing numbers of vulnerabilities exposed on the internet, autonomous penetration testing (pentesting) has emerged as a promising research area. Reinforcement learning (RL) is a natural fit for studying this topic. However, two key challenges limit the applicability of RL-based autonomous pentesting in real-world scenarios: (a) the training environment dilemma – training agents in simulated environments is sample-efficient, but ensuring their realism remains challenging; (b) poor generalization ability – agents' policies often perform poorly when transferred to unseen scenarios, with even slight changes potentially causing a significant generalization gap. To this end, we propose GAP, a generalizable autonomous pentesting framework that aims to realize efficient policy training in realistic environments and to train generalizable agents capable of drawing inferences about other cases from one instance. GAP introduces a Real-to-Sim-to-Real pipeline that (a) enables end-to-end policy learning in unknown real environments while constructing realistic simulations; and (b) improves agents' generalization ability by leveraging domain randomization and meta-RL. Specifically, we are among the first to apply domain randomization in autonomous pentesting and propose a large language model-powered domain randomization method for synthetic environment generation. We further apply meta-RL to improve agents' generalization ability in unseen environments by leveraging the synthetic environments. The combination of the two methods effectively bridges the generalization gap and improves agents' policy adaptation performance. Experiments are conducted on various vulnerable virtual machines, with results showing that GAP can enable policy learning in various realistic environments, achieve zero-shot policy transfer in similar environments, and realize rapid policy adaptation in dissimilar environments. The source code is available at https://***/Joe-zsc/GAP. Copyright © 2024, The Authors.
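
The abstract gives only the high-level idea; the actual implementation lives in the linked repository. As a rough illustration of how domain randomization and meta-RL can be combined, the sketch below samples randomized synthetic environments and meta-trains a policy so it adapts quickly to an unseen one. Everything here is hypothetical: RandomizedPentestEnv is a toy stand-in for a pentesting environment, inner_update and meta_train are illustrative names, a Reptile-style first-order meta-update stands in for whatever meta-RL algorithm the paper uses, and the LLM-powered environment generation is not shown.

```python
# Illustrative sketch only; not the GAP implementation.
# Domain randomization: each task is a synthetic environment with randomized
# parameters. Meta-RL: a Reptile-style outer loop moves the meta-policy toward
# parameters adapted on each task, so adaptation to a new task needs few steps.
import random
import numpy as np

class RandomizedPentestEnv:
    """Toy stand-in for a domain-randomized pentesting environment."""
    def __init__(self, rng: random.Random):
        self.n_hosts = rng.randint(3, 6)           # randomized topology size
        self.target = rng.randrange(self.n_hosts)  # randomized vulnerable host

    def reward(self, action: int) -> float:
        # +1 for exploiting the vulnerable host, small penalty otherwise.
        return 1.0 if action == self.target else -0.1

def policy_probs(theta: np.ndarray) -> np.ndarray:
    """Softmax policy over a fixed maximum number of hosts."""
    z = theta - theta.max()
    e = np.exp(z)
    return e / e.sum()

def inner_update(theta, env, steps=50, lr=0.5, rng=None):
    """Plain REINFORCE on a single environment (the adaptation loop)."""
    rng = rng or np.random.default_rng()
    theta = theta.copy()
    for _ in range(steps):
        p = policy_probs(theta)
        a = rng.choice(len(theta), p=p)
        r = env.reward(int(a))
        grad = -p
        grad[a] += 1.0           # gradient of log pi(a)
        theta += lr * r * grad   # REINFORCE update
    return theta

def meta_train(n_tasks=200, max_hosts=6, meta_lr=0.1, seed=0):
    """Reptile-style meta-training across randomized environments."""
    rng = random.Random(seed)
    theta = np.zeros(max_hosts)
    for _ in range(n_tasks):
        env = RandomizedPentestEnv(rng)            # sample a synthetic task
        adapted = inner_update(theta, env)
        theta += meta_lr * (adapted - theta)       # move meta-params toward adapted params
    return theta

if __name__ == "__main__":
    meta_theta = meta_train()
    # Rapid adaptation to an unseen environment: a few inner steps from the meta-init.
    new_env = RandomizedPentestEnv(random.Random(123))
    adapted = inner_update(meta_theta, new_env, steps=10)
    print("adapted policy:", np.round(policy_probs(adapted), 3))
```

In this sketch the meta-learned initialization is what enables the "rapid policy adaptation in dissimilar environments" mentioned in the abstract: only a handful of inner-loop updates are run on the new environment.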
