Author affiliations: College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China; Key Laboratory of Cyberspace Security Situation Awareness and Evaluation, Anhui Province, Hefei 230037, China; Institute for Network Sciences and Cyberspace, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
Publication: arXiv
Year/Volume/Issue: 2024
Core indexing:
Abstract: With increasing numbers of vulnerabilities exposed on the internet, autonomous penetration testing (pentesting) has emerged as a promising research area. Reinforcement learning (RL) is a natural fit for studying this topic. However, two key challenges limit the applicability of RL-based autonomous pentesting in real-world scenarios: (a) the training environment dilemma: training agents in simulated environments is sample-efficient, but ensuring their realism remains challenging; (b) poor generalization ability: agents' policies often perform poorly when transferred to unseen scenarios, where even slight changes can cause a significant generalization gap. To this end, we propose GAP, a generalizable autonomous pentesting framework that aims to realize efficient policy training in realistic environments and to train generalizable agents capable of drawing inferences about other cases from one instance. GAP introduces a Real-to-Sim-to-Real pipeline that (a) enables end-to-end policy learning in unknown real environments while constructing realistic simulations; (b) improves agents' generalization ability by leveraging domain randomization and meta-RL. Notably, we are among the first to apply domain randomization in autonomous pentesting, and we propose a large language model-powered domain randomization method for synthetic environment generation. We further apply meta-RL to improve agents' generalization ability in unseen environments by leveraging the synthetic environments. The combination of these two methods effectively bridges the generalization gap and improves agents' policy adaptation performance. Experiments are conducted on various vulnerable virtual machines, with results showing that GAP can enable policy learning in various realistic environments, achieve zero-shot policy transfer in similar environments, and realize rapid policy adaptation in dissimilar environments. The source code is available at https://***/Joe-zsc/GAP. Copyright © 2024, The Authors.
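
To make the abstract's two main ingredients concrete, below is a minimal, hypothetical Python sketch, not the authors' implementation: the LLM-powered environment generator is stubbed with a plain random sampler, and the RL objective is replaced by a toy quadratic surrogate so a Reptile-style meta-update (one common meta-RL/meta-learning scheme; the paper may use a different one) stays self-contained and runnable. Every identifier is illustrative.

```python
# Hypothetical sketch of (1) domain randomization over synthetic
# pentesting environments and (2) meta-learning an initialization
# that adapts quickly to unseen environments. Not GAP's actual code.
import random
import zlib

import numpy as np

SERVICES = ["ssh", "http", "smb", "ftp", "rdp"]
OSES = ["ubuntu-20.04", "windows-2016", "centos-7"]
DIM = 8  # size of the toy policy-parameter vector
BASE = np.random.default_rng(42).normal(size=DIM)  # structure shared across envs


def randomize_environment(rng: random.Random) -> dict:
    """Domain randomization: sample one synthetic host configuration
    (the step the paper reportedly drives with an LLM, stubbed here)."""
    return {
        "os": rng.choice(OSES),
        "services": sorted(rng.sample(SERVICES, rng.randint(1, 3))),
    }


def env_to_target(env: dict) -> np.ndarray:
    """Map a configuration to that environment's (toy) optimal policy
    parameters: shared structure (BASE) plus a config-specific offset."""
    key = f"{env['os']}|{','.join(env['services'])}".encode()
    offset = np.random.default_rng(zlib.crc32(key)).normal(size=DIM)
    return BASE + 0.5 * offset


def adapt(theta: np.ndarray, target: np.ndarray,
          steps: int = 5, lr: float = 0.3) -> np.ndarray:
    """Inner loop: a few gradient steps on the per-environment loss
    0.5 * ||theta - target||^2, a stand-in for policy optimization."""
    for _ in range(steps):
        theta = theta - lr * (theta - target)  # gradient is (theta - target)
    return theta


def meta_train(iterations: int = 2000, meta_lr: float = 0.1,
               seed: int = 0) -> np.ndarray:
    """Outer loop (Reptile): nudge the shared initialization toward the
    parameters adapted on each freshly randomized environment."""
    rng = random.Random(seed)
    theta = np.zeros(DIM)
    for _ in range(iterations):
        env = randomize_environment(rng)
        theta = theta + meta_lr * (adapt(theta, env_to_target(env)) - theta)
    return theta


if __name__ == "__main__":
    theta_meta = meta_train()
    rng = random.Random(123)  # held-out "unseen" environments
    meta_losses, scratch_losses = [], []
    for _ in range(50):
        target = env_to_target(randomize_environment(rng))
        for init, bucket in ((theta_meta, meta_losses),
                             (np.zeros(DIM), scratch_losses)):
            adapted = adapt(init.copy(), target, steps=3)
            bucket.append(0.5 * float(np.sum((adapted - target) ** 2)))
    print(f"3-step adaptation loss, meta-learned init: {np.mean(meta_losses):.4f}")
    print(f"3-step adaptation loss, scratch init:      {np.mean(scratch_losses):.4f}")
```

In the actual framework the inner loop would be RL episodes against a simulated host and the sampler would be replaced by LLM-generated configurations; the toy version only illustrates why an initialization meta-trained across many randomized environments adapts faster to an unseen one than training from scratch.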