arXiv

Audit-LLM: Multi-Agent Collaboration for Log-based Insider Threat Detection

Authors: Song, Chengyu; Ma, Linru; Zheng, Jianming; Liao, Jinzhi; Kuang, Hongyu; Yang, Lin

Affiliations: Systems Engineering Institute, Academy of Military Sciences, Beijing, China; State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi, China; National Key Laboratory of Information Systems Engineering, National University of Defense Technology, Changsha, China

Published in: arXiv

Year: 2024


Subject: Reusability

Abstract: Log-based insider threat detection (ITD) detects malicious user activities by auditing log entries. Recently, Large Language Models (LLMs) with strong common-sense knowledge have emerged in the domain of ITD. Nevertheless, diverse activity types and overlong log files pose a significant challenge for LLMs in directly discerning malicious activities among myriads of normal ones. Furthermore, the faithfulness hallucination issue in LLMs aggravates their application difficulty in ITD, as the generated conclusion may not align with user commands and activity context. In response to these challenges, we introduce Audit-LLM, a multi-agent log-based insider threat detection framework comprising three collaborative agents: (i) the Decomposer agent, which breaks down the complex ITD task into manageable sub-tasks using Chain-of-Thought (CoT) reasoning; (ii) the Tool Builder agent, which creates reusable tools for sub-tasks to overcome context-length limitations in LLMs; and (iii) the Executor agent, which generates the final detection conclusion by invoking the constructed tools. To enhance conclusion accuracy, we propose a pair-wise Evidence-based Multi-agent Debate (EMAD) mechanism, in which two independent Executors iteratively refine their conclusions through reasoning exchange to reach a consensus. Comprehensive experiments conducted on three publicly available ITD datasets (CERT r4.2, CERT r5.2, and PicoDomain) demonstrate the superiority of our method over existing baselines and show that the proposed EMAD significantly improves the faithfulness of explanations generated by LLMs. Copyright © 2024, The Authors. All rights reserved.
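The Decomposer → Tool Builder → Executor pipeline and the EMAD loop described in the abstract can be sketched roughly as follows. This is a minimal illustrative skeleton only: all function names, the toy sub-tasks, and the trivial string-matching "tools" are assumptions made here for clarity, not the authors' implementation, which would route each of these steps through an LLM.

```python
# Hypothetical sketch of the Audit-LLM flow; in the real framework each
# stage is driven by an LLM rather than the hard-coded logic below.

def decomposer(task: str) -> list[str]:
    # CoT-style decomposition of the ITD task into sub-tasks (stubbed).
    return [f"{task}: check login anomalies", f"{task}: check file access"]

def tool_builder(sub_tasks: list[str]) -> dict:
    # Build one reusable tool per sub-task so the Executor never has to
    # read the full (overlong) log file in a single context window.
    return {t: (lambda logs, t=t: f"evidence for '{t}' over {len(logs)} logs")
            for t in sub_tasks}

def executor(tools: dict, logs: list[str]) -> str:
    # Invoke every constructed tool and aggregate evidence into a verdict.
    evidence = [tool(logs) for tool in tools.values()]
    return "malicious" if any("login" in e for e in evidence) else "benign"

def emad(tools: dict, logs: list[str], rounds: int = 3) -> str:
    # Pair-wise Evidence-based Multi-agent Debate: two independent
    # Executors exchange conclusions until they reach a consensus.
    a, b = executor(tools, logs), executor(tools, logs)
    for _ in range(rounds):
        if a == b:          # consensus reached
            break
        a, b = b, a         # placeholder for the LLM reasoning exchange
    return a

logs = ["user logged in at 03:00", "copied payroll.db to USB"]
tools = tool_builder(decomposer("audit user activity"))
print(emad(tools, logs))  # -> malicious
```

The design point the sketch tries to capture is separation of concerns: decomposition and tool construction happen once and the tools are reusable, while only the Executor (and its debating twin) touches the logs at inference time.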
