Hallucination-aware Optimization for Large Language Model-empowered Communications

Authors: Liu, Yinqiu; Liu, Guangyuan; Zhang, Ruichen; Niyato, Dusit; Xiong, Zehui; Kim, Dong In; Huang, Kaibin; Du, Hongyang

Affiliations: The College of Computing and Data Science, Nanyang Technological University, Singapore; The Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore; The College of Information and Communication Engineering, Sungkyunkwan University, Republic of Korea; The Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong

Published in: arXiv

Year: 2024


Subject: Telecommunication services

Abstract: Large Language Models (LLMs) have significantly advanced communications fields, such as Telecom Q&A, mathematical modeling, and coding. However, LLMs encounter an inherent issue known as hallucination, i.e., generating fact-conflicting or irrelevant content. This problem critically undermines the applicability of LLMs in communication systems yet has not been systematically explored. Hence, this paper provides a comprehensive review of LLM applications in communications, with a particular emphasis on hallucination mitigation. Specifically, we analyze hallucination causes and summarize hallucination mitigation strategies from both model- and system-based perspectives. Afterward, we review representative LLM-empowered communication schemes, detailing potential hallucination scenarios and comparing the mitigation strategies they adopted. Finally, we present a case study of a Telecom-oriented LLM that utilizes a novel hybrid approach to enhance the hallucination-aware service experience. On the model side, we publish a Telecom hallucination dataset and apply direct preference optimization to fine-tune LLMs, resulting in a 20.6% correct rate improvement. Moreover, we construct a mobile-edge mixture-of-experts architecture for optimal LLM expert activation. Our research aims to propel the field of LLM-empowered communications forward by detecting and minimizing hallucination impacts. © 2024, CC BY.
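The abstract's fine-tuning step relies on direct preference optimization (DPO). As background for readers of this record, the standard per-pair DPO objective can be sketched as below; the function name and scalar inputs are illustrative (the paper's actual implementation is not shown in the abstract, and a real training setup would operate on batched token log-probabilities in a deep-learning framework).

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * implicit reward margin).

    Inputs are total log-probabilities of the preferred (chosen) and
    dispreferred (rejected) responses under the policy being fine-tuned
    and under the frozen reference model; beta scales the margin.
    """
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)) = log(1 + exp(-margin)), written stably
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

When the policy matches the reference model the margin is zero and the loss is log 2; the loss falls as the policy raises the chosen response's log-probability over the rejected one relative to the reference, which is the mechanism behind the correct-rate improvement the abstract reports.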
