LITA: An Efficient LLM-assisted Iterative Topic Augmentation Framework

Authors: Chang, Chia-Hsuan; Tsai, Jui-Tse; Tsai, Yi-Hang; Hwang, San-Yih

Affiliations: Department of Biomedical Informatics & Data Science, Yale University, New Haven, United States; Department of Information Management, National Sun Yat-sen University, Kaohsiung, Taiwan

Published in: arXiv

Year: 2024

Subject: Modeling languages

Abstract: Topic modeling is widely used for uncovering thematic structures within text corpora, yet traditional models often struggle with specificity and coherence in domain-focused applications. Guided approaches, such as SeededLDA and CorEx, incorporate user-provided seed words to improve relevance but remain labor-intensive and static. Large language models (LLMs) offer potential for dynamic topic refinement and discovery, yet their application often incurs high API costs. To address these challenges, we propose the LLM-assisted Iterative Topic Augmentation framework (LITA), an LLM-assisted approach that integrates user-provided seeds with embedding-based clustering and iterative refinement. LITA identifies a small number of ambiguous documents and employs an LLM to reassign them to existing or new topics, minimizing API costs while enhancing topic quality. Experiments on two datasets across topic quality and clustering performance metrics demonstrate that LITA outperforms five baseline models, including LDA, SeededLDA, CorEx, BERTopic, and PromptTopic. Our work offers an efficient and adaptable framework for advancing topic modeling and text clustering. © 2024, CC BY.
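The abstract's core idea, assigning most documents by embedding similarity and deferring only ambiguous ones to an LLM, can be illustrated with a minimal sketch. This is not the authors' implementation: the `cosine`/`lita_iteration` functions, the top-2 similarity-margin heuristic for flagging ambiguous documents, and the `llm_reassign` callback (standing in for an actual LLM API call) are all illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lita_iteration(doc_vecs, topic_centroids, llm_reassign, margin=0.1):
    """One illustrative augmentation round.

    Each document goes to its nearest topic centroid unless the gap
    between its top two similarities is below `margin`; those
    'ambiguous' documents are deferred to `llm_reassign`, which may
    return an existing or a new topic label. Only ambiguous documents
    would incur LLM API cost.
    """
    assignments = {}
    ambiguous = []
    for i, vec in enumerate(doc_vecs):
        sims = sorted(
            ((cosine(vec, c), t) for t, c in topic_centroids.items()),
            reverse=True,
        )
        best_sim, best_topic = sims[0]
        second_sim = sims[1][0] if len(sims) > 1 else -1.0
        if best_sim - second_sim < margin:
            ambiguous.append(i)          # defer to the LLM
        else:
            assignments[i] = best_topic  # confident embedding match
    for i in ambiguous:
        assignments[i] = llm_reassign(i)
    return assignments, ambiguous

# Toy usage: 2-d "embeddings", two seeded topics.
topics = {"sports": [1.0, 0.0], "finance": [0.0, 1.0]}
docs = [[0.9, 0.1], [0.5, 0.5]]  # second doc sits between both topics
assign, amb = lita_iteration(docs, topics, llm_reassign=lambda i: "new_topic")
```

Here document 0 is confidently assigned to "sports", while document 1 has an equal similarity to both centroids and is handed to the LLM callback, mirroring the paper's claim of reducing API calls to a small ambiguous subset.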
