Reason from Fallacy: Enhancing Large Language Models’ Logical Reasoning through Logical Fallacy Understanding

Authors: Li, Yanda; Wang, Dixuan; Liang, Jiaqing; Jiang, Guochao; He, Qianyu; Xiao, Yanghua; Yang, Deqing

Affiliations: School of Data Science, Fudan University, Shanghai, China; Shanghai Key Laboratory of Data Science, Shanghai, China

Published in: arXiv

Year: 2024

Subject: Computational linguistics

Abstract: Large Language Models (LLMs) have demonstrated strong performance on many reasoning tasks, but they still struggle with some complicated reasoning tasks, including logical reasoning. One non-negligible cause of LLMs’ suboptimal performance on logical reasoning is their failure to correctly understand logical fallacies. To evaluate LLMs’ capability of logical fallacy understanding (LFU), we propose five concrete tasks spanning the three cognitive dimensions of WHAT, WHY, and HOW in this paper. For these LFU tasks, we construct a new dataset, LFUD, based on GPT-4 with a small amount of human effort. Our extensive experiments show that LFUD can be used not only to evaluate LLMs’ LFU capability, but also to fine-tune LLMs for significantly enhanced performance on logical reasoning. © 2024, CC BY.
