Causal Parrots: Large Language Models May Talk Causality But Are Not Causal

Authors: Zečević, Matej; Willig, Moritz; Dhami, Devendra Singh; Kersting, Kristian

Affiliations: Computer Science Department, TU Darmstadt, Germany; Centre for Cognitive Science, TU Darmstadt, Germany

Published in: arXiv

Year: 2023


Subject: Computational linguistics

Abstract: Some argue that scale is all that is needed to achieve AI, even covering causal models. We make it clear that large language models (LLMs) cannot be causal and explain why we might sometimes feel otherwise. To this end, we define and exemplify a new subgroup of Structural Causal Models (SCMs) that we call meta SCMs, which encode causal facts about other SCMs within their variables. We conjecture that in the cases where LLMs succeed at causal inference, an underlying meta SCM exposed correlations between causal facts in natural language in the data on which the LLM was ultimately trained. If our hypothesis holds true, this would imply that LLMs are like parrots in that they simply recite the causal knowledge embedded in the data. Our empirical analysis provides evidence favoring the view that current LLMs are even weak ‘causal parrots.’ Copyright © 2023, The Authors. All rights reserved.
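The abstract's key construct, a meta SCM whose variables encode causal facts about another SCM, can be pictured with a toy example. The following Python sketch is purely illustrative and not from the paper: object_scm is a hypothetical object-level mechanism, and meta_scm_sample stands in for a meta-level variable that emits the corresponding causal fact as text, the kind of datum an LLM could recite without ever representing the mechanism itself.

import random

def object_scm():
    """Toy object-level SCM: altitude X causally lowers temperature Y."""
    x = random.uniform(0, 3000)                  # altitude in meters (exogenous draw)
    y = 25.0 - 0.0065 * x + random.gauss(0, 1)   # lapse-rate mechanism plus noise
    return x, y

def meta_scm_sample():
    """Meta-level 'variable': a sentence stating a causal fact about the
    object-level SCM. A corpus of such sentences is what an LLM trains on;
    reproducing the sentence requires no model of the mechanism itself."""
    return "Higher altitude causes lower temperature."

if __name__ == "__main__":
    x, y = object_scm()
    print(f"object-level sample: altitude={x:.0f} m, temperature={y:.1f} C")
    print(f"meta-level datum:    {meta_scm_sample()!r}")

The contrast is the point of the conjecture: the object-level function generates data from a causal mechanism, while the meta-level datum is just text about that mechanism, so a model trained on such text can recite causal facts without being causal.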
