Multi-View Incongruity Learning for Multimodal Sarcasm Detection

Authors: Guo, Diandian; Cao, Cong; Yuan, Fangfang; Liu, Yanbing; Zeng, Guangjie; Yu, Xiaoyan; Peng, Hao; Yu, Philip S.

Affiliations: Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China; State Key Laboratory of Software Development Environment, Beihang University, China; School of Computer Science and Technology, Beijing Institute of Technology, China; Department of Computer Science, University of Illinois Chicago, United States

Published in: arXiv

Year: 2024

Subject: Contrastive Learning

Abstract: Multimodal sarcasm detection (MSD) is essential for various downstream tasks. Existing MSD methods tend to rely on spurious correlations: they often mistakenly prioritize non-essential features yet still make correct predictions, demonstrating poor generalizability beyond their training environments. To address this phenomenon, this paper undertakes several initiatives. First, we identify two primary causes of the reliance on spurious correlations. Second, we address these challenges by proposing a novel method that integrates Multimodal Incongruities via Contrastive Learning (MICL) for multimodal sarcasm detection. Specifically, we first leverage incongruity to drive multi-view learning from three views: token-patch, entity-object, and sentiment. Then, we introduce extensive data augmentation to mitigate biased learning of the textual modality. Additionally, we construct a test set, SPMSD, which contains potential spurious correlations, to evaluate the model's generalizability. Experimental results demonstrate the superiority of MICL on benchmark datasets, along with analyses showcasing MICL's effectiveness in mitigating the impact of spurious correlations. Copyright © 2024, The Authors. All rights reserved.
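
Since the record provides only the abstract, the following is a minimal sketch of what an incongruity-driven multi-view contrastive objective of this kind might look like. The three view embeddings, the pairwise InfoNCE aggregation, the projection dimension, and the temperature value are all illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumption, not the paper's code) of a multi-view
# contrastive objective in the spirit of MICL: each sample yields three
# incongruity views (token-patch, entity-object, sentiment), and an
# InfoNCE-style loss pulls views of the same sample together while
# pushing apart views of different samples within the batch.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE between two batches of view embeddings of shape (B, D)."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                     # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)

def multi_view_contrastive_loss(views: list[torch.Tensor]) -> torch.Tensor:
    """Average InfoNCE over all ordered view pairs (hypothetical aggregation)."""
    losses = [info_nce(a, b)
              for i, a in enumerate(views)
              for j, b in enumerate(views) if i != j]
    return torch.stack(losses).mean()

# Usage with three hypothetical view embeddings for a batch of 8 samples.
B, D = 8, 256
token_patch = torch.randn(B, D)    # text-token vs. image-patch incongruity view
entity_object = torch.randn(B, D)  # entity vs. object incongruity view
sentiment = torch.randn(B, D)      # cross-modal sentiment incongruity view
loss = multi_view_contrastive_loss([token_patch, entity_object, sentiment])
```

Treating the views pairwise keeps the objective symmetric across the three incongruity signals; how the paper actually combines or weights the views is not specified in the abstract.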
