Using sarcasm on social media platforms to express negative opinions towards a person or object has become increasingly common. However, detecting sarcasm in various forms of communication can be difficult due to conflicting sentiments. In this paper, we introduce a contrasting sentiment-based model for multimodal sarcasm detection (CS4MSD), which identifies inconsistent emotions by leveraging the CLIP knowledge module to produce sentiment features in both text and image. Then, five external sentiments are introduced to prompt the model to learn sentimental preferences among modalities. Furthermore, we highlight the importance of verbal descriptions embedded in illustrations and incorporate additional knowledge-sharing modules to fuse such image-like text. Experimental results demonstrate that our model achieves state-of-the-art performance on the public multimodal sarcasm dataset.
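To make the contrasting-sentiment idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes pre-extracted CLIP text and image features, models the five external sentiments as learnable anchors, and uses the disagreement between the two modalities' soft sentiment assignments as the sarcasm cue. All module names and dimensions are illustrative.

```python
# Minimal sketch of the contrasting-sentiment idea behind CS4MSD (assumptions:
# pre-extracted CLIP text/image features of size 512; the five external
# sentiments are modeled as learnable anchors; names are illustrative).
import torch
import torch.nn as nn

class ContrastingSentimentSketch(nn.Module):
    def __init__(self, feat_dim=512, sent_dim=128, num_external_sentiments=5):
        super().__init__()
        # Project each modality's CLIP feature into a shared sentiment space.
        self.text_sent = nn.Linear(feat_dim, sent_dim)
        self.image_sent = nn.Linear(feat_dim, sent_dim)
        # Five external sentiment "prompts", kept as learnable anchors.
        self.sentiment_anchors = nn.Parameter(torch.randn(num_external_sentiments, sent_dim))
        # Classifier over the two sentiment vectors and their contrast signal.
        self.classifier = nn.Sequential(
            nn.Linear(sent_dim * 3, sent_dim), nn.ReLU(), nn.Linear(sent_dim, 2)
        )

    def forward(self, text_feat, image_feat):
        t = torch.tanh(self.text_sent(text_feat))    # text sentiment vector
        v = torch.tanh(self.image_sent(image_feat))  # image sentiment vector
        # Soft-assign each modality to the sentiment anchors, then use the
        # assignment difference as a cross-modal "contrast" signal.
        t_assign = torch.softmax(t @ self.sentiment_anchors.T, dim=-1)
        v_assign = torch.softmax(v @ self.sentiment_anchors.T, dim=-1)
        contrast = (t_assign - v_assign).abs() @ self.sentiment_anchors
        return self.classifier(torch.cat([t, v, contrast], dim=-1))

# Usage with random stand-ins for CLIP features (batch of 4).
model = ContrastingSentimentSketch()
logits = model(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```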
The emotion cause extraction (ECE) task, which aims at extracting potential trigger events of certain emotions, has attracted extensive attention. However, current work neglects implicit emotion expressed without any explicit emotional keywords, which appears more frequently in application scenarios. The lack of explicit emotion information makes it extremely hard to extract emotion causes only with the local context. Moreover, an entire event usually spans multiple clauses, while existing work merely extracts cause events at the clause level and cannot effectively capture complete cause event semantics. To address these issues, events are first redefined at the tuple level, and a span-based tuple-level algorithm is proposed to extract events from different clauses. Based on it, a corpus for implicit emotion cause extraction, which tries to extract the causes of implicit emotions, is constructed. The authors propose a knowledge-enriched joint-learning model of implicit emotion recognition and implicit emotion cause extraction tasks (KJ-IECE), which leverages commonsense knowledge from ConceptNet and NRC_VAD to better capture connections between emotions and corresponding cause events. Experiments on both implicit and explicit emotion cause extraction datasets demonstrate the effectiveness of the proposed model.
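For illustration, a minimal sketch of the joint-learning setup is given below; it assumes token vectors are already encoded and that the ConceptNet/NRC_VAD knowledge has been pre-compiled into per-token feature vectors, and it uses simple span start/end heads as a stand-in for the paper's span-based tuple-level extractor. Dimensions and names are assumptions, not the authors' code.

```python
# Minimal sketch of a knowledge-enriched joint-learning setup in the spirit of
# KJ-IECE (assumptions: pre-encoded token features; ConceptNet/NRC_VAD knowledge
# reduced to a per-token feature vector; illustrative span heads and sizes).
import torch
import torch.nn as nn

class JointIECESketch(nn.Module):
    def __init__(self, token_dim=768, know_dim=64, hidden=256, num_emotions=7):
        super().__init__()
        # Fuse contextual token features with external knowledge features.
        self.fuse = nn.Linear(token_dim + know_dim, hidden)
        # Task 1: implicit emotion recognition from the pooled document.
        self.emotion_head = nn.Linear(hidden, num_emotions)
        # Task 2: span-based cause extraction (start/end scores per token).
        self.start_head = nn.Linear(hidden, 1)
        self.end_head = nn.Linear(hidden, 1)

    def forward(self, token_feats, know_feats):
        h = torch.relu(self.fuse(torch.cat([token_feats, know_feats], dim=-1)))
        emotion_logits = self.emotion_head(h.mean(dim=1))  # (B, num_emotions)
        start_logits = self.start_head(h).squeeze(-1)      # (B, T)
        end_logits = self.end_head(h).squeeze(-1)          # (B, T)
        return emotion_logits, start_logits, end_logits

# Joint loss over both tasks, weighted by a hand-picked coefficient.
model = JointIECESketch()
tok, know = torch.randn(2, 30, 768), torch.randn(2, 30, 64)
emo, start, end = model(tok, know)
ce = nn.CrossEntropyLoss()
loss = ce(emo, torch.tensor([3, 0])) + 0.5 * (
    ce(start, torch.tensor([5, 12])) + ce(end, torch.tensor([9, 15]))
)
loss.backward()
```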
Language style is necessary for AI systems to understand and generate diverse human language styles. However, previous text style transfer primarily focused on sentence-level data-driven approaches, limiting exploration of potent...
Previous studies have paid little attention to using dependency-type information in the E2E-ABSA task. Studies that do use dependency-type information simply concatenate the dependency-type embeddings with the word embedding vectors, which may not fully ...
Diffusion models have become a powerful generative modeling paradigm, achieving great success in continuous data patterns. However, the discrete nature of text data results in compatibility issues between continuous d...
Implicit Discourse Relation Recognition (IDRR), which infers discourse logical relations without explicit connectives, is one of the most challenging tasks in natural language processing (NLP). Recently, pre-trained l...
Relation Extraction (RE) requires the model to classify the correct relation from a set of relation candidates given the corresponding sentence and two entities. Recent work mainly studies how to utilize more data or ...
ISBN (print): 9781665418683
Multi-turn conversation response selection aims to choose the best response from multiple candidates based on matching it with the dialogue context. Mostly, a response full of context-related information tends to be a proper one. However, in some cases, a brief response like "ok" could be the more appropriate one. We find that a brief response usually comes after a semantically ended conversation, so there is no need to provide any context-related information after that point. Thus, in addition to matching the response with the context, it is also critical to recognize whether a dialogue has ended and to learn how to obtain the necessary information from contexts in different end states separately. To achieve this, we propose an end-states-guided matching network that determines and incorporates the end states by jointly considering the length of the response and the local similarity between the response and the last few utterances. In addition, we adopt multiple descriptive sequence representations for a more reliable matching process. Experimental results demonstrate that our model outperforms the state-of-the-art methods on multiple datasets.
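As a rough illustration of the end-state gating, not the authors' network, the sketch below assumes pre-encoded utterance and response vectors; it derives an "ended" probability from the response length and the local similarity with the last few utterances, and uses it to gate how much the context-matching evidence contributes.

```python
# Minimal sketch of an end-state gate for response selection (assumptions:
# utterances and the response are pre-encoded as vectors; the gate combines
# response length and similarity to the last k utterances; names illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndStateGateSketch(nn.Module):
    def __init__(self, dim=300, last_k=3):
        super().__init__()
        self.last_k = last_k
        # Maps [response length, local similarity] to an "ended" probability.
        self.end_state = nn.Linear(2, 1)
        self.scorer = nn.Bilinear(dim, dim, 1)  # context-response matching score

    def forward(self, context_vecs, response_vec, response_len):
        # Local similarity: mean cosine similarity to the last k utterances.
        local = context_vecs[:, -self.last_k:, :]
        local_sim = F.cosine_similarity(local, response_vec.unsqueeze(1), dim=-1).mean(dim=1)
        ended = torch.sigmoid(self.end_state(
            torch.stack([response_len.float(), local_sim], dim=-1)))
        # Global match against the mean context; the gate down-weights the
        # context-matching evidence when the dialogue looks semantically ended.
        match = self.scorer(context_vecs.mean(dim=1), response_vec)
        return (1 - ended) * match + ended * local_sim.unsqueeze(-1)

gate = EndStateGateSketch()
score = gate(torch.randn(4, 10, 300), torch.randn(4, 300), torch.tensor([2, 15, 3, 8]))
print(score.shape)  # torch.Size([4, 1])
```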
Sentiment analysis is an important research area in natural language processing (NLP). With the explosion of multimodal data, Multimodal Sentiment Analysis (MSA) has attracted more and more attention in recent years. Effectively harnessing the interplay between diverse modalities is paramount to achieving comprehensive fusion in MSA. However, current research predominantly emphasizes modality interaction while overlooking unimodal information, thus neglecting the inherent disparities between modalities. To address these issues, we propose a novel model for multimodal sentiment analysis based on gated fusion and multi-task learning. The model adopts multi-task learning to concurrently address both multimodal and unimodal sentiment analysis tasks. Specifically, for the multimodal task, we leverage cross-modal Transformers with gating mechanisms to facilitate modality fusion. Subsequently, the fused representations are harnessed to generate sentiment labels for the unimodal tasks. Experiments on the CMU-MOSI and CMU-MOSEI datasets demonstrate that our model outperforms the existing methods and achieves state-of-the-art performance.
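The sketch below illustrates the gated-fusion-plus-multi-task idea under simplifying assumptions: each modality is reduced to a single pre-encoded vector, a sigmoid gate stands in for the paper's cross-modal Transformers, and the unimodal heads reuse the multimodal label in place of the generated unimodal labels. All names and dimensions are illustrative.

```python
# Minimal sketch of gated fusion with multi-task (multimodal + unimodal) heads
# (assumptions: one pre-encoded vector per modality; a simple sigmoid gate in
# place of cross-modal Transformers; illustrative dimensions and names).
import torch
import torch.nn as nn

class GatedMultiTaskMSASketch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # One gate per auxiliary modality, deciding how much flows into text.
        self.gate_a = nn.Linear(dim * 2, dim)   # text <- audio
        self.gate_v = nn.Linear(dim * 2, dim)   # text <- vision
        self.multimodal_head = nn.Linear(dim, 1)
        # Separate unimodal heads trained jointly with the multimodal task.
        self.uni_heads = nn.ModuleDict({m: nn.Linear(dim, 1) for m in ("t", "a", "v")})

    def forward(self, t, a, v):
        g_a = torch.sigmoid(self.gate_a(torch.cat([t, a], dim=-1)))
        g_v = torch.sigmoid(self.gate_v(torch.cat([t, v], dim=-1)))
        fused = t + g_a * a + g_v * v  # gated fusion of the three modalities
        return {
            "multi": self.multimodal_head(fused),
            "t": self.uni_heads["t"](t),
            "a": self.uni_heads["a"](a),
            "v": self.uni_heads["v"](v),
        }

model = GatedMultiTaskMSASketch()
out = model(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
target = torch.randn(8, 1)  # multimodal sentiment label
mse = nn.MSELoss()
# Multi-task loss: multimodal label plus the same label reused for the
# unimodal tasks, standing in for the generated unimodal labels.
loss = mse(out["multi"], target) + 0.3 * sum(mse(out[m], target) for m in ("t", "a", "v"))
loss.backward()
```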
There are many challenges in machine reading comprehension when it comes to extracting semantically complete evidence for a specific statement. Existing works on unsupervised evidence extraction can be mainly divided in...