ISBN:
(Print) 9789819749843; 9789819749850
Lying is a prevalent concern that affects our daily lives and social interactions. Patterns for identifying lies in text data can be improved by gaining a better understanding of individual behaviour. However, lie detection (LD) is a complicated procedure that typically involves many factors contingent on individual biases and on the characteristics of the data, such as data that lacks legitimate terms or suspicious words. Relatively little attention has been paid to multimodal interactive systems for LD. This paper therefore introduces a novel approach: a multimodal visual interactive system, referred to as MVis4LD, designed to help users express complex analytical queries seamlessly. It integrates natural language interaction, a promising method for exploration through visualization, and exploits the synergies among modalities to address the limitations of individual modes. Users can provide input to the system as text, click or touch, and voice commands to generate corresponding results, offering comprehensive chats or plots for LD. Furthermore, we built on an existing dataset, randomly altering the attribute values of 20% of the data in the test set. The findings of the experiment, conducted on 121 individual texts, show that Bidirectional Encoder Representations from Transformers (BERT), a deep learning (DL) model, together with logistic regression (LR) and K-Nearest Neighbors (KNN) from machine learning (ML), achieve the highest accuracy (84%) compared with other models. The live system with result visualization is available in our GitHub repository (https://amazing- ***/).
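To make the experimental setup more concrete, the following is a minimal Python sketch of the kind of baseline pipeline the abstract describes: text statements vectorized and fed to logistic regression and KNN classifiers, with a simple routine that randomly alters 20% of the test-set rows. The file name, column names, and perturbation strategy are illustrative assumptions, not the authors' actual implementation.

# Hypothetical baseline sketch: TF-IDF features + LR and KNN classifiers for
# binary lie/truth labels, plus an illustrative 20% test-set perturbation.
# File path, column names, and perturbation method are assumptions.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed data layout: one statement per row, with a binary "label" column
# (1 = deceptive, 0 = truthful).
df = pd.read_csv("statements.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.25, random_state=42
)

vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

def perturb_rows(matrix, fraction=0.2, seed=0):
    """Randomly blank the features of a fraction of test rows, mimicking the
    abstract's 20% attribute alteration (illustrative only)."""
    rng = np.random.default_rng(seed)
    lil = matrix.tolil(copy=True)
    rows = rng.choice(lil.shape[0], int(fraction * lil.shape[0]), replace=False)
    for r in rows:
        lil.rows[r] = []
        lil.data[r] = []
    return lil.tocsr()

Xte_perturbed = perturb_rows(Xte)

for name, model in [
    ("LogisticRegression", LogisticRegression(max_iter=1000)),
    ("KNN", KNeighborsClassifier(n_neighbors=5)),
]:
    model.fit(Xtr, y_train)
    acc = accuracy_score(y_test, model.predict(Xte_perturbed))
    print(f"{name}: accuracy on perturbed test set = {acc:.2f}")

A BERT-based classifier would replace the TF-IDF step with transformer embeddings or fine-tuning; this sketch only covers the classical ML baselines named in the abstract.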