
GLCF: A Global-Local Multimodal Coherence Analysis Framework for Talking Face Generation Detection

Authors: Chen, Xiaocan; Yin, Qilin; Liu, Jiarui; Lu, Wei; Luo, Xiangyang; Zhou, Jiantao

Affiliations: School of Computer Science and Engineering, MoE Key Laboratory of Information Technology, Guangdong Province Key Laboratory of Information Security Technology, Sun Yat-sen University, Guangzhou 510006, China; Alibaba Group, Hangzhou, China; State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450002, China; State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, 999078, China

Published in: arXiv

Year: 2024


Subject: Semantics

Abstract: Talking face generation (TFG) makes it possible to produce lifelike talking videos of any character from only facial images and accompanying text. Misuse of this technology could pose significant risks to society, creating an urgent need for research into corresponding detection methods. However, research in this field has been hindered by the lack of public datasets. In this paper, we construct the first large-scale multi-scenario talking face dataset (MSTF), which contains 22 audio and video forgery techniques, filling the dataset gap in this field. The dataset covers 11 generation scenarios and more than 20 semantic scenarios, bringing it closer to the practical application scenarios of TFG. We also propose a TFG detection framework that leverages the analysis of both global and local coherence in the multimodal content of TFG videos. To this end, a region-focused smoothness detection module (RSFDM) and a discrepancy capture-time frame aggregation module (DCTAM) are introduced to evaluate the global temporal coherence of TFG videos, aggregating multi-grained spatial information. Additionally, a visual-audio fusion module (V-AFM) is designed to evaluate audiovisual coherence within a localized temporal perspective. Comprehensive experiments demonstrate that our dataset is both soundly constructed and challenging, and that our proposed method outperforms state-of-the-art deepfake detection approaches. © 2024, CC BY-NC-ND.
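
Illustrative sketch: the abstract outlines a three-module architecture, with two modules (RSFDM, DCTAM) scoring global temporal coherence and one (V-AFM) scoring local audiovisual coherence, whose outputs are fused into a real/fake decision. The PyTorch code below is a minimal, hypothetical rendering of that structure, based only on the module names in the abstract; every layer choice, tensor shape, and fusion detail is an assumption, not the authors' implementation.

# Hypothetical sketch of a GLCF-style global-local coherence detector.
# All layer sizes, fusion choices, and tensor layouts are assumptions
# inferred from the abstract's module names, not the paper's code.
import torch
import torch.nn as nn

class RSFDM(nn.Module):
    """Region-focused smoothness detection: scores temporal smoothness
    of per-region visual features across frames (assumed design)."""
    def __init__(self, dim=128):
        super().__init__()
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, region_feats):                     # (B, T, R, D)
        b, t, r, d = region_feats.shape
        x = region_feats.permute(0, 2, 3, 1).reshape(b * r, d, t)
        x = self.temporal(x).reshape(b, r, d, t)
        return x.mean(dim=(1, 3))                        # (B, D)

class DCTAM(nn.Module):
    """Discrepancy capture and time-frame aggregation: pools frame-to-frame
    feature differences into a global temporal descriptor (assumed design)."""
    def __init__(self, dim=128):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, frame_feats):                      # (B, T, D)
        diff = frame_feats[:, 1:] - frame_feats[:, :-1]  # inter-frame discrepancy
        return self.proj(diff).mean(dim=1)               # (B, D)

class VAFM(nn.Module):
    """Visual-audio fusion over a local temporal window (assumed design)."""
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, vis, aud):                         # both (B, T, D)
        fused = self.fuse(torch.cat([vis, aud], dim=-1))
        return fused.mean(dim=1)                         # (B, D)

class GLCFSketch(nn.Module):
    """Concatenates global (RSFDM + DCTAM) and local (V-AFM) coherence cues
    and maps them to a real/fake logit; a toy stand-in for the framework."""
    def __init__(self, dim=128):
        super().__init__()
        self.rsfdm, self.dctam, self.vafm = RSFDM(dim), DCTAM(dim), VAFM(dim)
        self.head = nn.Linear(3 * dim, 2)

    def forward(self, region_feats, frame_feats, audio_feats):
        g_smooth = self.rsfdm(region_feats)              # global: region smoothness
        g_disc = self.dctam(frame_feats)                 # global: frame discrepancy
        local = self.vafm(frame_feats, audio_feats)      # local: audio-visual
        return self.head(torch.cat([g_smooth, g_disc, local], dim=-1))

# Dummy forward pass with made-up shapes (batch 2, 16 frames, 4 regions):
# GLCFSketch()(torch.randn(2, 16, 4, 128), torch.randn(2, 16, 128), torch.randn(2, 16, 128))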
