
Syntax-Guided Hierarchical Attention Network for Video Captioning


Authors: Deng, Jincan; Li, Liang; Zhang, Beichen; Wang, Shuhui; Zha, Zhengjun; Huang, Qingming

Author affiliations: Chinese Acad Sci, Key Lab Intelligent Informat Proc, CAS, Beijing 100190, Peoples R China; Chinese Acad Sci, Inst Comp Technol, CAS, Beijing 100190, Peoples R China; Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 101408, Peoples R China; Univ Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 101408, Peoples R China; Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China

Publication: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Year/Volume/Issue: 2022, Vol. 32, No. 2

Pages: 880-892


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]

Funding: National Key Research and Development Program of China [2017YFB1300201]; National Natural Science Foundation of China [61771457, 61732007, 61672497, U19B2038, 61620106009, U1636214, 61931008, 61772494, 62022083]

Keywords: Syntactics; Feature extraction; Visualization; Generators; Semantics; Two-dimensional displays; Three-dimensional displays; Video captioning; syntax attention; content attention; global sentence-context

Abstract: Video captioning is a challenging task that aims to generate a linguistic description based on video content. Most methods only incorporate visual features (2D/3D) as input for generating both visual and non-visual words in the caption. However, generating non-visual words usually depends more on sentence-context than on visual features, and wrong non-visual words can reduce sentence fluency or even change the meaning of the sentence. In this paper, we propose a syntax-guided hierarchical attention network (SHAN), which leverages semantic and syntactic cues to integrate visual and sentence-context features for captioning. First, a globally-dependent context encoder is designed to extract the global sentence-context feature that facilitates generating non-visual words. Then, we introduce hierarchical content attention and syntax attention to adaptively integrate features in terms of temporality and feature characteristics, respectively. Content attention helps focus on the time intervals related to the semantics of the current word, while cross-modal syntax attention uses syntax information to model the importance of different features for generating the target word. Moreover, such hierarchical attention can enhance the model's interpretability for captioning. Experiments on the MSVD and MSR-VTT datasets show that our method achieves performance comparable to current methods.
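The abstract only sketches the two-level attention. Below is a minimal PyTorch sketch of how such a hierarchy might be wired: a temporal content attention applied per feature stream (e.g. 2D appearance, 3D motion, global sentence-context), followed by a cross-modal syntax attention that weights the attended streams. All class names, dimensions, and the simplification of deriving the syntax cue directly from the decoder hidden state are illustrative assumptions based on the abstract, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAttention(nn.Module):
    """Additive temporal attention over one feature stream,
    conditioned on the decoder hidden state (an assumed design)."""
    def __init__(self, feat_dim, hid_dim, att_dim=256):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, att_dim)
        self.w_hid = nn.Linear(hid_dim, att_dim)
        self.v = nn.Linear(att_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, T, feat_dim), hidden: (B, hid_dim)
        scores = self.v(torch.tanh(self.w_feat(feats)
                                   + self.w_hid(hidden).unsqueeze(1)))  # (B, T, 1)
        alpha = F.softmax(scores, dim=1)          # attention over time steps
        return (alpha * feats).sum(dim=1)         # (B, feat_dim)

class SHANAttention(nn.Module):
    """Hierarchical attention sketch: per-stream content attention,
    then cross-modal 'syntax' weights over the attended streams."""
    def __init__(self, stream_dims, hid_dim, common_dim=512):
        super().__init__()
        # Project each stream (2D, 3D, sentence-context) to a common size.
        self.proj = nn.ModuleList(nn.Linear(d, common_dim) for d in stream_dims)
        self.content = nn.ModuleList(
            ContentAttention(common_dim, hid_dim) for _ in stream_dims)
        # Assumption: syntax cue predicted from the decoder state; the paper
        # instead derives it from explicit syntax information.
        self.syntax = nn.Linear(hid_dim, len(stream_dims))

    def forward(self, streams, hidden):
        # streams: list of (B, T_i, d_i) tensors, hidden: (B, hid_dim)
        attended = [att(p(s), hidden)
                    for s, p, att in zip(streams, self.proj, self.content)]
        stacked = torch.stack(attended, dim=1)           # (B, S, common_dim)
        beta = F.softmax(self.syntax(hidden), dim=-1)    # (B, S) modality weights
        return (beta.unsqueeze(-1) * stacked).sum(dim=1) # fused feature per word

# Hypothetical usage: 2D appearance, 3D motion, and a single global
# sentence-context vector, with a 512-dim decoder state.
att = SHANAttention(stream_dims=[2048, 1024, 512], hid_dim=512)
streams = [torch.randn(4, 20, 2048), torch.randn(4, 20, 1024),
           torch.randn(4, 1, 512)]
fused = att(streams, torch.randn(4, 512))  # -> (4, 512)

One plausible reading of the abstract, reflected above, is that the content level resolves "when to look" within each stream, while the syntax level resolves "which stream matters" for the current word (e.g. weighting sentence-context higher when generating non-visual words).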
