GVA: guided visual attention approach for automatic image caption generation

Authors: Hossen, Md. Bipul; Ye, Zhongfu; Abdussalam, Amr; Hossain, Md. Imran

Affiliations: Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Anhui, Peoples R China; Pabna Univ Sci & Technol, Dept ICE, Pabna 6600, Bangladesh

Publication: MULTIMEDIA SYSTEMS

Year/Volume/Issue: 2024, Vol. 30, Issue 1

Pages: 50


Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: CAS-TWAS President's Fellowship

Keywords: Image captioning; Faster R-CNN; LSTM; Up-down model; Encoder-decoder framework

Abstract: Automated image caption generation with attention mechanisms focuses on visual features of the image, including objects, attributes, actions, and scenes, to understand the image and produce more detailed captions, a task that has attracted great attention in the multimedia field. However, deciding which aspects of an image to highlight for better captioning remains a challenge. Most advanced captioning models utilize only one attention module to assign attention weights to visual vectors, but this may not be enough to create an informative caption. To tackle this issue, we propose an innovative and well-designed Guided Visual Attention (GVA) approach that incorporates an additional attention mechanism to re-adjust the attention weights on the visual feature vectors and feed the resulting context vector to the language LSTM. Using the first-level attention module as guidance for the GVA module and re-weighting the attention weights significantly enhances the caption's quality. Recently, deep neural networks have allowed the encoder-decoder architecture to make use of a visual attention mechanism, where Faster R-CNN is used for extracting features in the encoder and a visual attention-based LSTM is applied in the decoder. Extensive experiments have been conducted on both the MS-COCO and Flickr30k benchmark datasets. Compared with state-of-the-art methods, our approach achieved an average improvement of 2.4% on BLEU@1 and 13.24% on CIDEr for the MS-COCO dataset, as well as 4.6% on BLEU@1 and 12.48% on CIDEr for the Flickr30k dataset, under cross-entropy optimization. These results demonstrate the clear superiority of our proposed approach over existing methods on standard evaluation metrics. The implementation code is available at https://***/mdbipu/GVA.
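
The abstract describes a two-level attention scheme: a first-level (up-down style) attention over Faster R-CNN region features whose weights guide a second module that re-adjusts the weights before the resulting context vector is passed to the language LSTM. Below is a minimal sketch of how such a guided re-weighting step could look in PyTorch. It is not the authors' released implementation; the layer sizes, the concatenation used for guidance, and all names (GuidedVisualAttention, feat_proj, guide_proj, etc.) are assumptions made purely for illustration.

# Minimal sketch of a two-level (guided) visual attention step.
# NOT the authors' exact GVA code; shapes and the guidance mechanism are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedVisualAttention(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, att_dim=512):
        super().__init__()
        # First-level (up-down style) attention over region features.
        self.feat_proj = nn.Linear(feat_dim, att_dim)
        self.hid_proj = nn.Linear(hidden_dim, att_dim)
        self.att_score = nn.Linear(att_dim, 1)
        # Second-level module that re-adjusts the weights, guided by the
        # first-level attention and the attention-LSTM hidden state (assumed form).
        self.guide_proj = nn.Linear(att_dim + hidden_dim, att_dim)
        self.guide_score = nn.Linear(att_dim, 1)

    def forward(self, feats, hidden):
        # feats:  (batch, num_regions, feat_dim)  Faster R-CNN region features
        # hidden: (batch, hidden_dim)             attention-LSTM hidden state
        proj = self.feat_proj(feats)                                       # (B, R, A)
        e1 = self.att_score(torch.tanh(proj + self.hid_proj(hidden).unsqueeze(1)))
        alpha1 = F.softmax(e1, dim=1)                                      # first-level weights
        guided = alpha1 * proj                                             # weight-guided features
        g_in = torch.cat(
            [guided, hidden.unsqueeze(1).expand(-1, feats.size(1), -1)], dim=-1)
        e2 = self.guide_score(torch.tanh(self.guide_proj(g_in)))
        alpha2 = F.softmax(e2, dim=1)                                      # re-adjusted weights
        context = (alpha2 * feats).sum(dim=1)                              # context for language LSTM
        return context, alpha2

In this sketch the first-level weights modulate the projected region features, and the second softmax produces the re-adjusted distribution whose weighted sum forms the context vector fed to the language LSTM, mirroring the guided re-weighting idea stated in the abstract.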
