
Visual grounding of remote sensing images with multi-dimensional semantic-guidance

Authors: Ding, Yueli; Wang, Di; Li, Ke; Zhao, Xiaohong; Wang, Yifeng

Affiliation: Xidian University, 2 South Taibai Rd, Xi'an 710071, Shaanxi, People's Republic of China

Journal: Pattern Recognition Letters (Pattern Recogn. Lett.)

Year/Volume: 2025, Vol. 189

Pages: 85-91

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (eligible for Engineering or Science degrees)]

Funding: National Science and Technology Major Project [2022ZD0117103]; Basic and Applied Basic Research Program of Guangzhou City [2023A04J0400]; Fundamental Research Funds for the Central Universities [QTZX23084]; Natural Science Basic Research Program of Shaanxi [2024JC-YBQN-0732, 2024JC-YBQN-0340]; Innovation Capability Support Program of Shaanxi [2023-CX-TD-08]

Keywords: Visual Grounding; Remote Sensing; Attention

Abstract: Visual grounding in remote sensing images aims to accurately locate specified targets based on query expressions. Existing methods often use separate feature extractors to process visual and textual features independently. However, this results in initial features that lack cross-modal correlation, hindering effective feature fusion and limiting localization precision. To address this challenge, we propose a novel framework called MSVG, which enhances visual grounding accuracy through a multi-dimensional text-image alignment module and a visual enhancement fusion module. The multi-dimensional text-image alignment module applies both channel-wise and spatial-wise alignment at various stages of visual feature extraction, guiding the generation of visual features so that they become more relevant to the accompanying textual descriptions. Meanwhile, the visual enhancement fusion module refines feature relevance by learning contextual features, effectively excluding objects and backgrounds unrelated to the target. Experiments demonstrate that our approach achieves a remarkable accuracy of 83.61% on the DIOR-RSVG dataset, a substantial 6.83% improvement over previous methods that sets a new benchmark.
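
The channel-wise and spatial-wise text-guided alignment described in the abstract can be pictured with a minimal PyTorch-style sketch. This is only an illustrative assumption of how a pooled query embedding might re-weight a backbone feature map along both dimensions; the class name TextGuidedAlignment, the gate designs, and the tensor shapes are hypothetical and are not taken from the paper's actual MSVG implementation.

# Hypothetical sketch, assuming PyTorch; names and shapes are illustrative,
# not the MSVG implementation from the paper.
import torch
import torch.nn as nn


class TextGuidedAlignment(nn.Module):
    """Re-weights visual features along channel and spatial dimensions
    using a pooled text embedding, so later fusion sees text-relevant features."""

    def __init__(self, vis_channels: int, text_dim: int):
        super().__init__()
        # Channel-wise gate: text embedding -> per-channel weights in (0, 1).
        self.channel_gate = nn.Sequential(
            nn.Linear(text_dim, vis_channels),
            nn.Sigmoid(),
        )
        # Spatial-wise gate: text-conditioned features -> per-location weights.
        self.text_proj = nn.Linear(text_dim, vis_channels)
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(vis_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, vis: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # vis:  (B, C, H, W) visual feature map from one backbone stage
        # text: (B, D) pooled query-expression embedding
        b, c, _, _ = vis.shape

        # Channel-wise alignment: scale each channel by its text relevance.
        ch_weights = self.channel_gate(text).view(b, c, 1, 1)
        vis = vis * ch_weights

        # Spatial-wise alignment: highlight locations that match the text.
        text_map = self.text_proj(text).view(b, c, 1, 1)
        sp_weights = self.spatial_gate(vis * text_map)  # (B, 1, H, W)
        return vis * sp_weights


if __name__ == "__main__":
    align = TextGuidedAlignment(vis_channels=256, text_dim=768)
    vis = torch.randn(2, 256, 32, 32)
    text = torch.randn(2, 768)
    print(align(vis, text).shape)  # torch.Size([2, 256, 32, 32])

In this sketch the gated feature map keeps its original shape, so a block like this could in principle be inserted after several backbone stages before fusion, which matches the abstract's description of alignment "at various stages of visual feature extraction"; the specific gating functions remain an assumption.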
