Author affiliations: The Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China; The Belarusian State University of Informatics and Radioelectronics, Minsk, Belarus; State Grid Anhui Electric Power Research Institute, China
Publication: SSRN
Year/Volume/Issue: 2024
Core indexing:
Abstract: Infrared and visible image fusion aims to provide a more comprehensive image for downstream tasks by highlighting the main target and maintaining rich texture information. Image fusion methods based on deep learning suffer from insufficient multimodal information extraction and texture loss. In this paper, we propose a texture-preserving progressive fusion network (PTPFusion) to extract complementary information from multimodal images and solve these issues. To reduce image texture loss, we design multiple consecutive texture-preserving blocks (TPB) to enhance fused texture. The TPB enhances features using a parallel architecture consisting of a residual block and derivative operators. In addition, a novel cross-channel attention (CCA) fusion module is developed to obtain complementary information by modeling global feature interactions via a cross-query mechanism, followed by information fusion to highlight the features of salient targets. To avoid information loss, the extracted features at different stages are merged as the output of the TPB. Finally, the fused image is generated by the decoder. Extensive experiments on three datasets show that our proposed fusion algorithm outperforms existing state-of-the-art methods. © 2024, The Authors. All rights reserved.
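The abstract describes the TPB as a parallel architecture: a residual path alongside derivative operators whose responses are merged back in to enhance texture. The following is a minimal NumPy sketch of that parallel idea only, not the authors' implementation: it assumes Sobel kernels as the derivative operators and a plain identity/residual path, and omits the learned residual block, multi-stage merging, and the CCA fusion module entirely. All function names here are illustrative, not from the paper.

```python
import numpy as np

def sobel_response(img):
    """Gradient magnitude via Sobel derivative operators (valid-mode 2D conv)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

def texture_preserving_block(feat):
    """Toy stand-in for a TPB: two parallel paths, summed.

    Path 1 passes the feature map through unchanged (residual path);
    path 2 computes a derivative-operator response, padded back to the
    input size so the branches can be merged element-wise."""
    grad = sobel_response(feat)
    grad_padded = np.pad(grad, 1, mode="edge")
    return feat + grad_padded  # flat regions pass through; edges are amplified
```

On a step-edge input, the output equals the input in flat regions and is boosted near the edge, which is the texture-enhancement behavior the parallel design is meant to provide.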