Author Affiliations: Univ Elect Sci & Technol China, Sch Math Sci, Chengdu 611731, Peoples R China; Univ Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, F-38000 Grenoble, France; Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100045, Peoples R China; CNR, IMAA, Inst Methodol Environm Anal, I-85050 Tito, Italy; Natl Biodivers Future Ctr NBFC, I-90133 Palermo, Italy
Publication: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (IEEE Trans Pattern Anal Mach Intell)
Year/Volume/Issue: 2025, Vol. 47, No. 3
Pages: 2071-2088
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degrees conferrable)]
Funding: NSFC [12171072, 12271083]; Natural Science Foundation of Sichuan Province [2024NSFSC0038]; National Key Research and Development Program of China [2020YFA0714001]
Keywords: Algebra; Imaging; Pansharpening; Image representation; Transformers; Mathematical models; Computational efficiency; Sensors; Image fusion; Optimization; Transformer; multilinear algebra; model-driven neural network; multi-source image fusion; multispectral and hyperspectral image fusion; remote sensing pansharpening; visible and infrared image fusion
Abstract: Multi-source image fusion combines the information from multiple images into a single result, thereby improving imaging quality. This topic has aroused great interest in the community. Although existing self-attention-based transformer methods can capture spatial and channel similarities, how to integrate information from different sources remains a major challenge. In this paper, we first discuss the mathematical concepts behind the proposed generalized self-attention mechanism, of which the existing self-attentions are basic forms. The proposed mechanism employs multilinear algebra to drive the development of a novel fully-connected self-attention (FCSA) method that fully exploits local and non-local domain-specific correlations among multi-source images. Moreover, we propose a multi-source image representation and embed it into the FCSA framework as a non-local prior within an optimization problem. Several different fusion problems are unfolded into the proposed fully-connected transformer fusion network (FC-Former). More specifically, the concept of generalized self-attention can promote the further development of self-attention. Hence, the FC-Former can be viewed as a network model unifying different fusion tasks. Compared with state-of-the-art methods, the proposed FC-Former exhibits robust and superior performance, showing its capability of faithfully preserving information.
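The abstract positions existing self-attention as the basic form that the paper's generalized, fully-connected self-attention extends. As context, a minimal NumPy sketch of that basic scaled dot-product self-attention is given below; all names, dimensions, and weights here are illustrative assumptions, not the paper's FCSA method:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Basic scaled dot-product self-attention over token features X (n x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise similarities, scaled by sqrt(d_k)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # row-wise softmax: each row sums to 1
    return w @ V                                    # attention-weighted combination of values

# Toy usage with random projections (illustrative only)
rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (6, 4)
```

The paper's contribution, per the abstract, is to generalize this single-domain similarity computation via multilinear algebra so that local and non-local correlations across multiple source images are captured jointly.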