Author Affiliation: Yunnan Univ, Sch Informat & Artificial Intelligence, Kunming 650504, Peoples R China
Publication: COMPUTERS IN BIOLOGY AND MEDICINE
Year/Volume/Issue: 2023, Vol. 159, Issue 1
Pages: 106923-106923
Subject Classification: 0831 [Engineering: Biomedical Engineering (degrees awardable in engineering, science, or medicine)]; 0710 [Science: Biology]; 07 [Science]; 09 [Agriculture]; 0812 [Engineering: Computer Science and Technology (degrees awardable in engineering or science)]
Funding: National Natural Science Foundation of China [62266049, 62066047]; "Famous Teacher of Teaching" of the Yunnan 10000 Talents Program; Key Project of the Basic Research Program of Yunnan Province [202101AS070031]; Open Project Program of the Yunnan Key Laboratory of Intelligent Systems and Computing [202205AG070003, ISC22Y06]; General Project of the National Natural Science Foundation of China
Keywords: Medical image fusion; Unsupervised learning; Long-range dependencies; Multi-scale features
Abstract: The main purpose of multimodal medical image fusion is to aggregate the significant information from different modalities into a single informative image that provides comprehensive content and may help to boost other image processing tasks. Many existing deep-learning-based methods neglect the extraction and retention of multi-scale features of medical images and the construction of long-distance relationships between depth feature blocks. Therefore, a robust multimodal medical image fusion network based on multi-receptive-field and multi-scale features (M4FNet) is proposed to preserve detailed textures and highlight structural characteristics. Specifically, dual-branch dense hybrid dilated convolution blocks (DHDCB) are proposed to extract depth features from multiple modalities by expanding the receptive field of the convolution kernel as well as reusing features, and to establish long-range dependencies. To make full use of the semantic features of the source images, the depth features are decomposed into the multi-scale domain by combining a 2-D scaling function and a wavelet function. Subsequently, the down-sampled depth features are fused by the proposed attention-aware fusion strategy and inverted back to a feature space of the same size as the source images. Ultimately, the fusion result is reconstructed by a deconvolution block. To force the fusion network to balance information preservation, a local standard-deviation-driven structural similarity is proposed as the loss function. Extensive experiments show that the proposed fusion network outperforms six state-of-the-art methods, with gains of about 12.8%, 4.1%, 8.5% and 9.7% in SD, MI, QFAB and QEP, respectively.
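The dense hybrid dilated convolution idea summarized in the abstract can be illustrated with a short sketch. The minimal PyTorch block below stacks 3x3 convolutions with dilation rates 1, 2 and 3 and dense (concatenative) skip connections, so each layer reuses all earlier features while the receptive field grows without gridding artifacts. The channel widths, growth rate, dilation schedule and activation are illustrative assumptions, not the authors' published DHDCB configuration.

```python
# A minimal, illustrative sketch in the spirit of the paper's dense
# hybrid dilated convolution block (DHDCB). All hyperparameters here
# (growth rate, dilations, LeakyReLU slope) are assumptions.
import torch
import torch.nn as nn


class DenseHybridDilatedBlock(nn.Module):
    """Stacks dilated 3x3 convolutions with dense skip connections:
    each layer consumes the concatenation of the input and all earlier
    layer outputs, and the hybrid dilation rates (1, 2, 3) enlarge the
    receptive field while keeping the spatial size unchanged."""

    def __init__(self, in_channels: int, growth: int = 16,
                 dilations=(1, 2, 3)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            # padding == dilation keeps H x W constant for a 3x3 kernel
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3,
                          padding=d, dilation=d),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            channels += growth  # dense concatenation widens the input
        # 1x1 convolution fuses all reused features back to in_channels
        self.fuse = nn.Conv2d(channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # One such block would sit in each modality branch of a fusion
    # network, ahead of multi-scale decomposition and fusion.
    block = DenseHybridDilatedBlock(in_channels=32)
    y = block(torch.randn(1, 32, 128, 128))
    print(y.shape)  # torch.Size([1, 32, 128, 128])
```

Per the abstract, a full M4FNet would run one such dual-branch block per modality before the wavelet-domain decomposition, attention-aware fusion and deconvolution reconstruction; those stages and the local standard-deviation-driven SSIM loss are not specified in enough detail in this record to sketch faithfully.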