Author affiliations: Nanjing Tech Univ, Coll Elect Engn & Control Sci, Nanjing, Jiangsu, Peoples R China; Shandong Univ Sci & Technol, Coll Ocean Sci & Engn, Qingdao, Shandong, Peoples R China; Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, England
Publication: SIGNAL PROCESSING (Signal Process)
Year/Volume: 2025, Vol. 231
Core indexing:
Funding: National Natural Science Foundation of China; Key Research and Development Program of Heilongjiang Province of China [2022ZX01A15]; Cultivation Plan Project of Qingdao Science and Technology Planning Park of China
Keywords: Deep unfolding network; Dictionary learning; Transformer networks; Image denoising
Abstract: Deep unfolding attempts to combine the interpretability of traditional model-based algorithms with the learning ability of deep neural networks by unrolling model-based algorithms as neural networks. Following this framework, several conventional dictionary learning algorithms have been unrolled as networks. However, existing deep unfolding networks for dictionary learning are built on formulations with pre-defined priors, e.g., the l1-norm, or learn priors with convolutional neural networks whose receptive fields are limited. To address these issues, we propose a transformer-based deep unfolding network for dictionary learning (TDU-DLNet). The network is obtained by unrolling a general formulation of dictionary learning with an implicit prior on the representation coefficients. The prior is learned by a transformer-based network, and an inter-stage feature fusion module is introduced to reduce information loss across stages. The effectiveness and superiority of the proposed method are validated on image denoising. Experiments on widely used datasets demonstrate that the proposed method achieves competitive results with fewer parameters compared with deep learning and other deep unfolding methods.
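Note: The following is a minimal, hypothetical PyTorch sketch of the deep-unfolding idea the abstract describes: an ISTA-style dictionary-learning update whose proximal (prior) step is replaced by a small learned attention module. All class names (PriorNet, UnfoldedStage, TDUDLNetSketch), the inter-stage design, and every hyperparameter are illustrative assumptions, not the authors' implementation; the paper's inter-stage feature fusion module is omitted for brevity.

    import torch
    import torch.nn as nn


    class PriorNet(nn.Module):
        """Stand-in for the paper's transformer-based implicit prior.

        A single self-attention block over coefficient tokens is used here
        purely as a placeholder; the actual prior network is not specified
        by the abstract.
        """

        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            # z: (batch, tokens, dim) -- coefficients treated as a token sequence
            h = self.norm(z)
            out, _ = self.attn(h, h, h)
            return z + out  # residual update acts as a learned proximal step


    class UnfoldedStage(nn.Module):
        """One unrolled iteration: gradient step on ||x - D z||^2, then learned prior."""

        def __init__(self, signal_dim: int, code_dim: int):
            super().__init__()
            # Dictionary D is a learnable linear map from codes to signals.
            self.D = nn.Linear(code_dim, signal_dim, bias=False)
            self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
            self.prior = PriorNet(code_dim)

        def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
            residual = self.D(z) - x            # D z - x
            grad = residual @ self.D.weight     # D^T (D z - x)
            z = z - self.step * grad            # gradient-descent step on the data term
            return self.prior(z)                # learned implicit prior replaces soft-thresholding


    class TDUDLNetSketch(nn.Module):
        """Stack of unrolled stages followed by reconstruction with the dictionary."""

        def __init__(self, signal_dim: int = 64, code_dim: int = 128, stages: int = 5):
            super().__init__()
            self.stages = nn.ModuleList(
                UnfoldedStage(signal_dim, code_dim) for _ in range(stages)
            )
            self.code_dim = code_dim

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, tokens, signal_dim); start from zero coefficients.
            z = x.new_zeros(x.shape[0], x.shape[1], self.code_dim)
            for stage in self.stages:
                z = stage(z, x)
            return self.stages[-1].D(z)         # reconstruct / denoise

    if __name__ == "__main__":
        model = TDUDLNetSketch()
        noisy = torch.randn(2, 16, 64)          # toy batch of "noisy patches"
        print(model(noisy).shape)               # torch.Size([2, 16, 64])

In this sketch, each unrolled stage corresponds to one iteration of a model-based sparse-coding solver, which is what makes the network interpretable; the trainable step size, dictionary, and prior module are what give it the learning ability the abstract refers to.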