Author Affiliations: Yanshan Univ, Sch Informat Sci & Engn, Qinhuangdao 066004, Peoples R China; Beijing Wuzi Univ, Sch Informat, Beijing, Peoples R China; Yanshan Univ, Hebei Key Lab Informat Transmiss & Signal Proc, Qinhuangdao 066004, Peoples R China
Publication: SIGNAL PROCESSING
Year/Volume: 2023, Vol. 202
Funding: National Natural Science Foundation of China [61471313, 61901406]; Natural Science Foundation of Hebei Province [F2022203030, F2020203025]; Hebei Key Laboratory Project
Keywords: Compressed sensing; Deep learning; Deep unfolding method; Local and global priors
Abstract: Block compressed sensing (BCS) is effective for processing high-dimensional images and videos. Owing to block-wise sampling, most BCS methods exploit only local block priors and neglect inherent global image priors, resulting in blocky artifacts. To ameliorate this issue, this paper formulates a novel regularized optimization BCS model, named BCS-LG, to effectively characterize the complementarity of local and global priors. To keep the model tractable, the data-fidelity and regularization terms in BCS-LG are flexibly decoupled with the aid of the half quadratic splitting algorithm. Combining the interpretability of traditional iterative optimization methods with the powerful representation ability of deep-learning-based ones, the corresponding iterative algorithm of BCS-LG is further unfolded into an interpretable, optimization-inspired, multi-stage progressive reconstruction network abbreviated as LG-Net. In block-wise and image-level manners, an accelerated-proximal-gradient-inspired sub-network and a global-prior-induced tiny U-type sub-network are designed alternately. In addition, a single model is trained to handle CS reconstruction at several measurement ratios. Extensive experiments on four benchmark datasets indicate that the proposed approach effectively eliminates blocky artifacts and substantially outperforms existing CS reconstruction methods in terms of Peak Signal-to-Noise Ratio, Structural Similarity, and visual quality. (c) 2022 Elsevier B.V. All rights reserved.
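The half quadratic splitting (HQS) step described in the abstract introduces an auxiliary variable so that the data-fidelity term and the prior term can be minimized in alternation. The following is a minimal illustrative sketch of HQS on a generic compressed-sensing model, not the paper's BCS-LG/LG-Net: the sampling matrix `A`, the soft-thresholding prior, and the fixed penalty weight `mu` are all assumptions standing in for the learned components of the actual method.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm; here it plays the role of the
    # prior sub-problem that LG-Net would solve with a learned network.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hqs_reconstruct(A, y, lam=0.05, mu=1.0, n_iters=100):
    # Solve  min_x 0.5*||y - A x||^2 + lam*R(x)  by HQS:
    # split x into (x, z) with penalty mu*||x - z||^2 and alternate.
    m, n = A.shape
    AtA = A.T @ A
    Aty = A.T @ y
    z = A.T @ y                      # simple initial estimate
    for _ in range(n_iters):
        # x-step: quadratic data-fidelity sub-problem (closed form)
        x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * z)
        # z-step: prior sub-problem via the proximal mapping
        z = soft_threshold(x, lam / mu)
    return z
```

In the paper's unfolded network, each HQS iteration becomes one stage of LG-Net, with the hand-crafted proximal step replaced by trainable block-level and image-level sub-networks.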