ISBN (print): 9798350353006
Recently, deep unfolding methods have achieved remarkable success in the realm of Snapshot Compressive Imaging (SCI) reconstruction. However, existing methods all follow the iterative framework of a single image prior, which limits the efficiency of unfolding methods and makes it difficult to use other priors simply and effectively. To break out of this box, we derive an effective Dual Prior Unfolding (DPU), which achieves the joint utilization of multiple deep priors and greatly improves iteration efficiency. Our unfolding method is implemented through two parts, i.e., the Dual Prior Framework (DPF) and Focused Attention (FA). In brief, in addition to the normal image prior, DPF introduces a residual into the iteration formula and constructs a degraded prior for the residual by considering various degradations to establish the unfolding framework. To improve the effectiveness of the self-attention-based image prior, FA adopts a novel mechanism inspired by PCA denoising to scale and filter attention, which lets the attention focus more on effective features at little computational cost. Besides, an asymmetric backbone is proposed to further improve the efficiency of hierarchical self-attention. Remarkably, our 5-stage DPU achieves state-of-the-art (SOTA) performance with the fewest FLOPs and parameters compared to previous methods, while our 9-stage DPU significantly outperforms other unfolding methods with lower computational requirements. https://***/ZhangJC-2k/DPU
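To make the dual-prior idea concrete, here is a minimal PyTorch sketch of what one unfolding stage pairing an image prior with a residual prior could look like; the module shapes, update rule, and all names (DualPriorStage, the stand-in convolutional priors, the forward operators A/At) are assumptions for illustration, not the paper's actual design:

```python
import torch
import torch.nn as nn

class DualPriorStage(nn.Module):
    """One unfolding stage that pairs an image prior with a residual
    (degraded) prior -- a hypothetical structure, not the paper's exact rule."""
    def __init__(self, channels=28):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))  # learnable step size
        def prior():  # small stand-in denoiser used for either prior
            return nn.Sequential(
                nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, channels, 3, padding=1))
        self.image_prior = prior()
        self.residual_prior = prior()

    def forward(self, x, y, A, At):
        # Gradient step on the data-fidelity term ||y - A(x)||^2
        v = x - self.step * At(A(x) - y)
        # The image prior refines the estimate; the residual prior then
        # models the degradation left behind by the image prior
        x_hat = self.image_prior(v)
        return x_hat + self.residual_prior(v - x_hat)
```

Stacking several such stages, each refining the previous estimate, would give the multi-stage unfolded network the abstract describes.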
This paper proposes an unrolling learnable approximate message passing recurrent neural network (called ULAMP-Net) for lensless image reconstruction. By unrolling the optimization iterations, key modules and parameters are made learnable to achieve high reconstruction quality. Specifically, observation matrices are rectified on the fly through network learning to suppress systematic errors in the measurement of the point spread function. We devise a domain transformation structure to achieve a more powerful representation and propose a learnable multistage threshold function to accommodate a much richer family of priors with only a small number of parameters. Finally, we introduce a multi-layer perceptron (MLP) module to enhance the input and an attention mechanism as an output module to refine the final results. Experimental results on a display-captured dataset and real-scene data demonstrate that, compared with state-of-the-art methods, our method achieves the best reconstruction quality on the test set with low computational complexity and a tiny model size. Our code will be released at https://***/Xiangjun-TJU/ULAMP-NET.
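One plausible reading of a "learnable multistage threshold function" is a sum of soft-thresholding stages, each with its own learnable knot and slope, giving a flexible piecewise-linear shrinkage. The sketch below implements that interpretation; it is an assumption, not the released ULAMP-Net code:

```python
import torch
import torch.nn as nn

class MultiStageThreshold(nn.Module):
    """Learnable shrinkage built from several soft-threshold stages, each
    with its own knot and slope (one plausible reading of a 'multistage
    threshold function'; the actual ULAMP-Net module may differ)."""
    def __init__(self, n_stages=5):
        super().__init__()
        self.knots = nn.Parameter(torch.linspace(0.1, 1.0, n_stages))
        self.slopes = nn.Parameter(torch.ones(n_stages))

    def forward(self, x):
        out = torch.zeros_like(x)
        for t, s in zip(self.knots, self.slopes):
            # each stage contributes a soft-thresholded component
            out = out + s * torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)
        return out / len(self.knots)
```

With a handful of knots and slopes per stage, such a function adds very few parameters while covering a much richer family of shrinkage shapes than a single fixed soft threshold.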
ISBN (digital): 9783031198007
ISBN (print): 9783031197994; 9783031198007
In this paper, we propose a novel memory-augmented model-driven deep unfolding network for pan-sharpening. First, we devise a maximum a posteriori (MAP) estimation model with two well-designed priors on the latent multi-spectral (MS) image, i.e., global and local implicit priors, to explore the intrinsic knowledge across the modalities of MS and panchromatic (PAN) images. Second, we design an effective alternating minimization algorithm to solve this MAP model, and then unfold the proposed algorithm into a deep network, where each stage corresponds to one iteration. Third, to facilitate the signal flow across adjacent iterations, a persistent memory mechanism is introduced to augment the information representation by exploiting a long short-term memory (LSTM) unit in the image and feature spaces. In this way, both the interpretability and the representation ability of the deep network are improved. Extensive experiments demonstrate the superiority of our method over existing state-of-the-art approaches. The source code is released at https://***/Keyu-Yan/MMNet.
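The persistent-memory idea can be illustrated with a small convolutional LSTM whose hidden state is carried across unfolded iterations. Everything below (the cell design, channel sizes, fusion step, and names) is a hypothetical sketch rather than MMNet's actual units:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell carrying a persistent memory
    state across unfolding stages (illustrative only)."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c

class MemoryAugmentedStage(nn.Module):
    """One unfolded iteration: a gradient step on the data term followed
    by a refinement augmented with the memory state."""
    def __init__(self, channels=4, hidden=16):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))
        self.memory = ConvLSTMCell(channels, hidden)
        self.fuse = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, y, A, At, h, c):
        # h, c are zero-initialized with `hidden` channels at the first stage
        v = x - self.step * At(A(x) - y)   # data-fidelity gradient step
        h, c = self.memory(v, h, c)        # update persistent memory
        return v + self.fuse(h), h, c      # memory-augmented refinement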
Block compressed sensing (BCS) is effective for processing high-dimensional images or videos. Due to the block-wise sampling, most BCS methods only exploit local block priors and neglect inherent global image priors, thus resulting in blocky artifacts. To ameliorate this issue, this paper formulates a novel regularized optimization BCS model named BCS-LG to effectively characterize the complementarity of local and global priors. To keep the model tractable, the data-fidelity and regularization terms in BCS-LG are flexibly decoupled with the aid of the half quadratic splitting algorithm. Combining the merits of the interpretability of traditional iterative optimization methods and the powerful representation ability of deep-learning-based ones, the corresponding iterative algorithm of BCS-LG is further unfolded into an interpretable optimization-inspired multi-stage progressive reconstruction network abbreviated as LG-Net. In block-wise and image-level manners, an accelerated-proximal-gradient-inspired sub-network and a global-prior-induced tiny U-type sub-network are alternately designed. In addition, a single model is trained to address CS reconstruction at several measurement ratios. Extensive experiments on four benchmark datasets indicate that the proposed approach can effectively eliminate blocky artifacts. Meanwhile, it substantially outperforms existing CS reconstruction methods in terms of Peak Signal-to-Noise Ratio, Structural SIMilarity and visual effect. (c) 2022 Elsevier B.V. All rights reserved.
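The half quadratic splitting step the abstract refers to is standard: introduce an auxiliary variable z ≈ x, then alternate a quadratic x-update with a prior-driven z-update. Below is a plain NumPy sketch of that generic loop; LG-Net replaces both steps with learned sub-networks, and the fixed mu, initialization, and direct solver here are simplifications:

```python
import numpy as np

def hqs_reconstruct(y, A, denoise, mu=0.01, n_iter=30):
    """Generic HQS loop for min_x ||y - A x||^2 + lam * R(x):
    split with z ~ x, then alternate a quadratic x-update and a
    prior-driven z-update (the denoiser acts as the prox of R)."""
    x = A.T @ y                  # crude initialization
    z = x.copy()
    AtA = A.T @ A
    Aty = A.T @ y
    I = np.eye(A.shape[1])
    for _ in range(n_iter):
        # x-step: solve (A^T A + mu I) x = A^T y + mu z
        x = np.linalg.solve(AtA + mu * I, Aty + mu * z)
        # z-step: proximal/denoising step encoding the prior
        z = denoise(x)
    return x
```

In LG-Net's setting, the x-step corresponds to the block-wise, accelerated-proximal-gradient-inspired sub-network, while the z-step corresponds to the image-level U-type sub-network that injects the global prior and suppresses blocky artifacts.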