Author affiliations: Dalhousie Univ, Dept Comp Sci, Halifax, NS B3H 4R2, Canada; Ahmedabad Univ, Dept Comp Sci & Engn, Ahmadabad 380009, Gujarat, India; Nirma Univ, Inst Technol, Dept Comp Sci & Engn, Ahmadabad 382481, Gujarat, India; Cape Peninsula Univ Technol, Dept Elect Elect & Comp Engn, ZA-7535 Bellville, South Africa; Durban Univ Technol, Dept Elect Power Engn, ZA-4000 Durban, South Africa; Univ Petr & Energy Studies, Ctr Interdisciplinary Res & Innovat, Dehra Dun 248001, India
Publication: IEEE Access
Year/Volume: 2023, Vol. 11
Pages: 21811-21830
Core indexing:
Keywords: Superresolution; Deep learning; Generative adversarial networks; Interpolation; Mathematical models; Videos; Feature extraction; Image super-resolution; deep learning; convolutional neural network; generative adversarial network
Abstract: High-fidelity content, such as 4K-quality videos and photographs, is becoming increasingly common as high-speed internet access becomes more widespread and less expensive. Although camera sensor performance is constantly improving, artificially enhanced photos and videos produced by intelligent image processing algorithms have significantly improved image fidelity in recent years. Single image super-resolution is a class of algorithms that produces a high-resolution image from a given low-resolution image, and the field has made significant strides since the advent of deep learning a decade ago. This paper presents a comprehensive review of deep learning assisted single image super-resolution, including generative adversarial network (GAN) models, and discusses the prominent architectures, the models used, and their merits and demerits. GAN models are covered because they are known to outperform conventional deep learning methods when given sufficient resources and training time. For real-world applications, where noise and other degradations can lead to low-fidelity super-resolution (SR) images, we examine another GAN-based solution, popularly known as blind super-resolution, which is more resilient. We evaluated the various super-resolution techniques at different image scaling factors (i.e., 2x, 3x, 4x), measuring PSNR and SSIM on several datasets. Across the datasets covered in the experimental section, PSNR decreases by an average of 14-17% as the scaling factor increases from 2x to 4x; this trend holds for every dataset and every model reported in the experimental section. The results also show that blind super-resolution outperforms both the conventional deep learning methods and the more complex GAN models. GAN models are complex and preferred when the upscaling factor is high, while residual and dense models are recommended for smaller upscaling factors.
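The evaluation protocol described in the abstract (downscale by a factor, reconstruct, then compare PSNR and SSIM against the ground truth at 2x, 3x, and 4x) can be illustrated with a minimal sketch. The snippet below is not taken from the paper: it uses scikit-image's built-in test image and plain interpolation as a stand-in for the reviewed SR models, so the absolute scores differ from those reported, but the downward trend in PSNR with increasing scale factor matches the behavior the abstract describes.

```python
# Minimal sketch of a PSNR/SSIM evaluation loop over scale factors 2x, 3x, 4x.
# Plain interpolation is used here in place of an actual super-resolution model.
from skimage import data, img_as_float
from skimage.transform import rescale, resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = img_as_float(data.camera())  # ground-truth high-resolution image in [0, 1]

for scale in (2, 3, 4):
    # Simulate the low-resolution input by downscaling the ground truth.
    lr = rescale(hr, 1.0 / scale, anti_aliasing=True)
    # Reconstruct at the original size (a trained SR model would replace this step).
    sr = resize(lr, hr.shape)
    psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
    ssim = structural_similarity(hr, sr, data_range=1.0)
    print(f"x{scale}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```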