Author affiliation: Natl Univ Sci & Technol, Islamabad, Pakistan
Published in: Multimedia Tools and Applications
Year/Volume/Issue: 2024, Vol. 83, Issue 31
Pages: 75603-75625
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Keywords: Tokenization; Feature extraction; Image enhancement; Vision transformers
Abstract: Deep learning-based models have recently shown strong potential for Underwater Image Enhancement (UIE), producing satisfying results with accurate colors and details, but these methods significantly increase the parameter count and complexity of image-processing models and therefore cannot be deployed directly on edge devices. Vision Transformer (ViT) based architectures have recently produced impressive results in many vision tasks, such as image classification, super-resolution, and image restoration. In this study, we introduce a lightweight Context-Aware Vision Transformer (CAViT), based on a Mean Head tokenization strategy, that uses a self-attention mechanism in a single-branch module, which is effective at modeling long-distance dependencies and global features. To further improve image quality, we propose an efficient variant of our model that refines its results by applying White Balancing and Gamma Correction. We evaluated our model on two standard datasets, the Large-Scale Underwater Image (LSUI) dataset and the Underwater Image Enhancement Benchmark (UIEB) dataset, which contributed towards more generalized results. Overall, our findings indicate that our real-time UIE model outperforms other deep learning-based models by reducing model complexity while improving image quality (a 0.6 dB PSNR improvement while using only 0.3% of the parameters and 0.4% of the floating-point operations).
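The White Balancing and Gamma Correction post-processing mentioned in the abstract could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the choice of gray-world white balancing, the operation order, and the gamma value of 0.7 are all assumptions.

```python
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel so its mean matches
    the overall mean intensity. img is a float HxWx3 array in [0, 1].
    Underwater images typically have attenuated red, so the red gain
    ends up largest."""
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gray = channel_means.mean()                       # target gray level
    gains = gray / np.maximum(channel_means, 1e-6)    # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

def gamma_correction(img: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Power-law gamma correction; gamma < 1 brightens dark regions,
    which is common for dim underwater scenes. (gamma=0.7 is an
    illustrative default, not a value from the paper.)"""
    return np.clip(img, 0.0, 1.0) ** gamma

def post_process(img: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """White balance first, then gamma-correct (order is an assumption)."""
    return gamma_correction(gray_world_white_balance(img), gamma)
```

For example, a synthetic image with a blue-green color cast (low red channel) comes out of `gray_world_white_balance` with all three channel means equalized, and `post_process` then brightens it while keeping values in [0, 1].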