Author affiliations: School of Computer Science and Engineering, Macau University of Science and Technology, China; College of Information Engineering, Hangzhou Vocational & Technical College, Hangzhou, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China; Sydney, Australia
Publication: SSRN
Year/Volume/Issue: 2024
Abstract: Recently, remote sensing images have become popular in various tasks, including resource exploration. However, limited by hardware conditions and imaging processes, the obtained remote sensing images often suffer from low resolution. Unlike the high hardware cost of acquiring high-resolution images, software-based super-resolution methods are a good alternative for restoring low-resolution images. In addition, remote sensing images share a common property: similar visual patterns appear repeatedly at distant locations. To fully capture these long-range contexts in satellite images, we first introduce a Global Attention Network super-resolution method to reconstruct the images. This network improves performance but introduces non-essential information while significantly increasing the computational cost. To address these problems, we propose an innovative method named the Global Sparse Attention Network (GSAN), which integrates both sparsity constraints and global attention. Specifically, our method applies Spherical Locality Sensitive Hashing (SLSH) to convert feature elements into hash codes, constructs attention groups based on the hash codes, and computes each element of the attention matrix from similar elements within its attention group. Our method captures valid and useful global information and reduces the computational complexity from quadratic to asymptotically linear in the spatial size. Extensive qualitative and quantitative experiments demonstrate that our GSAN has significant competitive advantages over other state-of-the-art networks in terms of various metrics and visual quality. © 2024, The Authors. All rights reserved.
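To illustrate the idea of grouping feature elements by hash code and attending only within each group, the following is a minimal PyTorch sketch. It uses Reformer-style angular hashing as a stand-in for the paper's SLSH, and the function names (spherical_lsh_codes, grouped_sparse_attention) and parameters (n_buckets) are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def spherical_lsh_codes(x, n_buckets, seed=0):
    # x: (N, C) feature elements (e.g. flattened spatial positions).
    # Vectors are L2-normalized and projected onto random directions;
    # the argmax over [proj, -proj] gives a bucket id (angular LSH,
    # used here as an assumed stand-in for SLSH).
    g = torch.Generator().manual_seed(seed)
    proj = torch.randn(x.shape[-1], n_buckets // 2, generator=g)
    rotated = F.normalize(x, dim=-1) @ proj           # (N, n_buckets//2)
    rotated = torch.cat([rotated, -rotated], dim=-1)  # (N, n_buckets)
    return rotated.argmax(dim=-1)                     # (N,) bucket ids

def grouped_sparse_attention(q, k, v, n_buckets=8):
    # q, k, v: (N, C). Attention is restricted to elements that share
    # a hash bucket, so the cost is roughly linear in N when buckets
    # are balanced, instead of the quadratic cost of full attention.
    codes = spherical_lsh_codes(k, n_buckets)
    out = torch.zeros_like(v)
    for b in codes.unique():
        idx = (codes == b).nonzero(as_tuple=True)[0]
        qb, kb, vb = q[idx], k[idx], v[idx]
        attn = torch.softmax(qb @ kb.T / kb.shape[-1] ** 0.5, dim=-1)
        out[idx] = attn @ vb
    return out

# Toy usage: 1024 "pixel" features with 64 channels.
feats = torch.randn(1024, 64)
y = grouped_sparse_attention(feats, feats, feats)
print(y.shape)  # torch.Size([1024, 64])

In this sketch each query only attends to keys in its own bucket, which is what turns the dense N x N attention matrix into a block-sparse one; the actual GSAN construction of attention groups may differ in detail.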