Affiliations: Xi'an Univ Technol, Sch Comp Sci & Engn, Xi'an 710048, Shaanxi, Peoples R China; Chinese Acad Sci, Quanzhou Inst Equipment Mfg, Haixi Inst, Quanzhou 362216, Fujian, Peoples R China; Xidian Univ, Sch Comp Sci & Technol, Xi'an 710071, Shaanxi, Peoples R China
Published in: IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (IEEE Trans Geosci Remote Sens)
Year/Volume: 2025, Vol. 63
Subject classification: 0808 [Engineering - Electrical Engineering]; 1002 [Medicine - Clinical Medicine]; 08 [Engineering]; 0708 [Science - Geophysics]; 0816 [Engineering - Surveying and Mapping Science and Technology]
Funding: National Natural Science Foundation of China [61902313, 61973250, 42101359]; Provincial Key Research and Development Program of Shaanxi [2024GH-ZDXM-47]
Keywords: Remote sensing; Contrastive learning; Semantics; Semantic segmentation; Feature extraction; Data augmentation; Brightness; Adaptation models; Sensors; Perturbation methods; remote sensing image segmentation; view generation
Abstract: Self-supervised contrastive learning is a powerful pretraining framework for learning invariant features from different views of remote sensing images; consequently, the performance of contrastive learning heavily depends on view generation. Current view generation is primarily accomplished through different transformations whose types and parameters must be hand-crafted, so the diversity and discriminability of the generated views cannot be guaranteed. To address this, we propose a multitype view optimization method to optimize these transformations. We formulate contrastive learning as a min-max optimization problem in which the transformation parameters are optimized by maximizing the contrastive loss. The optimized transformations encourage negative sample pairs to be close and positive sample pairs to be far apart. Unlike current adversarial view generation methods, our method can optimize both photometric and geometric transformations. For remote sensing images, geometric transformation is more critical for view generation, yet existing view optimization methods fail to handle it. We consider hue, saturation, brightness, contrast, and geometric rotation transformations in contrastive learning, and evaluate the optimized views on the downstream remote sensing image semantic segmentation task. Extensive experiments are carried out on three remote sensing image segmentation datasets: the ISPRS Potsdam dataset, the ISPRS Vaihingen dataset, and the LoveDA dataset. Results show that the learned views offer clear advantages over hand-crafted views and other optimized views. The code associated with this article has been released and can be accessed at https://***/AAAA-CS/AMView.
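The min-max idea described in the abstract can be illustrated with a toy sketch: an encoder minimizes an InfoNCE-style contrastive loss while a transformation parameter (here, a single brightness shift `b`) is updated by gradient ascent to maximize the same loss, yielding harder views. All names (`encode`, `info_nce`) and the use of a linear encoder with finite-difference gradients are illustrative assumptions for compactness; the paper's actual AMView implementation optimizes multiple photometric and geometric transformations with backpropagation.

```python
import numpy as np

def encode(w, x):
    # toy encoder: linear map followed by L2 normalization (illustrative, not AMView's network)
    z = x @ w
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + 1e-8)

def info_nce(w, xs, b, tau=0.5):
    # two views: the original images and brightness-shifted images (+b)
    z1, z2 = encode(w, xs), encode(w, xs + b)
    sim = z1 @ z2.T / tau                    # pairwise cosine similarities
    # positives lie on the diagonal; off-diagonal entries act as negatives
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - np.diag(sim))

rng = np.random.default_rng(0)
xs = rng.normal(size=(8, 4))                 # 8 toy "images" with 4 features each
w = rng.normal(size=(4, 2))                  # encoder weights
b, eps, lr = 0.1, 1e-4, 0.05                 # brightness shift, FD step, learning rate

for _ in range(50):
    # inner max: ascend on the view parameter b (adversarial view generation)
    gb = (info_nce(w, xs, b + eps) - info_nce(w, xs, b - eps)) / (2 * eps)
    b += lr * gb
    # outer min: descend on encoder weights w (finite-difference gradients)
    gw = np.zeros_like(w)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            d = np.zeros_like(w)
            d[i, j] = eps
            gw[i, j] = (info_nce(w + d, xs, b) - info_nce(w - d, xs, b)) / (2 * eps)
    w -= lr * gw
```

In this sketch only a scalar brightness shift is adversarially tuned; the paper's contribution is extending such optimization to a full set of photometric transformations (hue, saturation, brightness, contrast) plus geometric rotation, which hand-crafted or photometric-only adversarial schemes do not cover.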