Remote sensing information processing plays an important role in many fields such as environmental protection, urban planning, and military reconnaissance. However, there are problems in remote sensing images su...
Cross-domain remote sensing (RS) image segmentation has achieved considerable success in recent years. However, when target domain data become scarce in realistic scenarios, the performance of traditional domain adaptation (DA) methods drops significantly. In this paper, we tackle the problem of fast cross-domain adaptation by observing only one unlabeled target sample. To deal with dynamic domain shift efficiently, this paper introduces a novel framework named Minimax One-shot AdapTation (MOAT) that performs cross-domain feature alignment for semantic segmentation. Specifically, MOAT alternately maximizes the cross-entropy to select the most informative source samples and minimizes the cross-entropy of the obtained samples to make the model fit the target data. The selected source samples can effectively describe the target data distribution through the proposed uncertainty-based distribution estimation technique. We also propose a memory-based feature enhancement strategy that learns domain-invariant decision boundaries to accomplish semantic alignment. We empirically demonstrate the effectiveness of the proposed MOAT: it achieves new state-of-the-art performance on cross-domain RS image segmentation in both conventional unsupervised domain adaptation and one-shot domain adaptation scenarios.
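The alternating selection/adaptation idea described above can be illustrated with a short sketch, assuming PyTorch and a generic classifier; the function names and the top-k batch selection are illustrative assumptions, not the authors' MOAT implementation.

```python
import torch
import torch.nn.functional as F

def select_informative_sources(model, source_loader, k, device="cpu"):
    """Rank source batches by cross-entropy (higher = more informative) and keep the top k."""
    model.eval()
    scored = []
    with torch.no_grad():
        for x, y in source_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)      # maximizing CE picks the most uncertain batches
            scored.append((loss.item(), x.cpu(), y.cpu()))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(x, y) for _, x, y in scored[:k]]

def adapt_step(model, optimizer, selected, device="cpu"):
    """Minimize cross-entropy on the selected source batches so the model fits them."""
    model.train()
    for x, y in selected:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        F.cross_entropy(model(x), y).backward()
        optimizer.step()

# The two steps would be alternated for several rounds during one-shot adaptation.
```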
Due to large intra-class differences within the same categories and the scale imbalance between different categories in remote sensing image datasets, the semantic segmentation task suffers from loss of small-scale object information and an imbalance between foreground and background in which the background dominates, which seriously affects the performance of the network model. To solve these problems, this paper proposes an efficient bilateral-branch deep neural network based on the U-Net architecture, named BBU-Net. First, one branch of the network learns the distribution characteristics of the original data, while the other focuses on difficult samples. The two branches then improve the representation and classification ability of the network through a cumulative learning strategy. Finally, considering the geometric diversity of remote sensing images, this paper adopts test-time augmentation and reflection-padding strategies and proposes a balanced weighted loss function named CombineLoss to alleviate the imbalance during training. The proposed network was first tested on the Inria Aerial Image Labeling Dataset, obtaining 87.53% mean intersection over union and 97.4% mean pixel accuracy. To assess model complexity, the proposed model was also compared with neural networks based on ensemble learning; the comparison shows that its spatial complexity and parameter count are much lower than those of the ensemble networks. Comparative experiments were then conducted on satellite building dataset I of the WHU Building Dataset against mainstream semantic segmentation methods. The experimental results show that the method proposed in this paper can effectively extract the semanti...
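As a rough illustration of a balanced, combined loss in the spirit of CombineLoss, the sketch below (assuming PyTorch) mixes a positively weighted BCE term with a Dice term for binary building extraction; the actual composition and weights of CombineLoss are not specified in the abstract.

```python
import torch
import torch.nn.functional as F

def combine_loss(logits, target, pos_weight=2.0, dice_weight=0.5, eps=1e-6):
    """logits, target: (N, 1, H, W); target in {0, 1}. Returns a scalar loss."""
    # Positively weighted BCE counteracts the foreground/background imbalance.
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight, device=logits.device))
    # The Dice term emphasizes overlap of the (often small) foreground objects.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return bce + dice_weight * dice

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.8).float()
print(combine_loss(logits, target))
```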
The remote sensing image analysis, classification, and pattern recognition processes all depend on image segmentation. In this research, a search-based convolutional neural network (SBCNN) is used to identification me...
In recent years, remote sensing image processing based on deep learning has been widely applied in many scenarios, but the underlying deep learning techniques require large-scale labeled data, which remains a practical problem in the remote sensing field. In this study, we propose a novel data-quality assessment method, called K-nearest neighbor (KNN) distance entropy, to screen remote sensing images. The evaluation metric is used to assess unlabeled data and assign pseudo-labels, which together constitute the semisupervised few-shot classification method proposed in this article. A meta-task setting is adopted to verify the validity and stability of the experimental results. Specifically, the KNN distance entropy metric can distinguish samples belonging to the core set from those in the boundary set. Experimental results show that core-set samples are more suitable under the few-shot condition; for instance, the meta-task average accuracy of models trained on core-set samples outperforms that of models trained on boundary samples by about 18% in the 45-way 5-shot case. The proposed semisupervised few-shot method based on KNN distance entropy achieves significant improvement under different experimental conditions. A visualization of the feature distribution of the screened data is provided for intuitive interpretation. This article lays a meaningful foundation for screening and evaluating remote sensing images under few-shot conditions and inspires data-efficient few-shot learning based on high-quality data in the remote sensing field.
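One plausible reading of the KNN distance-entropy score is sketched below with NumPy and scikit-learn: the score of a sample is the Shannon entropy of its normalized distances to its k nearest neighbors. The exact definition in the article may differ, and whether the core set corresponds to low or high scores would follow that definition.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_distance_entropy(features, k=10, eps=1e-12):
    """features: (n_samples, dim) embeddings. Returns an (n_samples,) array of entropy scores."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dists, _ = nn.kneighbors(features)   # first column is the zero distance to the sample itself
    dists = dists[:, 1:]
    p = dists / (dists.sum(axis=1, keepdims=True) + eps)   # normalize distances per sample
    return -(p * np.log(p + eps)).sum(axis=1)              # Shannon entropy of the distance profile

feats = np.random.rand(200, 32)
scores = knn_distance_entropy(feats)
ranked = np.argsort(scores)   # thresholding this ranking would split core and boundary sets
```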
Recent advances in pattern recognition techniques have demonstrated the strength of remote sensing technology. Deep neural networks use spatial feature representations such as convolutional neural networks (CNNs) to provide better generalisation capability. Our aim is two-fold: first, to increase feature reliability by performing dual-scale fusion via a modified Markov random field, known as DuCNN-MMRF; second, to introduce an integration framework that combines the multispectral image classification produced by DuCNN-MMRF with Normalised Digital Surface Model (nDSM) information, using a novel approach known as constraint-based Dempster-Shafer theory (C-DST). C-DST targets DuCNN-MMRF's uncertain (ambiguous) information and rectifies it with complementary information.
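The base operation behind any Dempster-Shafer fusion is Dempster's rule of combination; a minimal sketch in Python is given below. The constraint mechanism that distinguishes C-DST is not detailed in the abstract, so only the standard rule is shown, with mass functions encoded as dictionaries over frozensets of class labels.

```python
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset(labels) -> mass. Returns the combined mass function."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources cannot be combined.")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Illustrative fusion of an ambiguous spectral result with nDSM-based (height) evidence.
spectral = {frozenset({"building"}): 0.4, frozenset({"building", "tree"}): 0.6}
height   = {frozenset({"building"}): 0.7, frozenset({"building", "tree", "ground"}): 0.3}
print(dempster_combine(spectral, height))
```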
ISBN: (Print) 9789819984619; 9789819984626
For unsupervised domain adaptation of remote sensing images, there are resolution differences in addition to feature differences between the source and target domains. An end-to-end unsupervised domain adaptation segmentation model for remote sensing images is proposed to reduce the image style and resolution differences between the source and target domains. First, a generative adversarial style transfer network with residual connections, a scale consistency module, and a perceptual loss with class balance weights is proposed; it reduces the image style and resolution differences between the two domains while maintaining the original structural information during transfer. Second, the visual attention network (VAN), which considers both spatial and channel attention, is used as the feature-extraction backbone to improve feature extraction capability. Finally, the style transfer and segmentation tasks are unified in an end-to-end network. Experimental results show that the proposed model effectively alleviates the performance degradation caused by different features and resolutions, and segmentation performance is significantly improved compared with advanced domain adaptation segmentation methods.
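As an illustration of the "class balance weights" ingredient mentioned above, the sketch below (assuming PyTorch) derives inverse-frequency class weights from a label map and feeds them to a weighted cross-entropy; the paper's exact weighting scheme inside the perceptual loss may differ.

```python
import torch
import torch.nn.functional as F

def inverse_frequency_weights(labels, num_classes, eps=1e-6):
    """labels: (N, H, W) long tensor of class ids. Returns (num_classes,) class weights."""
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    weights = 1.0 / (counts + eps)                # rare classes receive larger weights
    return weights * num_classes / weights.sum()  # normalize so the mean weight is 1

labels = torch.randint(0, 6, (2, 64, 64))
logits = torch.randn(2, 6, 64, 64)
w = inverse_frequency_weights(labels, num_classes=6)
loss = F.cross_entropy(logits, labels, weight=w)
```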
Remote sensing image segmentation plays an important role in intelligent city construction. Current mainstream segmentation networks effectively improve the segmentation of remote sensing images by deeply mining rich texture and semantic features, but problems remain, such as rough segmentation of small target regions and poor edge contour segmentation. To overcome these challenges, we propose an improved semantic segmentation model, referred to as MRU-Net, which adopts the U-Net architecture as its backbone. First, the convolutional layers in the U-Net are replaced with BasicBlock structures for feature extraction, and the activation function is replaced to reduce the computational load of the model. Second, a hybrid multi-scale recognition module is added to the encoder to improve segmentation accuracy for small targets and edge regions. Finally, tests on the Massachusetts Buildings Dataset and the WHU Dataset show that, compared with the original network, the ACC, mIoU, and F1 values are improved, and the proposed network shows good robustness and portability across datasets.
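A hybrid multi-scale module of the kind described above could look like the sketch below (assuming PyTorch): parallel dilated convolutions at several rates, fused by a 1x1 convolution with a residual connection. The actual MRU-Net module may be structured differently.

```python
import torch
import torch.nn as nn

class HybridMultiScale(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution, with a residual connection."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1)) + x

x = torch.randn(1, 64, 128, 128)
print(HybridMultiScale(64)(x).shape)   # torch.Size([1, 64, 128, 128])
```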
ISBN: (Digital) 9781510675001
ISBN: (Print) 9781510674998
As remote sensing technology continues to advance, the accuracy and quantity of remote sensing images have significantly improved. The resulting vast amount of available data has facilitated the widespread application of deep learning methods in remote sensing data processing, such as object detection, semantic segmentation, and change detection. Among these tasks, change detection identifies alterations occurring on the Earth's surface from remote sensing (RS) data. In recent years, deep learning based methods have exhibited significantly superior performance compared to traditional change detection techniques. The fundamental strategy enabling these advances is to extract appropriate deep features from the input remote sensing images through various backbone networks such as VGG, ResNet, and DenseNet. Nevertheless, the features extracted by these backbone networks may not fully meet the specific requirements of remote sensing image change detection. Consequently, our goal is to explore the influence of features extracted by different backbone networks on change detection tasks and to introduce a specialized backbone network tailored for change detection, aiming to produce features better suited to the task. The experimental results indicate that our specifically designed feature-extraction network for remote sensing image change detection outperforms traditional networks in extracting task-specific features; these features are better suited to subsequent decoder modules and enhance the generation of image-based change detection results. At the same time, we found that when using general backbone networks for change detection, ResNet achieves the highest metric accuracy, while DenseNet has the lowest memory usage and the fastest training and testing speed. Depending on the specific task, the appropriate backbone network can be chosen as needed.
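The idea of treating the backbone as a swappable component in a change detection network can be sketched as follows, assuming PyTorch; the tiny ConvBackbone is a stand-in for VGG/ResNet/DenseNet-style extractors, and the one-layer head replaces a real decoder.

```python
import torch
import torch.nn as nn

class ConvBackbone(nn.Module):
    """Tiny stand-in for a VGG/ResNet/DenseNet-style feature extractor."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class SiameseChangeDetector(nn.Module):
    def __init__(self, backbone, feat_ch=32):
        super().__init__()
        self.backbone = backbone              # shared weights applied to both acquisition dates
        self.head = nn.Conv2d(feat_ch, 1, 1)  # per-pixel change / no-change logit

    def forward(self, img_t1, img_t2):
        diff = torch.abs(self.backbone(img_t1) - self.backbone(img_t2))
        return self.head(diff)

model = SiameseChangeDetector(ConvBackbone())
t1, t2 = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
print(model(t1, t2).shape)   # torch.Size([1, 1, 128, 128])
```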
Image inpainting is the process of filling in missing or damaged areas of images. In recent years, this area has seen significant development, mainly owing to machine learning methods. Generative adversarial networks are a powerful tool for creating synthetic images: they are trained to create images similar to the original dataset. The use of such neural networks is not limited to creating realistic images. In areas where privacy is important, such as healthcare or finance, they help generate synthetic data that preserves the overall structure and statistical characteristics but does not contain the sensitive information of individuals. However, direct use of this architecture results in the generation of a completely new image. When the location of confidential information in an image can be indicated, it is advisable to use image inpainting so that only the secret information is replaced with synthetic content. This paper discusses key approaches to solving this problem and the corresponding neural network architectures. It also raises questions about the use of these algorithms to protect confidential image information and the possibility of using these models when developing new applications.
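The mask-restricted replacement described above (synthesizing only the confidential region while keeping the rest of the image untouched) can be sketched as follows, assuming PyTorch; `generator` is a placeholder for any trained inpainting network.

```python
import torch

def inpaint_confidential(image, mask, generator):
    """image: (N, C, H, W); mask: (N, 1, H, W) with 1 marking confidential pixels."""
    with torch.no_grad():
        synthetic = generator(image * (1 - mask), mask)   # generator only sees the masked input
    return image * (1 - mask) + synthetic * mask          # keep original pixels outside the mask

# Usage with a dummy generator (placeholder for a trained inpainting GAN).
dummy_generator = lambda img, m: torch.zeros_like(img)
img = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:32, 16:32] = 1.0
protected = inpaint_confidential(img, mask, dummy_generator)
```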