ISBN (digital): 9781665490627
ISBN (print): 9781665490627
Earth observations from remote sensing imagery play an important role in many environmental applications, ranging from natural resource (e.g., crops, forests) monitoring to man-made object (e.g., buildings, factories) recognition. The most widely used optical remote sensing data, however, are often contaminated by clouds, making it hard to identify the objects underneath. Fortunately, with recent advances and the growing number of operational satellites, the spatial and temporal density of image collections has increased significantly. In this paper, we present a novel deep learning-based imputation technique for inferring spectral values under clouds using nearby cloud-free satellite image observations. The proposed deep learning architecture, extended contextual attention (ECA), exploits similar properties from the cloud-free areas to tackle clouds of different sizes occurring at arbitrary locations in the image. A contextual attention mechanism is incorporated to utilize the useful cloud-free information from multiple images. To maximize the imputation performance of the model on the cloudy patches rather than the entire image, a two-phase custom loss function is deployed to guide the model. To study the performance of our model, we trained it on a benchmark Sentinel-2 dataset by superimposing real-world cloud patterns. Extensive experiments and comparisons against state-of-the-art methods using pixel-wise and structural metrics show the improved performance of our model. Our experiments demonstrate that the ECA method is consistently better than all other methods: it is 28.4% better on MSE and 31.7% better on cloudy MSE compared to the state-of-the-art EDSR network.
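The abstract does not spell out the two-phase loss, but a minimal sketch of the idea, assuming a binary cloud mask and a PyTorch training loop, could look as follows; the function name, mask convention, and phase switch are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def two_phase_loss(pred, target, cloud_mask, phase):
    """Illustrative two-phase reconstruction loss (not the paper's exact form).

    pred, target : (B, C, H, W) reflectance tensors
    cloud_mask   : (B, 1, H, W) binary mask, 1 = pixel under cloud
    phase        : 1 -> MSE over the whole image, 2 -> MSE on cloudy pixels only
    """
    if phase == 1:
        return F.mse_loss(pred, target)
    # Phase 2: restrict the loss to pixels under the cloud mask,
    # broadcasting the single-channel mask across all spectral bands.
    mask = cloud_mask.bool().expand_as(pred)
    return F.mse_loss(pred[mask], target[mask])
```

In such a scheme, phase one lets the network learn general reconstruction over the whole scene, while phase two concentrates the gradient signal on the occluded pixels that actually need imputation.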
Image registration is a basic problem in image analysis and image processing. It has important applications in aerial image fusion, pattern recognition, three-dimensional reconstruction and other field...
Despite the remarkable progress made in deep compressed sensing (DCS), improving reconstruction quality remains a major challenge. Existing DCS models generally still have some issues, especially in re...
In the era of foundational image segmentation models, there is a pressing need to leverage the outputs of these models and enhance the boundary accuracy of domain-specific segmentation results using lightweight post-p...
In previous research, the identification of the Bragg region, which is often achieved using image recognition, was easily affected by ionospheric interference and different manual parameter settings. Strong disturbances from the ionosphere and other environmental noise may interfere with high-frequency radar (HFR) systems when determining first-order Bragg regions and thus may directly influence the surface current mapping performance. To avoid human intervention in first-order Bragg-region recognition, deep learning methods were used to extract multiple levels of feature abstractions of the first-order Bragg region. In this study, to assist in developing sufficient training data, a procedure integrating a morphological approach and Otsu's method was proposed to provide manually labelled data for a U-Net deep learning model. Additionally, several combinations of activation functions and optimizers were evaluated to achieve optimal deep learning model performance. The best combination of model parameters results in an accuracy of over 90%, an F1 score of over 80%, and an intersection over union (IoU) reaching 60%. The identification time for one image is approximately 70 ms. These results demonstrate that this deep learning model can predict the position of first-order Bragg regions under ionospheric interference and avoid being affected by strong noise that could cause prediction errors. Hence, deep learning and image fusion processing can effectively recognize the first-order Bragg regions under strong interference and noise and can thereby improve the surface current mapping accuracy.
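A minimal sketch of such a labelling step, assuming scikit-image and a 2-D range-Doppler intensity map as input, is shown below; the structuring-element radius and the minimum object size are placeholder values, not the settings used in the study.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import closing, disk, remove_small_objects

def label_bragg_candidates(spectrum):
    """Generate a binary label mask for high-intensity regions of a
    2-D range-Doppler spectrum (float array, dB scale) using Otsu's
    threshold followed by morphological cleanup."""
    t = threshold_otsu(spectrum)             # global intensity threshold
    mask = spectrum > t
    mask = closing(mask, disk(3))            # bridge small gaps in the region
    mask = remove_small_objects(mask, 64)    # drop isolated noise blobs
    return mask.astype(np.uint8)
```

Masks produced this way would then serve as training targets for the U-Net segmentation model.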
ISBN (print): 9783031377440
The proceedings contain 49 papers. The special focus in this conference is on pattern recognition. The topics include: Virtualization and 3D Visualization of Historical Costume Replicas: Accessibility, Inclusivity, Virtuality; Datafication of an Ancient Greek City: Multi-sensorial Remote Sensing of Heloros (Sicily); Geolocation of Cultural Heritage Using Multi-view Knowledge Graph Embedding; Masonry Structure Analysis, Completion and Style Transfer Using a Deep Neural Network; PARTICUL: Part Identification with Confidence Measure Using Unsupervised Learning; Explaining Classifications to Non-experts: An XAI User Study of Post-Hoc Explanations for a Classifier When People Lack Expertise; Motif-Guided Time Series Counterfactual Explanations; Comparison of Attention Models and Post-hoc Explanation Methods for Embryo Stage Identification: A Case Study; Graph-Based Analysis of Hierarchical Embedding Generated by Deep Neural Network; Explainability of Image Semantic Segmentation Through SHAP Values; CIELab Color Measurement Through RGB-D Images; Comparing Feature Importance and Rule Extraction for Interpretability on Text Data; Physics-Informed Neural Networks for Solar Wind Prediction; Less Labels, More Modalities: A Self-Training Framework to Reuse Pretrained Networks; Feature Transformation for Cross-domain Few-Shot Remote Sensing Scene Classification; A Novel Methodology for High Resolution Sea Ice Motion Estimation; Deep Learning-Based Sea Ice Lead Detection from WorldView and Sentinel SAR Imagery; Uncertainty Analysis of Sea Ice and Open Water Classification on SAR Imagery Using a Bayesian CNN; A Two-Stage Road Segmentation Approach for Remote Sensing Images; YUTO Tree5000: A Large-Scale Airborne LiDAR Dataset for Single Tree Detection; Spatial Layout Consistency for 3D Semantic Segmentation; Reconstruction of Cultural Heritage 3D Models from Sparse Point Clouds Using Implicit Neural Representations; From TrashCan to UNO: Deriving an Underwater Image Dataset to Get a More Consistent and
ISBN (print): 9783031377303
The proceedings contain 49 papers. The special focus in this conference is on pattern recognition. The topics include: Virtualization and 3D Visualization of Historical Costume Replicas: Accessibility, Inclusivity, Virtuality; Datafication of an Ancient Greek City: Multi-sensorial Remote Sensing of Heloros (Sicily); Geolocation of Cultural Heritage Using Multi-view Knowledge Graph Embedding; Masonry Structure Analysis, Completion and Style Transfer Using a Deep Neural Network; PARTICUL: Part Identification with Confidence Measure Using Unsupervised Learning; Explaining Classifications to Non-experts: An XAI User Study of Post-Hoc Explanations for a Classifier When People Lack Expertise; Motif-Guided Time Series Counterfactual Explanations; Comparison of Attention Models and Post-hoc Explanation Methods for Embryo Stage Identification: A Case Study; Graph-Based Analysis of Hierarchical Embedding Generated by Deep Neural Network; Explainability of Image Semantic Segmentation Through SHAP Values; CIELab Color Measurement Through RGB-D Images; Comparing Feature Importance and Rule Extraction for Interpretability on Text Data; Physics-Informed Neural Networks for Solar Wind Prediction; Less Labels, More Modalities: A Self-Training Framework to Reuse Pretrained Networks; Feature Transformation for Cross-domain Few-Shot Remote Sensing Scene Classification; A Novel Methodology for High Resolution Sea Ice Motion Estimation; Deep Learning-Based Sea Ice Lead Detection from WorldView and Sentinel SAR Imagery; Uncertainty Analysis of Sea Ice and Open Water Classification on SAR Imagery Using a Bayesian CNN; A Two-Stage Road Segmentation Approach for Remote Sensing Images; YUTO Tree5000: A Large-Scale Airborne LiDAR Dataset for Single Tree Detection; Spatial Layout Consistency for 3D Semantic Segmentation; Reconstruction of Cultural Heritage 3D Models from Sparse Point Clouds Using Implicit Neural Representations; From TrashCan to UNO: Deriving an Underwater Image Dataset to Get a More Consistent and
ISBN (print): 9781665477291
Drone-based Unmanned Aerial Systems (UAS) provide an efficient means for early detection and monitoring of remote wildland fires due to their rapid deployment, low flight altitudes, high 3D maneuverability, and ever-expanding sensor capabilities. Recent sensor advancements have made side-by-side RGB/IR sensing feasible for UASs. The aggregation of optical and thermal images enables robust environmental observation, as the thermal feed provides information that would otherwise be obscured in a purely RGB setup, effectively "seeing through" thick smoke and tree occlusion. In this work, we present the Fire detection and modeling: Aerial Multi-spectral image dataset (FLAME 2) [1], the first labeled collection of UAS-collected side-by-side RGB/IR aerial imagery of prescribed burns. Using FLAME 2, we then present two image-processing methodologies with multi-modal learning on our new dataset: (1) deep learning (DL)-based benchmarks for detecting fire and smoke frames with transfer learning and feature fusion, and (2) an exemplary image-processing system cascaded with the DL-based classifier to perform fire localization. We show these two techniques achieve notable gains over either single-domain video inputs or models trained from scratch in the fire detection task.
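The abstract does not name the backbones used for feature fusion, so the following is only a generic sketch of a dual-stream RGB/IR classifier in PyTorch, assuming a recent torchvision, ResNet-18 branches, and IR frames replicated to three channels; in practice, ImageNet-pretrained weights would be loaded into each branch for transfer learning.

```python
import torch
import torch.nn as nn
from torchvision import models

class RGBIRFusionClassifier(nn.Module):
    """Illustrative dual-stream classifier: one backbone per modality,
    features concatenated before a shared classification head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.rgb = models.resnet18(weights=None)  # pretrained weights in practice
        self.ir = models.resnet18(weights=None)   # IR assumed replicated to 3 channels
        feat_dim = self.rgb.fc.in_features
        self.rgb.fc = nn.Identity()               # strip the original heads,
        self.ir.fc = nn.Identity()                 # keep the feature extractors
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb, ir):
        fused = torch.cat([self.rgb(rgb), self.ir(ir)], dim=1)  # feature fusion
        return self.head(fused)
```

A forward pass would then be `logits = RGBIRFusionClassifier()(rgb_batch, ir_batch)` on paired frame tensors of shape (B, 3, H, W).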
Multi-view clustering (MVC) is a popular approach in data mining and pattern recognition that enhances clustering performance by leveraging complementary information from different data sources or features. However, real-world multi-view data often suffer from incomplete views, where some samples lack representation in one or more views due to various faults during data collection or processing. Efficiently exploiting the connections between multiple views from such incomplete data presents a significant challenge compared to complete-view clustering. Although several methods have been proposed in recent years to address the problem of incomplete multi-view clustering (IMVC), existing approaches have not fully considered: (1) the differential negative effects associated with varying missing-data rates across different views, and (2) the impact of intrinsic noise arising from the diverse quality of these views. To address this problem, we propose a novel subspace clustering framework based on robust matrix completion for incomplete and unlabeled multi-view data. The proposed model consists of two modules: the first is a regularized missing-data completion module derived from an information-theoretic perspective, and the second is a centralized subspace model that incorporates low-rank properties. In addition, we develop an efficient iterative algorithm using half-quadratic (HQ) optimization and the linearized alternating direction method (ADM). Finally, we conduct experiments on five benchmark multi-view datasets to demonstrate that our proposed method outperforms state-of-the-art methods on most datasets and metrics.
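The abstract does not give the exact completion objective, but a standard low-rank building block such schemes often rest on is singular value thresholding (SVT). The NumPy sketch below recovers a low-rank matrix from an observation mask and is meant only as an illustration of the matrix-completion ingredient, not as the proposed robust model; the threshold, step size, and iteration count are placeholder values.

```python
import numpy as np

def svt_complete(X, observed, tau=None, step=1.2, n_iter=200):
    """Generic low-rank matrix completion by singular value thresholding.

    X        : data matrix, arbitrary values at unobserved entries
    observed : boolean mask of observed entries
    """
    if tau is None:
        tau = 5.0 * np.sqrt(X.size)            # common heuristic for the threshold
    Y = np.zeros_like(X, dtype=float)
    Z = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + step * observed * (X - Z)                # correct on observed entries
    return Z
```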