ISBN (Print): 9781510626386
In this paper, we consider change detection in the longwave infrared (LWIR) domain. Because thermal emission is the dominant radiation source in this domain, differences in temperature may appear as material changes and introduce false alarms in change imagery. Existing methods, such as temperature-emissivity separation and alpha residuals, attempt to extract temperature-independent LWIR spectral information. However, both methods remain susceptible to residual temperature effects which degrade change detection performance. Here, we develop temperature-robust versions of these algorithms that project the spectra into approximately temperature-invariant subspaces. The complete error covariance matrix for each method is also derived so that Mahalanobis distance may be used to quantify spectral differences in the temperature-invariant domain. Examples using synthetic and measured data demonstrate substantial performance improvement relative to the baseline algorithms.
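The core operations described above, projecting spectra onto an approximately temperature-invariant subspace and scoring differences with a Mahalanobis distance under a propagated error covariance, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the basis `B` would come from the temperature-robust TES or alpha-residual analysis, whereas here it is random, and the covariance model is a hypothetical per-band diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: B spans the approximately temperature-invariant
# subspace (derived from TES or alpha residuals in the paper; random here).
n_bands, n_sub = 8, 3
B = np.linalg.qr(rng.normal(size=(n_bands, n_sub)))[0]  # orthonormal basis
P = B @ B.T                                             # projector onto the subspace

def mahalanobis_change(x, y, C):
    """Mahalanobis distance between two spectra after projection,
    given the error covariance C of the projected difference."""
    d = P @ (x - y)
    return float(np.sqrt(d @ np.linalg.pinv(C) @ d))

# Propagate a simple per-band error covariance through the projector.
C = P @ (0.01 * np.eye(n_bands)) @ P.T

x = rng.normal(size=n_bands)
y = x + 0.1 * rng.normal(size=n_bands)
score = mahalanobis_change(x, y, C)
```

The pseudoinverse is used because the projected covariance is rank-deficient by construction; identical spectra yield a distance of exactly zero.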
ISBN (Print): 9781510626423
Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial targets by adding small perturbations to the input image such that the image perturbation cannot be easily detected. The algorithm is built to adapt to both targeted and non-targeted adversarial attacks. Our findings reveal promising results that reflect serious implications of adversarial attacks.
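The targeted versus non-targeted perturbation mechanism can be illustrated with a fast-gradient-sign-style attack on a toy model. Everything here is a stand-in, not the paper's method: the "classifier" is a hypothetical linear softmax model (`W`, `b`), chosen so the loss gradient has a closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a DCNN classifier: a linear softmax model (W, b are
# hypothetical; the paper attacks real ATR networks).
n_pix, n_cls = 16, 3
W = rng.normal(size=(n_cls, n_pix))
b = rng.normal(size=n_cls)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(x, label):
    return -np.log(softmax(W @ x + b)[label])

def fgsm(x, label, eps, targeted=False):
    """Fast-gradient-sign-style perturbation, bounded by eps per pixel.
    For this linear model the loss gradient w.r.t. x is W.T @ (p - onehot)."""
    grad = W.T @ (softmax(W @ x + b) - np.eye(n_cls)[label])
    step = eps * np.sign(grad)
    # non-targeted: increase the true label's loss; targeted: decrease
    # the loss of the attacker-chosen label.
    return x - step if targeted else x + step

x = rng.normal(size=n_pix)
true_cls = int(np.argmax(W @ x + b))
x_adv = fgsm(x, true_cls, eps=0.5)
```

The perturbation is bounded by `eps` in the infinity norm, which is one common way to keep it hard to detect visually.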
ISBN (Print): 9781510630079; 9781510630086
Cloud detection is a critical issue for satellite optical remote sensing, since potential errors in cloud masking translate directly into significant uncertainty in the retrieved downstream geophysical products. The problem is particularly challenging when only a limited number of spectral bands is available and thermal infrared bands are lacking. This is the case for the Proba-V instrument, for which the European Space Agency (ESA) carried out a dedicated Round Robin exercise aimed at intercomparing several cloud detection algorithms to better understand their advantages and drawbacks for various cloud and surface conditions, and to learn lessons on cloud detection in the VNIR and SWIR domains for land and coastal water remote sensing. The present contribution is aimed at a thorough quality assessment of the results of the cloud detection approach we proposed, based on Cumulative Discriminant Analysis. This statistical method relies on the empirical cumulative distribution function of the measured reflectance in clear and cloudy conditions to produce a decision rule. It can be adapted to the user's requirements in terms of preferred levels for both type I and type II errors. In order to obtain a fully automatic procedure, we chose as a training dataset a subset of the full Proba-V scenes for which a cloud mask is estimated by a consolidated algorithm (silver standard), that is, from either SEVIRI, MODIS, or both sensors. Within this training set, different subsets have been set up according to the different types of surface underlying the scenes (water, vegetation, bare land, urban, and snow/ice). We present the analysis of the cloud classification errors for a range of such test scenes to yield important inferences on the efficiency and accuracy of the proposed methodology when applied to different types of surfaces.
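The essence of a decision rule built from empirical cumulative distribution functions with a user-chosen type I error level can be sketched in a few lines. The reflectance distributions below are hypothetical single-band stand-ins, not Proba-V data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training reflectances for one band and one surface class:
# cloudy pixels tend to be brighter than clear ones.
clear = rng.normal(0.15, 0.05, 2000)
cloudy = rng.normal(0.45, 0.10, 2000)

def cda_threshold(clear_samples, alpha):
    """Decision rule from the empirical CDF of the clear class: choose the
    reflectance threshold so that at most a fraction alpha of clear pixels
    is flagged as cloud (the user-chosen type I error level)."""
    return np.quantile(clear_samples, 1.0 - alpha)

thr = cda_threshold(clear, alpha=0.05)
type1 = float(np.mean(clear > thr))      # realized false-cloud rate
detection = float(np.mean(cloudy > thr)) # complement of the type II error
```

Training a separate threshold per surface subset (water, vegetation, bare land, urban, snow/ice), as the text describes, would simply repeat this per class.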
ISBN (Print): 9781510626386
Numerous methods exist to perform hyperspectral target detection. Application of these algorithms often requires the data to be atmospherically corrected, and detection for longwave infrared data typically requires surface temperature estimates as well. This work compares the relative robustness of various target detection algorithms with respect to atmospheric compensation and target temperature uncertainty. Specifically, the adaptive coherence estimator and the spectral matched filter are compared with subspace detectors for various methods of atmospheric compensation and temperature-emissivity separation. The comparison is performed using both daytime and nighttime longwave infrared hyperspectral data collected at various altitudes for various target materials.
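The two named detectors have compact closed forms. The sketch below uses a hypothetical zero-mean background and random target signature; real pipelines subtract an estimated background mean and use measured signatures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical background pixels and target signature (zero-mean background
# for brevity; real pipelines subtract the estimated background mean).
n_bands, n_pix = 30, 500
t = rng.normal(size=n_bands)
X = rng.normal(size=(n_bands, n_pix))
Ci = np.linalg.inv(np.cov(X))           # inverse background covariance

def smf(x):
    """Spectral matched filter score (unit response at x = t)."""
    return float((t @ Ci @ x) / (t @ Ci @ t))

def ace(x):
    """Adaptive coherence estimator: squared cosine between x and t in the
    whitened space, so the score always lies in [0, 1]."""
    return float((t @ Ci @ x) ** 2 / ((t @ Ci @ t) * (x @ Ci @ x)))
```

The ACE score's invariance to scaling of `x` is what makes it comparatively robust to target temperature (hence radiance amplitude) uncertainty, one of the axes of comparison in this work.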
ISBN (Digital): 9781510630741
ISBN (Print): 9781510630741
The currently increasing demand for photomasks in the regime of 14 nm technology drives many initiatives toward capacity and throughput increases in existing production lines. Such improvements are facilitated by improved control mechanisms for the tools and processes used within a production line. While process control of long-range parameters such as the average CD behavior is demanding yet conceptually well understood, other parameters such as small-scale CD properties are quite often elusive to process control. These properties often require a dedicated test mask to be processed in order to be validated. In this paper we introduce a systematic approach toward product-based monitoring of small-scale CD behavior which uses a CD characteristic extracted from the defect inspection process. This characteristic represents the influence of CD-relevant processes on spatial scales from 200 × 10⁻⁶ m up to 4000 × 10⁻⁶ m. Large variations in the scale and magnitude of the CD characteristic are induced by layout-specific design variations. However, the shape of these distinct curves is remarkably similar, which enables their use for monitoring as well as controlling the mask processes on the above-stated spatial scales. In this paper it is demonstrated that meaningful monitoring of the CD characteristic can be enabled through the use of machine learning methods. A classical monitoring scheme is typically based on measuring the deviation of each curve from the average behavior. However, the monitoring of a curve and deviations thereof often requires the evaluation of the overall shape of the curve. Thus we propose a monitoring concept which uses a support vector machine in order to learn the shapes of the CD characteristics. It is demonstrated that a statistical model of the CD characteristics can be trained and used to monitor single excursions (see Figure 1) as well as overall process changes.
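The idea of learning curve shapes rather than pointwise deviations can be illustrated with a simplified stand-in: where the paper trains a support vector machine, the sketch below normalizes away scale and offset and monitors correlation against the mean trained shape. The curves themselves are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical CD characteristics: 40-point curves with a shared shape but
# layout-dependent scale, as described above. (The paper uses an SVM; this
# is a simplified correlation-based shape monitor in its place.)
base = np.sin(np.linspace(0.0, np.pi, 40))
train = np.array([a * base + rng.normal(0.0, 0.02, 40)
                  for a in rng.uniform(0.5, 2.0, 100)])

def shape(c):
    """Remove offset and scale so only the curve's shape remains."""
    c = c - c.mean()
    return c / np.linalg.norm(c)

shapes = np.array([shape(c) for c in train])
mean_shape = shape(shapes.mean(axis=0))
thr = (shapes @ mean_shape).min() - 0.01   # lowest in-family correlation

def is_excursion(curve):
    return bool(shape(curve) @ mean_shape < thr)
```

Because the monitor operates on normalized shapes, curves differing only in magnitude pass, while a curve with a genuinely different profile is flagged as an excursion.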
ISBN (Print): 9781510625501
Proton beam therapy has recently been proposed as a noninvasive approach for treating ventricular tachycardia (VT), where target regions are identified in the myocardium and treated using external beam therapy. Effective treatment requires that lesions develop at target sites of myocardial tissue in order to stop arrhythmic pathways. Precise characterization of the dose required for lesion creation is required for determining appropriate dose levels in future clinical treatment of VT patients. In this work, we use a deformable registration algorithm to align proton beam delivery isodose lines planned from baseline computed tomography scans to follow-up delayed contrast-enhanced magnetic resonance imaging scans in three swine studies. The relationship between myocardial lesion formation and delivered dose from external proton beam ablation therapy is then quantitatively assessed. The current study demonstrates that myocardial tissue receiving a dose of 20 Gy or higher tends to develop into lesions, while tissue exposed to less than 10 Gy tends to remain healthy. Overall, this study quantifies the relationship between external proton beam therapy dose and myocardial lesion formation, which is important for determining dose levels in future clinical treatment of VT patients.
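The reported dose-response trend amounts to a three-way classification of tissue by delivered dose. A minimal sketch, with hypothetical per-voxel dose values:

```python
# Hypothetical per-voxel doses (Gy), classified with the thresholds reported
# above: >= 20 Gy tends to form lesion, < 10 Gy tends to stay healthy, and
# the band in between is indeterminate in this simple reading of the result.
def classify_voxel(dose_gy):
    if dose_gy >= 20.0:
        return "lesion"
    if dose_gy < 10.0:
        return "healthy"
    return "indeterminate"

doses = [5.0, 9.9, 12.0, 19.9, 20.0, 35.0]
labels = [classify_voxel(d) for d in doses]
```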
ISBN (Digital): 9781510626904
ISBN (Print): 9781510626904
In this article we discuss the differences and similarities, in terms of useful information extracted from images, between several features that operate on pixel imagery and their analogues that act on voxel imagery. First we train voxel and, separately, pixel feature-based classifiers to distinguish targets from clutter using a probabilistic classification algorithm. The relative usefulness of the information in these features is then measured by comparing the performance of the classifiers. Our experiments utilize voxel imagery constructed from ultra-wideband synthetic aperture radar; the pixel images analyzed are two-dimensional (2D) subimages of the voxel images. The work primarily uses commonly employed features, such as image intensity statistics, elements of an image's Fourier decomposition, and various energy measures. The cost of different features is also discussed.
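A convenient property of the feature families named here (intensity statistics, Fourier components, energy measures) is that they can be written dimension-free, so the same extractor serves both pixel and voxel imagery. The sketch below is illustrative; the specific feature set is not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def features(img):
    """Intensity statistics plus a Fourier-energy feature. The operations
    are dimension-free, so the same code serves 2D pixel images and 3D
    voxel images (this feature list is illustrative, not the paper's)."""
    F = np.fft.fftn(img)
    spectral_energy = float(np.sum(np.abs(F) ** 2)) / img.size
    return np.array([img.mean(), img.std(), img.min(), img.max(),
                     spectral_energy])

voxels = rng.normal(size=(8, 8, 8))   # toy voxel image
pixels = voxels[:, :, 4]              # a 2D subimage, as in the paper
f3d, f2d = features(voxels), features(pixels)
```

By Parseval's relation the normalized spectral energy equals the image's sum of squares, so the feature is consistent across dimensionalities.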
ISBN (Digital): 9781510626904
ISBN (Print): 9781510626904
Detection of buried explosive objects has been studied extensively and several sensors have been developed. In particular, ground penetrating radar (GPR) has proved to be one of the most successful modalities, and many machine learning algorithms have been developed for buried threat detection using this sensor. Large-scale experiments that involved multiple detection algorithms and very large data collections have indicated that the relative performance of different algorithms can vary significantly depending on the explosive objects, geographical site, soil and weather conditions, and burial depth. In fact, it is possible for an algorithm that performs well on training data to have a low probability of target detection (PD), or a high false alarm rate (FAR), on new data collected in a different environment. In this paper, we investigate the possibility of developing an algorithm that can predict the performance of a discrimination algorithm on GPR data collected in different environments. This can be used to select the optimal sensor/algorithm for a given location. It can also be used to select the optimal parameters of a given discriminator for a given site. Our approach combines predictive analysis with adequate feature selection methods to boost PD modeling and improve its prediction accuracy. Starting from raw GPR data, we extract and investigate a large set of potential descriptors that can quantify noise, surface roughness, and (implicit) soil properties. Our objectives are to: (i) identify the optimal subset of features that can affect the target PDs of a given discriminator; and (ii) learn a regression model for PD prediction. To validate our approach, we use data collected by a GPR sensor mounted on a vehicle. We extract over 50 different features from background regions and investigate feature selection and regression algorithms to learn a model that can predict the target PD of a given discrimination algorithm for a given lane segment. We validate our results.
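The two-objective pipeline, feature selection followed by a regression model for PD, can be sketched with simple stand-ins: correlation-based ranking for step (i) and least squares for step (ii). The data, feature count, and selection rule are all hypothetical; the paper investigates a range of selection and regression algorithms.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical dataset: 200 lane segments, 6 background features, and the
# measured PD of some discriminator (synthetic here; by construction only
# the first three features actually matter).
n, p, k = 200, 6, 3
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [0.5, -0.4, 0.3]
pd_meas = X @ true_w + rng.normal(0.0, 0.05, n)

# (i) feature selection: rank by absolute correlation with PD, keep top k.
corr = np.abs([np.corrcoef(X[:, j], pd_meas)[0, 1] for j in range(p)])
selected = np.argsort(corr)[::-1][:k]

# (ii) regression model: least squares on the selected features.
A = np.column_stack([np.ones(n), X[:, selected]])
w, *_ = np.linalg.lstsq(A, pd_meas, rcond=None)
pred = A @ w
r2 = 1.0 - np.sum((pd_meas - pred) ** 2) / np.sum((pd_meas - pd_meas.mean()) ** 2)
```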
ISBN (Print): 9781510625440
Material decomposition for imaging multiple contrast agents in a single acquisition has been made possible by spectral CT: a modality which incorporates multiple photon energy spectral sensitivities into a single data collection. This work presents an investigation of a new approach to spectral CT which does not rely on energy-discriminating detectors or multiple X-ray sources. Instead, a tiled pattern of K-edge filters is placed in front of the X-ray source to create spatially encoded spectral data. For improved sampling, the spatial-spectral filter is moved continuously with respect to the source. A model-based material decomposition algorithm is adopted to directly reconstruct multiple material densities from projection data that is sparse in each spectral channel. Physical effects associated with the X-ray focal spot size and motion blur for the moving filter are expected to impact overall performance. In this work, those physical effects are modeled and a performance analysis is conducted. Specifically, experiments are presented with simulated focal spot widths between 0.2 mm and 4.0 mm. Additionally, filter motion blur is simulated for linear translation speeds between 50 mm/s and 450 mm/s. The performance differential between a 0.2 mm and a 1.0 mm focal spot is less than 15%, suggesting feasibility of the approach with realistic X-ray tubes. Moreover, for reasonable filter actuation speeds, higher speeds are shown to decrease error (due to improved sampling) despite motion-based spectral blur.
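The core linear step underlying material decomposition can be shown in isolation. The paper's method is a model-based iterative reconstruction from sparse spectral channels; the sketch below illustrates only the idealized noise-free inversion for two channels and two basis materials, with made-up sensitivity numbers.

```python
import numpy as np

# Hypothetical two-channel, two-material system: rows are the effective
# sensitivities of spectra shaped by two K-edge filter tiles, columns are
# the basis materials (all numbers illustrative).
A = np.array([[0.40, 0.15],
              [0.12, 0.35]])

rho = np.array([2.0, 1.5])          # true basis-material densities
meas = A @ rho                      # idealized noise-free measurements

rho_hat = np.linalg.solve(A, meas)  # direct decomposition
```

In the actual approach each ray sees only some spectral channels, so this per-ray system is underdetermined and the model-based algorithm couples the inversion with reconstruction across rays.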
Authors: Languirand, Eric R.; Emge, Darren K.
Affiliations: Leidos, 1504 Quarry Dr, Edgewood, MD 21040, USA; US Army Combat Capabil Dev Command, Chem Biol Ctr, 8510 Ricketts Point Rd, Aberdeen Proving Ground, MD 21010, USA
ISBN (Digital): 9781510626867
ISBN (Print): 9781510626867
Hyperspectral imaging (HSI) has become increasingly popular for sensing in defense, commercial, and academic research for its ability to acquire vast amounts of information, relatively quickly, at stand-off distances. As such, the need for rapid or near-real-time data reduction is becoming more evident, especially when immediate knowledge of the area under investigation is required, such as in contested areas, the scenes of natural disasters, and other similar scenarios. While analysis of the underlying spectral information may provide specific information about the materials present, in HSI determining an anomaly can be just as informative in scenarios such as CB detection for avoidance. Therefore, a rapid, real-time HSI anomaly detection algorithm is merited. In this paper, we present work towards an algorithm for near-real-time anomaly detection utilizing higher-order statistics and, in particular, implications due to changes in skewness and kurtosis, the 3rd and 4th central moments. We demonstrate using a visible-SWIR hyperspectral line scanner that anomalies (thiodiglycol and acetaminophen) can be detected in data that is updated to simulate real-time analysis. Changing spectral features result in changes in the probability density function and can be specifically realized with comparisons of higher-order statistics (i.e., skewness and kurtosis), thereby reducing a full spectral analysis at each voxel to a comparison of two values at each pixel. This paper explores utilizing this concept as a means for anomaly detection, evaluates different surfaces that an analyte may be present on, and lastly presents work towards automated background updates for anomaly detection on dynamic surfaces.
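The reduction of each pixel's full spectrum to two summary values, skewness and kurtosis, compared against background statistics, can be sketched as follows. The background spectra, band count, and threshold multiplier are all hypothetical stand-ins for the paper's line-scanner data.

```python
import numpy as np

rng = np.random.default_rng(7)

def skew_kurt(v):
    """Skewness and kurtosis (3rd and 4th standardized sample moments)."""
    z = (v - v.mean()) / v.std()
    return np.array([np.mean(z ** 3), np.mean(z ** 4)])

# Hypothetical background: 100 pixels x 64 bands of featureless spectra.
bg = rng.normal(0.5, 0.05, (100, 64))
stats = np.array([skew_kurt(px) for px in bg])
mu, sd = stats.mean(axis=0), stats.std(axis=0)

def is_anomaly(spectrum, k=5.0):
    """Flag a pixel whose two summary values sit far outside the background
    distribution -- a full-spectrum comparison reduced to a two-number
    comparison, as described above."""
    return bool(np.any(np.abs(skew_kurt(spectrum) - mu) > k * sd))

analyte = rng.normal(0.5, 0.05, 64)
analyte[30] += 1.0                     # sharp spectral feature
```

A sharp absorption or emission feature drags both moments far from the background values, so the per-pixel test stays cheap enough for near-real-time use.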