This paper presents an approach for the classification of hyperspectral imagery (HSI). In remote sensing, the HSI sensor acquires hundreds of images with narrow and continuous spectral width in the visible and near-infrared regions of the electromagnetic (EM) spectrum. This nature of data acquisition is very useful for classifying and/or identifying the different objects present in the HSI data. However, the low spatial resolution and large volume of HS images make this more challenging. In the proposed approach, we use an autoencoder with a convolutional neural network (AECNN) for the classification of HS images. Pre-processing with the autoencoder enhances the features in the HS images, which helps to obtain optimized weights in the initial layers of the CNN model. Hence, a shallow CNN architecture can be utilized to extract features from the pre-processed HSI data, which are then used for classification. The potential of the proposed approach has been verified through many experiments on various datasets. The classification results obtained using the proposed method are compared with many state-of-the-art deep learning based methods, including the winner of the Geoscience and Remote Sensing Society (GRSS) Image Fusion Contest-2018 on HSI classification held at the IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2018, and the proposed method shows superiority over those methods.
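A minimal sketch of the AE pre-processing followed by a shallow CNN classifier, in PyTorch. The patch size, band count, layer widths, and class count are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Enhances spectral-spatial features of an HSI patch before classification."""
    def __init__(self, bands: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, bands, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class ShallowCNN(nn.Module):
    """Shallow classifier operating on the AE-enhanced patches."""
    def __init__(self, bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical example: 11x11 patches with 144 bands, 20 land-cover classes.
patches = torch.randn(8, 144, 11, 11)
ae = PatchAutoencoder(bands=144)
recon, _ = ae(patches)                      # AE pre-processing / feature enhancement
clf = ShallowCNN(bands=144, n_classes=20)
logits = clf(recon)                         # classification on the enhanced data
```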
Learning on surfaces is a difficult task: the data being non-Euclidean makes the transfer of known techniques such as convolutions and pooling non-trivial. Common methods apply deep learning operations to triangular meshes either in the spatial domain, by defining weights between nodes, or in the spectral domain, using first-order Chebyshev polynomials followed by a return to the spatial domain. In this study, we present a spectral autoencoder (SAE) enabling the application of deep learning techniques to 3D meshes by directly using spectral coefficients obtained with a spectral transform as inputs. With a dataset composed of surfaces having the same connectivity, the graph Laplacian makes it possible to express the geometry of all samples in the frequency domain. Then, by using an autoencoder architecture, we are able to extract important features from the spectral coefficients without going back to the spatial domain. Finally, a latent space is built from which reconstruction and interpolation are possible. This method allows meshes with more vertices to be handled with the same architecture, and enables learning on large datasets with short computation times. Through experiments, we demonstrate that this architecture gives better results than state-of-the-art methods, and does so faster. (C) 2022 Elsevier Ltd. All rights reserved.
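A small NumPy sketch of the spectral transform that feeds the SAE: vertex coordinates are projected onto the graph-Laplacian eigenbasis of the shared mesh connectivity, and the autoencoder would then operate on the retained low-frequency coefficients. The toy mesh, the number of retained frequencies, and the use of the plain combinatorial Laplacian are illustrative assumptions.

```python
import numpy as np

def graph_laplacian(n_vertices, edges):
    """Combinatorial Laplacian L = D - A for the shared mesh connectivity."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Toy mesh: 5 vertices on a cycle; every sample shares this connectivity.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
L = graph_laplacian(5, edges)
_, basis = np.linalg.eigh(L)           # eigenvectors = graph Fourier basis

k = 3                                   # keep the k lowest-frequency modes
coords = np.random.rand(5, 3)           # one sample's vertex positions (x, y, z)
spectral = basis[:, :k].T @ coords      # forward spectral transform, shape (k, 3)
recon = basis[:, :k] @ spectral         # inverse transform back to vertex space
```

In the paper's setting, the flattened `spectral` coefficients (rather than `coords`) would be the autoencoder's input, which is what keeps the architecture size independent of the vertex count.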
An autoencoder (AE) is an unsupervised neural network framework for efficient and effective feature extraction. Most AE-based methods do not consider spatial information and band correlations for hyperspectral image (HSI) analysis. In addition, graph-based AE methods often learn discriminative representations under the assumption that connected samples share the same label, and they cannot directly embed the geometric structure into feature extraction. To address these issues, in this paper we propose a dual graph autoencoder (DGAE) to learn discriminative representations for HSIs. Utilizing the relationships between pairs of pixels within homogeneous regions and between pairs of spectral bands, DGAE first constructs a superpixel-based similarity graph with spatial information and a band-based similarity graph to characterize the geometric structures of HSIs. With the developed dual graph convolution, more discriminative feature representations are learnt from the hidden layer via the encoder-decoder structure of DGAE. The main advantage of DGAE is that it fully exploits the geometric structures of both the pixels (with spatial information) and the spectral bands to promote nonlinear feature extraction of HSIs. Experiments on HSI datasets show the superiority of the proposed DGAE over state-of-the-art methods.
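A NumPy sketch of one dual-graph propagation step, in the spirit of the dual graph convolution described above: features are smoothed over a pixel-similarity graph and a band-similarity graph before projection. Both graphs here are random placeholders standing in for the paper's superpixel-based and band-based similarity graphs, and the single-layer form is an illustrative assumption.

```python
import numpy as np

def random_sym_adj(n, density, rng):
    """Random symmetric 0/1 adjacency, a stand-in for the similarity graphs."""
    A = (rng.random((n, n)) < density).astype(float)
    A = np.triu(A, 1)
    return A + A.T

def normalize_adj(A):
    """Symmetric normalisation D^{-1/2} (A + I) D^{-1/2} used in graph convolutions."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
n_pixels, n_bands, hidden = 100, 50, 16

X = rng.random((n_pixels, n_bands))                          # HSI pixels x spectral bands
A_pix = normalize_adj(random_sym_adj(n_pixels, 0.05, rng))   # superpixel/pixel graph
A_band = normalize_adj(random_sym_adj(n_bands, 0.2, rng))    # band-similarity graph

W = rng.standard_normal((n_bands, hidden)) * 0.1
# One dual graph convolution step: smooth over pixels and bands, then project.
H = np.maximum(A_pix @ X @ A_band @ W, 0.0)                  # ReLU(A_pix X A_band W)
print(H.shape)                                               # (100, 16) hidden features
```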
In the past decades, personalized recommendation systems have attracted a vast amount of attention and research from multiple disciplines. Recently, owing to their powerful feature representation learning ability, deep neural networks have achieved sound performance in recommendation. However, most existing deep recommendation approaches require a large amount of labeled data, which is often expensive and laborious to obtain in applications. Meanwhile, the side information of users and items that could extend the feature space effectively is usually scarce. To address these problems, we propose a Personalized Recommendation method that extends items' feature representations with a Knowledge Graph via a dual autoencoder (PRKG for short). More specifically, we first extract items' side information from an open knowledge graph such as DBpedia as a feature extension for items. Secondly, we learn low-dimensional representations of the additional features collected from DBpedia via the autoencoder module and then integrate the processed features into the original feature space. Finally, the reconstructed features are incorporated into the semi-autoencoder for personalized recommendations. Extensive experiments conducted on several real-world datasets validate the effectiveness of our proposed method compared to several state-of-the-art models.
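A PyTorch sketch of the feature-extension step: knowledge-graph side information is compressed by an autoencoder and the resulting codes are appended to the items' original feature space before being passed to the recommender. The dimensions and the plain MSE objective are illustrative assumptions, not the exact PRKG configuration.

```python
import torch
import torch.nn as nn

kg_dim, code_dim, n_items, item_dim = 300, 32, 1000, 64

class SideInfoAE(nn.Module):
    """Autoencoder that compresses knowledge-graph side information."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(kg_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, kg_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

kg_features = torch.randn(n_items, kg_dim)         # e.g. DBpedia attribute vectors
item_features = torch.randn(n_items, item_dim)     # original item feature space

ae = SideInfoAE()
recon, codes = ae(kg_features)
loss = nn.functional.mse_loss(recon, kg_features)  # reconstruction objective

extended = torch.cat([item_features, codes], dim=1)   # extended item feature space
print(extended.shape)                                  # (1000, 96)
```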
A key challenge in building machine learning models for time series prediction is the incompleteness of the datasets. Missing data can arise for a variety of reasons, including sensor failure and network outages, resulting in datasets that can be missing significant periods of measurements. Models built using these datasets can therefore be biased. Although various methods have been proposed to handle missing data in many application areas, missing-data prediction for air quality measurements requires additional investigation. This study proposes an autoencoder model with spatiotemporal considerations to estimate missing values in air quality data. The model consists of one-dimensional convolution layers, making it flexible enough to cover the spatial and temporal behaviours of air contaminants. It exploits data from nearby stations to enhance predictions at the target station with missing data, and does not require additional external features such as weather and climate data. The results show that the proposed method effectively imputes missing data for discontinuous and long-interval interrupted datasets. Compared to univariate imputation techniques (most frequent, median and mean imputations), our model achieves up to 65% RMSE improvement, and 20-40% improvement against multivariate imputation techniques (decision tree, extra-trees, k-nearest neighbours and Bayesian ridge regressors). Imputation performance degrades when neighbouring stations are negatively correlated or only weakly correlated.
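A PyTorch sketch of this kind of imputation model: a 1-D convolutional autoencoder whose input channels are the target station plus nearby stations, trained to reconstruct the target series so that masked (missing) time steps can be filled in. The window length, channel count, layer sizes, and masking scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_stations, window = 5, 168          # e.g. target + 4 neighbours, one week of hourly data

model = nn.Sequential(
    nn.Conv1d(n_stations, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 16, kernel_size=5, padding=2), nn.ReLU(),     # encoder
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=5, padding=2),                 # decoder -> target station
)

x = torch.randn(8, n_stations, window)        # batch of multi-station measurement windows
mask = torch.rand(8, 1, window) > 0.2         # True where the target value is observed
x_in = x.clone()
x_in[:, :1][~mask] = 0.0                      # zero out missing target readings

recon = model(x_in)
# Train only on observed points; at inference, missing points are read from `recon`.
loss = ((recon - x[:, :1])[mask] ** 2).mean()
```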
Unsupervised signal modulation clustering is becoming increasingly important due to its application in the dynamic spectrum access process of 5G wireless communication and in threat detection at the physical layer of the Internet of Things. The need for better clustering results makes it a challenge to avoid feature drift and improve feature separability. This article proposes a novel separable loss function to address this issue. In addition, the high-level semantic properties of modulation types make it difficult for networks to extract their features. An autoencoder structure based on random Fourier features (RffAe) is proposed to simulate the demodulation process of unknown signals. Combined with the separable loss of RffAe (RffAe-S), it has excellent feature extraction ability. Extensive experiments were carried out on RADIOML 2016.10A and RADIOML 2016.10B. Experimental evaluations on these datasets show that our approach, RffAe-S, achieves state-of-the-art results compared to classical methods and the most relevant deep clustering methods.
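A PyTorch sketch of the RffAe idea: flattened signal windows are mapped through a fixed random Fourier feature layer and an autoencoder is trained on those features. The "separable" term below is a generic between-sample separation penalty used purely as a stand-in for the paper's separable loss; the RFF width, dimensions, and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

sig_dim, rff_dim, code_dim = 256, 512, 32

# Fixed random Fourier feature projection: x -> cos(Wx + b), scaled by sqrt(2/D).
W = torch.randn(sig_dim, rff_dim)
b = 2 * torch.pi * torch.rand(rff_dim)
def rff(x):
    return torch.cos(x @ W + b) * (2.0 / rff_dim) ** 0.5

enc = nn.Sequential(nn.Linear(rff_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, rff_dim))

x = torch.randn(64, sig_dim)                       # flattened I/Q signal windows
feats = rff(x)
z = enc(feats)
recon_loss = nn.functional.mse_loss(dec(z), feats)

# Illustrative separability term: encourage latent codes to spread apart.
separable_loss = -torch.pdist(z).mean()
loss = recon_loss + 0.1 * separable_loss
```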
Visible light communication (VLC) is a relatively new wireless communication technology that allows for high data rate transfer. Because of its capability to enable high-speed transmission and eliminate inter-symbol interference, orthogonal frequency division multiplexing (OFDM) is widely employed in VLC. The peak-to-average power ratio (PAPR) is an issue that impacts the effectiveness of OFDM systems, particularly in VLC systems, because the signal is distorted by the nonlinearity of light-emitting diodes (LEDs). The proposed Long Short-Term Memory autoencoder (LSTM-AE) method uses an autoencoder together with an LSTM to learn a compact representation of an input, allowing the model to handle variable-length input sequences and to predict or produce variable-length output sequences. This study compares the proposed model with various PAPR reduction strategies to demonstrate that it offers a superior improvement in PAPR reduction of the transmitted signal while maintaining the BER. The model also provides a flexible trade-off between PAPR and BER.
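A PyTorch sketch of (i) the PAPR metric the scheme targets and (ii) a small LSTM autoencoder that compresses an OFDM symbol's I/Q sequence to a latent code and regenerates it. The subcarrier count, hidden size, and the direct use of random symbols are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    power = x.abs() ** 2
    return 10 * torch.log10(power.max() / power.mean())

n_sub = 64
freq_symbols = torch.randn(n_sub, dtype=torch.cfloat)       # one OFDM symbol (frequency domain)
time_signal = torch.fft.ifft(freq_symbols)                  # time-domain signal fed to the LED
print(f"PAPR: {papr_db(time_signal):.2f} dB")

class LSTMAutoencoder(nn.Module):
    """Encodes an I/Q sequence into a fixed-size code and decodes it back."""
    def __init__(self, hidden=32):
        super().__init__()
        self.enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.dec = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, seq):                                  # seq: (batch, T, 2) = [real, imag]
        _, (h, _) = self.enc(seq)
        code = h[-1].unsqueeze(1).repeat(1, seq.size(1), 1)  # repeat latent code per time step
        dec_out, _ = self.dec(code)
        return self.out(dec_out)

seq = torch.stack([time_signal.real, time_signal.imag], dim=-1).unsqueeze(0)
model = LSTMAutoencoder()
recon = model(seq)                                           # reconstructed I/Q sequence
```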
Ransomware attacks are hazardous cyber-attacks that use cryptographic methods to hold victims' data until the ransom is paid. Zero-day ransomware attacks try to exploit new vulnerabilities and are considered a severe threat to existing security solutions and internet resources. In the case of zero-day attacks, training data is not available before the attack takes place. Therefore, we exploit Zero-shot Learning (ZSL), which can deal effectively with unseen classes compared to traditional machine learning techniques. ZSL is a two-stage process comprising Attribute Learning (AL) and an Inference Stage (IS). In this regard, this work presents a new Deep Contractive autoencoder based Attribute Learning (DCAE-ZSL) technique as well as an IS method based on a Heterogeneous Voting Ensemble (DCAE-ZSL-HVE). In the proposed DCAE-ZSL approach, a Contractive autoencoder (CAE) is employed to extract core features of known and unknown ransomware. The regularization term of the CAE helps penalize the classifier's sensitivity to small dissimilarities in the latent space. For the IS, four combination rules, namely Global Majority (GM), Local Majority (LM), Cumulative Vote-against based Global Majority (CVAGM), and Cumulative Vote-for based Global Majority (CVFGM), are utilized to obtain the final prediction. It is empirically shown that, in comparison to conventional machine learning techniques, models trained on the contractive embedding show reasonable performance against zero-day attacks. Furthermore, the exploitation of these core features through the proposed voting-based ensemble (DCAE-ZSL-HVE) demonstrates significant improvement in detecting zero-day attacks (recall = 0.95) and in reducing false negatives (FN = 6).
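A PyTorch sketch of the contractive-autoencoder regularizer used in the attribute-learning stage: the Frobenius norm of the encoder Jacobian is added to the reconstruction loss, penalizing sensitivity of the latent code to small input perturbations. The feature dimension, code size, and penalty weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

in_dim, code_dim, lam = 100, 20, 1e-3

W = nn.Parameter(torch.randn(in_dim, code_dim) * 0.05)   # encoder weights
b = nn.Parameter(torch.zeros(code_dim))
dec = nn.Linear(code_dim, in_dim)                         # decoder

x = torch.randn(32, in_dim)                   # e.g. ransomware behaviour feature vectors
h = torch.sigmoid(x @ W + b)                  # encoder (latent attributes)
recon = dec(h)

recon_loss = nn.functional.mse_loss(recon, x)
# Closed-form Jacobian penalty for a sigmoid encoder: sum_j (h_j (1 - h_j))^2 * ||W_j||^2.
jacobian_penalty = ((h * (1 - h)) ** 2 @ (W ** 2).sum(dim=0)).mean()
loss = recon_loss + lam * jacobian_penalty    # contractive autoencoder objective
```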
Non-destructive testing and evaluation techniques play an essential role in ensuring the safety of materials in operation across various industry sectors. Pulse-compression-favourable thermal wave imaging is one of the widely used non-destructive testing techniques owing to its excellent noise rejection capabilities. However, the high-dimensional thermal imaging data need to be encoded into a losslessly compressed form to highlight the hidden defects inside the materials. This paper proposes a novel constrained and regularized autoencoder based thermography approach for sub-surface defect detection in a mild steel specimen. Certain properties, such as non-correlation of the encoded data, weight orthogonality, and unit-norm weights, are highlighted; these are absent in linear autoencoders but are responsible for better defect detection in materials inspected by frequency-modulated thermal wave imaging. Novel constraints are formulated for the autoencoder cost function to incorporate these significant properties. The proposed approach provides better defect detection, in terms of the signal-to-noise ratio of defects, than the linear autoencoder as well as the traditional principal component thermography approach. Moreover, non-correlation of the encoded data is found to be the most significant factor in achieving better defect detection, followed by the properties ensuring weight orthogonality and unit-norm weights.
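A PyTorch sketch of the three penalties described above added to a tied-weight linear autoencoder cost: decorrelation of the encoded data, orthogonality of the encoder weights, and unit-norm weight columns. The data dimensions, the plain squared-error data term, and the penalty weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

n_pixels, n_frames, code_dim = 500, 200, 10   # thermogram pixels x time frames

W = nn.Parameter(torch.randn(n_frames, code_dim) * 0.05)   # encoder weights
X = torch.randn(n_pixels, n_frames)                        # thermal image sequence (flattened)

Z = X @ W                                  # encoded (compressed) thermal data
recon = Z @ W.t()                          # tied-weight linear decoder
recon_loss = ((recon - X) ** 2).mean()

I = torch.eye(code_dim)
cov = (Z.t() @ Z) / n_pixels
decorrelation = ((cov - torch.diag(torch.diagonal(cov))) ** 2).sum()   # off-diagonal cov -> 0
orthogonality = ((W.t() @ W - I) ** 2).sum()                           # W^T W -> I
unit_norm = ((W.norm(dim=0) - 1) ** 2).sum()                           # ||w_j|| -> 1

loss = recon_loss + 1e-2 * decorrelation + 1e-2 * orthogonality + 1e-2 * unit_norm
```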
In this study, we propose a deep learning related framework to analyze S&P500 stocks using bi-dimensional histograms and an autoencoder. A bi-dimensional histogram consisting of the daily returns of the stock price and the stock trading volume is plotted for each stock. An autoencoder is applied to the bi-dimensional histogram to reduce the data dimension and extract meaningful features of a stock. The histogram distance matrix for the stocks is built from the extracted features, and a stock market network is constructed by applying the Planar Maximally Filtered Graph (PMFG) algorithm to the histogram distance matrix. The constructed stock market network represents the latent space of the bi-dimensional histograms, and network analysis is performed to investigate the structural properties of the stock market. We discover that the structural properties of the stock market network are related to the dispersion of the bi-dimensional histograms. We also confirm that the autoencoder is effective in extracting the latent features of the bi-dimensional histograms. Portfolios using the features of the bi-dimensional histogram network are constructed, and their investment performance is evaluated in comparison with benchmark portfolios. We observe that the portfolio consisting of stocks corresponding to the peripheral nodes of the bi-dimensional histogram network shows better investment performance than the other benchmark stock portfolios.
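A sketch of the first steps of this pipeline in Python: build each stock's bi-dimensional histogram of (daily return, volume change), compress it with an encoder, and form the pairwise distance matrix on which PMFG filtering would then be applied. The bin counts, code size, random data, and the encoder-only network (decoder omitted for brevity) are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

n_stocks, n_days, bins = 20, 500, 16

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(n_stocks, n_days))      # daily price returns
volumes = rng.normal(0, 0.05, size=(n_stocks, n_days))      # daily volume changes

# Bi-dimensional histogram per stock, flattened into a feature vector.
hists = np.stack([
    np.histogram2d(returns[i], volumes[i], bins=bins, density=True)[0].ravel()
    for i in range(n_stocks)
])                                                           # (n_stocks, bins * bins)

encoder = nn.Sequential(nn.Linear(bins * bins, 64), nn.ReLU(), nn.Linear(64, 8))
codes = encoder(torch.tensor(hists, dtype=torch.float32))    # extracted latent features

dist = torch.cdist(codes, codes)   # histogram distance matrix; input to PMFG filtering
print(dist.shape)                  # (20, 20)
```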