ISBN (print): 9781450397117
In 2020, the world was struck by the virus that causes COVID-19. Restrictions on people's activities were imposed in various countries to prevent the spread of the virus. As vaccination progressed, restriction levels were reduced or lifted, even though new COVID-19 cases worldwide have not ended. People's responses to restriction policies vary in both sentiment and human mobility: sentiment is either support or resistance, while mobility is either staying at home or not. This study analyzes the relationship between the two responses through two types of data: text for sentiment and time series for mobility. The sentiment text data are taken from Twitter and the mobility time-series data from Google Mobility, covering February 2020 to April 2022. Twitter and Google Mobility data are collected from several English-speaking countries that implemented restrictions: Australia, Canada, Singapore, the United Kingdom (UK), and the United States (US). An unsupervised autoencoder model is leveraged to find clusters, with a separate autoencoder architecture proposed for each data type: the text data are converted to vectors by Word2Vec before being fed into a multilayer autoencoder, while an LSTM autoencoder is used for the time-series data. Finally, hypothesis tests are performed to determine whether the means of the resulting clusters are the same or different. Of the five countries, only Canada's null hypothesis is accepted, meaning that people in Canada tended to be neutral in their sentiment toward COVID-19 while their mobility remained dynamic; this suggests that people in Canada obeyed the government's restriction decisions during the rise of COVID-19 cases.
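To make the time-series branch of such a pipeline concrete, the following is a minimal sketch of an LSTM autoencoder in PyTorch. It is not the authors' implementation; the layer sizes, number of mobility categories, and sequence length are assumptions, and the learned code would be passed to a separate clustering step.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Minimal LSTM autoencoder: encode a mobility sequence into a fixed-size
    code, then reconstruct the sequence from that code."""
    def __init__(self, n_features=6, latent_dim=16):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.output = nn.Linear(latent_dim, n_features)

    def forward(self, x):                          # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)                # last hidden state as the code
        code = h[-1]
        repeated = code.unsqueeze(1).repeat(1, x.size(1), 1)
        decoded, _ = self.decoder(repeated)
        return self.output(decoded), code

model = LSTMAutoencoder()
x = torch.randn(8, 30, 6)                          # 8 sequences, 30 days, 6 mobility categories (assumed)
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)            # reconstruction objective; clustering runs on `code`
```

The text branch would feed fixed-length Word2Vec sentence vectors into an analogous fully connected (multilayer) autoencoder.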
Spectral unmixing is a major technique for the further development of hyperspectral analysis. It aims to blindly determine, at the subpixel level, the proportions (fractional abundances) of the basic spectral signatures (endmembers) that make up each pixel. Recently, learning-based methods have received much attention in hyperspectral unmixing, and autoencoders have been effectively designed to solve the unsupervised unmixing scenario. However, their ability to extract physically meaningful endmembers remains limited, and their performance has not been satisfactory. In this article, we propose a novel two-stream network, termed TANet, to address these problems. First, superpixel segmentation is adopted as preprocessing to extract endmember bundles from the image. The first stream then learns a mapping from the pseudo-pure pixels to their corresponding abundances, while the second stream applies the same autoencoder with untied weights to the original pixel data to minimize reconstruction error. By learning from pure or nearly pure candidate pixels to correct the unmixing weights, the proposed TANet achieves more accurate and interpretable unmixing. Extensive experiments on both synthetic and real hyperspectral data demonstrate that TANet outperforms other state-of-the-art approaches.
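The core building block shared by such unmixing streams is a linear-mixing autoencoder whose encoder outputs sum-to-one abundances and whose bias-free linear decoder holds the endmember estimates. The sketch below illustrates that single-stream core only; the superpixel-based endmember bundles, the untied second stream, and all sizes are omitted or assumed.

```python
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    """Unmixing autoencoder sketch: encoder -> abundances (softmax enforces
    non-negativity and sum-to-one), bias-free linear decoder -> endmembers."""
    def __init__(self, n_bands=224, n_endmembers=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_endmembers),
        )
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)  # weight.T rows = endmember estimates

    def forward(self, x):
        abundances = torch.softmax(self.encoder(x), dim=-1)
        return self.decoder(abundances), abundances

model = UnmixingAE()
pixels = torch.rand(64, 224)                     # 64 pixels, 224 spectral bands (assumed)
recon, abundances = model(pixels)
loss = nn.functional.mse_loss(recon, pixels)     # linear-mixing reconstruction error
endmembers = model.decoder.weight.t()            # (n_endmembers, n_bands) estimates
```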
ISBN (print): 9781450397148
Credit card fraud has become a serious issue for banks and businesses as more and more transactions take place online. A number of machine learning methods have been developed to learn patterns of credit card fraud. These methods usually depend on sophisticated feature engineering, because representative features improve their performance. However, manual feature engineering becomes impractical given the ever-increasing amount of data and the complicated relationships between transactions. In this work, a Sparse Autoencoder-Support Vector Machine (SAE-SVM) model is proposed to address the feature engineering problem. The SAE is an unsupervised machine learning method that learns representative features from the raw data; the SVM then uses these features to predict whether a transaction is fraudulent. The SAE-SVM method achieves an F2 score of 0.80 on the Credit Card Fraud Detection dataset on Kaggle, compared with only 0.71 for the SVM alone. In addition, the SAE-SVM model outperforms other autoencoder-based models in terms of F2. It is shown that the SAE can extract useful, low-dimensional features without much loss of information. Deploying the SAE may free supervised machine learning from complicated feature engineering. Furthermore, the SAE model is robust to concept drift, making it preferable to manual feature engineering.
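A minimal sketch of the two-stage idea (unsupervised sparse feature learning followed by a supervised SVM) is shown below in PyTorch and scikit-learn. It is not the published model: the input dimensionality, code size, sparsity weight, and SVM settings are placeholder assumptions, and the labels here are synthetic.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SparseAE(nn.Module):
    """Sparse autoencoder sketch; sparsity is encouraged by an L1 penalty on the code."""
    def __init__(self, n_in=30, n_code=12):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_code), nn.Sigmoid())
        self.dec = nn.Linear(n_code, n_in)

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

X = torch.rand(256, 30)                      # toy transaction features
y = np.zeros(256, dtype=int); y[:13] = 1     # toy imbalanced fraud labels

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                          # unsupervised pre-training on raw features
    recon, code = model(X)
    loss = nn.functional.mse_loss(recon, X) + 1e-3 * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

features = model.enc(X).detach().numpy()     # learned low-dimensional representation
clf = SVC(kernel="rbf", class_weight="balanced").fit(features, y)
```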
Recently, data-driven fault detection and diagnosis (FDD) technologies have been studied extensively to detect faults early and maintain the health of building automation systems (BASs). Among the various algorithms for building FDD systems, the autoencoder (AE) is widely used as an unsupervised deep-learning method. Conventional AE-based FDD methods can use two types of information generated by the AE structure: (1) the residual matrix (REM) and (2) the latent space matrix (LSM). However, fundamental discussions of AE structures are rare, and the use of the REM versus the LSM for building FDD models has seldom been compared. In this study, AE-based FDD methods are suggested, and quantitative comparisons were conducted under designed fault conditions and real operational faults (hunting). AE-based fault detection models were designed with varying AE latent space dimensionality. For the fault diagnosis models, REM- and LSM-based models were used, each subdivided by the AE latent space dimension. The detection models showed no meaningful performance differences across the designed cases. For the diagnosis models, however, the LSM-based models performed 14.4% better than the REM-based models, and the latent space dimension caused the model performance to vary by as much as 21.5%. Two main issues, training data dependency and latent space dimensionality, were identified and investigated to improve the performance of AE-based FDD. Modeling guidelines are suggested based on the findings; these are valuable for successful FDD application with limited working sensors and datasets in real BASs.
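The sketch below shows, under assumed sensor counts and an assumed threshold rule, how both information sources fall out of a single trained autoencoder: the residual matrix (REM) is the input minus its reconstruction, and the latent space matrix (LSM) is the encoder output. The training loop and the diagnosis classifiers built on REM/LSM are omitted.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Plain autoencoder whose outputs give both FDD information sources."""
    def __init__(self, n_sensors=20, n_latent=5):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_sensors, 12), nn.ReLU(),
                                 nn.Linear(12, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 12), nn.ReLU(),
                                 nn.Linear(12, n_sensors))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = AE()                                  # assume it has been trained on normal operation data
samples = torch.rand(500, 20)                 # BAS sensor snapshots (assumed 20 sensors)
recon, lsm = model(samples)                   # lsm: latent space matrix
rem = samples - recon                         # rem: residual matrix

# fault detection: flag samples whose reconstruction error exceeds a threshold
# calibrated on normal data (mean + 3 std is an assumed rule)
err = rem.pow(2).mean(dim=1)
threshold = err.mean() + 3 * err.std()
is_fault = err > threshold
```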
Anomaly detection in real time using autoencoders implemented on edge devices is exceedingly challenging due to limited hardware, energy, and computational resources. We show that these limitations can be addressed by designing an autoencoder with low-resolution non-volatile memory-based synapses and employing an effective quantized neural network learning algorithm. We further propose nanoscale ferromagnetic racetracks with engineered notches hosting magnetic domain walls (DW) as exemplary non-volatile memory-based autoencoder synapses, where limited-state (5-state) synaptic weights are manipulated by spin-orbit torque (SOT) current pulses to write different magnetoresistance states. The anomaly detection performance of the proposed autoencoder model is evaluated on the NSL-KDD dataset. Training of the autoencoder that is aware of the limited resolution and the DW device stochasticity is performed, yielding anomaly detection performance comparable to an autoencoder with floating-point precision weights. While the limited number of quantized states and the inherent stochasticity of DW synaptic weights in nanoscale devices are typically known to degrade performance, our hardware-aware training algorithm is shown to leverage these imperfect device characteristics to improve anomaly detection accuracy (90.98%) relative to the accuracy obtained with extremely memory-intensive floating-point synaptic weights. Furthermore, our DW-based approach demonstrates a reduction of at least three orders of magnitude in weight updates during training compared with the floating-point approach, implying a significant reduction in operating energy. This work could stimulate the development of extremely energy-efficient non-volatile multi-state synapse-based processors that can perform real-time training and inference on the edge with unsupervised data.
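As an illustration of the quantized-weight idea only (not of the domain-wall device model or the authors' stochasticity-aware algorithm), the sketch below restricts a linear layer's weights to five evenly spaced states in the forward pass while letting gradients flow through unchanged (a straight-through estimator). The weight range is an assumption.

```python
import torch
import torch.nn as nn

def quantize_5_state(w, w_max=1.0):
    """Snap weights to 5 evenly spaced levels in [-w_max, w_max];
    a straight-through estimator keeps the backward pass differentiable."""
    steps = 4                                            # 5 states -> 4 intervals
    q = torch.round((w.clamp(-w_max, w_max) + w_max) / (2 * w_max) * steps)
    wq = q / steps * 2 * w_max - w_max
    return w + (wq - w).detach()                         # forward: wq, backward: identity

class QuantLinear(nn.Linear):
    """Linear layer that uses 5-state weights in its forward pass."""
    def forward(self, x):
        return nn.functional.linear(x, quantize_5_state(self.weight), self.bias)

layer = QuantLinear(41, 8)       # NSL-KDD has 41 raw features; encoding choices would change this size
out = layer(torch.rand(4, 41))
```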
Real-time electricity market data is highly volatile and very noisy. These properties make forecasting models difficult to develop, with traditional statistical models in particular affected by the "curse of dimensionality" for such data. However, autoencoders, neural networks specifically designed to reduce the noise and dimensionality of input data, may prove useful for advancing the accuracy of real-time price forecasting models. This paper studies the optimal design of such an autoencoder, developing a quadruple-branch, CNN-based autoencoder (QCAE) that is pre-trained and then directly linked to a forecasting model. The QCAE compresses the input data in both the time and feature directions. Ablation analyses verify the architecture of the QCAE, and its integration with the forecasting model is tested and validated on fifty generators in the New York Independent System Operator (NYISO) power grid. The QCAE forecasting framework outperforms benchmark and state-of-the-art models with an average improvement of 6.3% in sMAPE and 3.10% in MAE.
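The compression step can be pictured with the single-branch 1-D convolutional autoencoder sketched below; the actual QCAE uses four branches and compresses along both the time and feature axes, and the channel counts, kernel sizes, and window length here are assumptions.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Single-branch 1-D convolutional autoencoder: strided convolutions
    compress along the time axis; transposed convolutions reconstruct."""
    def __init__(self, n_features=10):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(8, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, n_features, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (batch, n_features, window_len)
        code = self.enc(x)                     # compressed representation fed to the forecaster
        return self.dec(code), code

model = ConvAE()
window = torch.randn(32, 10, 24)               # 32 samples, 10 market features, 24 time steps
recon, code = model(window)
loss = nn.functional.mse_loss(recon, window)   # pre-training objective before linking to the forecaster
```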
Anomaly detection plays an important role in video surveillance. An LSTM encoder-decoder can learn representations of video sequences and be applied to detecting abnormal events in complex environments. The representation is produced by the encoder and is crucial for the decoder. However, an LSTM encoder-decoder with a fixed-dimension representation generally fails to account for the global context of the learned representation. In this paper, we explore a hybrid autoencoder architecture that not only extracts better spatio-temporal context but also improves the extrapolation capability of the corresponding decoder through a shortcut connection. Experiments show that the hybrid model performs better than state-of-the-art anomaly detection methods both qualitatively and quantitatively on benchmark datasets.
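A minimal sketch of an LSTM encoder-decoder over per-frame feature vectors with an additive shortcut from the encoder to the decoder output is given below. It is not the paper's hybrid architecture; the feature dimension, hidden size, clip length, and the particular form of the shortcut are assumptions.

```python
import torch
import torch.nn as nn

class EncDecWithShortcut(nn.Module):
    """LSTM encoder-decoder on frame features; a shortcut adds the per-step
    encoder context to the decoder output before reconstruction."""
    def __init__(self, n_feat=256, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_feat, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_feat)

    def forward(self, x):                           # x: (batch, T, n_feat)
        enc_out, (h, _) = self.encoder(x)
        rep = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(rep)
        return self.proj(dec_out + enc_out)         # shortcut connection

model = EncDecWithShortcut()
frames = torch.randn(4, 16, 256)                    # 4 clips, 16 frames, 256-D frame features
recon = model(frames)
score = (recon - frames).pow(2).mean(dim=(1, 2))    # higher reconstruction error -> more anomalous clip
```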
Recent deep learning-based approaches vastly outperform traditional image descriptors. Deep learning models such as residual networks (ResNet) are well known for finding salient features. Although effective, such high-level descriptions often have high dimensionality, which increases computational overhead. Autoencoders find a useful approximation of the input data without losing critical information. Considering this, we propose a content-based image retrieval system for natural color images using a deep stacked sparse autoencoder (DSSA). The DSSA model learns latent features in an unsupervised way from the high-level description obtained using ResNet. For a simple distance-based retrieval approach, the DSSA model achieves a nearly 50% reduction in feature size compared with the full-length features while increasing accuracy. The retrieval efficacy of the learned latent features is also evaluated for two classifier-based methods using a Softmax classifier. Further, this study investigates the impact of unsupervised feature learning on retrieval using three benchmark natural color image databases of varying complexity, viz., Corel-1K, Corel-10K, and Canadian Institute for Advanced Research (CIFAR)-10. The latent features learned by the DSSA model with the fuzzy class membership-based retrieval method achieve promising improvements and yield highly competitive retrieval performance on the large CIFAR-10 database. (C) 2022 SPIE and IS&T
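The sketch below shows the general pattern: a stacked autoencoder compresses pre-extracted 2048-D ResNet descriptors to roughly half their size, and retrieval runs on the latent codes by distance. It is not the DSSA configuration; the layer widths are assumed, the descriptors are stand-in random tensors, and the sparsity penalty used during training is only noted in a comment.

```python
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    """Two-layer stacked autoencoder compressing 2048-D descriptors to 1024-D.
    During training, an L1 penalty on the code would enforce sparsity (omitted here)."""
    def __init__(self, n_in=2048, n_hidden=1536, n_code=1024):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid(),
                                 nn.Linear(n_hidden, n_code), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(n_code, n_hidden), nn.Sigmoid(),
                                 nn.Linear(n_hidden, n_in))

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

gallery = torch.rand(1000, 2048)                 # stand-in for ResNet descriptors of 1000 images
model = StackedAE()
codes = model.enc(gallery).detach()              # compact retrieval features

# simple distance-based retrieval: rank gallery images by distance to a query code
query = codes[0].unsqueeze(0)
dists = torch.cdist(query, codes).squeeze(0)
top10 = dists.topk(10, largest=False).indices    # indices of the 10 nearest images
```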
Objective: To introduce an MRI in-plane resolution enhancement method that estimates High-Resolution (HR) MRIs from Low-Resolution (LR) MRIs. Method & Materials: Previous CNN-based MRI super-resolution methods lose input image information due to the pooling layer. An autoencoder-inspired Convolutional Network-based Super-resolution (ACNS) method was developed with a deconvolution layer that extrapolates the missing spatial information via convolutional neural network-based nonlinear mapping between LR and HR features of the MRI. Simulation experiments were conducted with virtual phantom images and thoracic MRIs from four volunteers. The Peak Signal-to-Noise Ratio (PSNR), Structural Similarity index (SSIM), Information Fidelity Criterion (IFC), and computation time were compared among ACNS, the Super-Resolution Convolutional Neural Network (SRCNN), the Fast Super-Resolution Convolutional Neural Network (FSRCNN), and the Deeply-Recursive Convolutional Network (DRCN). Results: ACNS achieved PSNR, SSIM, and IFC results comparable to SRCNN, FSRCNN, and DRCN. However, under the computer setup used, the average computation speed of ACNS was 6, 4, and 35 times faster than SRCNN, FSRCNN, and DRCN, respectively, with an actual average computation time of 0.15 s per 100 x 100 pixels. Conclusion: The results of this study imply the potential application of ACNS to real-time resolution enhancement of 4D MRI in MRI-guided radiation therapy.
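A pooling-free network with convolutional feature mapping followed by a deconvolution (transposed convolution) upsampling layer can be sketched as below. This illustrates the general design rather than the published ACNS: layer counts, channel widths, kernel sizes, and the 2x upscaling factor are assumptions.

```python
import torch
import torch.nn as nn

class DeconvSR(nn.Module):
    """Pooling-free super-resolution sketch: convolutional nonlinear mapping on
    the LR image, then a transposed convolution that upsamples by 2x."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1)

    def forward(self, lr):                          # lr: (batch, 1, H, W)
        return self.upsample(self.features(lr))     # HR estimate: (batch, 1, 2H, 2W)

model = DeconvSR()
lr_slice = torch.rand(1, 1, 100, 100)               # single-channel LR MRI slice
hr_estimate = model(lr_slice)                       # shape (1, 1, 200, 200)
```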
Electromagnetic (EM) metasurfaces have attracted great attention from both engineers and researchers due to their unique physical responses. With the rapid development of complex metasurfaces, the design and optimization processes have become extremely time-consuming and computationally expensive. Here we propose a deep learning model (DLM) based on a convolutional autoencoder network and an inverse design network, which helps establish the complex relationships between the geometries of metasurfaces and their EM responses. As a typical example, a metasurface absorber consisting of alternating polymethacrylimide foam/metal ring multilayers is chosen to demonstrate the capability of the DLM. The relative spectral errors for the two desired spectra are only 5.80 and 5.49, respectively. Our model shows great predictive power and may serve as an effective tool to accelerate the design and optimization of metasurfaces.
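The coupling between the two components can be pictured with the sketch below: a 1-D convolutional autoencoder compresses a sampled EM response into a latent code, and a small inverse-design network maps that code to geometry parameters. The spectrum length, latent size, and the six output parameters are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpectrumAE(nn.Module):
    """1-D convolutional autoencoder over a sampled absorption spectrum."""
    def __init__(self, n_points=200, n_latent=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * (n_points // 2), n_latent),
        )
        self.dec = nn.Linear(n_latent, n_points)

    def forward(self, s):                            # s: (batch, 1, n_points)
        code = self.enc(s)
        return self.dec(code), code

# inverse-design head: latent code -> geometric parameters (e.g. layer thicknesses; count assumed)
inverse_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 6))

spectra = torch.rand(32, 1, 200)                     # 32 sampled EM responses (assumed sampling grid)
ae = SpectrumAE()
recon, code = ae(spectra)
loss = nn.functional.mse_loss(recon, spectra.squeeze(1))
geometry = inverse_net(code)                         # predicted geometry parameters
```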