In manufacturing industries, monitoring complex devices often necessitates automated methods that can leverage the multivariate time series data produced by the machines. However, analyzing these data can be challenging due to varying noise levels and possible nonlinear relations between the process variables, requiring appropriate tools to handle such properties.
This thesis proposes a deep learning-based approach to detect anomalies and interpret their root causes from multivariate time series data, which can be applied in a near real-time setting. The proposed approach extends an existing model from the literature, which employs a variational autoencoder architecture and recurrent neural networks to capture both stochasticity and temporal relations of the data.
The anomaly detection and root cause interpretation performance of the proposed method is compared against five baseline algorithms previously proposed in the literature using real-world data collected from plastic injection molding machines and artificially generated multivariate time series data.
The results of this thesis show that the proposed method performs well on the evaluated multivariate time series datasets, mostly outperforming the baseline methods. Additionally, the approach performed best among the selected methods at providing root cause interpretations of the detected anomalies. The experiments conducted in this thesis suggest that deep learning-based algorithms are beneficial for anomaly detection in scenarios where the problem is too complicated for traditional methods and enough training data is available. However, the amount of real-world injection molding machine data used in the experiments is relatively small, and therefore further experiments should be performed with larger datasets to obtain more generalizable results.
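The core idea behind such detectors, scoring each observation by how poorly a model of normal behavior reconstructs it, can be sketched without the VAE/RNN machinery. The following is a minimal illustration that uses a linear autoencoder (PCA) as a stand-in for the learned model; the toy data and all names are illustrative, not the thesis's actual pipeline.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    # Fit a k-dimensional linear autoencoder (PCA): keep the top-k
    # principal directions of the centered training data.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T            # (mean, d x k projection matrix)

def anomaly_scores(X, mu, W):
    # Anomaly score = squared reconstruction error per observation;
    # points far from the "normal" subspace reconstruct poorly.
    Z = (X - mu) @ W
    X_hat = Z @ W.T + mu
    return ((X - X_hat) ** 2).sum(axis=1)

# Toy "normal" multivariate data.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
mu, W = fit_linear_autoencoder(train, k=2)

# Inject large deviations to simulate faulty machine behavior.
faulty = train[:50] + rng.normal(scale=5.0, size=(50, 4))
assert anomaly_scores(faulty, mu, W).mean() > anomaly_scores(train, mu, W).mean()
```

In the thesis's setting the linear model would be replaced by the recurrent VAE and the score by a reconstruction-likelihood-based quantity, but the thresholding logic applied to the score is the same.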
Background: One of the main concerns of public health surveillance is to preserve the physical and mental health of older adults while supporting their independence and privacy. On the other hand, to better assist those individuals with essential health care services in the event of an emergency, their regular activities should be monitored. Internet of Things (IoT) sensors may be employed to track the sequence of activities of individuals via ambient sensors, providing real-time insights on daily activity patterns and easy access to the data through the connected ecosystem. Previous studies aiming to identify the regular activity patterns of older adults were limited by small numbers of participants, short activity-tracking periods, and heavy reliance on predefined notions of normal activity. Objective: The objective of this study was to overcome the aforementioned challenges by performing a pilot study to evaluate the utilization of large-scale data from smart home thermostats that collect the motion status of individuals at 5-minute intervals over a long period of time. Methods: From a large-scale dataset, we selected a group of 30 households that met the inclusion criteria (having at least 8 sensors, being connected to the system for at least 355 days in 2018, and having up to 4 occupants). The indoor activity patterns were captured through motion sensors. We used the unsupervised, time-aware deep neural network architecture long short-term memory variational autoencoder (LSTM-VAE) to identify the regular activity pattern for each household on 2 time scales: annual and weekday. The results were validated using 2019 records. The area under the curve as well as the loss in 2018 were consistent with the 2019 schedule. Daily abnormal behaviors were identified based on deviation from the regular activity model. Results: The utilization of this approach not only enabled us to identify the regular activity pattern for each household but also provided other insights by assessing sleep beh
Missing data values and differing sampling rates, particularly for important parameters such as particle size and stream composition, are a common problem in minerals processing plants. Missing data imputation is used to avoid information loss (due to downsampling or discarding incomplete records). A recent deep-learning technique, the variational autoencoder (VAE), has been used for missing data imputation in image data, and was compared here to imputation by mean replacement and by principal component analysis (PCA) imputation. The techniques were compared using a synthetic, nonlinear dataset, and a simulated milling circuit dataset, which included process disturbances, measurement noise, and feedback control. Each dataset was corrupted with missing values in 20% of records (lightly corrupted) and in 90% of records (heavily corrupted). For both lightly and heavily corrupted datasets, the root mean squared error of prediction for VAE imputation was lower than that of the traditional methods. Possibilities for the extension of missing data imputation to inferential sensing are discussed. (C) 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
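The contrast between the two baseline imputers can be sketched compactly. Below is a dependency-free illustration (not the paper's code) comparing mean replacement against iterative PCA imputation on toy correlated data; the VAE imputer follows the same fill-reconstruct-refill loop with a nonlinear model in place of the SVD.

```python
import numpy as np

def mean_impute(X):
    # Replace NaNs with the column mean computed from observed entries.
    out = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = col_means[cols]
    return out

def pca_impute(X, k=1, n_iter=50):
    # Iterative PCA imputation: alternate between a low-rank
    # reconstruction and re-filling only the missing entries.
    mask = np.isnan(X)
    out = mean_impute(X)
    for _ in range(n_iter):
        mu = out.mean(axis=0)
        U, S, Vt = np.linalg.svd(out - mu, full_matrices=False)
        recon = (U[:, :k] * S[:k]) @ Vt[:k] + mu
        out[mask] = recon[mask]
    return out

# Correlated toy data: second column is a noisy copy of the first.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200)])
X_miss = X.copy()
X_miss[::5, 1] = np.nan   # corrupt 20% of records

err_mean = np.abs(mean_impute(X_miss)[::5, 1] - X[::5, 1]).mean()
err_pca = np.abs(pca_impute(X_miss, k=1)[::5, 1] - X[::5, 1]).mean()
assert err_pca < err_mean   # structure-aware imputation wins here
```

The advantage of PCA (and, by extension, the VAE) over mean replacement comes entirely from exploiting correlation between variables, which mean replacement ignores.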
Machine learning techniques can help to represent and solve quantum systems. Learning the measurement outcome distribution of a quantum ansatz is useful for characterizing near-term quantum computing devices. In this work, we use a popular unsupervised machine learning model, the variational autoencoder (VAE), to reconstruct the measurement outcome distribution of a quantum ansatz. The number of parameters in the VAE is compared with the number of measurement outcomes. The numerical results show that the VAE can efficiently learn the measurement outcome distribution with few parameters. The influence of entanglement on the task is also revealed.
Here we report a method of finding multiple crystal structures similar to the known crystal structures of materials in databases through machine learning. The radial distribution function is used to represent the general characteristics of the known crystal structures, and the variational autoencoder is then employed to generate a set of representative crystal replicas defined in a two-dimensional optimal continuous space. For given chemical compositions and crystal volumes, we generate random crystal structures using constraints on crystal symmetry and atomic positions and directly compare their radial distribution functions with those of the known and/or replicated crystals. For selected crystal structures, energy minimization is subsequently performed through first-principles electronic structure calculations. This approach enables us to predict a set of new low-energy crystal structures using only the information in the radial distribution functions of the known structures.
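The radial distribution function fingerprint at the heart of this approach is simple to compute: a smoothed histogram of pairwise interatomic distances, which is invariant to translation and rotation of the structure. A minimal sketch (illustrative parameters, not the paper's exact descriptor):

```python
import numpy as np

def rdf_fingerprint(positions, r_max=5.0, n_bins=50, sigma=0.1):
    # Gaussian-broadened histogram of pairwise distances: a simple
    # radial distribution function used as a structure fingerprint.
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    d = d[np.triu_indices(len(positions), k=1)]   # unique pairs only
    r = np.linspace(0.0, r_max, n_bins)
    g = np.exp(-((r[:, None] - d[None, :]) ** 2) / (2 * sigma**2)).sum(axis=1)
    return g / g.sum()   # normalize so fingerprints are comparable

# A small cubic fragment: translating it leaves the fingerprint
# unchanged, while distorting the atomic positions changes it.
cube = np.array([[i, j, k] for i in range(2)
                 for j in range(2) for k in range(2)], dtype=float)
rng = np.random.default_rng(2)
distorted = cube + 0.2 * rng.normal(size=cube.shape)

assert np.allclose(rdf_fingerprint(cube), rdf_fingerprint(cube + 1.0))
assert not np.allclose(rdf_fingerprint(cube), rdf_fingerprint(distorted))
```

Fingerprints like this one are what the VAE compresses into the two-dimensional continuous space, so that similarity between candidate and known structures reduces to distance between vectors.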
Designers increasingly rely on parametric design studies to explore and improve structural concepts based on quantifiable metrics, generally either by generating design variations manually or using optimization methods. Unfortunately, both of these approaches have important shortcomings: effectively searching a large design space manually is infeasible, and design optimization overlooks qualitative aspects important in architectural and structural design. There is a need for methods that take advantage of computing intelligence to augment a designer's creativity while guiding, not forcing, their search for better-performing solutions. This research addresses this need by integrating conditional variational autoencoders in a performance-driven design exploration framework. First, a sampling algorithm generates a dataset of meaningful design options from an unwieldy design space. Second, a performance-conditioned variational autoencoder with a low-dimensional latent space is trained using the collected data. This latent space is intuitive for designers to explore even as it offers a diversity of high-performing design options.
Most existing image dehazing methods based learning are less able to perform well to real hazy *** important reason is that they are trained on synthetic hazy images whose distribution is different from real hazy *** ...
详细信息
Most existing image dehazing methods based learning are less able to perform well to real hazy *** important reason is that they are trained on synthetic hazy images whose distribution is different from real hazy *** relieve this issue, this paper proposes a new hazy scene generation model based on domain adaptation, which uses a variational autoencoder to encode the synthetic hazy image pairs and the real hazy images into the latent space to *** synthetic hazy image pairs guide the model to learn the mapping of clear images to hazy images, the real hazy images are used to adapt the synthetic hazy images' latent space to real hazy images through generative adversarial loss, so as to make the generative hazy images' distribution as close to the real hazy images' distribution as *** comparing the results of the model with traditional physical scattering models and Adobe Lightroom CC software, the hazy images generated in this paper is more *** end-to-end domain adaptation model is also very convenient to synthesize hazy images without depth *** traditional method to dehaze the synthetic hazy images generated by this paper, both SSIM and PSNR have been improved, proved that the effectiveness of our *** non-reference haze density evaluation algorithm and other quantitative evaluation also illustrate the advantages of our method in synthetic hazy images.
Constrained optimization problems can be difficult because their search spaces have properties not conducive to search, e.g., multimodality, discontinuities, or deception. To address such difficulties, considerable re...
详细信息
ISBN:
(纸本)9781450392686
Constrained optimization problems can be difficult because their search spaces have properties not conducive to search, e.g., multimodality, discontinuities, or deception. To address such difficulties, considerable research has been performed on creating novel evolutionary algorithms or specialized genetic operators. However, if the representation that defined the search space could be altered such that it only permitted valid solutions that satisfied the constraints, the task of finding the optimal would be made more feasible without any need for specialized optimization algorithms. We propose Constrained Optimization in Latent Space (COIL), which uses a VAE to generate a learned latent representation from a dataset comprising samples from the valid region of the search space according to a constraint, thus enabling the optimizer to find the objective in the new space defined by the learned representation. Preliminary experiments show promise: compared to an identical GA using a standard representation that cannot meet the constraints or find fit solutions, COIL with its learned latent representation can perfectly satisfy different types of constraints while finding high-fitness solutions.
Learning network representations is a fundamental task for many graph applications such as link prediction, node classification, graph clustering, and graph visualization. Many real-world networks are interpreted as d...
详细信息
ISBN:
(纸本)9783030438234;9783030438227
Learning network representations is a fundamental task for many graph applications such as link prediction, node classification, graph clustering, and graph visualization. Many real-world networks are interpreted as dynamic networks and evolve over time. Most existing graph embedding algorithms were developed for static graphs mainly and cannot capture the evolution of a large dynamic network. In this paper, we propose Dynamic joint variational Graph autoencoders (Dyn-VGAE) that can learn both local structures and temporal evolutionary patterns in a dynamic network. Dyn-VGAE provides a joint learning framework for computing temporal representations of all graph snapshots simultaneously. Each auto-encoder embeds a graph snapshot based on its local structure and can also learn temporal dependencies by collaborating with other autoencoders. We conduct experimental studies on dynamic real-world graph datasets and the results demonstrate the effectiveness of the proposed method.
暂无评论