In this paper, we analyze the capabilities of several multiscale convolutional autoencoder architectures for reduced-order modeling of two-dimensional unsteady turbulent flow over a cylinder and the collapse of a water column. The results demonstrate the importance of the multiscale convolution design for accuracy. However, multiscale convolution introduces millions of trainable parameters that require a large amount of memory, which increases the computational cost of the model. To address this problem, we propose using modified convolutions to reduce the number of trainable parameters. Among the variants considered, separable convolution offers the best trade-off between accuracy and computational efficiency. The encoder component of the architecture compresses the high-dimensional data into a low-dimensional latent space. The latent vectors are passed to a recurrent neural network and to the decoder for, respectively, the temporal evolution of the latent space and the reconstruction of the flow field. In addition, the results show that the GRU requires fewer parameters than the LSTM while maintaining the same accuracy.
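The parameter savings from separable convolution can be illustrated with a simple count (a generic sketch of depthwise-separable convolution, not the paper's exact architecture; the layer sizes below are illustrative):

```python
# Hypothetical parameter-count comparison: a standard 2D convolution
# versus a depthwise-separable convolution for the same channel shapes.

def conv2d_params(c_in, c_out, k):
    """Standard convolution: one k*k kernel per (in, out) channel pair, plus biases."""
    return c_in * c_out * k * k + c_out

def separable_conv2d_params(c_in, c_out, k):
    """Depthwise k*k convolution (one kernel per input channel)
    followed by a 1x1 pointwise convolution, each with biases."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

standard = conv2d_params(64, 128, 3)             # 73,856 parameters
separable = separable_conv2d_params(64, 128, 3)  # 8,960 parameters
print(standard, separable, round(standard / separable, 1))  # ~8x fewer
```

For a 64-to-128-channel 3x3 layer, the separable form needs roughly an eighth of the parameters, which is the memory-reduction mechanism the abstract alludes to.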
ISBN (print): 9783031234798; 9783031234804
In this paper, a neural network model is presented to identify fraudulent visits to a website, which differ significantly from visits by human users. Such unusual visits are most often made by automated software, i.e., bots. Bots are used to perform advertising scams or scraping, i.e., automated scanning of website content, frequently against the intentions of website authors. The model proposed in this paper works on data extracted directly from the web browser, using JavaScript, when a user or a bot visits a website. When bots visit the website, the collected parameter values differ significantly from the values collected during ordinary visits by human users. However, knowing these parameter values is not enough to identify bots, since bots are constantly modified and new, previously unseen values keep appearing; it is therefore impossible to know all the data that bots can generate. This paper therefore proposes a neural network with an autoencoder structure that detects parameter values deviating from the data learned from ordinary users, enabling the detection of anomalies, i.e., data generated by bots. The effectiveness of the presented model is demonstrated on authentic data extracted from several online stores.
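The core idea, scoring a visit by how poorly an autoencoder trained only on normal traffic reconstructs it, can be sketched with a linear autoencoder fitted by SVD (a stand-in for the paper's trained network; the feature data below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" browser-visit feature vectors lie near a low-dimensional subspace;
# a bot-like vector departs from it. A linear autoencoder fitted by SVD
# (equivalent to PCA) stands in for the paper's trained network.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))  # rank-2 data in 10-D
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
W = vt[:2]  # encoder: project onto the top-2 principal directions

def anomaly_score(x):
    """Reconstruction error: distance between x and its encode-decode image."""
    z = (x - mean) @ W.T   # encode into the 2-D latent space
    x_hat = z @ W + mean   # decode back to 10-D
    return float(np.linalg.norm(x - x_hat))

usual = normal[0]
bot_like = usual + 5.0 * rng.normal(size=10)  # off-subspace perturbation
print(anomaly_score(usual), anomaly_score(bot_like))  # near zero vs clearly larger
```

A threshold on this score separates traffic that resembles the training data from traffic that does not, without ever enumerating bot behaviors.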
As the regional economy develops rapidly, the ecological environment has developed a series of problems, and the characteristics of ecological resistance surfaces have become markedly differentiated. Against the background of advances in computer science, this research introduced a deep learning algorithm, the autoencoder, to analyze the ecological resistance surfaces over a 20-year period in 23 counties and districts of the Three Gorges Reservoir Area. The main results are as follows: (1) the overall resistance surfaces were at a low level, with high values concentrated in the core urban areas and along the Yangtze River. (2) The spatial distributions of the autoencoder results were smoother and more focused than the results of the analytic hierarchy process. (3) The XGBoost detection results indicated that land cover type was the highest contributing factor to the resistance. (4) Habitat quality analysis revealed a high degree of spatial heterogeneity, with the northeastern regions exhibiting more favorable conditions and the densely populated urban centers in the southwest displaying degraded quality. (5) The regionalization analysis outlined a key conservation area, a key restoration area, and a moderate development area, proposing a strategic framework of "One Axis, Two Zones and Four Protection Screens" to guide long-term ecological development in the research region.
Image anomaly detection (AD) is widely studied in computer vision. Detecting anomalies in high-dimensional data such as images, with noise and complex backgrounds, remains challenging when only imbalanced or incomplete data are available. Some deep learning methods can be trained in an unsupervised way, mapping the original input into low-dimensional manifolds so that anomalies exhibit larger differences than normal samples after dimension reduction. However, a single low-dimensional latent space has limited capacity to represent the relevant features, because noise and irrelevant features are mapped into the same space, so the resulting manifolds are not discriminative for detecting anomalies. To address this problem, this study proposes a new autoencoder framework, named LSP-CAE, with two trainable mutually orthogonal complementary subspaces in the latent space, obtained through a latent subspace projection (LSP) mechanism. Specifically, latent subspace projection is used to train the latent image subspace (LIS) and the latent kernel subspace (LKS) in the latent space of the autoencoder-like model, which enhances the learning of different features from the input instance. The features of normal data are projected into the latent image subspace, while the latent kernel subspace is trained end-to-end to absorb the information irrelevant to normal features. To verify the generality and effectiveness of the proposed method, the convolutional network is replaced with a fully connected network on real-world medical datasets. An anomaly score based on the projection norms in the two subspaces is used to evaluate anomalies at test time. The proposed method achieves the best performance on four public datasets in comparison with state-of-the-art methods.
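The projection-norm score over two mutually orthogonal subspaces can be sketched as follows (a toy illustration of the scoring idea only; the subspace bases here are random, whereas in the paper they are learned, and the dimensions are illustrative):

```python
import numpy as np

# Two mutually orthogonal complementary subspaces of an 8-D latent space,
# standing in for the learned LIS and LKS.
rng = np.random.default_rng(1)
d = 8
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # orthonormal basis of the latent space
B_image, B_kernel = Q[:, :5], Q[:, 5:]        # 5-D "image" and 3-D "kernel" subspaces

def score(z):
    """Anomaly score: fraction of the latent energy outside the normal-data subspace."""
    n_image = np.linalg.norm(B_image.T @ z)    # projection norm onto the LIS
    n_kernel = np.linalg.norm(B_kernel.T @ z)  # projection norm onto the LKS
    return n_kernel / (n_image + n_kernel)

z_normal = B_image @ rng.normal(size=5)  # latent code lying in the image subspace
z_anom = B_kernel @ rng.normal(size=3)   # latent code in the complementary subspace
print(score(z_normal), score(z_anom))    # ~0.0 vs ~1.0
```

Because the two bases are orthogonal, energy assigned to one subspace contributes nothing to the other, which is what makes the norm ratio discriminative.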
Purpose: The objective assessment of image quality (IQ) has been advocated for the analysis and optimization of medical imaging systems. One method of computing such IQ metrics is through a numerical observer. The Hotelling observer (HO) is the optimal linear observer, but conventional methods for obtaining the HO can become intractable due to large image sizes or insufficient data. Channelized methods are sometimes employed in such circumstances to approximate the HO; their performance varies, with different methods proving superior depending on the imaging conditions and the detection task. A channelized HO method using an autoencoder (AE) is presented and evaluated across several tasks to characterize its performance. Approach: The process of training an AE is shown to be equivalent to developing a set of channels for approximating the HO. The efficiency of the learned AE-channels is increased by modifying the conventional AE loss function to incorporate task-relevant information. Multiple binary detection tasks involving lumpy and breast-phantom backgrounds across varying dataset sizes are considered to evaluate the performance of the proposed method and compare it with current state-of-the-art channelized methods. Additionally, the ability of the channelized methods to generalize to images outside of the training dataset is investigated. Results: AE-learned channels are demonstrated to have performance comparable with other state-of-the-art channel methods in the detection studies and superior performance in the generalization studies. Incorporating a cleaner estimate of the signal for the detection task is also demonstrated to significantly improve the performance of the proposed method, particularly for datasets with fewer images. Conclusions: AEs are demonstrated to be capable of learning efficient channels for the HO, with a significant increase in detection performance for small dataset sizes when a cleaner signal estimate is incorporated.
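A channelized Hotelling observer can be sketched in a few lines (a minimal synthetic illustration, not the paper's method: the channels here are random plus one signal-matched channel, whereas the paper learns them with an AE, and the 1-D "images" are stand-ins):

```python
import numpy as np

# Minimal channelized Hotelling observer (CHO) on synthetic 1-D "images".
rng = np.random.default_rng(2)
n_pix, n_ch, n_img = 64, 6, 2000
signal = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 3.0) ** 2)  # known signal profile

U = rng.normal(size=(n_ch, n_pix)) / np.sqrt(n_pix)  # stand-in "channels"
U[0] = signal / np.linalg.norm(signal)               # include a signal-matched channel

g_absent = rng.normal(size=(n_img, n_pix))           # background-only images
g_present = g_absent + signal                        # signal-present images

v_a, v_p = g_absent @ U.T, g_present @ U.T           # channelized data (n_ch-dim)
S = np.cov(np.vstack([v_a, v_p]).T)                  # channel covariance estimate
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))    # Hotelling template

t_a, t_p = v_a @ w, v_p @ w                          # test statistics per image
# Empirical detectability: signal-to-noise ratio of the test statistic.
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(round(snr, 2))
```

Reducing the data to a handful of channels is what keeps the covariance inversion tractable; the quality of the channels (random, Laguerre-Gauss, or AE-learned) determines how much of the full HO's detectability survives the reduction.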
Innumerable approaches of deep learning-based COVID-19 detection systems have been suggested by researchers in the recent past, due to their ability to process high-dimensional, complex data, leading to more accurate ...
By using an autoencoder as a dimension reduction tool, an autoencoder-embedded Teaching-Learning Based Optimization (ATLBO) has been proved to be effective in solving high-dimensional computationally expensive problem...
Many worldwide changing events, including meteorology, weather forecasting, disaster response, and environmental monitoring, are tracked by states or companies via satellite imagery. An early response to disasters is critical for saving human lives, and artificial intelligence applications are used to make rapid assessments of large geographical regions. In this study, satellite images of flooded and undamaged structures from Hurricane Harvey were used. An autoencoder was applied to this dataset to reduce the noise in the satellite imagery. The AlexNet and VGG16 deep learning (DL) models were used to extract features from both datasets. The most effective features, selected by the Boruta feature selection algorithm, were classified with a support vector machine, and the highest classification accuracy of 99.35% was obtained. Since disaster response involves evaluating very large datasets covering wide geographic areas, representing the data with as few features as possible speeds up the process; by applying dimensionality reduction to the selected attributes, 98.29% accuracy was achieved with only 90 attributes. The proposed approach shows that DL and feature engineering are very effective for responding quickly to disaster areas using satellite imagery.
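The final stage, reducing extracted features to a small dimension and then classifying, can be sketched as follows (a synthetic stand-in: PCA plus a nearest-centroid rule replaces the paper's Boruta-selected features and SVM, and the feature data is generated, not Hurricane Harvey imagery):

```python
import numpy as np

# Reduce 512-D "deep features" to 90 components, then classify.
rng = np.random.default_rng(3)
n, d, k = 400, 512, 90                       # samples per class, raw dim, reduced dim

flooded = rng.normal(loc=1.0, size=(n, d))   # synthetic "flooded" feature vectors
intact = rng.normal(loc=-1.0, size=(n, d))   # synthetic "undamaged" feature vectors
X = np.vstack([flooded, intact])
y = np.array([1] * n + [0] * n)

mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ vt[:k].T                    # dimensionality reduction to k = 90

c1, c0 = Z[y == 1].mean(0), Z[y == 0].mean(0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = float((pred == y).mean())
print(accuracy)
```

The point of the sketch is the shape of the pipeline: once the classes are well separated in feature space, a 90-dimensional representation can retain nearly all of the discriminative information while being far cheaper to store and evaluate at disaster scale.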
Adversarial attacks on artificial neural network systems for image recognition are considered. To improve the security of image recognition systems against adversarial (evasion) attacks, the use of autoencoders is proposed. Various attacks are examined, and software prototypes of autoencoders with fully connected and convolutional architectures are developed as a means of defense against evasion attacks. The possibility of using the developed prototypes as a basis for designing autoencoders with more complex architectures is substantiated.
Single-cell RNA-sequencing technology (scRNA-seq) brings research to single-cell resolution. However, a major drawback of scRNA-seq is its large sparsity, i.e., expressed genes with no reads due to technical noise or limited sequencing depth during the scRNA-seq protocol. This phenomenon, also called 'dropout' events, likely affects downstream analyses such as differential expression analysis, the clustering and visualization of cell subpopulations, cellular trajectory inference, etc. Therefore, there is a need for a method to identify and impute these dropout events. We propose Bubble, which first identifies dropout events among all zeros based on the expression rate and coefficient of variation of genes within a cell subpopulation, and then leverages an autoencoder constrained by bulk RNA-seq data to impute only those values. Unlike other deep learning-based imputation methods, Bubble fuses the matched bulk RNA-seq data as a constraint to reduce the introduction of false positive signals. Using simulated and several real scRNA-seq datasets, we demonstrate that Bubble enhances the recovery of missing values and gene-to-gene and cell-to-cell correlations, and reduces the introduction of false positive signals. Regarding some crucial downstream analyses of scRNA-seq data, Bubble facilitates the identification of differentially expressed genes, improves the performance of clustering and visualization, and aids the construction of cellular trajectories. More importantly, Bubble provides fast and scalable imputation with minimal memory usage.
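The first step, distinguishing likely dropouts from true zeros using per-gene expression rate and coefficient of variation, can be sketched as follows (an illustration of the described criterion only; the thresholds and the synthetic count matrix are assumptions, not Bubble's actual values):

```python
import numpy as np

# Synthetic cells-by-genes count matrix with injected "dropouts".
rng = np.random.default_rng(4)
counts = rng.poisson(lam=5.0, size=(100, 50)).astype(float)  # 100 cells x 50 genes
counts[rng.random(counts.shape) < 0.3] = 0.0                 # zero out 30% of entries

expr_rate = (counts > 0).mean(axis=0)                        # per-gene expression rate
mean_expr = counts.mean(axis=0)
cv = counts.std(axis=0) / np.maximum(mean_expr, 1e-12)       # coefficient of variation

# A gene that is usually expressed and not wildly variable: its zeros are
# treated as dropout candidates for imputation; other zeros stay as true zeros.
dropout_gene = (expr_rate > 0.5) & (cv < 1.0)
candidate_mask = (counts == 0) & dropout_gene[None, :]
print(int(candidate_mask.sum()), "candidate dropout entries")
```

Only the entries in `candidate_mask` would then be passed to the imputation model, which is what limits the introduction of false positive signals compared with imputing every zero.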