Iterative learning control (ILC) can yield superior performance for repetitive tasks while requiring only approximate models, making this control strategy very appealing for industry. However, applying it to non-linear systems involves solving optimization problems, which limits industrial uptake, especially for learning online to compensate for variations throughout the system's lifetime. Industry tackles this by designing simple rule-based learning controllers. However, these are often designed in an ad-hoc manner, which potentially limits performance. In this paper, we couple a low-dimensional parametrized learning control algorithm with a generic signal parametrization method based on machine learning, specifically autoencoders. This allows high control performance while limiting implementation complexity and maintaining interpretability, paving the way for higher industrial uptake of learning control for non-linear systems. We illustrate the parametrized approach in simulation on a non-linear slider-crank system, and provide an example of using the learning approach to perform a tracking task for this system.
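As a rough illustration of learning in a low-dimensional autoencoder code space rather than over a full signal, the sketch below trains a dense autoencoder on a toy family of correction signals and then runs a gradient-style, iteration-domain update on only three latent parameters. The layer sizes, the sinusoidal toy signals, and the trivial identity "plant" are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy family of correction signals used to learn the low-dimensional parametrization
N = 200
amps = np.linspace(0.5, 1.5, 50)
signals = np.array([a * np.sin(np.linspace(0, 2 * np.pi, N)) for a in amps]).astype("float32")

encoder = tf.keras.Sequential([layers.Dense(3, activation="tanh", input_shape=(N,))])
decoder = tf.keras.Sequential([layers.Dense(N, input_shape=(3,))])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(signals, signals, epochs=300, verbose=0)

# Iteration-domain learning acting on only 3 parameters (assumed identity "plant")
reference = tf.constant(signals[25:26])                      # toy reference trajectory
theta = tf.Variable(encoder.predict(signals[:1], verbose=0))
opt = tf.keras.optimizers.SGD(learning_rate=0.5)
for _ in range(50):
    with tf.GradientTape() as tape:
        u = decoder(theta)                                   # decode parameters into a full signal
        loss = tf.reduce_mean((reference - u) ** 2)          # tracking error
    opt.apply_gradients([(tape.gradient(loss, theta), theta)])
print("remaining RMS tracking error:", float(tf.sqrt(loss)))
```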
ISBN (print): 9781538632000
Distinguishing and classifying different types of malware is important to better understand how they can infect computers and devices, the threat level they pose, and how to protect against them. In this paper, a system for classifying malware programs is presented. The paper describes the architecture of the system and assesses its performance on a publicly available database (provided by Microsoft for the Microsoft Malware Classification Challenge BIG2015) to serve as a benchmark for future research efforts. First, the malicious programs are preprocessed so that they are visualized as grayscale images. We then make use of an architecture composed of multiple layers (multiple levels of encoding) to carry out the classification of those images/programs. We compare the performance of this approach against traditional machine learning and pattern recognition algorithms. Our experimental results show that the deep learning architecture yields a boost in performance over those conventional/standard algorithms. A hold-out validation analysis using the superior architecture shows an accuracy on the order of 99.15%.
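A minimal sketch of the byte-to-image preprocessing with a small classifier in its place; the fixed 256x256 image size, the example helper, and the CNN layout are illustrative assumptions rather than the architecture evaluated in the paper (the 9-class output matches the BIG2015 malware families).

```python
import numpy as np
import tensorflow as tf

def bytes_to_grayscale(path, width=256, height=256):
    """Fold a binary file's bytes into a fixed-size grayscale image in [0, 1]."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    data = np.resize(data, width * height)            # repeat or truncate to a fixed length
    return data.reshape(height, width).astype("float32") / 255.0

# Small illustrative CNN classifier over the byte images (layout is an assumption)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(9, activation="softmax"),   # BIG2015 defines 9 malware families
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# images = np.stack([bytes_to_grayscale(p) for p in sample_paths])[..., None]  # hypothetical paths
# model.fit(images, labels, epochs=10)
```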
Flexibility is often a key determinant of protein function. To elucidate the link between a protein's molecular structure and its role in an organism, computational techniques such as molecular dynamics can be leveraged to characterize its conformational space. Extensive sampling is, however, required to obtain reliable results that are useful for rationalizing experimental data or predicting outcomes before experiments are carried out. We demonstrate that a generative neural network trained on protein structures produced by molecular simulation can be used to obtain new, plausible conformations complementing preexisting ones. As a demonstration, we show that a trained neural network can be exploited in a protein-protein docking scenario to account for broad hinge motions taking place upon binding. Overall, this work shows that neural networks can be used as an exploratory tool for the study of molecular conformational space.
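As a loose illustration of generating new conformations from a network trained on simulation data, the sketch below uses a plain (non-variational) autoencoder over flattened coordinates and latent-space interpolation; the random placeholder frames, layer sizes, and interpolation strategy are assumptions, not the paper's generative model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_atoms = 100
frames = np.random.randn(500, n_atoms * 3).astype("float32")   # placeholder for MD frames

encoder = tf.keras.Sequential([layers.Dense(64, activation="relu", input_shape=(n_atoms * 3,)),
                               layers.Dense(8)])
decoder = tf.keras.Sequential([layers.Dense(64, activation="relu", input_shape=(8,)),
                               layers.Dense(n_atoms * 3)])
model = tf.keras.Sequential([encoder, decoder])
model.compile(optimizer="adam", loss="mse")
model.fit(frames, frames, epochs=20, verbose=0)

# Interpolate between the latent codes of two sampled frames to propose new conformations
z = encoder.predict(frames[:2], verbose=0)
codes = np.array([(1 - t) * z[0] + t * z[1] for t in np.linspace(0, 1, 5)], dtype="float32")
new_conformations = decoder.predict(codes, verbose=0).reshape(-1, n_atoms, 3)
print(new_conformations.shape)   # (5, 100, 3): candidate structures to feed into docking
```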
Alleles of human leukocyte antigen (HLA)-A DNAs are classified and represented graphically using artificial intelligence, specifically deep learning with a stacked autoencoder. Nucleotide sequence data 822 bp in length, collected from the Immuno Polymorphism Database, were compressed to a two-dimensional representation and plotted. The two-dimensional plots show that the alleles form clusters and can therefore be classified. The two-dimensional plot of HLA-A DNAs gives a clear overview for characterizing the various alleles.
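A minimal sketch of the pipeline, assuming one-hot encoded 822 bp sequences, randomly generated toy sequences in place of the Immuno Polymorphism Database entries, and a jointly trained bottleneck network as a stand-in for a layer-wise pretrained stacked autoencoder.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import matplotlib.pyplot as plt

L = 822
base_to_onehot = {b: np.eye(4)[i] for i, b in enumerate("ACGT")}
seqs = ["".join(np.random.choice(list("ACGT"), L)) for _ in range(200)]      # toy stand-ins
onehot = np.array([[base_to_onehot[b] for b in s] for s in seqs],
                  dtype="float32").reshape(len(seqs), -1)                    # (200, 3288)

model = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(L * 4,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, name="code"),                  # 2-dimensional representation to plot
    layers.Dense(32, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(L * 4, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(onehot, onehot, epochs=30, verbose=0)

codes = tf.keras.Model(model.input, model.get_layer("code").output).predict(onehot, verbose=0)
plt.scatter(codes[:, 0], codes[:, 1])              # clusters would suggest allele groups
plt.show()
```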
ISBN (print): 9781538641606
We propose a novel lossy block-based image compression approach. Our approach builds on non-linear autoencoders that, when properly trained, can exploit non-linear statistical dependencies in the image blocks for redundancy reduction. In contrast, the DCT employed in JPEG is inherently restricted to exploiting linear dependencies within a second-order statistics framework. The coder is based on pre-trained, class-specific Restricted Boltzmann Machines (RBMs). These machines are statistical variants of neural network autoencoders that directly map pixel values in image blocks into coded bits. Decoders can be implemented with low computational complexity using a codebook design. Experimental results show that our RBM codec outperforms JPEG at high compression rates in terms of PSNR, SSIM, and subjective quality.
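A small sketch of the block-coding idea using scikit-learn's BernoulliRBM as a stand-in for the class-specific RBMs; the 8x8 block size, 32 hidden units (coded bits), random training blocks, and the direct bias-based decoder are assumptions rather than the paper's codebook design.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

blocks = np.random.rand(1000, 64)                  # toy 8x8 blocks with values in [0, 1]
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(blocks)

# Encoder: threshold the hidden-unit activations -> 32 coded bits per block
bits = (rbm.transform(blocks) > 0.5).astype(np.uint8)

# Decoder: map the coded bits back to pixel intensities through the RBM weights
reconstruction = sigmoid(bits @ rbm.components_ + rbm.intercept_visible_)
psnr = 10 * np.log10(1.0 / np.mean((blocks - reconstruction) ** 2))
print("toy PSNR:", round(psnr, 2), "dB")
```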
ISBN (print): 9781538661543
Deep learning (DL) techniques have the potential to make communication systems more efficient and to solve many problems in the physical layer. In this paper, an optical wireless communications (OWC) system based on visible light communications (VLC) technology is implemented using an autoencoder (AE). The proposed system is tested in different scenarios using various AE parameters and applied to an indoor VLC model. The bit error rate (BER) is evaluated with respect to the signal-to-noise ratio (SNR) at different locations within the room. To validate the proposed system, theoretical results are compared to the simulated values. The bit-error performance demonstrates the viability of DL techniques in VLC systems.
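A minimal end-to-end autoencoder transceiver in the spirit described above, assuming 16 messages, 8 channel uses, and a plain additive Gaussian noise channel instead of the paper's indoor VLC model; the layer sizes and noise level are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

M, n = 16, 8                                       # 16 possible messages, 8 channel uses
msgs = np.random.randint(0, M, 20000)
x = np.eye(M)[msgs].astype("float32")              # one-hot messages

model = tf.keras.Sequential([
    layers.Dense(M, activation="relu", input_shape=(M,)),
    layers.Dense(n),                               # transmitter output (channel symbols)
    layers.BatchNormalization(),                   # crude stand-in for a power constraint
    layers.GaussianNoise(0.3),                     # AWGN channel (active during training only)
    layers.Dense(M, activation="relu"),
    layers.Dense(M, activation="softmax"),         # receiver
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, x, epochs=10, batch_size=256, verbose=0)

# Evaluate a toy symbol error rate by re-running the chain with explicit noise
transmitter = tf.keras.Model(model.input, model.layers[2].output)
receiver = tf.keras.Sequential(model.layers[4:])   # skip the training-only noise layer
noisy = transmitter.predict(x, verbose=0) + np.random.normal(0, 0.3, (len(x), n))
pred = receiver.predict(noisy, verbose=0).argmax(axis=1)
print("toy symbol error rate:", float(np.mean(pred != msgs)))
```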
ISBN (print): 9781450359528
Recent work has shown that it is possible for two wearable devices worn by the same user to generate a common key for secure pairing by exploiting gait as a common secret. A key challenge for such device pairing lies in matching the bits of the keys generated by two independent devices despite the noisy on-board sensor measurements. We propose a novel machine learning framework that uses an autoencoder to help one device predict the sensor observations at another device and generate the key using the predicted sensor data. We prototype the proposed method and evaluate it using real subjects. Our results show that the proposed method achieves a 10% increase in bit agreement rate between two keys generated independently by two different wearable devices.
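A toy sketch of the bit-agreement idea, assuming a synthetic gait signal, a median-threshold quantizer, and a small dense network standing in for the paper's autoencoder that predicts the other device's sensor observations before key bits are extracted.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def quantize(window):
    return (window > np.median(window)).astype(np.uint8)       # 1 key bit per sample

t = np.linspace(0, 40 * np.pi, 8000)
gait = np.sin(t) + 0.3 * np.sin(3 * t)                         # shared gait motion
dev_a = (gait + 0.2 * np.random.randn(t.size)).reshape(-1, 100).astype("float32")
dev_b = (gait + 0.2 * np.random.randn(t.size)).reshape(-1, 100).astype("float32")

baseline = np.mean([np.mean(quantize(a) == quantize(b)) for a, b in zip(dev_a, dev_b)])

# Predict device B's window from device A's window, then quantize the prediction
predictor = tf.keras.Sequential([layers.Dense(32, activation="relu", input_shape=(100,)),
                                 layers.Dense(100)])
predictor.compile(optimizer="adam", loss="mse")
predictor.fit(dev_a, dev_b, epochs=50, verbose=0)

pred_b = predictor.predict(dev_a, verbose=0)
improved = np.mean([np.mean(quantize(p) == quantize(b)) for p, b in zip(pred_b, dev_b)])
print("bit agreement: baseline", round(float(baseline), 3),
      "with predictor", round(float(improved), 3))
```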
ISBN (print): 9781538674161
To address the semantic gap in content-based image retrieval (CBIR), and inspired by the success of convolutional neural networks (CNNs) in image classification and detection, this paper proposes a simple and effective hybrid model of a deep convolutional network and an autoencoder network. The model uses the CNN to extract high-level semantic features of the image, then uses a deep autoencoder network to reduce the dimension of the extracted features, compressing them into a 128-bit vector representation. Approximate nearest neighbor (ANN) search is an effective strategy for large-scale image retrieval. This paper uses the Annoy algorithm to compute the similarity between the query image and the indexed images, and returns results in descending order of similarity. Experimental results show that the proposed method outperforms some of the latest deep-network image retrieval algorithms on the CIFAR-10 and MNIST datasets. In the top-10 image search, the MNIST dataset achieves 100% accuracy. In the CIFAR experiments, the accuracy and recall on the CIFAR4 dataset are as high as 99.9%, and the accuracy and recall on CIFAR-10 reach 97.2% and 98.1%, respectively. In addition, the size of the convolutional network's parameters and the size of the index are reduced compared with the previous model, so that real-time response on the order of seconds can be achieved when searching a collection of around 10,000 images.
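A minimal sketch of the retrieval pipeline, assuming random vectors as stand-ins for the CNN features, a dense autoencoder compressing them to 128 dimensions, and an Annoy index queried with an angular (cosine-style) metric; the feature dimensionality and tree count are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from annoy import AnnoyIndex

features = np.random.rand(5000, 2048).astype("float32")        # stand-in for CNN features

encoder = tf.keras.Sequential([layers.Dense(512, activation="relu", input_shape=(2048,)),
                               layers.Dense(128)])
decoder = tf.keras.Sequential([layers.Dense(512, activation="relu", input_shape=(128,)),
                               layers.Dense(2048)])
autoenc = tf.keras.Sequential([encoder, decoder])
autoenc.compile(optimizer="adam", loss="mse")
autoenc.fit(features, features, epochs=5, verbose=0)

# Index the 128-dimensional codes with Annoy for approximate nearest-neighbor search
codes = encoder.predict(features, verbose=0)
index = AnnoyIndex(128, "angular")
for i, c in enumerate(codes):
    index.add_item(i, c)
index.build(10)                                                 # 10 trees (illustrative)

query = encoder.predict(features[:1], verbose=0)[0]
print("top-10 neighbours:", index.get_nns_by_vector(query, 10))
```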
ISBN (print): 9781538615010
Radar signals are time series whose features include pulse repetition interval, pulse width, and pulse amplitude. After they are received by electronic warfare systems, their features are classified and stored in a database. This procedure gives the system user insight if the same signal is received again in the future. For this classification purpose, three algorithms were implemented. The first is a combined network consisting of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network; the second is a hybrid network that augments the first with a CNN trained in parallel on histogram data; the last is a stacked autoencoder. Performance analysis was carried out on real radar data. The best performer is the hybrid network, which reached 99.6% accuracy, as it makes use of histogram data and an LSTM for the time-series problem. The convolutional LSTM reached 98.3%, while the stacked autoencoder achieved 87% accuracy.
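A minimal sketch of the first (CNN + LSTM) network, assuming a 128-step pulse-descriptor sequence with three channels (pulse repetition interval, pulse width, pulse amplitude) and five emitter classes; the layer sizes are illustrative, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_features, n_classes = 128, 3, 5
model = tf.keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(32, 5, activation="relu", padding="same"),   # local pulse-train patterns
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.LSTM(64),                                           # temporal aggregation
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```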
In this paper, we describe an intrusion detection algorithm based on deep learning for industrial control networks, aiming at the security problem of industrial control systems. Deep learning is a kind of intelligent algorithm with the ability to learn automatically, using self-learning to enhance its experience and dynamic classification ability. The ideology of deep learning is similar to that of intrusion detection: to improve the detection rate and reduce the false alarm rate through learning. A sparse autoencoder-extreme learning machine intrusion detection model is proposed for the intrusion detection problem. It uses a deep autoencoder, combining the sparsity penalty and the reconstruction loss of the encoding layer, to extract features from high-dimensional data during model training, and then uses an extreme learning machine to quickly and effectively classify the extracted features. The accuracy of the algorithm is verified on a standard industrial control intrusion detection data set. The experimental results verify that the method can effectively improve the performance of the intrusion detection system and reduce the false alarm rate.
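A minimal sketch of the sparse autoencoder + extreme learning machine pipeline, assuming synthetic traffic features and labels, an L1 activity penalty as the sparsity term, and a closed-form least-squares solution for the ELM output weights; none of the sizes come from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

X = np.random.rand(2000, 40).astype("float32")        # stand-in network-traffic features
y = np.random.randint(0, 2, 2000)                     # 0 = normal, 1 = intrusion (toy labels)

# Sparse autoencoder: L1 activity penalty on the encoding layer plus reconstruction loss
encoder = tf.keras.Sequential([layers.Dense(16, activation="sigmoid", input_shape=(40,),
                                            activity_regularizer=regularizers.l1(1e-4))])
sae = tf.keras.Sequential([encoder, layers.Dense(40)])
sae.compile(optimizer="adam", loss="mse")
sae.fit(X, X, epochs=20, verbose=0)
features = encoder.predict(X, verbose=0)

# Extreme learning machine: random hidden layer, output weights solved by least squares
rng = np.random.default_rng(0)
W, b = rng.normal(size=(16, 200)), rng.normal(size=200)
H = np.tanh(features @ W + b)
beta = np.linalg.pinv(H) @ np.eye(2)[y]               # closed-form output weights
pred = (H @ beta).argmax(axis=1)
print("training accuracy on toy data:", float(np.mean(pred == y)))
```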