Turbofan engines are known as the heart of the aircraft. As critical equipment, the health state of the engine determines the aircraft's operational safety; equipment monitoring and maintenance of the engine is an important part of ensuring the healthy and stable operation of the aircraft, and remaining useful life (RUL) prediction of the engine is an important part of this work. The monitoring data of turbofan engines have high dimensionality and a long time span, which makes predicting the remaining useful life of the engine difficult. This paper proposes a residual life prediction model based on an autoencoder and a temporal convolutional network (TCN). The autoencoder is used to reduce the dimensionality of the data and extract features from the engine monitoring data; the resulting low-dimensional data is then used to train the TCN network to predict the remaining useful life. The model is verified on the NASA public dataset (C-MAPSS) and compared with common machine learning methods and other deep neural networks. The experimental results show that the proposed model performs best under the evaluation metrics, a conclusion with important implications for engine health management.
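As a minimal sketch of the dimension-reduction stage described above (toy random data standing in for C-MAPSS sensor channels; the paper's actual model would use nonlinear layers and real monitoring data), a tied-weight linear autoencoder compressing 24 channels to an 8-dimensional code, trained by plain gradient descent on reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for engine monitoring data: 200 samples, 24 sensor channels.
X = rng.normal(size=(200, 24))

# Tied-weight linear autoencoder: 24 -> 8 -> 24.
W = rng.normal(scale=0.1, size=(24, 8))

def loss(W, X):
    Z = X @ W          # encode to an 8-dim code
    Xh = Z @ W.T       # decode back to 24 dims
    return np.mean((Xh - X) ** 2)

lr = 0.01
losses = [loss(W, X)]
for _ in range(200):
    Z = X @ W
    Xh = Z @ W.T
    E = Xh - X
    # Gradient of the mean squared reconstruction error w.r.t. the tied W.
    grad = (E.T @ Z + X.T @ (E @ W)) * (2 / E.size)
    W -= lr * grad
    losses.append(loss(W, X))
```

The 8-dimensional codes `X @ W` would then play the role of the low-dimensional features fed to the TCN.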
Thanks to its hierarchical and generative nature, the Deep Belief Network (DBN) is effective for feature representation and extraction in signal processing. In this paper, the DBN is investigated and applied to monaural speech separation. First, two separate DBNs are trained to extract features from the mixed noisy signals and from the target clean speech, respectively. Then, the two types of extracted features are associated by training a BP neural network to obtain a mapping from the features of the mixed signals to the features of the target speech. Finally, by applying the DBN and the above mapping network, the target speech can be estimated from the input mixture. Experiments are conducted on different kinds of mixed signals, including female/male speech mixtures, human-speech/Gaussian-noise audio mixtures, and human-speech/music audio mixtures. The PESQ scores of the extracted speech are 3.32, 2.59, and 3.42 respectively, which illustrates that the model performs well on speech separation tasks, especially on mixed signals where the interference signals have obvious spectral structures.
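The middle step, the BP network that maps mixture features to clean-speech features, can be sketched as a one-hidden-layer regression network. The data below are synthetic stand-ins for the DBN-extracted features (16-dim, invented for illustration), not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in features: 300 frames of 16-dim "DBN" features for the
# mixture, and corresponding clean-speech features (synthetic toy target).
F_mix = rng.normal(size=(300, 16))
F_clean = np.tanh(F_mix @ rng.normal(size=(16, 16)))

# One-hidden-layer BP network: mixture features -> clean-speech features.
W1 = rng.normal(scale=0.1, size=(16, 32))
W2 = rng.normal(scale=0.1, size=(32, 16))

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

lr = 0.05
_, Y = forward(F_mix)
first = np.mean((Y - F_clean) ** 2)
for _ in range(300):
    H, Y = forward(F_mix)
    E = (Y - F_clean) * (2 / Y.size)   # d(MSE)/dY
    gW2 = H.T @ E
    gH = E @ W2.T * (1 - H ** 2)       # backprop through tanh
    gW1 = F_mix.T @ gH
    W1 -= lr * gW1
    W2 -= lr * gW2
_, Y = forward(F_mix)
final = np.mean((Y - F_clean) ** 2)
```

In the actual system, the predicted clean-speech features would then be decoded back to a speech estimate by the clean-speech DBN.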
ISBN (Print): 9781632660015
Nowadays, thanks to the development of network technology, the Internet has become the main resource for people to obtain information. The openness of the network fills it with all kinds of information, so it becomes increasingly important to use network text classification techniques that let people quickly find the information they are interested in among the mixed and disorderly content. Since network text classification is the basis of information filtering, search engines, and other fields, it has gradually become a research focus. Traditional text classification technology cannot effectively handle Chinese web page text because of its large scale. An important way to learn features from massive data is to use a deep learning neural network structure. Deep learning networks have excellent feature learning ability: they can combine low-level features into high-level abstract representations of an object that are more suitable for classification. This paper proposes a new deep learning based text classification model to address dimension reduction in Chinese web text categorization, and the feasibility of the method is verified through experiments.
As one of the most rapidly developing artificial intelligence techniques, deep learning has been applied in various machine learning tasks and has received great attention in data science and statistics. Despite their complex model structures, deep neural networks can be viewed as a nonlinear and nonparametric generalization of existing statistical models. In this review, we introduce several popular deep learning models including convolutional neural networks, generative adversarial networks, recurrent neural networks, and autoencoders, with their applications in image data, sequential data and recommender systems. We review the architecture of each model and highlight their connections and differences compared with conventional statistical models. In particular, we provide a brief survey of the recent works on the unique overparameterization phenomenon, which explains the strengths and advantages of using an extremely large number of parameters in deep learning. In addition, we provide practical guidance on optimization algorithms, hyperparameter tuning, and computing resources.
ISBN (Print): 9781728133379
This paper describes the application of artificial intelligence, using machine learning and deep learning, at our laser diode module manufacturing facility. By implementing AI in data analysis and classification problems, we have obtained various benefits such as improved quality control, reduced manual work, and efficient use of big data.
ISBN (Print): 9781538646595
While Word2Vec represents words (in text) as vectors carrying semantic information, audio Word2Vec was shown to be able to represent signal segments of spoken words as vectors carrying phonetic structure information. Audio Word2Vec can be trained in an unsupervised way from an unlabeled corpus, except that word boundaries are still needed. In this paper, we extend audio Word2Vec from the word level to the utterance level by proposing a new segmental audio Word2Vec, in which unsupervised spoken word boundary segmentation and audio Word2Vec are jointly learned and mutually enhanced, so an utterance can be directly represented as a sequence of vectors carrying phonetic structure information. This is achieved by a segmental sequence-to-sequence autoencoder (SSAE), in which a segmentation gate trained with reinforcement learning is inserted in the encoder. Experiments on English, Czech, French and German show very good performance in both unsupervised spoken word segmentation and spoken term detection applications (significantly better than frame-based DTW).
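The frame-based DTW baseline mentioned at the end is standard enough to sketch directly: dynamic time warping accumulates the cheapest alignment cost between two variable-length feature sequences, which is how query and utterance are matched in classic query-by-example spoken term detection. A minimal version with Euclidean frame distance:

```python
import numpy as np

def dtw_distance(a, b):
    """Frame-based DTW between two feature sequences.

    a, b: arrays of shape (frames, dims). Returns the accumulated
    Euclidean alignment cost along the best warping path.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of the three allowed predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Identical sequences score zero; the SSAE representation replaces this frame-level matching with comparisons between learned segment vectors.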
Recently, a nonlinear dimension reduction technique called the autoencoder has been proposed. It can efficiently carry out mappings in both directions between the original data and low-dimensional codes. However, a single autoencoder commonly maps all data into a single space. If the original data set contains remarkably different categories (for example, characters and handwritten digits), one autoencoder alone will not be efficient. To deal with data of remarkably different categories, this paper proposes an Auto-Associative Neural Network System (AANNS) based on multiple autoencoders. The novel technique has the functions of auto-association, incremental learning, and local learning; these functions are foundations of cognitive systems. Experimental results on the benchmark MNIST digit dataset and a handwritten character-digit dataset show the advantages of the proposed model.
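The core routing idea behind a multi-autoencoder system can be sketched with one autoencoder per category and reconstruction error as the dispatcher. For brevity this uses closed-form linear autoencoders (SVD gives the optimal tied-weight linear code) on synthetic subspace data standing in for the two character families; the paper's networks are of course nonlinear and trained iteratively:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy "categories" living in different 4-dim subspaces of a 16-dim space,
# standing in for e.g. handwritten digits vs. characters.
B1 = rng.normal(size=(4, 16))
B2 = rng.normal(size=(4, 16))
cat1 = rng.normal(size=(100, 4)) @ B1
cat2 = rng.normal(size=(100, 4)) @ B2

def fit_linear_ae(X, k=4):
    # Closed-form linear autoencoder: the optimal tied encoder/decoder
    # spans the top-k principal directions of X.
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return Vt[:k]                      # (k, dims) projection matrix

def recon_error(P, x):
    z = P @ x                          # encode
    return np.linalg.norm(P.T @ z - x) # decode and compare

P1, P2 = fit_linear_ae(cat1), fit_linear_ae(cat2)

# Route a new sample to whichever autoencoder reconstructs it best.
x = (rng.normal(size=(1, 4)) @ B1).ravel()   # drawn from category 1
```

The category-1 autoencoder reconstructs `x` almost perfectly while the category-2 one cannot, which is exactly the signal the system uses for auto-association and local routing.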
ISBN (Print): 9781509041183
We propose to use a feature representation obtained by pairwise learning in a low-resource language for query-by-example spoken term detection (QbE-STD). We assume that word pairs identified by humans are available in the low-resource target language. The word pairs are parameterized by a multi-lingual bottleneck feature (BNF) extractor that is trained using transcribed data in high-resource languages. The multi-lingual BNFs of the word pairs are used as an initial feature representation to train an autoencoder (AE). We extract features from an internal hidden layer of the pairwise trained AE to perform acoustic pattern matching for QbE-STD. Our experiments on the TIMIT and Switchboard corpora show that the pairwise learning brings 7.61% and 8.75% relative improvements in mean average precision (MAP) respectively over the initial feature representation.
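The pairwise objective can be sketched in isolation: pull the codes of the two members of each word pair together under a shared linear "hidden layer". The toy vectors below stand in for the multilingual BNFs (the real AE is nonlinear and also reconstructs its input, which prevents the trivial collapse that a pairwise term alone would allow):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "BNF" vectors: 50 word pairs, 20-dim; each pair is the same word
# spoken twice, modeled as a shared vector plus noise.
words = rng.normal(size=(50, 20))
a = words + 0.3 * rng.normal(size=(50, 20))
b = words + 0.3 * rng.normal(size=(50, 20))

W = rng.normal(scale=0.3, size=(20, 8))   # shared linear hidden layer

def pair_dist(W):
    # Mean squared distance between paired codes.
    return np.mean(np.sum(((a - b) @ W) ** 2, axis=1))

lr = 0.01
before = pair_dist(W)
for _ in range(100):
    D = a - b
    grad = (2 / len(a)) * D.T @ (D @ W)   # gradient of pair_dist w.r.t. W
    W -= lr * grad
after = pair_dist(W)
```

After training, codes of the same word are closer, which is what improves the acoustic pattern matching step in QbE-STD.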
ISBN (Print): 9781509041183
Many success stories involving deep neural networks are instances of supervised learning, where available labels power gradient-based learning methods. Creating such labels, however, can be expensive and thus there is increasing interest in weak labels which only provide coarse information, with uncertainty regarding time, location or value. Using such labels often leads to considerable challenges for the learning process. Current methods for weak-label training often employ standard supervised approaches that additionally reassign or prune labels during the learning process. The information gain, however, is often limited as only the importance of labels where the network already yields reasonable results is boosted. We propose treating weak-label training as an unsupervised problem and use the labels to guide the representation learning to induce structure. To this end, we propose two autoencoder extensions: class activity penalties and structured dropout. We demonstrate the capabilities of our approach in the context of score-informed source separation of music.
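One plausible reading of "structured dropout" (an assumption for illustration; the paper defines its own variant) is that the hidden layer is split into contiguous groups and each group is kept or dropped as a unit, so that structure is induced at the group level rather than per unit:

```python
import numpy as np

rng = np.random.default_rng(4)

def structured_dropout(H, group_size=4, p=0.5, rng=rng):
    """Drop whole groups of adjacent hidden units instead of single ones.

    H: (batch, hidden) activations. Each contiguous group of `group_size`
    units gets one keep/drop decision per sample.
    """
    n, d = H.shape
    g = d // group_size
    keep = (rng.random((n, g)) > p).astype(H.dtype)  # one decision per group
    mask = np.repeat(keep, group_size, axis=1)
    return H * mask / (1 - p)                        # inverted-dropout scaling

H = rng.normal(size=(8, 16))
Hd = structured_dropout(H)
```

Within each group the units are either all zeroed or all kept (and rescaled), unlike standard dropout where each unit is dropped independently.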
A variety of techniques based on numerical characteristics are currently available for mining time-series data. However, we find that time-series data generally contain curves sharing sets of visual characteristics and shapes. These characteristics offer a deeper understanding of time-series data and open up a potential new technique for time-series analysis. Benefiting from recent advances in deep neural networks, representations and features can be learnt automatically by deep learning architectures such as autoencoders. Building on this, our work proposes a novel method, named time-series visualization (TSV), to efficiently detect visual characteristics from curves of time-series data and use these characteristics for intelligent analysis. The architecture and algorithm of TSV, based on stacked autoencoders, are introduced in this paper. Further, important factors affecting the performance of TSV are discussed based on empirical results. Empirical evaluation demonstrates that TSV achieves better efficiency and higher classification accuracy on datasets with significant curve features.
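The overall pipeline, encode curves into a low-dimensional code that captures their visual shape, then classify in code space, can be sketched on synthetic curves. Here a closed-form linear encoder (the optimal linear-autoencoder code via PCA) stands in for the stacked autoencoder, and two invented curve families (ramps vs. bumps) stand in for the shape classes:

```python
import numpy as np

rng = np.random.default_rng(5)

t = np.linspace(0, 1, 50)
# Two families of curves with distinct visual shapes: rising ramps vs. bumps.
ramps = np.array([s * t + 0.05 * rng.normal(size=50)
                  for s in rng.uniform(0.5, 1.5, 40)])
bumps = np.array([s * np.exp(-((t - 0.5) ** 2) / 0.02) + 0.05 * rng.normal(size=50)
                  for s in rng.uniform(0.5, 1.5, 40)])
X = np.vstack([ramps, bumps])
y = np.array([0] * 40 + [1] * 40)

# Stand-in for the learned encoder: project each curve onto the top-2
# principal directions (a linear autoencoder's optimal 2-dim code).
mu = X.mean(0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:2].T

# Nearest-centroid classification in the 2-dim code space.
c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Even this crude linear code separates the two curve shapes almost perfectly, which is the intuition behind classifying on learned visual characteristics rather than raw numerical values.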