ISBN (Print): 9781538695715
This paper presents a Remaining Useful Life (RUL) prediction method for a speed reducer based on a denoising auto-encoder (DAE). Constructing the efficiency map of the reducer is an important step in predicting its life span. However, due to situational constraints, unmeasured intervals hinder the completion of the efficiency map. To solve this problem, we propose a method that can predict and reconstruct unmeasured intervals effectively and reliably using a DAE. In addition, we examine the applicability of the proposed algorithm through experiments that assume various situations.
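As an illustration of the gap-filling idea described above, a minimal sketch of a denoising auto-encoder trained with masking corruption is shown below; the flattened map size, layer widths, and masking rate are assumptions, not values from the paper.

import torch
import torch.nn as nn

class GapFillingDAE(nn.Module):
    def __init__(self, n_points=256, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_points, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_points)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, clean_maps, mask_rate=0.3):
    # Corrupt the input by zeroing random entries (simulated unmeasured intervals),
    # then train the network to reconstruct the complete map.
    mask = (torch.rand_like(clean_maps) > mask_rate).float()
    corrupted = clean_maps * mask
    loss = nn.functional.mse_loss(model(corrupted), clean_maps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()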
ISBN (Print): 9781538646588
For speech recognition in noisy environments, we propose a multi-task autoencoder which estimates not only clean speech features but also noise features from noisy speech. We introduce the deSpeeching autoencoder, which excludes speech signals from noisy speech, and combine it with the conventional denoising autoencoder to form a unified multi-task autoencoder (MTAE). We evaluate it on the Aurora 2 and CHiME-3 datasets. It reduced the word error rate (WER) by 15.7% compared with the conventional denoising autoencoder on Aurora 2 test set A.
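A rough sketch of such a multi-task autoencoder (a shared encoder with one head regressing clean speech features and one regressing noise features) is given below; the feature dimension, layer sizes, and equal loss weighting are assumptions rather than the paper's settings.

import torch
import torch.nn as nn

class MTAE(nn.Module):
    def __init__(self, feat_dim=40, hidden=512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.speech_head = nn.Linear(hidden, feat_dim)  # clean-speech estimate
        self.noise_head = nn.Linear(hidden, feat_dim)   # noise estimate

    def forward(self, noisy):
        h = self.shared(noisy)
        return self.speech_head(h), self.noise_head(h)

def mtae_loss(model, noisy, clean, noise):
    # Both regression targets share the encoder, so denoising and "de-speeching"
    # are learned jointly from the same noisy input.
    speech_hat, noise_hat = model(noisy)
    return (nn.functional.mse_loss(speech_hat, clean) +
            nn.functional.mse_loss(noise_hat, noise))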
ISBN (Print): 9781538646588
We present a new method to generate fake data in unknown classes within the generative adversarial network (GAN) framework. The generator is trained to produce data similar to, but distinct from, data in the known classes by modelling a noisy distribution on the feature space of a classifier using the proposed marginal denoising autoencoder. The generated data are treated as fake instances of unknown classes and given to the classifier to make it robust to real unknown classes. Our results show that the synthetic data can act as fake unknown classes and keep down the classifier's certainty on real unknown classes, while the classification capability on known classes is not degraded and is even improved.
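The following is a loose, illustrative sketch of how fake unknown-class features could be produced by denoising heavily corrupted known-class features; the actual marginal denoising autoencoder in the paper is more involved, and the architecture and noise scale here are assumptions.

import torch
import torch.nn as nn

class FeatureDAE(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x):
        return self.net(x)

def make_fake_unknowns(dae, known_features, noise_std=1.0):
    # Heavier corruption than at training time pushes reconstructions off the
    # known-class feature manifold, yielding plausible-but-unknown samples.
    noisy = known_features + noise_std * torch.randn_like(known_features)
    with torch.no_grad():
        return dae(noisy)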
Cervical cancer remains a significant cause of mortality around the world, even though it can be prevented and cured by removing affected tissues at early stages. Providing universal and efficient access to cervical screening programs is a challenge that requires, among other steps, identifying vulnerable individuals in the population. In this work, we present a computationally automated strategy for predicting the outcome of a patient's biopsy, given risk patterns from individual medical records. We propose a machine learning technique that allows a joint and fully supervised optimization of dimensionality reduction and classification models. We also build a model able to highlight relevant properties in the low-dimensional space, to ease the classification of patients. We instantiated the proposed approach with deep learning architectures and achieved accurate prediction results (top area under the curve, AUC = 0.6875) which outperform previously developed methods such as denoising autoencoders. Additionally, we explored some clinical findings from the embedding spaces and validated them through the medical literature, making them reliable for physicians and biomedical researchers.
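A minimal sketch of the joint, fully supervised optimization of dimensionality reduction and classification is shown below; the two-dimensional embedding, layer sizes, and binary biopsy target are assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class JointEmbedClassifier(nn.Module):
    def __init__(self, n_features=32, embed_dim=2, hidden=64):
        super().__init__()
        # Dimensionality-reduction network and classifier trained end to end.
        self.embed = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                   nn.Linear(hidden, embed_dim))
        self.classify = nn.Linear(embed_dim, 2)  # biopsy outcome: negative/positive

    def forward(self, x):
        z = self.embed(x)                 # low-dimensional representation for inspection
        return self.classify(z), z

Because a single supervised cross-entropy loss drives both modules, the embedding is shaped directly by the classification objective rather than by a separate unsupervised step such as a denoising autoencoder.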
Unsupervised outlier detection is a vital task with high impact on a wide variety of application domains, such as image analysis and video surveillance. It has also gained longstanding attention and has been extensively studied in multiple research areas. Detecting and acting on outliers as quickly as possible is imperative in order to protect networks and related stakeholders and to maintain the reliability of critical systems. However, outlier detection is difficult due to its one-class nature and the challenges in feature construction. Sequential anomaly detection is even harder, with additional challenges from temporal correlation in the data as well as the presence of noise and high dimensionality. In this paper, we introduce a novel deep structured framework to solve the challenging sequential outlier detection problem. We use autoencoder models to capture the intrinsic difference between outliers and normal instances and integrate the models into recurrent neural networks, which allows the learning to make use of previous context and makes the learners more robust to warping along the time axis. Furthermore, we propose a layerwise training procedure, which significantly simplifies training and hence helps achieve efficient and scalable training. In addition, we investigate a fine-tuning step to update the full parameter set by incorporating the temporal correlation in the sequence. We further apply our proposed models in systematic experiments on five real-world benchmark data sets. Experimental results demonstrate the effectiveness of our model compared with other state-of-the-art approaches.
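A schematic sketch of a recurrent autoencoder whose per-window reconstruction error serves as an outlier score is given below; it is only meant to illustrate the combination of autoencoders with recurrent networks, and all dimensions are assumptions.

import torch
import torch.nn as nn

class RecurrentAE(nn.Module):
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def forward(self, x):                       # x: (batch, time, feat_dim)
        h, _ = self.encoder(x)
        d, _ = self.decoder(h)
        return self.out(d)

def outlier_scores(model, x):
    # High reconstruction error over a window indicates a sequential outlier,
    # since the model only learns to reconstruct "normal" temporal patterns.
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2))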
When a fracturing vehicle is working, it generally needs to bear high loads, media corrosion and erosion. For this special working environment, this study proposes a rolling bearing fault diagnosis method based on a stacked marginalised sparse denoising auto-encoder (SDAE). This method combines the sparse auto-encoder (SAE) and the denoising auto-encoder (DAE), bringing together their respective strengths of dimensionality reduction and robustness, and adds marginalisation to optimise the SDAE. Finally, it uses a two-layer stacking scheme: the outputs of the second marginalised SDAE are used as input to a softmax classifier for training and classification testing. This improved method (stacked SDAE) improves denoising ability, reduces computational complexity, and solves the problems of difficult parameter tuning and slow training convergence. The experimental tests were carried out on pitting failure of the bearing outer ring, pitting failure of the inner ring, and cracking of the rolling element. The results show that the algorithm can effectively improve the accuracy of rolling bearing fault diagnosis, with a considerable improvement over the SAE and DAE algorithms.
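The stacking idea can be sketched roughly as below, with two denoising auto-encoder layers pre-trained layerwise and their encoders feeding a softmax classifier; marginalisation and the sparsity penalty are omitted, and layer sizes, noise level, and the number of fault classes are assumptions.

import torch
import torch.nn as nn

def pretrain_dae(encoder, decoder, data, epochs=10, noise_std=0.1, lr=1e-3):
    # Layerwise pre-training: reconstruct clean features from noise-corrupted ones.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        noisy = data + noise_std * torch.randn_like(data)
        loss = nn.functional.mse_loss(decoder(encoder(noisy)), data)
        opt.zero_grad(); loss.backward(); opt.step()

# Two stacked encoder layers followed by a softmax (log-softmax + NLL) classifier.
enc1, dec1 = nn.Sequential(nn.Linear(1024, 256), nn.ReLU()), nn.Linear(256, 1024)
enc2, dec2 = nn.Sequential(nn.Linear(256, 64), nn.ReLU()), nn.Linear(64, 256)
classifier = nn.Sequential(enc1, enc2, nn.Linear(64, 4), nn.LogSoftmax(dim=1))
# Hypothetical usage: pretrain_dae(enc1, dec1, features), then
# pretrain_dae(enc2, dec2, enc1(features).detach()), then fine-tune `classifier`.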
Background: While representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic similarity information becomes inscribed onto fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder that is shown to improve performance. Results: An ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best performing systems on the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results. Conclusions: Our proposed representation learning approach leverages terminological embeddings to capture semantic similarity. Our results provide evidence that the approach produces embeddings that are especially well tailored to the ontology matching task, demonstrating a novel pathway for the problem.
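A hedged sketch of the matching step is shown below: terms are represented by phrase embeddings, candidate alignments are ranked by cosine similarity, and outlier candidates can then be pruned by a denoising autoencoder trained on embeddings of accepted matches. The function names and the similarity threshold are assumptions.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def candidate_matches(src_terms, tgt_terms, embed, threshold=0.8):
    # embed: dict mapping a term string to its phrase-embedding vector.
    matches = []
    for s in src_terms:
        best = max(tgt_terms, key=lambda t: cosine(embed[s], embed[t]))
        score = cosine(embed[s], embed[best])
        if score >= threshold:
            matches.append((s, best, score))
    # A denoising autoencoder over concatenated pair embeddings can then flag
    # high-reconstruction-error candidates as outliers and remove them.
    return matches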
ISBN (Print): 9781538646595
This paper addresses the task of Automatic Speech Recognition (ASR) with music in the background. We consider two different situations: 1) scenarios with a very small amount of labeled training utterances (1 hour) and 2) scenarios with a large amount of labeled training utterances (132 hours). In both situations, we aim to achieve robust recognition. To this end we investigate the following techniques: a) multi-condition training of the acoustic model, b) denoising autoencoders for feature enhancement, and c) joint training of both above-mentioned techniques. We demonstrate that the considered methods can be successfully trained with the small amount of labeled acoustic data, and we present substantially improved performance compared to acoustic models trained on clean speech. Further, we show a significant increase in accuracy in the under-resourced scenario when utilizing additional unlabeled data. Here, the unlabeled dataset is used to improve the accuracy of the feature enhancement via autoencoders. Subsequently, the autoencoders are jointly fine-tuned along with the acoustic model using the small amount of labeled utterances.
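A conceptual sketch of joint training (technique c) is given below, with a feature-enhancement autoencoder placed in front of the acoustic model so that gradients from the ASR objective flow back through the enhancer; feature dimensions, the senone count, and optimizer settings are assumptions.

import torch
import torch.nn as nn

enhancer = nn.Sequential(nn.Linear(40, 512), nn.ReLU(), nn.Linear(512, 40))
acoustic_model = nn.Sequential(nn.Linear(40, 512), nn.ReLU(), nn.Linear(512, 2000))

optimizer = torch.optim.Adam(list(enhancer.parameters()) +
                             list(acoustic_model.parameters()), lr=1e-4)

def joint_step(noisy_feats, senone_targets):
    # The pre-trained enhancer and the acoustic model are fine-tuned together
    # on the small labeled set using a single cross-entropy objective.
    logits = acoustic_model(enhancer(noisy_feats))
    loss = nn.functional.cross_entropy(logits, senone_targets)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()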
ISBN (Print): 9781538646595
We present a new method to generate fake data in unknown classes within the generative adversarial network (GAN) framework. The generator is trained to produce data similar to, but distinct from, data in the known classes by modelling a noisy distribution on the feature space of a classifier using the proposed marginal denoising autoencoder. The generated data are treated as fake instances of unknown classes and given to the classifier to make it robust to real unknown classes. Our results show that the synthetic data can act as fake unknown classes and keep down the classifier's certainty on real unknown classes, while the classification capability on known classes is not degraded and is even improved.
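As a companion to the sketch accompanying the earlier listing of this work, the snippet below illustrates one way the generated features could be fed to the classifier as an extra "unknown" class alongside real known-class data; this (K+1)-way formulation is an assumption about the training setup, not a detail stated in the abstract.

import torch
import torch.nn as nn

def open_set_loss(classifier_head, known_feats, known_labels, fake_feats, num_known):
    # Known samples keep their labels 0..K-1; generated samples get label K ("unknown"),
    # which is what keeps the classifier's confidence low on genuine unknowns.
    fake_labels = torch.full((fake_feats.size(0),), num_known, dtype=torch.long)
    feats = torch.cat([known_feats, fake_feats], dim=0)
    labels = torch.cat([known_labels, fake_labels], dim=0)
    return nn.functional.cross_entropy(classifier_head(feats), labels)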
ISBN (Print): 9781509059102
Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed over the past three decades with varying denoising performance. More recently, deep learning based models have shown great promise, having outperformed all conventional methods. These methods are, however, limited by the requirement of large training sample sizes and high computational costs. In this paper we show that, even with small sample sizes, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost the sample size for increased denoising performance. The simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to the human eye.
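A small illustrative convolutional denoising autoencoder of the kind described above is sketched below; the channel counts, Gaussian noise model, and single-channel input are assumptions.

import torch
import torch.nn as nn

class ConvDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def denoise_step(model, optimizer, clean, noise_std=0.3):
    # Train on synthetically corrupted images so the network learns to map
    # noisy inputs back to their clean counterparts.
    noisy = (clean + noise_std * torch.randn_like(clean)).clamp(0.0, 1.0)
    loss = nn.functional.mse_loss(model(noisy), clean)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()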