ISBN:
(Print) 9781728138916
Wearable sensors can capture continuous, high-resolution physiological and behavioral data that can be used for early detection of health and wellbeing issues, leading to early-warning, intervention, and recommendation systems. We have built and evaluated an end-to-end wellbeing prediction framework that pipelines raw wearable sensor data into an unsupervised autoencoder-based representation learning model and a supervised wellbeing regression model. We trained and evaluated the framework using a wearable sensor dataset and wellbeing labels collected from college students (6391 days in total from N=252). The wearable data include skin temperature, skin conductance, and acceleration; the wellbeing labels include self-reported alertness, happiness, energy, health, and calmness, each scored 0-100. We compared the performance of our framework with that of wellbeing regression models based on hand-crafted features. Our results showed that the proposed framework can automatically extract features from the current day's 24-hour multi-channel data and predict the next day's wellbeing scores with mean absolute errors of 14-16. This result shows the possibility of predicting wellbeing accurately using an end-to-end framework, ultimately for developing real-time health and wellbeing monitoring and intervention systems.
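The two-stage pipeline described in this abstract (unsupervised representation learning followed by a supervised regression head) can be sketched in a few lines; this is a minimal stand-in, not the authors' model: synthetic data replaces the wearable recordings, a closed-form linear autoencoder (via SVD) replaces the learned encoder, and the latent size of 8 is an arbitrary choice for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 days, each day's 24 h of skin
# temperature / conductance / acceleration summarized as 72 features.
X = rng.normal(size=(200, 72))
true_w = rng.normal(size=72)
y = 50.0 + 0.1 * (X @ true_w)          # synthetic next-day wellbeing scores

# Linear autoencoder: for squared reconstruction error the optimal
# encoder is spanned by the top principal directions, so an SVD gives
# a closed-form stand-in for a trained autoencoder.
k = 8
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda M: (M - mu) @ Vt[:k].T  # 72 raw -> 8 latent features

# Supervised head: ridge regression from latent codes to wellbeing.
Z = encode(X)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(k), Z.T @ (y - y.mean()))
pred = Z @ w + y.mean()
mae = np.abs(pred - y).mean()
```

In the paper the encoder is nonlinear and trained on raw multi-channel sequences; the point here is only the data flow from raw features through a learned bottleneck into a regressor.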
ISBN:
(Print) 9781728124582
With the rapid growth of the Internet, cyber-attacks are becoming more and more frequent, so detecting network intrusions is particularly important for keeping networks operating normally. In modern big data environments, however, traditional methods do not meet the network's requirements for adaptability and efficiency. This paper proposes a deep learning approach to intrusion detection that addresses this problem to a certain extent. The autoencoder, a popular deep learning technique, is used in the proposed solution: the encoder of a deep autoencoder compresses the less important features and extracts the key features, with no decoder needed at detection time. The proposed approach allows the network to be built and attacks to be identified faster, and the model is evaluated on the benchmark NSL-KDD dataset.
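The encoder-only idea can be illustrated with a minimal sketch, assuming nothing about the paper's actual architecture: a one-hidden-layer autoencoder is trained by plain gradient descent on toy 41-feature records (41 being the NSL-KDD feature count), after which only the encoder is kept to produce compressed codes for a downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Toy stand-in for NSL-KDD records: 500 samples, 41 numeric features.
X = rng.normal(size=(500, 41))

# One-hidden-layer autoencoder trained on the mean squared
# reconstruction error; W_enc compresses 41 features down to 10.
W_enc = 0.1 * rng.normal(size=(41, 10))
W_dec = 0.1 * rng.normal(size=(10, 41))
mse0 = ((relu(X @ W_enc) @ W_dec - X) ** 2).mean()
lr = 0.01
for _ in range(300):
    H = relu(X @ W_enc)                 # latent code
    R = H @ W_dec                       # reconstruction
    G = 2.0 * (R - X) / X.size          # d(MSE)/dR
    W_enc -= lr * X.T @ ((G @ W_dec.T) * (H > 0))
    W_dec -= lr * H.T @ G
mse = ((relu(X @ W_enc) @ W_dec - X) ** 2).mean()

# At detection time the decoder is discarded; a classifier would
# consume the compressed 10-dimensional codes instead of raw features.
codes = relu(X @ W_enc)
```

The discarded decoder exists only to provide the training signal; this is what lets the compressed representation be learned without attack labels.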
Objective: Lung cancer is proving to be one of the deadliest diseases that is haunting mankind in recent years. Timely detection of the lung nodules would surely enhance the survival rate. This paper focusses on the c...
ISBN:
(Print) 9781728109602
Feature representation based on the high-resolution range profile (HRRP) is important in radar automatic target recognition (RATR). Traditional feature extraction algorithms use shallow architectures and rarely address the challenges of high noise and unknown noise distributions, which restrict RATR performance. In this paper, a novel blind-denoising network (BDNet) is proposed to perform denoising and automatically extract features. As an extension of the deep autoencoder, BDNet is based on a fully convolutional architecture and employs fusion layers to transfer input features to a high-dimensional space. Trained in a noise-to-noise fashion, BDNet performs blind denoising and does not rely on a known noise distribution. The output of BDNet is then used to classify the targets. In the experiments, we use measured HRRP signals of four aircraft to show the effectiveness of our methods. The results prove that BDNet achieves blind denoising in high-noise environments and significantly improves recognition performance, and our proposed BDNet-AlexNet outperforms other recognition methods.
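The noise-to-noise training principle the abstract relies on can be sketched with a closed-form linear denoiser; the synthetic 1-D profiles, noise level, and least-squares model below are illustrative stand-ins for real HRRP data and BDNet's convolutional network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic HRRP-like profiles: 2000 smooth random signals, 64 bins.
t = np.linspace(0.0, 1.0, 64)
freq = rng.uniform(1.0, 4.0, size=(2000, 1))
phase = rng.uniform(0.0, 2.0 * np.pi, size=(2000, 1))
clean = np.sin(2.0 * np.pi * freq * t + phase)

# Noise-to-noise training: the regression target is a SECOND,
# independent noisy copy of the same profile, so no clean reference
# is ever needed -- the essence of blind denoising here.
noisy_in = clean + 0.5 * rng.normal(size=clean.shape)
noisy_tgt = clean + 0.5 * rng.normal(size=clean.shape)

# Closed-form linear denoiser via least squares; BDNet's network
# plays this role in the paper, but the principle is identical.
W, *_ = np.linalg.lstsq(noisy_in, noisy_tgt, rcond=None)
denoised = noisy_in @ W

mse_before = ((noisy_in - clean) ** 2).mean()
mse_after = ((denoised - clean) ** 2).mean()
```

Because the target's noise is independent of the input's, the optimal predictor of a noisy target is the clean signal's conditional mean, which is why training never needs clean data.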
ISBN:
(Print) 9781728141640
A method is shown to improve the accuracy of dose rate measurements for handheld instrumentation over the range of 50 keV to 3 MeV. Dose rate estimation in handheld devices is usually done by summing the channel data of the radiation spectrum. In this sum, weights are applied to correct for the sensitivity of the detector used, allowing a de-facto calibration of the predicted dose rate. Typically, measurements of handheld radiation detection instruments are calibrated against one very specific source, often Cesium-137, representing a medium energy range. For sources with significantly different line energies, however, e.g. Americium-241 with a strong low-energy signature or Cobalt-60 with two prominent photo peaks at higher energies, such instruments deliver less accurate dose rate measurements. Based on a dose rate measurement campaign, a dataset of (spectrum, dose rate) pairs was established in a NoSQL database. The associated spectra were measured with 2048 channels and are reduced in size. The database is then used to train a combination of an autoencoder and a main neural network to predict correction factors for the dose rate depending on the input spectrum. With this approach, the whole relevant energy range is covered, and the learning algorithms are compared with respect to their results, considering not only accuracy but also evaluation speed and memory footprint. The prediction parts are sparsified and reduced to a reasonable tradeoff between accuracy, precision, and rapid evaluation. They are then deployed on the real computational platform of the handheld instrument and tested with respect to performance.
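The reduce-then-predict scheme can be sketched as follows; everything here is made up for illustration (Gaussian toy peaks instead of measured spectra, a sine curve as the energy-dependent correction factor, and ridge regression standing in for the paper's autoencoder-plus-network combination), with only the 2048-channel count and the idea of size reduction taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spectra: 2048 channels, one Gaussian photo peak whose position
# stands in for the source's line energy.
n, ch = 300, 2048
centers = rng.uniform(100.0, 1900.0, size=n)
x = np.arange(ch)
spectra = np.exp(-0.5 * ((x - centers[:, None]) / 30.0) ** 2)

# Hypothetical energy-dependent correction factor (a stand-in for the
# detector's dose response; the real one comes from the campaign data).
corr = 1.0 + 0.5 * np.sin(centers / 300.0)

# Size reduction as in the paper: rebin 2048 -> 64 channels.
reduced = spectra.reshape(n, 64, 32).sum(axis=2)

# Lightweight predictor: ridge regression from the reduced spectrum
# to the correction factor (a small network fills this role on-device).
A = np.hstack([reduced, np.ones((n, 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ corr)
pred = A @ w
mae = np.abs(pred - corr).mean()
```

The rebinning step is what keeps the on-device memory footprint small: the predictor sees 64 inputs instead of 2048.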
Iterative learning control (ILC) can yield superior performance for repetitive tasks while only requiring approximate models, making this control strategy very appealing for industry. However, applying it to non-linear systems involves solving optimization problems, which limits industrial uptake, especially for learning online to compensate for variations throughout the system's lifetime. Industry tackles this by designing simple rule-based learning controllers. However, these are often designed in an ad-hoc manner, which potentially limits performance. In this paper, we couple a low-dimensional parametrized learning control algorithm with a generic signal parametrization method based on machine learning, specifically autoencoders. This allows high control performance while limiting implementation complexity and maintaining interpretability, paving the way for a higher industrial uptake of learning control for non-linear systems. We illustrate the parametrized approach in simulation on a non-linear slider-crank system, and provide an example of using the learning approach to perform a tracking task for this system. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
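The coupling of a signal parametrization with a simple learning-control update can be sketched in a toy setting; this is not the paper's method: a linear autoencoder (SVD) replaces the learned autoencoder, the "plant" is just a gain error rather than a slider-crank model, and the latent dimension and update gain are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Library of past compensation signals (e.g. from earlier runs):
# 500 sinusoids with varying frequency and amplitude, 100 samples each.
t = np.linspace(0.0, 1.0, 100)
freq = rng.uniform(1.0, 3.0, size=(500, 1))
amp = rng.uniform(0.5, 1.5, size=(500, 1))
signals = amp * np.sin(2.0 * np.pi * freq * t)

# Linear autoencoder via SVD: each 100-sample signal is parametrized
# by k latent coordinates instead of 100 raw samples.
k = 12
mu = signals.mean(axis=0)
_, _, Vt = np.linalg.svd(signals - mu, full_matrices=False)
decode = lambda z: z @ Vt[:k] + mu

# Rule-based ILC in the latent space: z <- z + gamma * E(e), with the
# tracking error e projected onto the same k latent directions.
r = signals[0]                       # reference trajectory for this task
plant = lambda u: 0.8 * u            # toy plant with unknown gain error
z = np.zeros(k)
err0 = np.abs(r - plant(decode(z))).max()
for _ in range(30):
    e = r - plant(decode(z))
    z = z + 0.5 * (e @ Vt[:k].T)     # low-dimensional learning update
err = np.abs(r - plant(decode(z))).max()
```

Learning in the k-dimensional latent space instead of over all 100 samples is what keeps the update rule simple and interpretable, which is the industrial appeal the abstract argues for.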
ISBN:
(Print) 9789813279827; 9789813279810
Background: MicroRNAs (miRNAs) are small, non-coding RNAs that regulate gene expression through post-transcriptional silencing. Differential expression observed in miRNAs, combined with advancements in deep learning (DL), have the potential to improve cancer classification by modelling non-linear miRNA-phenotype associations. We propose a novel miRNA-based deep cancer classifier (DCC) incorporating genomic and hierarchical tissue annotation, capable of accurately predicting the presence of cancer in a wide range of human tissues. Methods: miRNA expression profiles were analyzed for 1746 neoplastic and 3871 normal samples, across 26 types of cancer involving six organ sub-structures and 68 cell types. miRNAs were ranked and filtered using a specificity score representing their information content in relation to neoplasticity, incorporating three levels of hierarchical biological annotation. A DL architecture composed of stacked autoencoders (AE) and a multi-layer perceptron (MLP) was trained to predict neoplasticity using 497 abundant and informative miRNAs. Additional DCCs were trained using the expression of miRNA cistrons and sequence families, and combined into a diagnostic ensemble. Important miRNAs were identified using backpropagation and analyzed in Cytoscape using iCTNet and BiNGO. Results: Nested four-fold cross-validation was used to assess the performance of the DL model. The model achieved an accuracy, AUC/ROC, sensitivity, and specificity of 94.73%, 98.6%, 95.1%, and 94.3%, respectively. Conclusion: Deep autoencoder networks are a powerful tool for modelling complex miRNA-phenotype associations in cancer. The proposed DCC improves classification accuracy by learning from the biological context of both samples and miRNAs, using anatomical and genomic annotation. Analyzing the deep structure of DCCs with backpropagation can also facilitate biological discovery by performing gene ontology searches on the most highly significant features.
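The rank-and-filter step that precedes the stacked-autoencoder classifier can be sketched with a simple stand-in score; the data below is synthetic, and the absolute standardized mean difference used here is only a hypothetical proxy for the paper's annotation-aware specificity score.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy expression matrix: 400 samples x 50 miRNAs, binary neoplastic
# label; the first 10 miRNAs are made informative by construction.
X = rng.lognormal(size=(400, 50))
y = rng.integers(0, 2, size=400)
X[y == 1, :10] *= 4.0

# Specificity-style score: absolute standardized difference of class
# means (a stand-in for the annotation-aware score in the paper).
mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
score = np.abs(mu1 - mu0) / (X.std(axis=0) + 1e-9)

# Keep the most informative miRNAs (497 in the paper; 10 in this toy),
# which would then feed the stacked-autoencoder + MLP classifier.
keep = np.argsort(score)[::-1][:10]
```

Filtering before representation learning reduces the input dimension the autoencoders must model, which matters when samples number in the thousands but candidate features do too.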
ISBN:
(Print) 9781728119854
Manifold learning can only be successful if enough data is available. If the data is too sparse, the geometrical and topological structure of the manifold cannot be recognised from the data and the manifold collapses. In this paper we used data from a simulated two-dimensional double pendulum and tested how well several manifold learning methods could extract the expected manifold, a two-dimensional torus. The experiments were repeated while the data was downsampled in several ways to test the robustness of the different manifold learning methods. We also developed a neural-network-based deep autoencoder for manifold learning and demonstrated that in most of our test cases it performed similarly to or better than traditional methods such as principal component analysis and isomap.
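Why a linear method struggles with this manifold can be shown directly; the sketch below samples a 2-torus (the expected manifold, here embedded in R^4 via the cos/sin pairs of the two pendulum angles, a simplification of the paper's simulated data) and compares PCA reconstruction, i.e. a linear autoencoder, at two bottleneck sizes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sample the expected manifold: a 2-torus, embedded in R^4 via the
# cos/sin pairs of the two pendulum angles.
n = 2000
a = rng.uniform(0.0, 2.0 * np.pi, n)
b = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.stack([np.cos(a), np.sin(a), np.cos(b), np.sin(b)], axis=1)

# PCA (equivalently, a linear autoencoder): a 2-D bottleneck collapses
# the torus, while 4 components reconstruct it exactly.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
recon = lambda k: ((X - mu) @ Vt[:k].T) @ Vt[:k] + mu
err2 = ((recon(2) - X) ** 2).mean()
err4 = ((recon(4) - X) ** 2).mean()
```

A two-dimensional linear bottleneck cannot flatten the torus without large error, which is exactly the gap a nonlinear deep autoencoder can close.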
ISBN:
(Print) 9781538613115
Rapid esophageal radiation treatment planning is often obstructed by the manual adjustment of optimization parameters. The adjustment process is commonly guided by the dose-volume histogram (DVH), which evaluates dosimetry at the planning target volume (PTV) and organs at risk (OARs). The DVH is highly correlated with the geometrical relationship between the PTV and OARs, which motivates us to explore deep learning techniques to model this correlation and predict the DVHs of different OARs. The distance-to-target histogram (DTH) is chosen to measure the geometrical relationship between the PTV and OARs. The DTH and DVH features then undergo dimensionality reduction by an autoencoder. The reduced feature vectors are finally fed into a deep belief network to model the correlation between DTH and DVH, which can be used to predict the DVH of the corresponding OAR for new patients. Validation results revealed that the relative dose differences between the predicted and clinical DVHs for four different OARs were less than 3%. These promising results suggest that the predicted DVH could provide near-optimal parameters and significantly reduce planning time.
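The DVH itself, the quantity being predicted throughout this abstract, has a standard definition that is easy to state in code; the voxel doses below are synthetic (a gamma distribution as a rough stand-in), and only the cumulative-histogram definition is taken as given.

```python
import numpy as np

rng = np.random.default_rng(7)

# Cumulative dose-volume histogram: fraction of a structure's volume
# receiving at least each dose level.
def dvh(dose_voxels, dose_bins):
    return np.array([(dose_voxels >= d).mean() for d in dose_bins])

# Toy OAR voxel doses in Gy (synthetic stand-in for a planned dose).
oar_dose = rng.gamma(shape=4.0, scale=5.0, size=5000)
bins = np.linspace(0.0, 60.0, 61)
curve = dvh(oar_dose, bins)
```

The curve starts at 1.0 (every voxel receives at least zero dose) and decreases monotonically; it is this fixed-length, monotone vector per OAR that the autoencoder compresses before the deep belief network models its correlation with the DTH.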
ISBN:
(Print) 9781728143002
In many data analysis tasks, it is beneficial to learn representations in which each dimension is statistically independent and thus disentangled from the others. If the data-generating factors are also statistically independent, disentangled representations can be formed by Bayesian inference of latent variables. We examine a generalization of the variational autoencoder (VAE), the beta-VAE, for learning such representations using variational inference. The beta-VAE enforces conditional independence of its bottleneck neurons, controlled by its hyperparameter beta. This condition is in general not compatible with the statistical independence of the latents. By providing analytical and numerical arguments, we show that this incompatibility leads to non-monotonic inference performance in the beta-VAE with a finite optimal beta.
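The role of beta can be made concrete by writing out the beta-VAE objective for diagonal-Gaussian latents; this is the standard training objective (reconstruction term plus beta-weighted KL to the standard-normal prior), with the squared-error reconstruction term assuming a Gaussian decoder with constants dropped.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_objective(x, x_recon, mu, logvar, beta):
    # Squared-error reconstruction (Gaussian decoder, constants dropped)
    # plus the beta-weighted KL regularizer; beta = 1 recovers the VAE.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * kl_diag_gaussian(mu, logvar))
```

Raising beta strengthens the pull of each latent posterior toward the factorial prior, which is the conditional-independence pressure the abstract identifies as incompatible, in general, with statistically independent latents.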