Short-echo-time (TE) proton magnetic resonance spectroscopic imaging (MRSI) allows simultaneous mapping of a number of molecules in the brain, and has been recognized as an important tool for studying in vivo biochemistry in various neuroscience and disease applications. However, separation of the metabolite and macromolecule (MM) signals present in short-TE data, which overlap significantly in the spectral domain, remains a major technical challenge. This work introduces a new approach to this problem that integrates imaging physics and representation learning. Specifically, a mixed unsupervised and supervised learning strategy was developed to learn metabolite- and MM-specific low-dimensional representations using deep autoencoders. A constrained reconstruction formulation was proposed to integrate the MRSI spatiospectral encoding model and the learned representations as effective constraints for signal separation, and an efficient algorithm with provable convergence was developed to solve the resulting optimization problem. Simulation and experimental results demonstrate the component-specific representation power of the learned models and the capability of the proposed method to separate metabolite and MM signals in practical short-TE ¹H-MRSI data.
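The constrained-reconstruction idea can be illustrated with a small sketch: two decoders, assumed to have been trained beforehand on metabolite-only and MM-only spectra, serve as low-dimensional signal models, and their latent codes are fitted jointly so that the sum of the two decoded components explains an observed mixed spectrum. All sizes, architectures, and optimizer settings below are illustrative assumptions; the spatiospectral encoding operator and the provably convergent algorithm of the actual method are omitted.

    # Minimal sketch (not the authors' code) of autoencoder-based signal separation.
    import torch
    import torch.nn as nn

    SPECTRAL_PTS = 512   # assumed number of spectral points
    LATENT_DIM = 8       # assumed low-dimensional representation size

    def make_decoder():
        return nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                             nn.Linear(64, SPECTRAL_PTS))

    dec_met = make_decoder()   # stands in for a decoder trained on metabolite spectra
    dec_mm = make_decoder()    # stands in for a decoder trained on MM spectra

    def separate(y, n_iter=500, lr=1e-2):
        """Fit latent codes so that dec_met(z_m) + dec_mm(z_b) explains the mixture y."""
        z_m = torch.zeros(LATENT_DIM, requires_grad=True)
        z_b = torch.zeros(LATENT_DIM, requires_grad=True)
        opt = torch.optim.Adam([z_m, z_b], lr=lr)
        for _ in range(n_iter):
            opt.zero_grad()
            residual = y - (dec_met(z_m) + dec_mm(z_b))   # data-consistency term
            loss = residual.pow(2).sum()
            loss.backward()
            opt.step()
        return dec_met(z_m).detach(), dec_mm(z_b).detach()

    y = torch.randn(SPECTRAL_PTS)          # stand-in for an observed short-TE spectrum
    x_met, x_mm = separate(y)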
We present a comprehensive study on the use of autoencoders for modelling text data in which, differently from previous studies, we focus our attention on several open issues. We explore the suitability of two different models, binary deep autoencoders (bDA) and replicated-softmax deep autoencoders (rsDA), for constructing deep autoencoders for text data at the sentence level. We propose and evaluate two novel metrics for better assessing the text-reconstruction capabilities of autoencoders. We propose an automatic method to find the critical bottleneck dimensionality for text representations (below which structural information is lost); and finally we conduct a comparative evaluation across different languages, exploring the regions of critical bottleneck dimensionality and their relationship to language perplexity. (C) 2015 Elsevier B.V. All rights reserved.
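As a rough illustration of the bottleneck-dimensionality experiment (not the paper's bDA/rsDA implementation), the sketch below trains a small binary autoencoder on toy bag-of-words sentence vectors for a range of bottleneck sizes and reports the reconstruction loss; a critical dimensionality would show up as the point where the loss rises sharply. Vocabulary size, layer widths, and the synthetic data are assumptions.

    # Sweep over bottleneck sizes for a toy binary text autoencoder.
    import torch
    import torch.nn as nn

    VOCAB = 2000
    sentences = (torch.rand(256, VOCAB) < 0.01).float()   # toy binary bag-of-words data

    def train_bda(bottleneck, data, epochs=20):
        model = nn.Sequential(
            nn.Linear(VOCAB, 256), nn.ReLU(),
            nn.Linear(256, bottleneck), nn.ReLU(),       # bottleneck layer
            nn.Linear(bottleneck, 256), nn.ReLU(),
            nn.Linear(256, VOCAB))                       # logits over vocabulary entries
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(data), data)
            loss.backward()
            opt.step()
        return loss.item()

    for k in (2, 4, 8, 16, 32, 64):
        print(k, train_bda(k, sentences))   # reconstruction loss vs. bottleneck size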
Black goji berry (Lycium ruthenicum Murr.) has great commercial and nutritional value. Near-infrared hyperspectral imaging (NIR-HSI) was used to determine total phenolics, total flavonoids and total anthocyanins in dry black goji berries. Convolutional neural networks (CNN) were designed and developed to predict the chemical compositions. These CNN models and a deep autoencoder were used as supervised and unsupervised feature extraction methods, respectively. Partial least squares (PLS) and least-squares support vector machine (LS-SVM) as modelling methods, successive projections algorithm and competitive adaptive reweighted sampling (CARS) as wavelength selection methods, and principal component analysis (PCA) and wavelet transform (WT) as feature extraction methods were studied as conventional approaches for comparison. Deep learning approaches, used as both modelling and feature extraction methods, obtained good performances equivalent to those of the conventional methods. The results illustrate that deep learning has great potential as a modelling and feature extraction method for determining chemical compositions from NIR-HSI data.
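A minimal sketch of the unsupervised branch of such a pipeline is given below: a deep autoencoder trained on spectra serves as the feature extractor, and its bottleneck activations feed a PLS regressor. The spectral length, network sizes, and synthetic data are assumptions for illustration, not the study's settings.

    # Autoencoder bottleneck features + PLS regression on toy NIR spectra.
    import torch
    import torch.nn as nn
    from sklearn.cross_decomposition import PLSRegression

    N_BANDS = 200
    X = torch.rand(300, N_BANDS)                       # stand-in spectra
    y = X[:, :10].sum(dim=1, keepdim=True).numpy()     # stand-in reference chemistry values

    encoder = nn.Sequential(nn.Linear(N_BANDS, 64), nn.ReLU(), nn.Linear(64, 10))
    decoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, N_BANDS))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(200):                               # unsupervised reconstruction training
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(X)), X)
        loss.backward()
        opt.step()

    features = encoder(X).detach().numpy()             # learned low-dimensional features
    pls = PLSRegression(n_components=5).fit(features, y)
    print(pls.score(features, y))                      # in-sample R^2, for illustration only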
Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, and for behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the set of images resulting from functional/metabolic acquisitions (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural acquisitions (e.g. computed tomography and T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging)) carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, posing formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which result in a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities on a voxel-wise level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability and minimal information loss. As proof of concept, we test our architecture on the well characterized
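The parameter saving attributed to the separable convolutional blocks can be checked with a small sketch: a standard 3x3x3 Conv3d is replaced by a depthwise convolution followed by a pointwise 1x1x1 convolution. Channel counts and kernel size below are assumptions, and the exact ratio in the paper depends on its full block design; for 64-to-64 channels the count drops from roughly 110k to roughly 6k parameters, consistent with an order-of-magnitude (about 20-fold) reduction.

    # Parameter counts: standard vs. depthwise-separable 3D convolution block.
    import torch.nn as nn

    def standard_block(cin, cout, k=3):
        return nn.Conv3d(cin, cout, kernel_size=k, padding=k // 2)

    def separable_block(cin, cout, k=3):
        return nn.Sequential(
            nn.Conv3d(cin, cin, kernel_size=k, padding=k // 2, groups=cin),  # depthwise
            nn.Conv3d(cin, cout, kernel_size=1))                              # pointwise

    def n_params(m):
        return sum(p.numel() for p in m.parameters())

    print(n_params(standard_block(64, 64)), n_params(separable_block(64, 64)))
    # standard: 64*64*27 + 64 = 110,656; separable: 64*27 + 64 + 64*64 + 64 = 5,952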
Breast cancer is a leading cause of mortality among women, emphasizing the critical need for precise early detection and prognosis. However, conventional methods often struggle to differentiate precancerous lesions or tailor treatments effectively. Thermal imaging, which captures subtle temperature variations, presents a promising avenue for non-invasive cancer detection. While some studies explore thermography for breast cancer detection, integrating it with advanced machine learning for early diagnosis and personalized prediction remains relatively unexplored. This study proposes a novel hybrid machine learning system (HMLS) incorporating deep autoencoder techniques for automated early detection and prognostic stratification of breast cancer patients. By exploiting the temporal dynamics of thermographic data, this approach offers a more comprehensive analysis than static single-frame approaches. Data processing involved splitting the dataset for training and testing. A predominant infrared image was selected, and matrix factorization was applied to capture temperature changes over time. Convex factor analysis and bell-curve membership function embedding were integrated for dimensionality reduction and feature extraction, and a deep autoencoder neural network further reduced dimensionality. HMLS model development included feature selection and optimization of survival prediction algorithms through cross-validation. Model performance was assessed using accuracy and F-measure metrics. HMLS, integrating clinical data, achieved 81.6% accuracy, surpassing the 77.6% obtained using only convex-NMF. The best classifier attained 83.2% accuracy on test data. This study demonstrates the effectiveness of thermographic imaging and HMLS for accurate early detection and personalized prediction of breast cancer. The proposed framework holds promise for enhancing patient care and potentially reducing mortality rates.
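The temporal factorization step can be sketched as follows (illustrative only, not the study's code, and standard NMF stands in here for the convex-NMF variant the study uses): frames of a thermographic sequence are flattened into a pixels-by-frames matrix, the factorization summarizes temperature change over time, and a small autoencoder further compresses the per-pixel coefficients before they reach a classifier. Frame counts, ranks, and layer sizes are assumptions.

    # Temporal matrix factorization of a toy thermographic sequence + autoencoder reduction.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.decomposition import NMF

    frames = np.random.rand(32 * 32, 23)               # 23 frames of a 32x32 thermogram (toy)
    W = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(frames)
    X = torch.tensor(W, dtype=torch.float32)           # per-pixel temporal coefficients

    enc = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), nn.Linear(8, 2))
    dec = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 5))
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(X)), X)
        loss.backward()
        opt.step()
    low_dim = enc(X).detach()                          # features for a downstream classifier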
Discriminative dictionary learning has been extensively used for pattern classification tasks. By incorporating different kinds of label information into the dictionary learning framework, a dictionary can be attained that represents the original signal with discriminative reconstruction. Previous works learn the dictionary in the original space, which limits dictionary learning performance. In this paper, we propose an approach, namely deep Discriminative Dictionary Pair Learning (D³PL), for image classification. The input of D³PL is not the matrix collected from original gray images or hand-crafted features but the relatively deeper features derived from autoencoders. A structured dictionary is then designed based on the discriminative contributions across different classes to reconstruct the deep features. In addition, the associated structured projective dictionary is learned as well to guarantee that the decoders update towards the minimal error of the deconvolution operator. By leveraging the discriminative-dictionary-learning-based loss function and the autoencoder loss function, D³PL can simultaneously learn the deep latent features and the corresponding dictionary pair. In the testing phase of D³PL, the minimum error between the deep feature and the structured projective component with regard to different classes directly indicates the label via a basic matrix multiplication. Experimental results on the challenging Extended Yale B, AR, UMIST, COIL20, Scene 15, and Caltech101 datasets demonstrate that the proposed D³PL outperforms prominent dictionary learning methods.
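The test-phase rule described above reduces to a residual comparison, sketched below with random placeholder matrices: for each class a synthesis dictionary and its projective counterpart (assumed already learned on autoencoder features) reconstruct the deep feature via matrix multiplication, and the class with the smallest reconstruction error is returned. Dimensions and the random dictionaries are illustrative.

    # Dictionary-pair classification at test time: argmin_k ||x - D_k (P_k x)||.
    import numpy as np

    FEAT_DIM, N_ATOMS, N_CLASSES = 64, 20, 5
    rng = np.random.default_rng(0)
    D = rng.standard_normal((N_CLASSES, FEAT_DIM, N_ATOMS))   # synthesis dictionaries
    P = rng.standard_normal((N_CLASSES, N_ATOMS, FEAT_DIM))   # projective dictionaries

    def classify(x):
        errs = [np.linalg.norm(x - D[k] @ (P[k] @ x)) for k in range(N_CLASSES)]
        return int(np.argmin(errs))                            # class with smallest residual

    x = rng.standard_normal(FEAT_DIM)                          # stand-in deep feature
    print(classify(x))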
Marine diesel engines, with their high thermal efficiency and good economy, have become the main power source of ships. Anomaly detection is an important method for improving the operational reliability of marine diesel engines. Most anomaly detection research focuses on failures that have already occurred, and few studies consider anomaly prediction. A predictive anomaly detection method based on an echo state network (ESN) and a deep autoencoder is proposed. Historical sample data are collected and used to train the prediction network (ESN) and the anomaly detection network (deep autoencoder). After training, the ESN is used to predict the future sensor data sequence, and the predicted sequence is input into the deep autoencoder to obtain the predictive anomaly detection result. The relative error and root mean square error of the proposed method are at least 0.089 and 1.002 lower than those of other methods, respectively. Compared with other anomaly detection methods, the proposed autoencoder method obtains the best precision, accuracy, and recall. Experiments show that it is feasible to establish a predictive anomaly detection method. More experiments under different conditions need to be studied, and higher-performance algorithms need to be developed in the future. (C) 2022 Published by Elsevier Ltd.
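A toy sketch of the predictive part is given below (assumed hyper-parameters, synthetic data, not the paper's implementation): a small echo state network with a random reservoir and a ridge-regression readout produces a one-step-ahead forecast of the sensor signal; in the full method the forecast window would then be passed to the trained deep autoencoder and flagged as anomalous when its reconstruction error exceeds a threshold.

    # Echo state network one-step-ahead forecast of a toy sensor series.
    import numpy as np

    rng = np.random.default_rng(1)
    series = np.sin(np.linspace(0, 30, 600)) + 0.05 * rng.standard_normal(600)

    N_RES = 200                                              # reservoir size (assumption)
    W_in = rng.uniform(-0.5, 0.5, N_RES)                     # input weights
    W = rng.standard_normal((N_RES, N_RES))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # keep spectral radius below 1

    def run_reservoir(u):
        x, states = np.zeros(N_RES), []
        for v in u:
            x = np.tanh(W_in * v + W @ x)
            states.append(x.copy())
        return np.array(states)

    S = run_reservoir(series[:-1])                            # reservoir states for training
    W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N_RES),   # ridge-regression readout
                            S.T @ series[1:])

    forecast = run_reservoir(series)[-1] @ W_out              # one-step-ahead prediction
    print(forecast)
    # The forecast window would next be scored by the trained deep autoencoder:
    # a reconstruction error above a chosen threshold marks a predicted anomaly.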
With the development of cloud computing, more and more security problems such as the "fuzzy boundary" are exposed. To solve such problems, unsupervised anomaly detection is increasingly used in cloud security, where density estimation is commonly used in anomaly detection clustering tasks. However, in practical use, the excessive amount of data and the high dimensionality of data features can lead to difficulties in data calibration, data redundancy, and reduced effectiveness of density estimation algorithms. Although autoencoders have made fruitful progress in data dimensionality reduction, using autoencoders alone may still cause the model to be too generalized and unable to detect specific anomalies. In this paper, a new unsupervised anomaly detection method, MemAe-gmm-ma, is proposed. MemAe-gmm-ma generates a low-dimensional representation and a reconstruction error for each input sample with a deep autoencoder. It adds a memory module inside the autoencoder to better learn the inner meaning of the training samples, and finally feeds the low-dimensional information of the samples into a Gaussian mixture model (GMM) for density estimation. MemAe-gmm-ma demonstrates better performance on public benchmark datasets, with a 4.47% improvement over the MemAe model's standard F1 score on the NSL-KDD dataset, and a 9.77% improvement over the CAE-GMM model's standard F1 score on the CIC-IDS-2017 dataset.
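A compact sketch of this kind of pipeline follows (layer sizes, memory size, and the synthetic data are assumptions, not the paper's configuration): an autoencoder whose latent code is rebuilt from a learnable memory via softmax addressing is trained on normal samples, and a Gaussian mixture model fitted on the resulting low-dimensional code plus reconstruction error yields density-based anomaly scores.

    # Memory-augmented autoencoder + GMM density estimation on latent features.
    import torch
    import torch.nn as nn
    from sklearn.mixture import GaussianMixture

    D_IN, D_LAT, N_MEM = 40, 4, 16
    X = torch.rand(500, D_IN)                                  # stand-in "normal" traffic features

    enc = nn.Sequential(nn.Linear(D_IN, 32), nn.ReLU(), nn.Linear(32, D_LAT))
    dec = nn.Sequential(nn.Linear(D_LAT, 32), nn.ReLU(), nn.Linear(32, D_IN))
    memory = nn.Parameter(torch.randn(N_MEM, D_LAT))           # learnable memory slots

    def forward(x):
        z = enc(x)
        attn = torch.softmax(z @ memory.t(), dim=1)            # address the memory
        z_hat = attn @ memory                                  # latent rebuilt from memory items
        return z_hat, dec(z_hat)

    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()) + [memory], lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        _, recon = forward(X)
        loss = nn.functional.mse_loss(recon, X)
        loss.backward()
        opt.step()

    with torch.no_grad():
        z_hat, recon = forward(X)
        err = (recon - X).pow(2).mean(dim=1, keepdim=True)
        feats = torch.cat([z_hat, err], dim=1).numpy()         # low-dim code + reconstruction error
    gmm = GaussianMixture(n_components=4).fit(feats)
    anomaly_score = -gmm.score_samples(feats)                  # higher => more anomalous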
Sensors based on microelectromechanical systems (MEMS) technology are an integral part of many modern electronic devices such as wearable medical products, which are inherently subject to memory, bandwidth, and energy constraints due to their size and purpose. One of the important challenges for progress in this area is the storage, transmission, and processing of large quantities of inertial sensor signals. To address this issue, this paper presents a method for near-lossless compression of multi-axis inertial signals. To improve compression capability, the proposed method employs independent component analysis with a principal component analysis preprocessing step to extract independent components from the signals. A deep autoencoder is used to compress the independent components and later to estimate them in the reconstruction phase. The reconstruction error is also quantized, coded using arithmetic coding, and transmitted alongside the compressed components. This paper also proposes a new approach for improving the quality of the reconstructed signals: on the receiver side, the reconstruction error is fed to the Madgwick filter as an external noise term and is compensated by this filter. The experimental results demonstrate the high compression rate and low reconstruction error of the proposed method compared to state-of-the-art methods.
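A rough sketch of the compression path is shown below (all sizes and data are assumptions; arithmetic coding and the Madgwick-filter compensation on the receiver side are omitted): PCA preprocessing, ICA to extract independent components from the multi-axis signals, a small autoencoder to compress them, and quantization of the residual that would then be entropy-coded and transmitted alongside the compressed stream.

    # PCA -> ICA -> autoencoder compression of toy multi-axis inertial data.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA, FastICA

    signals = np.random.randn(1000, 6)                  # stand-in 6-axis accel/gyro samples
    pca = PCA(n_components=6, whiten=True).fit(signals)
    ica = FastICA(n_components=6, random_state=0).fit(pca.transform(signals))
    S = torch.tensor(ica.transform(pca.transform(signals)), dtype=torch.float32)

    enc = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 3))   # 6 -> 3 compression
    dec = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 6))
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(S)), S)
        loss.backward()
        opt.step()

    with torch.no_grad():
        compressed = enc(S)                              # transmitted low-dimensional stream
        residual = S - dec(compressed)
    step = 0.01
    q_residual = torch.round(residual / step)            # quantized error, to be arithmetic-coded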
Deep learning has a strong ability to extract feature representations from data, since it has a great advantage in processing nonlinear and non-stationary data and in capturing nonlinear interactive relationships. This paper proposes to apply deep learning algorithms, including a deep neural network and a deep autoencoder, to track index performance, and introduces a dynamic weight calculation method to measure the direct effects of the constituent stocks on the index. The empirical study takes historical data of the Hang Seng Index (HSI) and its constituents to analyze the effectiveness and practicability of the index tracking method. The results show that the index tracking method based on the deep neural network has a smaller tracking error and can thus effectively track the index. (C) 2018 Elsevier B.V. All rights reserved.
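One common way to realize such an autoencoder-based tracker, sketched below on toy data (the stock-selection rule and all settings are illustrative assumptions, not the paper's dynamic-weight method), is to train an autoencoder on constituent return series, treat the stocks it reconstructs best as the most representative, and fit tracking weights for that subset to the index returns by least squares.

    # Autoencoder-based stock selection for toy index tracking.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    returns = rng.standard_normal((250, 50)) * 0.01       # 250 days x 50 constituent returns
    index = returns.mean(axis=1)                          # stand-in for HSI returns

    R = torch.tensor(returns, dtype=torch.float32)
    ae = nn.Sequential(nn.Linear(50, 10), nn.ReLU(), nn.Linear(10, 50))
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(R), R)
        loss.backward()
        opt.step()

    recon_err = (ae(R) - R).pow(2).mean(dim=0).detach().numpy()
    picks = np.argsort(recon_err)[:10]                    # 10 most "communal" stocks
    w, *_ = np.linalg.lstsq(returns[:, picks], index, rcond=None)
    tracking_error = np.std(returns[:, picks] @ w - index)
    print(picks, tracking_error)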