Accurately predicting distant metastasis in head & neck cancer has the potential to improve patient survival by allowing early treatment intensification with systemic therapy for high-risk patients. By extracting and mining large numbers of quantitative features, radiomics has achieved success in predicting treatment outcomes for various diseases. However, several challenges are associated with conventional radiomic approaches, including: (1) how to optimally combine information extracted from multiple modalities; (2) how to construct models emphasizing different objectives for different clinical applications; and (3) how to utilize and fuse the output obtained by multiple classifiers. To overcome these challenges, we propose a unified model termed multifaceted radiomics (M-radiomics). In M-radiomics, a deep stacked sparse autoencoder is first utilized to fuse features extracted from different modalities into one representation feature set. A multi-objective optimization model is then introduced, in which probability-based objective functions are designed to maximize the similarity between the probability output and the true label vector. Finally, M-radiomics employs multiple base classifiers to obtain a diverse Pareto-optimal model set and then fuses the output probabilities of all the Pareto-optimal models through an evidential reasoning rule fusion (ERRF) strategy in the testing stage to obtain the final output probability. Experimental results show that M-radiomics with the stacked autoencoder outperforms the model without it, and that M-radiomics obtains more accurate results, with a better balance between sensitivity and specificity, than other single-objective or single-classifier-based models.
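As a rough illustration of the fusion step, the sketch below stacks two sigmoid encoder layers over concatenated features from two hypothetical modalities; the modality names, layer sizes, and random weights are placeholders for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """One autoencoder encoder layer: sigmoid(W x + b)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Hypothetical radiomic feature vectors from two modalities (e.g. PET and CT)
pet_features = rng.normal(size=40)
ct_features = rng.normal(size=40)

# Fuse by concatenation, then compress through two stacked encoder layers
x = np.concatenate([pet_features, ct_features])          # 80-dim joint input
W1, b1 = rng.normal(scale=0.1, size=(32, 80)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(16, 32)), np.zeros(16)

h1 = encode(x, W1, b1)       # first hidden representation
fused = encode(h1, W2, b2)   # 16-dim fused representation feature set
print(fused.shape)
```

In a trained stacked autoencoder the weights would come from layer-wise reconstruction training with a sparsity penalty; here they are random only to show the data flow.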
Abnormal driving may pose serious danger to both the driver and the public. Existing detectors of abnormal driving behavior are mainly based on shallow models, which require large quantities of labeled data. The acquisition and labeling of abnormal driving data are, however, difficult, labor-intensive and time-consuming. This situation inspires us to rethink the abnormal driving detection problem and to apply deep architecture models. In this study, we establish a novel deep-learning-based model for abnormal driving detection. A stacked sparse autoencoder model is used to learn generic driving behavior features; the model is trained in a greedy layer-wise fashion. To the best of the authors' knowledge, this is the first time a deep learning approach using autoencoders as building blocks has been applied to represent driving features for abnormal driving detection. In addition, a denoising method is added to the algorithm to increase the robustness of the feature representation, and the dropout technique is applied throughout training to avoid overfitting. Experiments carried out on our self-created driving behavior dataset demonstrate that the proposed scheme achieves superior performance for abnormal driving detection compared to the state of the art.
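The denoising and dropout ideas can be sketched in a few lines of NumPy. The layer sizes, corruption rate, and dropout rate below are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def denoising_forward(x, W, b, W_out, b_out, corrupt_p=0.3, dropout_p=0.5, train=True):
    """One denoising-autoencoder pass: corrupt the input, encode with dropout,
    then reconstruct the *clean* input."""
    x_tilde = x * (rng.random(x.shape) > corrupt_p) if train else x  # masking noise
    h = sigmoid(W @ x_tilde + b)
    if train:
        h = h * (rng.random(h.shape) > dropout_p) / (1 - dropout_p)  # inverted dropout
    return sigmoid(W_out @ h + b_out)

x = rng.random(20)                        # a placeholder driving-feature vector
W, b = rng.normal(scale=0.1, size=(10, 20)), np.zeros(10)
W_out, b_out = rng.normal(scale=0.1, size=(20, 10)), np.zeros(20)

recon = denoising_forward(x, W, b, W_out, b_out)
loss = np.mean((recon - x) ** 2)          # reconstruction target is the clean input
```

Training would minimize `loss` over many samples per layer; the key point is that the corruption is applied to the input while the loss compares against the uncorrupted vector.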
The visual cortex is able to process information in multiple pathways and integrate various forms of representations. This paper proposes a bio-inspired method that utilizes a line-segment-based representation to provide a dedicated channel for geometric feature learning. The extracted geometric information can be integrated with the original pixel-based information and implemented in both convolutional neural networks (SegCNN) and stacked autoencoders (SegSAE). Segment-based operations such as segConvolve and segPooling are designed to further process the extracted geometric features. The proposed models are verified on the MNIST, Caltech 101 and QuickDraw datasets for image classification. According to the experimental results, the proposed models can improve classification accuracy, especially when the training set is small. In particular, the method based on multiple representations is found to be effective for classifying hand-drawn sketches. (C) 2019 Elsevier B.V. All rights reserved.
Soft sensor technology has become an effective tool for real-time estimation of key quality variables in industrial rubber-mixing processes, which facilitates efficient monitoring and control of rubber manufacturing. However, developing high-performance soft sensors remains challenging due to improper feature selection/extraction and the insufficiency of labeled data. Thus, a deep semi-supervised just-in-time learning-based Gaussian process regression (DSSJITGPR) is developed for Mooney viscosity estimation. It integrates just-in-time learning, semi-supervised learning, and deep learning into a unified modeling framework. In the offline stage, the latent feature information behind the historical process data is extracted through a stacked autoencoder. Then, an evolutionary pseudo-labeling estimation approach is applied to extend the labeled modeling database, where high-confidence pseudo-labeled data are obtained by solving an explicit pseudo-labeling optimization problem. In the online stage, when a query sample arrives, a semi-supervised JITGPR model is built from the enlarged modeling database to estimate the Mooney viscosity. Compared with traditional Mooney-viscosity soft sensor methods, DSSJITGPR shows significant advantages in extracting latent features and handling label scarcity, thus delivering superior prediction performance. The effectiveness and superiority of DSSJITGPR have been verified through Mooney viscosity prediction results from an industrial rubber-mixing process.
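The high-confidence pseudo-labeling step can be illustrated with a minimal confidence-threshold filter. The `predict` and `confidence` stand-ins below are toy placeholders; the paper's evolutionary optimization of pseudo-labels is considerably more involved:

```python
import numpy as np

def select_pseudo_labeled(x_unlabeled, predict, confidence, threshold=0.9):
    """Keep only unlabeled samples whose pseudo-label confidence exceeds a threshold."""
    y_hat = predict(x_unlabeled)
    conf = confidence(x_unlabeled)
    mask = conf >= threshold
    return x_unlabeled[mask], y_hat[mask]

# Toy stand-ins for a trained regressor and its confidence score
x_u = np.linspace(0, 1, 10).reshape(-1, 1)
predict = lambda x: 2.0 * x[:, 0]                          # hypothetical viscosity predictor
confidence = lambda x: np.where(x[:, 0] < 0.5, 0.95, 0.6)  # placeholder confidence

x_sel, y_sel = select_pseudo_labeled(x_u, predict, confidence)
print(len(x_sel))  # 5 samples pass the 0.9 threshold
```

The selected `(x_sel, y_sel)` pairs would then be appended to the labeled modeling database before the online just-in-time model is built.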
Travel time data is a vital input to a number of performance measures in transportation systems. Travel time prediction is both a challenging and an interesting problem in ITS because of the hidden patterns underlying traffic and events. In this study, we propose a multi-step deep-learning-based algorithm for predicting travel time. Our algorithm starts with data pre-processing. The data is then augmented by incorporating external datasets. Extensive feature learning and engineering, such as spatiotemporal feature analysis, feature extraction, and clustering algorithms, is applied to improve the feature space. For feature representation we use a deep stacked autoencoder with a dropout layer as a regularizer. Finally, a deep multi-layer perceptron is trained to predict travel times. To evaluate predictive accuracy, we used 5-fold cross validation to test the generalization of our predictive model. We observed that the performance of the proposed algorithm is on average 4 min better than applying the deep neural network to the initial feature space, and that representation learning using stacked autoencoders makes our learner robust to overfitting. Our algorithm captures the general dynamics of the traffic; however, further work is needed for some rare events that impact travel time prediction significantly. (C) 2019 Elsevier Ltd. All rights reserved.
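The 5-fold cross validation used for testing generalization can be sketched as a simple index splitter; the sample count, seed, and fold count below are illustrative:

```python
import numpy as np

def k_fold_indices(n, k=5, seed=0):
    """Yield (train_idx, test_idx) splits for k-fold cross validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

n = 100                                   # placeholder dataset size
splits = list(k_fold_indices(n, k=5))
print(len(splits))  # 5
```

Each of the 5 iterations trains the model on 4 folds and evaluates on the held-out fold, so every sample is used for testing exactly once.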
Received signal strength (RSS) fingerprint-based indoor localization has gained increasing popularity over the past decades. However, it suffers from the high calibration effort of fingerprint collection. In this paper, a Centralized indooR localizatioN method using Pseudo-labels (CRNP) is proposed, which employs a small set of labeled data (RSS fingerprints) along with large volumes of unlabeled data (RSS values without coordinates) to reduce the workload of labeled data collection and improve indoor localization performance. However, the rich location data is large in quantity and privacy-sensitive, which may lead to high network cost (i.e., data transmission and storage cost) and potential privacy leakage when the data is transmitted to a central server. Therefore, a decentralized indoor localization method incorporating CRNP and federated learning is devised, which keeps the location data on local users' devices and improves the shared CRNP model by aggregating users' updates of the model. The experimental results demonstrate that (i) the proposed CRNP improves indoor localization accuracy by using unlabeled crowdsourced data; and (ii) the designed decentralized scheme is robust to different data distributions and is capable of reducing the network cost and preventing users' privacy leakage.
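The model aggregation at the heart of the federated scheme can be sketched with a FedAvg-style weighted average of client parameters; the client counts and parameter values below are invented for illustration:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: combine clients' model parameters, weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical users' local copies of one model parameter matrix
w1 = np.full((2, 2), 1.0)
w2 = np.full((2, 2), 2.0)
w3 = np.full((2, 2), 4.0)

global_w = federated_average([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w[0, 0])  # (1*100 + 2*100 + 4*200) / 400 = 2.75
```

Only these parameter updates travel over the network; the raw RSS location data never leaves each user's device, which is what reduces transmission cost and privacy exposure.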
Network traffic classification has become more important with the rapid growth of the Internet and online applications. Numerous studies on this topic have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep-learning-based approach that integrates both the feature extraction and classification phases into one system. Our proposed scheme, called "Deep Packet," can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification, in which end-user applications (e.g., BitTorrent and Skype) are identified. Contrary to most of the current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. The Deep Packet framework employs two deep neural network structures, namely the stacked autoencoder (SAE) and the convolutional neural network (CNN), to classify network traffic. Our experiments show that the best result is achieved when Deep Packet uses CNN as its classification model, achieving a recall of 0.98 in the application identification task and 0.94 in the traffic categorization task. To the best of our knowledge, Deep Packet outperforms all previously proposed classification methods on the UNB ISCX VPN-nonVPN dataset.
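A common way to feed raw packets into such networks (an assumption for illustration, not necessarily Deep Packet's exact preprocessing) is to truncate or zero-pad the payload to a fixed length and scale the byte values:

```python
import numpy as np

def packet_to_vector(payload: bytes, length: int = 1500) -> np.ndarray:
    """Truncate/zero-pad a packet payload to `length` bytes, scaled to [0, 1]."""
    arr = np.frombuffer(payload[:length], dtype=np.uint8).astype(np.float32) / 255.0
    return np.pad(arr, (0, length - arr.size))

# A short hypothetical payload (the first bytes of a TLS record, for example)
v = packet_to_vector(b"\x16\x03\x01\x02\x00")
print(v.shape)  # (1500,)
```

The resulting fixed-length vector can be fed to an SAE directly or reshaped for 1-D convolution; the `length=1500` default here simply mirrors a typical Ethernet MTU.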
The early detection of blade icing is gaining increasing attention due to its importance in guaranteeing wind turbine safety and operational efficiency. In this study, a wind turbine icing fault detection method based on discriminative feature learning is proposed. First, a stacked autoencoder (SAE) is trained to generate representations, utilizing a large amount of normal operating data as well as time series correlation information. Second, discriminative features are obtained by combining the original data, the SAE-extracted features, and the residual vector. Third, sparse linear discriminant analysis is performed on the discriminative features to achieve simultaneous feature selection and dimension reduction. Finally, the wind turbine operating status is examined using the learned discriminative features. The proposed discriminative feature learning-based fault detection scheme is tested on a benchmark wind turbine icing dataset. Results of the comparative trials verify the feasibility and superiority of the proposed method.
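The discriminative feature construction in the second step can be sketched as a concatenation of the original sample, its SAE representation, and the reconstruction residual; the weights below are random placeholders standing in for a trained SAE:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder encoder/decoder weights (a trained SAE would supply these)
W_enc, b_enc = rng.normal(scale=0.1, size=(8, 20)), np.zeros(8)
W_dec, b_dec = rng.normal(scale=0.1, size=(20, 8)), np.zeros(20)

x = rng.random(20)                        # one placeholder SCADA measurement vector
h = sigmoid(W_enc @ x + b_enc)            # SAE-extracted features
residual = x - (W_dec @ h + b_dec)        # reconstruction residual vector

# Discriminative feature: [original data, SAE features, residual]
feat = np.concatenate([x, h, residual])
print(feat.shape)  # (48,)
```

The residual carries the part of the signal the normal-data SAE cannot reconstruct, which is why appending it helps separate icing faults from normal operation.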
We consider the problem of index tracking, whose goal is to construct a portfolio that minimizes the tracking error between the returns of a benchmark index and those of the tracking portfolio. This problem carries significant importance in financial economics, as the tracking portfolio represents a parsimonious index that provides a practical means to trade the benchmark index. For this reason, extensive studies based on various optimization and machine-learning approaches have ensued. In this paper, we solve this problem using recent developments in deep learning. Specifically, we associate a deep latent representation of asset returns, obtained through a stacked autoencoder, with the benchmark index's return to identify the assets for inclusion in the tracking portfolio. Empirical results indicate that, to improve on previously proposed deep-learning-based index tracking, the deep latent representation needs to be learned in a strictly hierarchical manner and the relationship between the returns of the index and the assets should be quantified by statistical measures. Various deep-learning-based strategies have been tested on the S&P 500, FTSE 100 and HSI stock market indices, and our proposed methodology is shown to generate the best index tracking performance.
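Quantifying the index-asset relationship by a statistical measure can be illustrated with a correlation-based ranking on synthetic returns; the exposures (betas) and noise level below are invented for the example, and in the paper the ranking would operate on the autoencoder's latent representation rather than raw returns:

```python
import numpy as np

def select_tracking_assets(asset_returns, index_returns, k=3):
    """Rank assets by |correlation| with the index and keep the top k."""
    corrs = np.array([np.corrcoef(a, index_returns)[0, 1] for a in asset_returns.T])
    return np.argsort(-np.abs(corrs))[:k]

rng = np.random.default_rng(4)
index_r = rng.normal(size=250)                    # one year of daily index returns
betas = np.array([0.9, 0.1, 0.7, 0.05, 0.5])      # assumed index exposures
assets = betas * index_r[:, None] + 0.3 * rng.normal(size=(250, 5))

picked = select_tracking_assets(assets, index_r, k=2)
print(picked)  # indices of the two assets most correlated with the index
```

Assets with high exposure to the index dominate the ranking, so the tracking portfolio stays parsimonious while following the benchmark closely.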
Soft sensing provides reliable estimation of difficult-to-measure variables and is important for process control, optimization, and monitoring. Extracting useful information from the abundance of data available in modern industrial processes and developing data-driven soft sensors are areas of increasing interest. In addition, deep neural networks (DNNs) have become a popular data processing and feature extraction technique owing to their superiority in generating high-level abstract representations from massive amounts of data. A deep relevant representation learning (DRRL) approach based on a stacked autoencoder is proposed for the development of an efficient soft sensor. Because representations from conventional DNN methods are not extracted with output prediction in mind, a mutual information analysis is conducted between the representations and the output variable in each layer, and irrelevant representations are eliminated during the training of the subsequent layer. Hence, relevant information is highlighted in a layer-by-layer manner. Deep relevant representations are then extracted, and a soft sensor model is established. The results of a numerical example and an industrial oil refining process show that the prediction performance of the proposed DRRL-based soft sensing approach is better than that of other state-of-the-art methods. (C) 2019 Elsevier Inc. All rights reserved.
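The layer-wise relevance screening can be illustrated with a simple histogram-based mutual information estimate; the bin count and synthetic data below are illustrative, and the paper's MI estimator may differ:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram estimate of I(X;Y) in nats, usable for relevance screening."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
y = rng.normal(size=2000)                     # the output (quality) variable
relevant = y + 0.1 * rng.normal(size=2000)    # a representation tied to y
irrelevant = rng.normal(size=2000)            # a representation unrelated to y

# A relevant representation carries far more information about the output,
# so an irrelevant one can be dropped before training the next layer.
print(mutual_information(relevant, y), mutual_information(irrelevant, y))
```

Representations scoring below some threshold would be eliminated before the subsequent layer is trained, which is the layer-by-layer highlighting the abstract describes.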