ISBN (digital): 9783031337833
ISBN (print): 9783031337826; 9783031337833
Classifying road types using machine learning models is an important component of intelligent road network systems, as outputs from these models can provide useful traffic information to road users. This paper presents a new method for road-type classification based on multi-stage graph embedding. The first stage embeds high-dimensional road-segment feature vectors into a smaller, compact feature space using a deep autoencoder. The second stage uses Graph Convolutional Neural Networks to obtain an embedded vector for each road segment by aggregating information from neighbouring road segments. The proposed method outperforms the state-of-the-art Graph Convolutional Neural Network embedding method on a similar task using the same dataset.
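The two stages described above can be sketched in NumPy. This is a shape-level illustration only: the linear encoder stands in for the paper's trained deep autoencoder, and all weights, the toy adjacency matrix, and the dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy road network: 5 segments, symmetric adjacency, 16-dim raw features.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(5, 16))

# Stage 1 (stand-in): a linear map compresses 16-dim features to 4 dims.
# In the paper this is a trained deep autoencoder; W_enc is random here.
W_enc = rng.normal(size=(16, 4))
Z = X @ W_enc                            # (5, 4) compact segment features

# Stage 2: one GCN propagation step, D^{-1/2}(A+I)D^{-1/2} Z W, with ReLU.
A_hat = A + np.eye(5)
deg = A_hat.sum(axis=1)
norm = A_hat / np.sqrt(np.outer(deg, deg))
W_gcn = rng.normal(size=(4, 4))
H = np.maximum(norm @ Z @ W_gcn, 0)      # each row now mixes neighbour info

print(H.shape)  # (5, 4): one embedded vector per road segment
```

A classifier head over `H` would then predict the road type per segment.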
Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, and for behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories, based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the images resulting from functional/metabolic acquisitions (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural acquisitions (e.g. computed tomography, or T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging)) carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures that are not learned as part of model training and are consequently not optimized for the downstream prediction task. This is because every participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by deep genotypic and phenotypic characterization, posing formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which yield a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities at the voxel level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability with minimal information loss. As proof of concept, we test our architecture on the well characterized
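The roughly 20-fold parameter reduction from separable convolutions can be checked with simple counting. Assuming a depthwise-separable factorization of a 3D convolution (a per-channel k×k×k depthwise filter followed by a 1×1×1 pointwise mixing layer) and illustrative channel counts, biases omitted:

```python
def conv3d_params(cin, cout, k):
    # Standard 3D convolution: one k*k*k kernel per (in, out) channel pair.
    return cin * cout * k ** 3

def separable_conv3d_params(cin, cout, k):
    # Depthwise k*k*k filter per input channel, then 1x1x1 pointwise mixing.
    return cin * k ** 3 + cin * cout

cin, cout, k = 64, 64, 3            # hypothetical block configuration
full = conv3d_params(cin, cout, k)          # 110592 parameters
sep = separable_conv3d_params(cin, cout, k)  # 1728 + 4096 = 5824 parameters
print(round(full / sep))  # ≈ 19x fewer parameters
```

For these (assumed) channel counts the ratio is about 19x, consistent with the 20-fold figure quoted in the abstract; the exact factor depends on the channel widths and kernel size used in the actual architecture.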
Deep clustering has achieved great success owing to its powerful ability to learn effective representations. Graph network clustering in particular has attracted increasing attention. Considering the success of the graph autoencoder (GAE) in encoding graph structure and of the deep autoencoder (DAE) in extracting valuable representations from the data itself, in this paper we construct an Adversarially regularized Joint Structured Clustering Network (AJSCN) that integrates GAE and DAE. The framework links the two by transferring the representation learned by the DAE to the corresponding layer of the GAE, alleviating the over-smoothing problem. Furthermore, the latent representation learned by the GAE is enforced to match a prior distribution via an adversarial training scheme, so the latent space does not remain free of any structure. We design a joint supervision mechanism, consisting of self-supervision and mutual supervision, to improve clustering performance: self-supervision learns more compact representations, and mutual supervision makes different representations more consistent. Experimental results demonstrate the superiority of the proposed model over state-of-the-art algorithms, with significant improvements on six benchmark datasets. (c) 2022 Elsevier Inc. All rights reserved.
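The layer-wise transfer from DAE to GAE can be sketched as a convex mix of the two layers' outputs before the next graph propagation step. Everything below is an illustrative stand-in: the "learned" layer outputs are random projections, the mixing weight `eps` is hypothetical, and the real model trains all of these jointly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 8                                  # toy graph: 6 nodes, 8-dim features

A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops
X = rng.normal(size=(n, d))

A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
norm = A_hat / np.sqrt(np.outer(deg, deg))   # normalized adjacency

H_dae = np.tanh(X @ rng.normal(size=(d, d)))         # stand-in DAE layer output
H_gae = np.tanh(norm @ X @ rng.normal(size=(d, d)))  # stand-in GAE layer output

eps = 0.5  # mixing weight (hypothetical value)
# Fuse the DAE representation into the GAE before the next propagation:
H_next = norm @ ((1 - eps) * H_gae + eps * H_dae) @ rng.normal(size=(d, d))

print(H_next.shape)  # (6, 8): fused representation for the next GAE layer
```

Injecting the DAE features at each layer keeps node representations from collapsing toward their neighbourhood averages, which is the over-smoothing effect the abstract refers to.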
This article proposes a novel practical energy harvesting (EH) model-assisted deep learning framework for intelligent channel tracking. Specifically, a multiantenna wireless system is considered for energy beamforming in a nonlinear model-based EH scenario. A deep autoencoder is used to learn the channel characteristics, given the nonconvexity of the channel-estimation optimization problem. The performance evaluation is validated in low signal-to-noise-ratio regimes, providing key optimal design insights. Numerical results show an overall performance enhancement compared with existing benchmarks.
ISBN (print): 9781450399616
Although many unsupervised anomaly detection algorithms with outstanding performance have been proposed in the last few decades, their performance on high-dimensional data is not guaranteed. Therefore, this paper proposes a new framework called deep autoencoding mixture of probabilistic principal component analyzers (DA-MPPCA). The framework uses a deep autoencoder (DAE) as a compression network to prepare low-dimensional representations for a subsequent estimation network, NN-MPPCA. Unlike the conventional MPPCA trained by the EM algorithm, NN-MPPCA is a neural-network form of MPPCA whose parameters can be updated via back-propagation. By jointly training the DAE and NN-MPPCA in an end-to-end manner with a defined loss function, we fold both the dimensionality-reduction and density-estimation tasks into a unified framework. Experimental results on a variety of public datasets demonstrate the superior performance of DA-MPPCA over both shallow and deep baseline models, with an improvement in F1-score of up to 7% over the best baseline.
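The joint objective can be sketched as a reconstruction term plus a latent-density term. This is a simplified stand-in: a spherical Gaussian mixture replaces the MPPCA density, the codes and reconstructions are random rather than produced by a trained DAE, and the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.normal(size=(10, 20))                 # toy inputs
x_rec = x + 0.1 * rng.normal(size=(10, 20))   # pretend DAE reconstruction
z = rng.normal(size=(10, 3))                  # pretend bottleneck codes

# Stand-in density: K spherical Gaussians over the 3-dim latent space.
K = 2
mu = rng.normal(size=(K, 3))
pi = np.full(K, 1.0 / K)
var = 1.0

def log_density(z):
    # log sum_k pi_k N(z | mu_k, var * I), via the log-sum-exp trick
    d2 = ((z[:, None, :] - mu[None]) ** 2).sum(-1)               # (n, K)
    logp = np.log(pi)[None] - 0.5 * (d2 / var + 3 * np.log(2 * np.pi * var))
    m = logp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logp - m).sum(axis=1, keepdims=True))).ravel()

lam = 0.1                                      # hypothetical trade-off weight
recon = ((x - x_rec) ** 2).mean()              # compression-network term
nll = -log_density(z).mean()                   # estimation-network term
loss = recon + lam * nll                       # joint end-to-end objective
print(loss > 0)
```

Back-propagating through both terms simultaneously is what lets the learned low-dimensional space stay friendly to the density estimator, rather than being fixed by a separately trained compressor.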
ISBN (print): 9798350311617; 9798350311624
Due to the nature of Wireless Sensor Networks (WSN), several factors can interfere with sampling and communication. These faults may compromise data quality and disrupt timing and power requirements through re-sampling and re-transmission. Recent works in the literature propose data imputation and confidence attribution mechanisms as alternatives for overcoming missing or bad-quality data. Nevertheless, these solutions often lack mechanisms to evaluate the quality of data imputation on the fly. This work combines a confidence attribution mechanism previously proposed by the authors with a deep autoencoder (DAE) to provide an effective data imputation mechanism able to handle transient faults in WSNs. We rely on the ability of deep autoencoders to learn data correlation in order to attribute confidence to data based on the loss of information in the encoding-decoding process. The fine-tuning of the confidence attribution parameters considers the discrepancy between the original confidence attribution method and the loss of information calculated for original and predicted data. Finally, through a case study, we demonstrate that the DAE-based confidence attribution matches the confidence attribution that relies on comparing original and predicted data in more than 86% of cases, without requiring the original data.
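The core idea of mapping reconstruction loss to a confidence score can be sketched as follows. The "reconstructions" here are synthetic (noise added to correlated toy sensor channels stands in for a trained DAE's output), and the error-to-confidence mapping and its `scale` parameter are assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy WSN data: 4 correlated sensor channels, 100 samples.
t = np.linspace(0, 10, 100)
X = np.stack([np.sin(t), np.sin(t + 0.1), np.cos(t), np.cos(t + 0.1)], axis=1)
X += 0.01 * rng.normal(size=X.shape)

# Stand-ins for DAE output: a good reconstruction vs a faulty-sensor one.
X_hat_good = X + 0.02 * rng.normal(size=X.shape)
X_hat_bad = X + 1.0 * rng.normal(size=X.shape)

def confidence(x, x_hat, scale=0.1):
    # Map per-sample reconstruction error into (0, 1]:
    # small information loss in encode-decode -> high confidence.
    err = np.abs(x - x_hat).mean(axis=1)
    return np.exp(-err / scale)

c_good = confidence(X, X_hat_good).mean()
c_bad = confidence(X, X_hat_bad).mean()
print(c_good > c_bad)  # healthy data earns higher confidence
```

Because the score depends only on the DAE's input and output, it can be computed on the fly without access to ground-truth readings, which is the property the abstract highlights.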
Storm surge and waves are responsible for a substantial portion of the damage induced by tropical and extratropical cyclones in coastal areas of the USA and Canada. High-fidelity numerical models can provide accurate simulations of the water elevation, in which a hydrodynamic model (e.g., ADCIRC) is coupled with a wave model (e.g., SWAN). However, they are computationally expensive, and hence cannot be employed as part of an early warning system for urban flooding hazards or implemented in probabilistic risk assessment for tropical and extratropical cyclones. In this study, an alternative and efficient approach is proposed based on hybrid machine learning. First, a dimensionality-reduction technique based on a deep autoencoder is developed to encode the spatial information in a reduced state space. Then, a machine learning model is developed in the latent space to predict the maximum surge and significant wave height, and the latent space is decompressed back to the original high-dimensional space using the decoder. The high-fidelity data are retrieved from the North Atlantic Comprehensive Coastal Study (NACCS), released by the US Army Corps of Engineers. Due to its high efficiency and accuracy, the proposed methodology can be employed to analyze the impact of input uncertainties on the simulation results. Four machine learning algorithms are used to predict the maximum surge and significant wave height: artificial neural network (ANN), support vector regression (SVR), gradient boosting regression (GBR), and random forest regression (RFR). The coupled autoencoder-ANN model for the prediction of the storm surge (significant wave height) outperformed all other algorithms, with a coefficient of determination R² of 0.953 (0.921) for the testing set. In addition, the comparison between the deep autoencoder and the widely used principal component analysis (PCA) technique indicated the superior performance of the former, since it is able to accurately capture …
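The encode, regress-in-latent-space, decode pipeline can be illustrated with a linear toy. An SVD stands in for the deep autoencoder and least squares stands in for the ANN; the storm parameters, grid size, and the linear storm-to-field map are all synthetic, chosen so the pipeline recovers the field exactly.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dataset: 200 storms x 500 grid points, driven by 3 storm parameters
# (e.g. intensity, track offset, forward speed) through a linear map.
params = rng.normal(size=(200, 3))
basis = rng.normal(size=(3, 500))
fields = params @ basis                     # synthetic "high-fidelity" surge fields

# Encoder/decoder via truncated SVD (linear stand-in for the deep autoencoder).
U, s, Vt = np.linalg.svd(fields, full_matrices=False)
k = 3
encode = lambda F: F @ Vt[:k].T             # field  -> k-dim latent code
decode = lambda Z: Z @ Vt[:k]               # latent -> full field

# Latent-space regressor (least squares stands in for the ANN surrogate).
Z = encode(fields)
W, *_ = np.linalg.lstsq(params, Z, rcond=None)

pred = decode(params @ W)                   # storm params -> latent -> field
err = np.abs(pred - fields).max()
print(err < 1e-6)  # exact here because the toy map is linear
```

The real problem is nonlinear, which is why the study pairs a deep (nonlinear) autoencoder with an ANN regressor and finds it beats the linear PCA baseline.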
Deep learning-based solutions for computer vision have made life easier for humans. Video data contain a lot of hidden information and patterns that can be used for Human Action Recognition (HAR). HAR applies to many areas, such as behavior analysis, intelligent video surveillance, and robotic vision. Occlusion, viewpoint variation, and illumination are among the issues that make HAR more difficult. Some action classes contain similar or partially overlapping movements; this, among other problems, contributes most to misclassification. Traditional hand-engineered and machine learning-based solutions lack the ability to handle overlapping actions. In this paper, we propose a deep learning-based spatiotemporal HAR framework for overlapping human actions in long videos. Transfer learning techniques are used for deep feature extraction: fine-tuned pre-trained CNN models learn the spatial relationships at the frame level. An optimized deep autoencoder squeezes the high-dimensional deep features, and an RNN with LSTM units learns the long-term temporal relationships. An iterative module added at the end fine-tunes the trained model on new videos, learning and adapting to changes. Our proposed framework achieves state-of-the-art performance in spatiotemporal HAR for overlapping human actions in long visual data streams in non-stationary surveillance environments.
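The squeeze-then-sequence part of the pipeline can be sketched in NumPy: per-frame CNN features are compressed by an encoder, then a recurrent pass summarizes the sequence. All weights are random stand-ins for the trained autoencoder and LSTM, and the dimensions are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
T, D, d, h = 12, 256, 32, 16   # frames, CNN feature dim, squeezed dim, LSTM units

feats = rng.normal(size=(T, D))                 # per-frame deep features (stand-in)
W_enc = rng.normal(size=(D, d)) / np.sqrt(D)
z = np.tanh(feats @ W_enc)                      # DAE-style squeeze (untrained)

# Minimal LSTM forward pass over the squeezed frame sequence.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

Wx = rng.normal(size=(d, 4 * h)) / np.sqrt(d)
Wh = rng.normal(size=(h, 4 * h)) / np.sqrt(h)
b = np.zeros(4 * h)
hs, cs = np.zeros(h), np.zeros(h)
for x_t in z:
    g = x_t @ Wx + hs @ Wh + b
    i = sigmoid(g[:h])                 # input gate
    f = sigmoid(g[h:2 * h])            # forget gate
    o = sigmoid(g[2 * h:3 * h])        # output gate
    c_hat = np.tanh(g[3 * h:])         # candidate cell state
    cs = f * cs + i * c_hat
    hs = o * np.tanh(cs)

print(hs.shape)  # final temporal summary fed to the action classifier head
```

Compressing 256-dim frame features to 32 dims before the recurrence keeps the LSTM small, which matters for long surveillance streams.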
ISBN (print): 9798350308266; 9798350308259
The rapid proliferation of the Internet of Things (IoT) has given rise to security challenges, necessitating the real-time detection and mitigation of cyberattacks. Federated Learning (FL) is promising because it enables collaborative global model training for attack detection in IoT while maintaining data privacy. However, FL-enabled systems for IoT face challenges related to class imbalance, inconsistent data distributions, and increased training time. This paper presents a novel FL-enabled approach for IoT attack detection that effectively resolves class imbalance by utilizing Generative Adversarial Networks (GANs). The proposed approach also handles non-iid data distributions through Vertical FL, and ordinal encoding techniques are employed to enhance detection accuracy. The effectiveness of the proposed approach is extensively evaluated on the ToN-IoT and N-BaIoT datasets: both deep learning (DL) models, DAEs and MLPs, consistently achieve high precision, recall, and F1 scores on ToN-IoT (MLPs: 99.39%, DAEs: 99.62%) and N-BaIoT (DAEs: 93.66%, MLPs: 94.63%), demonstrating the approach's capability to improve attack-detection accuracy and mitigate the limitations of FL-enabled systems in IoT scenarios.
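Ordinal encoding, mentioned above as a preprocessing step, simply maps each categorical value to an integer index, keeping feature vectors compact compared with one-hot expansion. A minimal sketch (the `protocols` feature and category values are illustrative, not from the datasets):

```python
def fit_ordinal(values):
    # Stable, sorted mapping so encodings are reproducible across FL clients.
    return {v: i for i, v in enumerate(sorted(set(values)))}

protocols = ["tcp", "udp", "icmp", "tcp", "udp", "tcp"]
mapping = fit_ordinal(protocols)
encoded = [mapping[v] for v in protocols]

print(mapping)   # {'icmp': 0, 'tcp': 1, 'udp': 2}
print(encoded)   # [1, 2, 0, 1, 2, 1]
```

In a federated setting the mapping must be agreed upon globally (e.g. derived from a shared schema), since clients encoding the same category differently would corrupt the aggregated model.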
An a priori knowledge-incorporating method based on time-resolved fluorescence was successfully developed for the determination of polycyclic aromatic hydrocarbons in edible vegetable oils. Specifically, the fluorescence decay functions of polycyclic aromatic hydrocarbons at characteristic emission wavelengths were used as a priori models and incorporated into the deep autoencoder. The resulting a priori model-incorporating deep-autoencoder models were shown to be effective for the determination of polycyclic aromatic hydrocarbons in edible vegetable oils, achieving root-mean-square errors of prediction below 2%. The influence of the analyte, the matrix, and the proportion of the a priori model was characterized; increasing that proportion appropriately benefited model performance, and 16% was shown to be the best incorporation proportion.
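The a priori model here is the characteristic fluorescence decay itself, typically mono-exponential, I(t) = A·exp(-t/τ). The sketch below fits such a decay to synthetic data by log-linear least squares; the lifetime, amplitude, time grid, and noise level are all invented for illustration and are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic time-resolved fluorescence decay: I(t) = A * exp(-t / tau).
tau_true, A_true = 8.0, 100.0              # ns, arbitrary units (hypothetical)
t = np.linspace(0.5, 40, 80)
I = A_true * np.exp(-t / tau_true) * (1 + 0.005 * rng.normal(size=t.size))

# Log-linear least squares recovers the decay constant used as the prior model:
# log I(t) = log A - t / tau, so the slope of a line fit gives -1 / tau.
slope, intercept = np.polyfit(t, np.log(I), 1)
tau_fit = -1.0 / slope

rel_err = abs(tau_fit - tau_true) / tau_true
print(rel_err < 0.02)  # well within the ~2% error regime the abstract reports
```

A decay constant recovered this way (or taken from reference measurements) can then constrain the autoencoder's latent representation toward physically plausible decays, which is the spirit of the prior-incorporation described above.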