When burdened with difficulties in meeting daily demands, many people today experience an emotional pressure known as stress. Stress that lasts for a short duration can be beneficial for mental health. However, stress that persists over a long period may lead to serious health problems in individuals, such as high blood pressure, cardiovascular disease, and stroke. Long-term stress, if left undetected and untreated, may also result in personality disorders, depression, and anxiety. Early detection of stress has therefore become important for preventing the health issues that arise from it. Detecting stress from brain signals, which capture human emotion, leads to accurate detection outcomes: EEG-based detection systems can identify diseases, disabilities, and disorders from brain waves. Sentiment Analysis (SA) is helpful in identifying emotions and mental stress in the human brain, so a system that accurately and precisely detects depression in humans from their emotions through SA is highly necessary. This paper develops a reliable and precise Emotion and Stress Recognition (ESR) system for detecting depression in real time using deep learning techniques aided by Electroencephalography (EEG) signal-based SA. The data needed for stress and emotion detection are first gathered from benchmark databases. Next, pre-processing procedures, such as artifact removal, are applied to the gathered EEG signals. Spectral attributes are then extracted from the pre-processed signals and are considered the first set of features. Then, with the aid of a Conditional variational autoencoder (CVA), deep features are extracted from the pre-processed signals, forming a second set of features. The weights are opti...
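As a rough illustration of the first feature set described above, the sketch below computes band-power spectral attributes from a single pre-processed EEG channel. The 128 Hz sampling rate, the band boundaries, and the Welch estimator are assumptions for illustration; the abstract does not specify them.

```python
# Minimal sketch: spectral band-power features from one pre-processed EEG channel.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # assumed band limits (Hz)

def band_powers(signal, fs=128):
    """Return integrated power in each EEG band (the 'spectral attributes')."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = float(np.trapz(psd[mask], freqs[mask]))
    return feats

# Example on a synthetic 10 s segment at 128 Hz.
rng = np.random.default_rng(0)
print(band_powers(rng.standard_normal(1280)))
```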
Marginal maximum likelihood (MML) estimation is the preferred approach to fitting item response theory models in psychometrics due to the MML estimator's consistency, normality, and efficiency as the sample size tends to infinity. However, state-of-the-art MML estimation procedures such as the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm as well as approximate MML estimation procedures such as variational inference (VI) are computationally time-consuming when the sample size and the number of latent factors are very large. In this work, we investigate a deep learning-based VI algorithm for exploratory item factor analysis (IFA) that is computationally fast even in large data sets with many latent factors. The proposed approach applies a deep artificial neural network model called an importance-weighted autoencoder (IWAE) for exploratory IFA. The IWAE approximates the MML estimator using an importance sampling technique wherein increasing the number of importance-weighted (IW) samples drawn during fitting improves the approximation, typically at the cost of decreased computational efficiency. We provide a real data application that recovers results aligning with psychological theory across random starts. Via simulation studies, we show that the IWAE yields more accurate estimates as either the sample size or the number of IW samples increases (although factor correlation and intercept estimates exhibit some bias) and obtains similar results to MH-RM in less time. Our simulations also suggest that the proposed approach performs similarly to and is potentially faster than constrained joint maximum likelihood estimation, a fast procedure that is consistent when the sample size and the number of items simultaneously tend to infinity.
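For readers unfamiliar with the importance-weighted bound the IWAE maximizes, the sketch below computes it in PyTorch for binary item responses. The toy linear encoder/decoder, the Bernoulli item likelihood, and the choice of K=5 IW samples are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the importance-weighted (IWAE) bound for binary items.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, n_factors, n_iw = 20, 3, 5          # toy sizes (assumptions)
enc = nn.Linear(n_items, 2 * n_factors)      # amortized variational parameters
dec = nn.Linear(n_factors, n_items)          # toy decoder producing item logits

def iwae_bound(x):
    """log (1/K) * sum_k p(x, z_k) / q(z_k | x), averaged over respondents."""
    mu, log_sigma = enc(x).chunk(2, dim=-1)
    eps = torch.randn(n_iw, *mu.shape)                    # K samples per respondent
    z = mu + eps * log_sigma.exp()
    log_q = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(z).sum(-1)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    log_lik = -F.binary_cross_entropy_with_logits(
        dec(z), x.expand(n_iw, *x.shape), reduction="none").sum(-1)
    log_w = log_lik + log_prior - log_q                   # log importance weights
    return (torch.logsumexp(log_w, 0) - torch.log(torch.tensor(float(n_iw)))).mean()

responses = torch.bernoulli(torch.full((8, n_items), 0.5))  # toy 0/1 responses
print(iwae_bound(responses))
```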
This paper introduces the Wasserstein Adversarially Regularized Graph autoencoder (WARGA), an implicit generative algorithm that directly regularizes the latent distribution of node embeddings toward a target distribution via the Wasserstein metric. To ensure Lipschitz continuity, we propose two approaches: WARGA-WC, which uses weight clipping, and WARGA-GP, which uses a gradient penalty. The proposed models have been validated on link prediction and node clustering on real-world graphs, with visualizations of node embeddings, in which WARGA generally outperforms other state-of-the-art models based on Kullback-Leibler (KL) divergence and the typical adversarial framework. (c) 2023 Elsevier B.V. All rights reserved.
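A minimal sketch of the gradient-penalty term that the WARGA-GP variant could use to keep its critic approximately 1-Lipschitz is given below. It follows the standard WGAN-GP formulation rather than the paper's exact training loop; the critic architecture and the penalty weight are assumptions.

```python
# Minimal WGAN-GP style gradient penalty on interpolates between prior samples
# (the target latent distribution) and node embeddings.
import torch

def gradient_penalty(critic, target_z, embed_z, lam=10.0):
    alpha = torch.rand(target_z.size(0), 1)
    inter = (alpha * target_z + (1 - alpha) * embed_z).requires_grad_(True)
    grads = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

critic = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 1))
print(gradient_penalty(critic, torch.randn(64, 16), torch.randn(64, 16)))
```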
Domain generalization aims at generalizing a network trained on multiple domains to unknown but related domains. Under the assumption that different domains share the same classes, previous works can build relationships across domains. However, in realistic scenarios, a change of domain is often accompanied by a change of categories, which makes it difficult to collect sufficient aligned categories across domains. Bearing this in mind, this article introduces union domain generalization (UDG) as a new domain generalization scenario, in which the label space varies across domains and the categories in unknown domains belong to the union of all given domain categories. The absence of categories in the given domains is the main obstacle to aligning different domain distributions and obtaining domain-invariant information. To address this problem, we propose category-stitch learning (CSL), which jointly learns domain-invariant information and completes the missing categories in all domains through an improved variational autoencoder and generators. Domain-invariant information extraction and sample generation cross-promote each other, leading to better generalizability. Additionally, we decouple category and domain information and explicitly regularize the semantic information via the classification loss with transferred samples. Our method can thus break through the category limit and generate samples of missing categories in each domain. Extensive experiments and visualizations on the MNIST, VLCS, PACS, Office-Home, and DomainNet datasets demonstrate the effectiveness of the proposed method.
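The sketch below illustrates the general idea of conditioning a decoder on decoupled class and domain codes so that a category missing from one domain can be generated in it. The layer sizes and the name StitchDecoder are illustrative assumptions, not the CSL architecture itself.

```python
# Minimal sketch: decode a domain-invariant latent plus explicit class/domain codes.
import torch
import torch.nn as nn

class StitchDecoder(nn.Module):
    def __init__(self, z_dim=32, n_classes=10, n_domains=3, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes + n_domains, 256), nn.ReLU(),
            nn.Linear(256, out_dim))

    def forward(self, z, class_onehot, domain_onehot):
        # A class observed only in domain A can be rendered in domain B by
        # swapping the domain code while keeping the class code fixed.
        return self.net(torch.cat([z, class_onehot, domain_onehot], dim=1))

dec = StitchDecoder()
z = torch.randn(4, 32)
cls = torch.eye(10)[torch.tensor([7, 7, 2, 2])]   # same classes...
dom = torch.eye(3)[torch.tensor([0, 1, 0, 1])]    # ...in different domains
print(dec(z, cls, dom).shape)                     # torch.Size([4, 784])
```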
The design of optimal microstructures requires, first, the identification of microstructural features that influence the material's properties and, then, a search for a combination of these features that gives rise to the desired properties. For microstructures with complex morphologies, where the number of features is large, deriving these structure-property relationships is a challenging task. To address this challenge, we propose a generative machine learning model that automatically identifies low-dimensional descriptors of microstructural features that can be used to establish structure-property relationships. Based on this model, we present an integrated, data-driven framework for microstructure characterization, reconstruction, and design that is applicable to heterogeneous materials with polycrystalline microstructures. The proposed method is evaluated in a case study on designing dual-phase steel microstructures created with the multi-level Voronoi tessellation method. To this end, we train a variational autoencoder to identify the descriptors from these synthetic dual-phase steel microstructures. Subsequently, we employ Bayesian optimization to search for the optimal combination of the descriptors and generate microstructures with a specific yield stress and low susceptibility to damage initiation. The presented results show how microstructure descriptors determined by the variational autoencoder model act as design variables for an optimization algorithm that identifies microstructures with desired properties.
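A minimal sketch of the latent-space search step is shown below using scikit-optimize's gp_minimize. The 8-dimensional latent space, the 800 MPa target yield stress, and the placeholder decoder and property predictor are all assumptions standing in for the trained VAE and the simulation pipeline.

```python
# Minimal sketch: Bayesian optimization over VAE latent descriptors.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

LATENT_DIM = 8
space = [Real(-3.0, 3.0, name=f"z{i}") for i in range(LATENT_DIM)]

def decode_microstructure(z):
    # Placeholder for the trained VAE decoder (not available from the abstract).
    return np.asarray(z)

def predict_properties(micro):
    # Placeholder surrogate standing in for the micromechanical simulation:
    # returns a toy (yield stress, damage susceptibility) pair.
    return 700.0 + 25.0 * float(micro.sum()), float(np.mean(micro ** 2))

def objective(z):
    yield_stress, damage_risk = predict_properties(decode_microstructure(z))
    # Penalize deviation from the target strength plus any damage susceptibility.
    return abs(yield_stress - 800.0) + 10.0 * damage_risk

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best latent descriptor:", np.round(result.x, 2))
```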
The conventional trust model employed in satellite network secure routing algorithms exhibits limited accuracy in detecting malicious nodes and lacks adaptability when confronted with unknown attacks. To address this challenge, this paper introduces a secure satellite network routing technology founded on deep learning and trust management. The approach embraces the concept of distributed trust management, so every satellite node is equipped with trust management and anomaly detection modules for assessing the security of neighboring nodes. In more detail, the technology first preprocesses the communication behavior of satellite network nodes using Dempster-Shafer (D-S) evidence theory, effectively mitigating interference factors encountered during the training of the variational autoencoder (VAE) module. Following this preprocessing step, the processed trust vector is fed into the VAE module. Once the VAE module's training is completed, the satellite network can assess security factors by employing the safety module while collecting trust evidence. Ultimately, these security factors are integrated with the pheromone component of the ant colony algorithm to guide the ants in discovering pathways. Simulation results substantiate that the proposed secure satellite network routing algorithm effectively counters the impact of malicious nodes on data transmission within the network. Compared with secure satellite network routing algorithms based on the traditional trust management model, the algorithm demonstrates improvements in average end-to-end delay, packet loss rate, and throughput.
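The following sketch shows one plausible way to fold a VAE-derived security factor into an ant-colony transition rule, as the abstract describes. The weighting exponents and the example trust scores are assumptions, not values from the paper.

```python
# Minimal sketch: ant-colony next-hop probabilities weighted by a security factor.
import numpy as np

def transition_probs(pheromone, heuristic, security, alpha=1.0, beta=2.0, gamma=2.0):
    """Probability of an ant choosing each neighbor, down-weighting low-trust nodes."""
    desirability = (pheromone ** alpha) * (heuristic ** beta) * (security ** gamma)
    return desirability / desirability.sum()

# Example: three neighbor satellites, the second flagged as low-trust by the VAE.
print(transition_probs(np.array([1.0, 1.2, 0.9]),
                       np.array([0.5, 0.6, 0.4]),
                       np.array([0.9, 0.2, 0.8])))
```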
UAV aerial survey technology has been widely used in agricultural production, but during aerial survey missions, signal interference, environmental changes, and other factors cause sensors to produce missing flight data. To accurately complete such time-series data, this paper proposes a completion model based on VAE-CGAN optimization, which combines a conditional generative adversarial network (CGAN) with a variational autoencoder (VAE), incorporates a QRNN as a regressor for the VAE-CGAN, adds ProbSparse Self-attention (PSA) to reduce computational complexity, and uses a new discriminator structure. Comparative experiments on real aerial survey project datasets show that the model is universally applicable to data in which different parameters' time series are missing, and that it outperforms other comparative models in terms of sample generation capability and prediction results at different missing rates.
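A minimal sketch of the imputation step is given below, assuming a simple GRU-based conditional generator in place of the paper's QRNN regressor and ProbSparse Self-attention; only the missing positions of the sequence are replaced by generated values.

```python
# Minimal sketch: fill only the missing entries of a flight-data sequence.
import torch
import torch.nn as nn

class Imputer(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x, mask):
        # Condition on the observed values and the missingness pattern.
        h, _ = self.rnn(torch.cat([x * mask, mask], dim=-1))
        generated = self.out(h)
        # Keep observed entries, fill only the gaps.
        return mask * x + (1 - mask) * generated

x = torch.randn(2, 50, 4)                    # batch of 50-step, 4-parameter sequences
mask = (torch.rand_like(x) > 0.2).float()    # ~20% simulated missingness
print(Imputer()(x, mask).shape)
```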
With the emergence of new Internet services and the drastic increase in Internet traffic, traffic classification has become increasingly important for effectively satisfying users' quality-of-service requirements. A traffic classification system should be resilient, operate smoothly regardless of network conditions or performance, and be capable of handling various classes of Internet services. This paper proposes a traffic classification method for a software-defined network environment that employs a variational autoencoder (VAE) to accomplish this. The proposed method trains the VAE using six statistical features and extracts the distributions of latent features for the flows in each service class. It then classifies query traffic by comparing the distributions of latent features of the query traffic with the learned distributions of the service classes. For the experiments, the statistical features of network flows were collected from real-world domestic and overseas Internet services for training and testing. According to the experimental results, the proposed method achieves an average accuracy of 89%, which is 52%, 47%, 39%, 59%, and 26% higher than conventional statistics-based classification methods, MLP, AE+MLP, VAE+MLP, and SVM, respectively. This result clearly suggests that probability distributions of latent features, rather than specific values of latent features, can serve as more stable features.
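To make the "compare distributions, not point values" idea concrete, the sketch below scores a query flow against each service class using the closed-form KL divergence between diagonal Gaussians. The class statistics and the two-dimensional latent space are toy assumptions, and the paper may use a different comparison measure.

```python
# Minimal sketch: classify a flow by its latent distribution, not a point estimate.
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal covariances."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1)

def classify(query_mu, query_var, class_stats):
    return min(class_stats, key=lambda c: kl_diag_gauss(query_mu, query_var, *class_stats[c]))

class_stats = {"video": (np.array([1.0, 0.2]), np.array([0.3, 0.1])),
               "web":   (np.array([-0.5, 0.8]), np.array([0.2, 0.4]))}
print(classify(np.array([0.9, 0.1]), np.array([0.25, 0.1]), class_stats))
```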
Spectral unmixing is one of the prime topics in hyperspectral image analysis, as images often contain multiple sources of spectra. Spectral variability is one of the key factors affecting unmixing accuracy, since spectral signatures are affected by variations in environmental conditions. These and other factors interfere with the accurate discrimination of source types. Several spectral mixing models have been proposed for hyperspectral unmixing to address the spectral variability problem, but these models usually offer insufficient interpretation of spectral variability, and the unmixing algorithms corresponding to them are usually classic techniques, whereas hyperspectral unmixing algorithms based on deep learning have outperformed classic algorithms. In this paper, building on the typical extended linear mixing model and the perturbed linear mixing model, a scaled and perturbed linear mixing model is constructed, and a spectral unmixing network based on this model is built from fully connected neural networks and variational autoencoders to update the abundances, scales, and perturbations involved in the variable endmembers. Adding spatial smoothness constraints to the scales and regularization constraints to the perturbations improves the robustness of the model, and adding sparseness constraints to the abundances prevents overfitting. The proposed approach is evaluated on both synthetic and real data sets. Experimental results show the superior performance of the proposed method against other competitors.
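The reconstruction that such a scaled and perturbed linear mixing model decodes can be written compactly as shown below; the array shapes and example values are assumptions used only to illustrate how per-endmember scales and additive perturbations modify the endmembers before mixing by the abundances.

```python
# Minimal sketch of a scaled-and-perturbed linear mixing reconstruction.
import numpy as np

def reconstruct_pixel(E, a, s, dE):
    """E: (bands, p) endmembers, a: (p,) abundances (sum to one),
    s: (p,) per-endmember scales, dE: (bands, p) spectral perturbations."""
    return (E * s + dE) @ a

bands, p = 200, 3
rng = np.random.default_rng(1)
E = rng.uniform(0, 1, (bands, p))
a = np.array([0.6, 0.3, 0.1])                 # sparse, sum-to-one abundances
s = np.array([1.05, 0.95, 1.0])               # illumination-like scaling
dE = 0.01 * rng.standard_normal((bands, p))   # small spectral perturbation
print(reconstruct_pixel(E, a, s, dE).shape)
```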
The labyrinth of the inner ear is an important auditory and balance sensory organ and is closely related to tinnitus, hearing loss, vertigo, and Meniere's disease. Quantitative description and measurement of the labyrinth is a challenging task in both clinical practice and medical research. A data-driven labyrinth morphological modeling method is proposed for extracting simple, low-dimensional representations or feature vectors that quantify the morphology of normal and abnormal labyrinths. Firstly, a two-stage pose alignment strategy is introduced to align the segmented inner ear labyrinths. Then, an energy-adaptive spatial and inter-slice dimensionality reduction strategy is adopted to extract compact morphological features via a variational autoencoder (VAE). Finally, a statistical model of the compact features in the latent space is established to represent the morphology distribution of the labyrinths. As one application of our model, a reference-free quality evaluation for labyrinth segmentation is explored. The experimental results show that the consistency between the proposed method and the Dice similarity coefficient (DSC) reaches 0.78. Further analysis shows that the model also has high potential for morphological analysis of the labyrinths, such as anomaly detection.
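As a rough sketch of the latent-space statistical model and the reference-free quality score, the code below fits a multivariate Gaussian to VAE codes of reference labyrinths and scores a new code by its log-likelihood. The 16-dimensional code and the synthetic reference set are assumptions for illustration.

```python
# Minimal sketch: Gaussian model of latent codes and a reference-free quality score.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
reference_codes = rng.standard_normal((200, 16))   # stand-in VAE codes of normal labyrinths

mu = reference_codes.mean(axis=0)
cov = np.cov(reference_codes, rowvar=False)
model = multivariate_normal(mean=mu, cov=cov, allow_singular=True)

def quality_score(code):
    """Higher log-likelihood => closer to the normal morphology distribution."""
    return model.logpdf(code)

print(quality_score(rng.standard_normal(16)))
```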