BACKGROUND Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a rare genetic heart disease associated with life-threatening ventricular arrhythmias. Diagnosis of ARVC is based on the 2010 Task Force Criteria (TFC), application of which often requires clinical expertise at specialized centers. OBJECTIVE The purpose of this study was to develop and validate an electrocardiogram (ECG) deep learning (DL) tool for ARVC diagnosis. METHODS ECGs of patients referred for ARVC evaluation were used to develop (n = 551 [80.1%]) and test (n = 137 [19.9%]) an ECG-DL model for prediction of TFC-defined ARVC diagnosis. The ARVC ECG-DL model was externally validated in a cohort of patients with pathogenic or likely pathogenic (P/LP) ARVC gene variants identified through the Geisinger MyCode Community Health Initiative (N = 167). RESULTS Of 688 patients evaluated at Johns Hopkins Hospital (JHH) (57.3% male, mean age 40.2 years), 329 (47.8%) were diagnosed with ARVC. Although ARVC diagnosis based on the referring cardiologist's ECG interpretation was unreliable (c-statistic 0.53; confidence interval [CI] 0.52-0.53), ECG-DL discrimination in the hold-out testing cohort was excellent (0.87; 0.86-0.89) and compared favorably with that of ECG interpretation by an ARVC expert (0.85; 0.84-0.86). In the Geisinger cohort, the prevalence of ARVC was lower (n = 17 [10.2%]), but ECG-DL-based identification of the ARVC phenotype remained reliable (0.80; 0.77-0.83). Discrimination increased further when ECG-DL predictions were combined with non-ECG-derived TFC in the JHH testing (c-statistic 0.940; 95% CI 0.933-0.948) and Geisinger validation (0.897; 95% CI 0.883-0.912) cohorts. CONCLUSION ECG-DL augments diagnosis of ARVC to the level of an ARVC expert and can differentiate true ARVC diagnosis from phenotype mimics and at-risk family members/genotype-positive individuals.
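The abstract does not describe the network itself; as an illustration only, a minimal sketch of this kind of pipeline — a 1D convolutional classifier over 12-lead ECG traces whose output probability is fused with a non-ECG TFC score — might look as follows (all layer choices, sizes, and the fusion layer are assumptions, not the published model):

```python
# Hypothetical sketch of an ECG deep-learning classifier for ARVC
# (architecture, sizes, and the fusion step are assumptions, not the
# authors' published model).
import torch
import torch.nn as nn

class EcgDLModel(nn.Module):
    def __init__(self, n_leads=12, n_filters=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, n_filters, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(n_filters, 2 * n_filters, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(2 * n_filters, 1)

    def forward(self, ecg):            # ecg: (batch, 12, samples)
        z = self.features(ecg).squeeze(-1)
        return self.head(z)            # logit for ARVC diagnosis

# Fusing the ECG-DL probability with non-ECG-derived TFC points,
# since the abstract reports improved discrimination for the combination.
fusion = nn.Linear(2, 1)               # inputs: [ecg_probability, tfc_score]

ecg = torch.randn(8, 12, 5000)         # 10 s at 500 Hz (assumed)
tfc_score = torch.rand(8, 1)           # non-ECG TFC points, rescaled
p_ecg = torch.sigmoid(EcgDLModel()(ecg))
p_combined = torch.sigmoid(fusion(torch.cat([p_ecg, tfc_score], dim=1)))
```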
Map matching has been widely used in various indoor localization technologies. However, conventional map matching techniques based on probabilistic models, such as the particle filter (PF), still have a series of limitations, including underutilization of map information, poor generalization, and relatively low precision. To improve the performance of PF-based map matching, this paper proposes MapDem, a novel map matching model that fuses dynamic word embeddings and a variational autoencoder (VAE). The key to our approach is to extract map information using dynamic word embeddings, representing each reachable point on the map as a word vector carrying allowable oriented-trajectory information. The same point has different representations on different trajectories, so MapDem can adaptively learn the contextual information of the map for position estimation. Unlike a traditional particle filter, MapDem learns the distribution of the particle set with a statistical model, the VAE, and then estimates position by combining current and previous sequence information. Extensive experiments have been conducted with 610 trajectories in three real-world scenarios. Numerical results demonstrate the adaptability of MapDem, which works equally well in all three scenarios, outperforming traditional particle filters by 18% on average.
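As a rough illustration of the VAE component described here (not the actual MapDem code), one could encode a particle set into a latent distribution and decode a position estimate; all dimensions and the flattened particle-set representation below are assumptions:

```python
# Minimal sketch of a VAE over a particle set (here flattened to a
# fixed-size vector) that decodes a position estimate. All dimensions
# and the particle-set representation are assumptions.
import torch
import torch.nn as nn

class ParticleSetVAE(nn.Module):
    def __init__(self, n_particles=128, latent_dim=16):
        super().__init__()
        d_in = n_particles * 2                      # (x, y) per particle
        self.enc = nn.Linear(d_in, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2))  # decoded (x, y) position

    def forward(self, particles):                   # (batch, n_particles, 2)
        h = torch.relu(self.enc(particles.flatten(1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(pos_pred, pos_true, mu, logvar):
    recon = ((pos_pred - pos_true) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    return recon + kl
```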
3D indoor scenes are widely used in computer graphics, with applications ranging from interior design to gaming to virtual and augmented reality. They also contain rich information, including room layout as well as furniture type, geometry, and placement. High-quality 3D indoor scenes are in high demand, yet designing them manually requires expertise and is time-consuming. Existing research only addresses partial problems: some works learn to generate room layouts, while others focus on generating the detailed structure and geometry of individual furniture objects. However, these partial steps are related and should be addressed together for optimal synthesis. We propose SCENEHGN, a hierarchical graph network for 3D indoor scenes that takes into account the full hierarchy from the room level to the object level and finally to the object-part level. Therefore, for the first time, our method is able to directly generate plausible 3D room content, including furniture objects with fine-grained geometry, along with their layout. To address the challenge, we introduce functional regions as intermediate proxies between the room and object levels to make learning more manageable. To ensure plausibility, our graph-based representation incorporates both vertical edges connecting child nodes with parent nodes from different levels, and horizontal edges encoding relationships between nodes at the same level. Our generation network is a conditional recursive neural network (RvNN)-based variational autoencoder (VAE) that learns to generate detailed content with fine-grained geometry for a room, given the room boundary as the condition. Extensive experiments demonstrate that our method produces superior generation results, even when comparing the results of partial steps with those of alternative methods that can only achieve these steps. We also demonstrate that our method is effective for various applications such as part-level room editing, room interpolation, and room generation.
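The recursive, hierarchy-aware encoding can be sketched as follows; this toy example only shows the "vertical" aggregation from parts to objects to regions to the room, with illustrative names and sizes rather than the SCENEHGN architecture:

```python
# Sketch of the recursive (RvNN-style) encoding idea: a parent node's
# code is computed from its children's codes, so encoding proceeds
# part -> object -> functional region -> room. Names and sizes are
# illustrative, not the SCENEHGN implementation.
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """Aggregates child codes into a parent code (a 'vertical' edge)."""
    def __init__(self, dim=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))

    def forward(self, child_codes):            # (n_children, dim)
        return self.f(child_codes.sum(dim=0))  # permutation-invariant sum

enc = NodeEncoder()
part_codes = torch.randn(5, 64)                # leaf part geometries (assumed)
object_code = enc(part_codes)                  # object from its parts
region_code = enc(torch.stack([object_code, torch.randn(64)]))
room_code = enc(region_code.unsqueeze(0))      # root code fed to the VAE
```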
The ability of deep learning (DL) approaches to learn generalised signal and noise models, coupled with their fast inference on GPUs, holds great promise for enhancing gravitational-wave (GW) searches in terms of speed, parameter space coverage, and search sensitivity. However, the opaque nature of DL models severely harms their reliability. In this work, we meticulously develop a DL model stage-wise and work towards improving its robustness and reliability. First, we address the problems in maintaining the purity of training data by deriving a new metric that better reflects the visual strength of the 'chirp' signal features in the data. Using a reduced, smooth representation obtained through a variational autoencoder (VAE), we build a classifier to search for compact binary coalescence (CBC) signals. Our tests on real LIGO data show an impressive performance of the model. However, upon probing the robustness of the model through adversarial attacks, we identified its simple failure modes, underlining how such models can still be highly fragile. As a first step towards robustness, we retrain the model in a novel framework involving a generative adversarial network (GAN). Over the course of training, the model learns to eliminate the primary modes of failure identified by the adversaries. Although absolute robustness is practically impossible to achieve, we demonstrate some fundamental improvements earned through such training, such as sparseness and reduced degeneracy in the features extracted at different layers inside the model. We show that these gains come at practically zero loss in model performance on real LIGO data before and after GAN training. Through a direct search on ~8.8 days of LIGO data, we recover two significant CBC events from GWTC-2.1 (Abbott et al 2022, arXiv:2108.01045 [gr-qc]), GW190519_153544 and GW190521_074359. We also report the search sensitivity obtained from an injection study.
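A schematic of such an adversarial retraining loop, heavily simplified and not the paper's actual setup, could look like this: a generator proposes perturbed noise latents that fool the classifier, and the classifier is then updated to reject them (all networks, shapes, and losses are assumptions):

```python
# Simplified adversarial-retraining loop: the generator perturbs noise
# latents to fool the CBC classifier; the classifier learns to reject
# both plain noise and the adversarial samples.
import torch
import torch.nn as nn

d = 32                                        # VAE latent dim (assumed)
clf = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
gen = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    noise = torch.randn(64, d)               # stand-in for noise latents
    signal = torch.randn(64, d) + 2.0        # stand-in for signal latents
    # Generator step: perturb noise so the classifier calls it signal.
    adv = noise + 0.1 * torch.tanh(gen(noise))
    opt_g.zero_grad()
    bce(clf(adv), torch.ones(64, 1)).backward()
    opt_g.step()
    # Classifier step: real signals -> 1, noise and adversarial -> 0.
    opt_c.zero_grad()
    loss = (bce(clf(signal), torch.ones(64, 1))
            + bce(clf(noise), torch.zeros(64, 1))
            + bce(clf(adv.detach()), torch.zeros(64, 1)))
    loss.backward()
    opt_c.step()
```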
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city. Deep reinforcement learning (RL) has recently been applied to traffic signal control and has demonstrated promising performance, with each traffic signal regarded as an agent. However, several challenges may still limit its large-scale application in the real world. On the one hand, the policy of the current traffic signal is often heavily influenced by its neighbor agents, and the coordination between the agent and its neighbors needs to be considered. Hence, the control of a road network composed of multiple traffic signals is naturally modeled as a multi-agent system, and all agents' policies need to be optimized simultaneously. On the other hand, once the policy function is conditioned not only on the current agent's observation but also on the neighbors', it becomes closely tied to the training scenario and generalizes poorly, because agents in different scenarios often have heterogeneous neighbors. To make the policy learned from a training scenario generalizable to new, unseen scenarios, a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method is proposed to learn a decentralized policy for each intersection that considers neighbor information in a latent way. Specifically, we formulate policy learning as a meta-learning problem over a set of related tasks, where each task corresponds to traffic signal control at an intersection whose neighbors are regarded as the unobserved part of the state. A learned latent variable is then introduced to represent the task's specific information and is incorporated into the policy for learning. In addition, to make policy learning stable, a novel intrinsic reward is designed to encourage each agent's received rewards and observation transitions to be predictable conditioned only on its own history (see the sketch below). Extensive experiments conducted...
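A minimal sketch of such a predictability-based intrinsic reward, with an assumed forward model and illustrative dimensions, might be:

```python
# Sketch of the intrinsic-reward idea: an agent is rewarded when its
# next observation and reward are predictable from its own history,
# which stabilises decentralised training. Model and scales are
# illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim = 16, 4
pred = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                     nn.Linear(64, obs_dim + 1))   # predicts next obs + reward

def intrinsic_reward(obs, act_onehot, next_obs, reward, beta=0.1):
    target = torch.cat([next_obs, reward], dim=1)
    pred_out = pred(torch.cat([obs, act_onehot], dim=1))
    error = ((pred_out - target) ** 2).mean(dim=1)
    return -beta * error          # small error -> higher intrinsic reward

obs = torch.randn(8, obs_dim)
act = torch.eye(act_dim)[torch.randint(0, act_dim, (8,))]
r_int = intrinsic_reward(obs, act, torch.randn(8, obs_dim), torch.randn(8, 1))
```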
Two recent works have shown the benefit of modeling both high-level factors and their related features to learn disentangled representations with variational autoencoders (VAEs). We propose here a novel VAE-based approach that follows this principle. Inspired by the conditional VAE, the features are no longer treated as random variables over which integration must be performed. Instead, they are deterministically computed from the input data using a neural network whose parameters can be estimated jointly with those of the decoder and of the encoder. Moreover, the quality of the generated images is improved by using discrete latent variables and a two-step learning procedure, which makes it possible to increase the size of the latent space without altering the disentanglement properties of the model. Results obtained on two different datasets validate the proposed approach, which achieves better disentanglement than the two aforementioned works while providing higher-quality images.
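As an illustration of this design (not the authors' code), the feature network below computes features deterministically and feeds them to the decoder alongside a sampled latent; the discrete latents and two-step training of the paper are omitted, and all dimensions are assumptions:

```python
# Sketch: feature variables are computed deterministically from the
# input by a network trained jointly with the encoder/decoder, rather
# than being integrated over as random latents.
import torch
import torch.nn as nn

class FactorFeatureVAE(nn.Module):
    def __init__(self, d_in=784, d_feat=8, d_latent=16):
        super().__init__()
        self.feat_net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                      nn.Linear(64, d_feat))  # deterministic
        self.enc = nn.Linear(d_in, 2 * d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent + d_feat, 64), nn.ReLU(),
                                 nn.Linear(64, d_in))

    def forward(self, x):
        f = self.feat_net(x)                      # no sampling for features
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(torch.cat([z, f], dim=1))
        return x_hat, mu, logvar
```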
Over the last decades, computer modeling has evolved from a supporting tool for engineering prototype design to a ubiquitous instrument in non-traditional fields such as medical rehabilitation. This area comes with unique challenges, e.g. the complex modeling of soft tissue or the analysis of musculoskeletal systems. Conventional modeling approaches like the finite element (FE) method are computationally costly when dealing with such models, limiting their usability for real-time simulation or deployment on low-end hardware if the model at hand cannot be simplified without losing its expressiveness. Non-traditional approaches such as surrogate modeling using data-driven model order reduction are therefore used to make complex high-fidelity models more widely available. They often involve a dimensionality reduction step, in which the high-dimensional system state is transformed onto a low-dimensional subspace or manifold, and a regression approach to capture the reduced system behavior. While most publications focus on a single dimensionality reduction technique, such as principal component analysis (PCA) (linear) or autoencoders (nonlinear), we consider and compare PCA, kernel PCA, autoencoders, and variational autoencoders for the approximation of a continuum-mechanical system. In detail, we demonstrate the benefits of the surrogate modeling approach on a complex musculoskeletal system of a human upper arm with severe nonlinearities and physiological geometry. We consider both the model's deformation and the internal stress, the two main quantities of interest in an FE context. By doing so, we are able to create computationally low-cost surrogate models which capture the system behavior with high approximation quality and fast evaluation.
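The reduction step being compared can be illustrated with off-the-shelf tools; the sketch below contrasts PCA and kernel PCA reconstruction error on synthetic placeholder snapshots (real FE data, the autoencoder variants, and the subsequent regression step are omitted):

```python
# Sketch of the dimensionality-reduction step: project high-dimensional
# FE states (e.g. nodal displacements) onto a low-dimensional space and
# measure reconstruction error. Data and sizes are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 3000))      # 200 FE states, 3000 DOFs

pca = PCA(n_components=10).fit(snapshots)
recon_pca = pca.inverse_transform(pca.transform(snapshots))

kpca = KernelPCA(n_components=10, kernel="rbf",
                 fit_inverse_transform=True).fit(snapshots)
recon_kpca = kpca.inverse_transform(kpca.transform(snapshots))

for name, recon in [("PCA", recon_pca), ("kernel PCA", recon_kpca)]:
    err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
    print(f"{name}: relative reconstruction error {err:.3e}")
```

In a full surrogate, a regression model would then map simulation parameters to the reduced coordinates, replacing the costly FE solve at evaluation time.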
Background and hypothesis: Schizophrenic symptoms are currently classified into several dimensions: positive symptoms, negative symptoms, disorganized thought, cognitive deficits, and general symptoms. This multidimensional categorization of symptoms has sparked an ongoing debate in the field: what is the core symptom of schizophrenia? This question remains central to our understanding of the disorder. In our previous research, we demonstrated the centrality of disorganized thought in schizophrenia. Building on this finding, we hypothesized that this centrality is not a feature of all psychotic disorders. Study design: We analyzed Brief Psychiatric Rating Scale (BPRS) data from 17,414 outpatients using a variational autoencoder (VAE) model. Clustering analysis was performed on the latent representations, followed by network analysis to examine symptom interactions within the identified clusters. Analyses were further stratified by age and BPRS item type. Study results: Clustering analysis revealed four distinct groups; one cluster, characterized by more severe symptoms (mean BPRS score 99.73 [SD 9.46]), younger age (mean 19.48 [SD 9.31] years), and a smaller sample size, showed a unique centrality of thought disturbance among the five subscale symptoms. This pattern persisted in both adult and adolescent subgroups and when analyzing only verbally reported items. Conclusions: Our findings support the specificity of disorganized-thought centrality to one particular cluster, distinguishing it from other psychiatric presentations. This study demonstrates the value of analyzing large-scale clinical data to refine our understanding of complex psychiatric disorders, potentially informing future targeted interventions.
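A compact sketch of the encode-then-cluster step, with an untrained stand-in encoder, assumed sizes (the common 18-item BPRS, 4 clusters as reported), and random placeholder scores:

```python
# Sketch of the analysis pipeline: encode BPRS item scores with a VAE
# encoder and cluster patients in the latent space. The encoder is
# untrained here; the real study also adds network analysis per cluster.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

n_items, d_latent = 18, 8                    # 18-item BPRS (assumed version)
enc = nn.Sequential(nn.Linear(n_items, 32), nn.ReLU(),
                    nn.Linear(32, 2 * d_latent))

scores = torch.randint(1, 8, (500, n_items)).float()  # 1-7 item ratings
with torch.no_grad():
    mu, _ = enc(scores).chunk(2, dim=1)      # use posterior means
labels = KMeans(n_clusters=4, n_init=10).fit_predict(mu.numpy())
```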
The Global Navigation Satellite System (GNSS) plays a critical role in providing precise timing and positioning information for various applications. However, its civilian signals are vulnerable to spoofing attacks, necessitating robust detection methods. While supervised learning has shown promise in spoof detection, it requires distinct features in the training data for effective learning. In this study, we propose a novel training feature set that combines power and Signal Quality Monitoring (SQM) metrics from a single-antenna GNSS receiver. We introduce a Two-Stage Artificial Neural Network (TS-ANN) that leverages this feature set alongside multi-correlator finger values for effective spoof detection. Furthermore, supervised learning models often struggle with previously unseen attacks due to limited overlap with the training data. To address this, we present a zero-day attack detector based on unsupervised representation learning using a variational autoencoder (VAE), which is trained exclusively on authentic datasets, enhancing its ability to detect novel attack patterns. Experimental results on the publicly available TEXBAT datasets demonstrate that our TS-ANN achieves a detection probability (PD) exceeding 99% for similar test datasets. In more sophisticated attack scenarios, such as DS-7, performance may decrease to 50.68%. Nevertheless, our zero-day detector maintains a PD exceeding 92.5%, highlighting its resilience to previously unseen attacks.
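The zero-day detector idea — train a VAE on authentic signals only and flag large reconstruction errors — can be sketched as follows, with an assumed feature dimension and threshold rule (the VAE training loop is omitted):

```python
# Sketch of the zero-day detector: a VAE fitted only on authentic GNSS
# feature vectors flags inputs whose reconstruction error exceeds a
# threshold calibrated on clean data. Dimensions and threshold rule
# are assumptions.
import torch
import torch.nn as nn

d = 12                                        # power + SQM features (assumed)
enc = nn.Linear(d, 8)
mu, logvar = nn.Linear(8, 4), nn.Linear(8, 4)
dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, d))

def recon_error(x):
    h = torch.relu(enc(x))
    z = mu(h) + torch.randn_like(mu(h)) * torch.exp(0.5 * logvar(h))
    return ((dec(z) - x) ** 2).mean(dim=1)

authentic = torch.randn(1000, d)              # placeholder clean features
threshold = recon_error(authentic).quantile(0.99)   # false-alarm budget
is_spoofed = recon_error(torch.randn(5, d) * 3) > threshold
```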
Authors: Ariane Mora, Christina Schmidt, Brad Balderson, Christian Frezza, Mikael Boden
Affiliations:
Univ Queensland, Sch Chem & Mol Biosci, Mol Biosci Bldg 76, St Lucia, Qld 4072, Australia
Univ Cambridge, Hutchison MRC Res Ctr, Med Res Council Canc Unit, Box 197, Cambridge Biomed Campus, Cambridge CB2 0XZ, England
Univ Cologne, Fac Med & Univ Hosp Cologne, Inst Metabol Ageing, Cluster Excellence Cellular Stress Responses Aging, Joseph-Stelzmann-Str 26, D-50931 Cologne, Germany
Univ Cologne, Inst Genet, Fac Math & Nat Sci, Cluster Excellence Cellular Stress Responses Aging, Cologne, Germany
Background: Clear cell renal cell carcinoma (ccRCC) tumours develop and progress via complex remodelling of the kidney epigenome, transcriptome, proteome and metabolome. Given the subsequent tumour and inter-patient heterogeneity, drug-based treatments report limited success, calling for multi-omics studies to extract regulatory relationships and, ultimately, to develop targeted therapies. Yet, methods for multi-omics integration to reveal mechanisms of phenotype regulation are lacking. Methods: Here, we present SiRCle (Signature Regulatory Clustering), a method to integrate DNA methylation, RNA-seq and proteomics data at the gene level by following the central dogma of biology, i.e. genetic information proceeds from DNA, to RNA, to protein. To identify regulatory clusters across the different omics layers, we group genes based on the layer where the gene's dysregulation first occurred. We combine the SiRCle clusters with a variational autoencoder (VAE) to reveal key features from omics data for each SiRCle cluster and compare patient subpopulations in a ccRCC and a PanCan cohort. Results: Applying SiRCle to a ccRCC cohort, we showed that glycolysis is upregulated by DNA hypomethylation, whilst mitochondrial enzymes and respiratory chain complexes are translationally suppressed. Additionally, we identify metabolic enzymes associated with survival, along with the possible molecular drivers behind the genes' perturbations. By using the VAE to integrate omics data, followed by statistical comparisons between tumour stages in the integrated space, we found a stage-dependent downregulation of proximal renal tubule genes, hinting at a loss of cellular identity in cancer cells. We also identified the regulatory layers responsible for their suppression. Lastly, we applied SiRCle to a PanCan cohort and found common signatures across ccRCC and PanCan, in addition to the regulatory layer that defines tissue specificity. Conclusions: Our results highlight SiRCle's ability to reveal mechanisms of phenotype regulation.
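The layer-ordered clustering rule can be paraphrased in a few lines of code; thresholds, labels, and inputs below are simplified assumptions rather than the published SiRCle logic:

```python
# Sketch of the SiRCle clustering rule as described: assign each gene
# to the first layer (DNA methylation -> RNA -> protein) at which it
# is dysregulated. Thresholds and labels are simplified assumptions.
def sircle_cluster(methyl_diff, rna_logfc, prot_logfc,
                   m_thresh=0.1, r_thresh=1.0, p_thresh=1.0):
    if abs(methyl_diff) > m_thresh:
        return "driven by DNA methylation"
    if abs(rna_logfc) > r_thresh:
        return "driven at the RNA level"
    if abs(prot_logfc) > p_thresh:
        return "driven at the protein level"
    return "not dysregulated"

# Example: hypomethylation-driven upregulation (cf. glycolysis genes)
print(sircle_cluster(methyl_diff=-0.3, rna_logfc=2.1, prot_logfc=1.8))
```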