Accurate prediction of progression in subjects at risk of Alzheimer's disease is crucial for enrolling the right subjects in clinical trials. However, a prospective comparison of state-of-the-art algorithms for predicting disease onset and progression is currently lacking. We present the findings of The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge, which compared the performance of 92 algorithms from 33 international teams at predicting the future trajectory of 219 individuals at risk of Alzheimer's disease. Challenge participants were required to make a prediction, for each month of a 5-year future time period, of three key outcomes: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog13), and total volume of the ventricles. The methods used by challenge participants included multivariate linear regression, machine learning methods such as support vector machines and deep neural networks, as well as disease progression models. No single submission was best at predicting all three outcomes. For clinical diagnosis and ventricle volume prediction, the best algorithms strongly outperformed simple baselines in predictive ability. However, for ADAS-Cog13 no single submitted prediction method was significantly better than random guesswork. Two ensemble methods, based on taking the mean and median over all predictions, obtained top scores on almost all tasks. Better than average performance at diagnosis prediction was generally associated with the additional inclusion of features from cerebrospinal fluid (CSF) samples and diffusion tensor imaging (DTI). On the other hand, better performance at ventricle volume prediction was associated with inclusion of summary statistics, such as the slope or maxima/minima of patient-specific biomarkers. On a limited, cross-sectional subset of the data emulating clinical trials, performance of the best algorithms at predicting clinical diagnosis decreased only slightly (2 perce
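The mean and median ensembles mentioned above can be sketched as follows. This is a minimal illustration, not the challenge's actual scoring or submission code; the array shapes and the function name `ensemble_forecasts` are assumptions made for the example.

```python
import numpy as np

def ensemble_forecasts(forecasts):
    """Element-wise mean/median ensemble over team submissions.

    forecasts: array-like of shape (n_teams, n_subjects, n_months),
    holding each team's forecast of a continuous outcome
    (e.g. ventricle volume) per subject and future month.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    return {
        "mean": forecasts.mean(axis=0),        # average across teams
        "median": np.median(forecasts, axis=0) # robust to outlier teams
    }

# Toy example: 3 teams, 2 subjects, 2 future months.
preds = [
    [[10.0, 11.0], [20.0, 21.0]],
    [[12.0, 13.0], [22.0, 23.0]],
    [[14.0, 30.0], [24.0, 25.0]],
]
out = ensemble_forecasts(preds)
print(out["mean"][0, 0])    # 12.0
print(out["median"][0, 1])  # 13.0 — the outlier 30.0 does not drag it up
```

The median's robustness to a single badly miscalibrated submission is one plausible reason such simple ensembles score well across tasks.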
The rapidly growing quantum information science and engineering (QISE) industry will require both quantum-aware and quantum-proficient engineers at the bachelor's level. We provide a roadmap for building a quantum...
BACKGROUND:Event-based modeling (EBM) traces sequential progression of events in complex processes like neurodegenerative diseases, adept at handling uncertainties. This study validated an EBM for Alzheimer's disease (AD) staging designed by EuroPOND, an EU-funded Horizon 2020 project, using research and real-world datasets, a crucial step towards application in multi-center trials.
METHODS:The training dataset comprised 1737 subjects from ADNI-1/GO/2, using the EuroPOND EBM toolbox. Testing datasets included a research cohort from the University of Antwerp (controls (CN, n = 46), subjective cognitive decline (SCD, n = 10), mild cognitive impairment (MCI, n = 47), AD dementia (ADD, n = 16)) and a real-world cohort from 9 Belgian Dementia Council memory clinics (CN (n = 91), SCD (n = 66), non-amnestic MCI (naMCI, n = 54), amnestic MCI (aMCI, n = 255), and ADD (n = 220)). Biomarkers included: 2 clinical scores (Mini Mental State Examination (MMSE), Rey Auditory Verbal Learning Test (RAVLT)); 3 CSF biomarkers (Aβ, P-tau, total-Tau); and 4 magnetic resonance imaging (MRI) biomarkers (volumes of the hippocampi, temporal, parietal, and frontal cortices) computed with icobrain dm. The naMCI and aMCI groups were compared by EBM stage proportions, and the model's effectiveness at patient level was evaluated.
RESULTS:The research cohort's maximum likelihood event sequence comprised CSF Aβ, P-tau, T-tau, RAVLT, MMSE, and cortical volumes. The clinical cohort's order was frontal cortex volume, MMSE, and remaining cortical regions. aMCI subjects showed higher staging than naMCI, with 54% in the two most advanced stages compared to 38% in naMCI. In the research cohort, 10 outliers were identified with potential mismatches between assigned stages and clinical or biomarker profiles, with CN (n = 4) and SCD (n = 2) subjects assigned in stage 4, one control in stage 9 with abnormal imaging, and three aMCI cases in stage 0 despite clinical or volumetric signs of impairment.
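The group comparison reported above (54% of aMCI vs. 38% of naMCI subjects in the two most advanced stages) can be sketched as a simple proportion over per-subject stage assignments. The stage values and group arrays below are toy data, not the study's cohorts, and the function name is an assumption.

```python
def advanced_stage_fraction(stages, n_stages):
    """Fraction of subjects assigned to the two highest EBM stages."""
    advanced = {n_stages - 1, n_stages - 2}
    return sum(s in advanced for s in stages) / len(stages)

# Toy stage assignments under a hypothetical 10-stage model.
ami = [9, 8, 3, 9, 0, 8]   # illustrative aMCI subjects
nami = [2, 8, 1, 0, 9]     # illustrative naMCI subjects

print(round(advanced_stage_fraction(ami, 10), 2))   # 0.67
print(round(advanced_stage_fraction(nami, 10), 2))  # 0.4
```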
CONCLUSIONS:This study highlights
Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they actually study not only bugs, but also other concerns irrelev...
KAGRA, the underground and cryogenic gravitational-wave detector, was operated for its solo observation from February 25 to March 10, 2020, and its first joint observation with the GEO 600 detector from April 7 to April 21, 2020 (O3GK). This study presents an overview of the input optics systems of the KAGRA detector, which consist of various optical systems, such as a laser source, its intensity and frequency stabilization systems, modulators, a Faraday isolator, mode-matching telescopes, and a high-power beam dump. These optics were successfully delivered to the KAGRA interferometer and operated stably during the observations. The laser frequency noise was observed to limit the detector sensitivity above a few kilohertz, whereas the laser intensity did not significantly limit the detector sensitivity.
Registration of longitudinal brain Magnetic Resonance Imaging (MRI) scans containing pathologies is challenging due to dramatic changes in tissue appearance. Although there has been considerable progress in developing general-purpose medical image registration techniques, they have not yet attained the requisite precision and reliability for this task, highlighting its inherent complexity. Here we describe the Brain Tumor Sequence Registration (BraTS-Reg) challenge, as the first public benchmark environment for deformable registration algorithms focusing on estimating correspondences between pre-operative and follow-up scans of the same patient diagnosed with a diffuse brain glioma. The challenge was conducted in conjunction with both the IEEE International Symposium on Biomedical Imaging (ISBI) 2022 and the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022. The BraTS-Reg data comprise de-identified multi-institutional multi-parametric MRI (mpMRI) scans, curated for size and resolution according to a canonical anatomical template, and divided into training, validation, and testing sets. Clinical experts annotated ground truth (GT) landmark points of anatomical locations distinct across the temporal domain. The training data, with their GT annotations, were publicly released to enable the development of registration algorithms. The validation data, without their GT annotations, were also released to allow for algorithmic evaluation prior to the testing phase, which only allowed submission of containerized algorithms for evaluation on hidden hold-out testing data. Quantitative evaluation and ranking were based on the Median Euclidean Error (MEE), Robustness, and the determinant of the Jacobian of the displacement field. The top-ranked methodologies yielded similar performance across all evaluation metrics and shared several methodological commonalities, including pre-alignment, deep neural networks, inverse consistency
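The Median Euclidean Error used for ranking can be sketched as follows: for each annotated landmark, compute the Euclidean distance between the registration-mapped point and the expert ground-truth point, then take the median over landmarks. This is a hedged illustration of the metric as described, not the challenge's evaluation code; the function and variable names are assumptions.

```python
import numpy as np

def median_euclidean_error(mapped_pts, gt_pts):
    """MEE over landmarks.

    mapped_pts, gt_pts: (n_landmarks, 3) arrays of physical coordinates —
    the landmark positions after applying the estimated deformation, and
    the expert-annotated ground-truth positions, respectively.
    """
    diffs = np.asarray(mapped_pts, dtype=float) - np.asarray(gt_pts, dtype=float)
    errors = np.linalg.norm(diffs, axis=1)  # per-landmark Euclidean distance
    return float(np.median(errors))

# Toy landmarks with per-point errors of 1, 5, and 2 mm.
mapped = [[1.0, 0.0, 0.0], [0.0, 3.0, 4.0], [0.0, 0.0, 0.0]]
gt     = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 2.0]]
print(median_euclidean_error(mapped, gt))  # 2.0
```

As with the TADPOLE ensembles, the median makes the ranking robust to a few landmarks in severely distorted regions.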
We present a Bayesian population modeling method to analyze the abundance of galaxy clusters identified by the South Pole Telescope (SPT) with a simultaneous mass calibration using weak gravitational lensing data from the Dark Energy Survey (DES) and the Hubble Space Telescope (HST). We discuss and validate the modeling choices with a particular focus on a robust, weak-lensing-based mass calibration using DES data. For the DES Year 3 data, we report a systematic uncertainty in weak-lensing mass calibration that increases from 1% at z=0.25 to 10% at z=0.95, to which we add 2% in quadrature to account for uncertainties in the impact of baryonic effects. We implement an analysis pipeline that joins the cluster abundance likelihood with a multiobservable likelihood for the Sunyaev-Zel’dovich effect, optical richness, and weak-lensing measurements for each individual cluster. We validate that our analysis pipeline can recover unbiased cosmological constraints by analyzing mocks that closely resemble the cluster sample extracted from the SPT-SZ, SPTpol ECS, and SPTpol 500d surveys and the DES Year 3 and HST-39 weak-lensing datasets. This work represents a crucial prerequisite for the subsequent cosmological analysis of the real dataset.
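"Adding 2% in quadrature" means combining independent fractional uncertainties as the square root of the sum of squares. A minimal sketch using the figures quoted above (a 10% weak-lensing calibration uncertainty at z = 0.95 plus a 2% baryonic-effects uncertainty); the function name is an assumption:

```python
import math

def add_in_quadrature(*fractions):
    """Combine independent fractional uncertainties in quadrature."""
    return math.sqrt(sum(f * f for f in fractions))

# 10% calibration uncertainty at z = 0.95, plus 2% for baryonic effects:
total = add_in_quadrature(0.10, 0.02)
print(round(total, 4))  # 0.102 — the small term barely moves the total
```

This is why a 2% addition is a minor correction on top of a 10% systematic: quadrature addition is dominated by the largest term.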