Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, stereotyped stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we utilize the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas other DATs that severely impact human gaze degrade performance. These label-preserving, valid augmentation transformations provide a solution for enlarging existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified U-Net serves as the generator of GazeGAN, combining classic "skip connections" with a novel "center-surround connection (CSC)" in order to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons over 3 datasets indicate that GazeGAN achieves the best performance on popular saliency evaluation metrics and is more robust to various perturbations. We also provide a comprehensive quantitative comparison of 22 state-of-the-art saliency models on distorted images, which contributes a robustness benchmark to the saliency community. Our code and data are available at: https://***/CZHQuality/
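As a rough illustration of the ACS HistLoss idea, a minimal NumPy sketch might compare the luminance histograms of a predicted and a ground-truth saliency map with the alternative chi-square distance; the binning, normalization, and any soft-histogram relaxation used in the actual GazeGAN training are assumptions here, not the paper's implementation:

```python
import numpy as np

def alt_chi_square(h_pred, h_gt, eps=1e-8):
    """Alternative chi-square distance between two histograms
    (each is normalized to a probability distribution first)."""
    h_pred = h_pred / (h_pred.sum() + eps)
    h_gt = h_gt / (h_gt.sum() + eps)
    return np.sum(2.0 * (h_pred - h_gt) ** 2 / (h_pred + h_gt + eps))

def acs_hist_loss(pred_map, gt_map, bins=256):
    """Histogram loss on the luminance distributions of two
    saliency maps with values assumed to lie in [0, 1]."""
    h_pred, _ = np.histogram(pred_map, bins=bins, range=(0.0, 1.0))
    h_gt, _ = np.histogram(gt_map, bins=bins, range=(0.0, 1.0))
    return alt_chi_square(h_pred.astype(float), h_gt.astype(float))
```

The distance is zero for identical luminance distributions and grows as the histograms diverge, which is what lets it act as a refinement term alongside the adversarial loss.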
Background Early amyloid deposition results in functional and structural brain alterations in predementia stages of Alzheimer's disease (AD). However, how early functional and structural brain changes are related to each other remains unclear. Investigating the simultaneous disruption of functional and structural brain features within individuals in relation to amyloid deposition may increase our understanding of the neuropathological processes underlying AD. Method We included 648 non-demented participants from the European Prevention of Alzheimer's Dementia (EPAD) cohort vIMI baseline data release. The cut-off for cerebrospinal fluid (CSF) amyloid-β positivity (A+) was defined at <1000 pg/mL (Elecsys assay). Functional connectivity was estimated with functional eigenvector centrality (EC) from resting-state functional MRI. White matter (WM) integrity was estimated with fractional anisotropy (FA), computed on diffusion MRI. Both measures were computed at the voxel level within the gray and white matter, respectively. First, we used linear regression models to investigate differences between A+ and A- participants within each modality separately, adjusting for age, sex, and site. In the whole group, we performed sparse canonical correlation analysis (sCCA) on the functional and structural measures, identifying shared components between the two modalities. Individual sCCA scores were compared between amyloid groups using a generalized linear model (GLM). To evaluate this association in A+ individuals, the same sCCA analysis was performed after normalizing the structural and functional matrices using the mean and SD from A- participants. Results Sample characteristics are provided in Table 1. Differences in voxelwise EC and FA between amyloid groups are shown in Figure 1. In the whole group, sCCA showed that EC in the default mode, executive, and cerebellar networks covaries with FA in parietal, temporal, and cerebellar areas (Figure 2). sCCA projections were differen...
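For readers unfamiliar with eigenvector centrality, a toy sketch of the idea on a small region-level connectivity matrix follows; the study computes EC voxelwise from fMRI, so the power-iteration code and simulated time series below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def eigenvector_centrality(conn, n_iter=200, tol=1e-10):
    """Power iteration for the leading eigenvector of a
    non-negative connectivity matrix; each entry scores how
    strongly a node connects to other well-connected nodes."""
    n = conn.shape[0]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        v_new = conn @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v

# Toy example: absolute correlations of simulated regional time series
rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 5))             # 120 time points, 5 regions
conn = np.abs(np.corrcoef(ts, rowvar=False))   # non-negative 5x5 FC matrix
ec = eigenvector_centrality(conn)
```

Because the matrix is non-negative with a positive diagonal, the resulting centrality vector has strictly positive entries and unit norm.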
Registration of longitudinal brain Magnetic Resonance Imaging (MRI) scans containing pathologies is challenging due to dramatic changes in tissue appearance. Although there has been considerable progress in developing general-purpose medical image registration techniques, they have not yet attained the requisite precision and reliability for this task, highlighting its inherent complexity. Here we describe the Brain Tumor Sequence Registration (BraTS-Reg) challenge, the first public benchmark environment for deformable registration algorithms focused on estimating correspondences between pre-operative and follow-up scans of the same patient diagnosed with a diffuse brain glioma. The challenge was conducted in conjunction with both the IEEE International Symposium on Biomedical Imaging (ISBI) 2022 and the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2022. The BraTS-Reg data comprise de-identified multi-institutional multi-parametric MRI (mpMRI) scans, curated for size and resolution according to a canonical anatomical template, and divided into training, validation, and testing sets. Clinical experts annotated ground truth (GT) landmark points at anatomical locations distinct across the temporal domain. The training data, with their GT annotations, were publicly released to enable the development of registration algorithms. The validation data, without their GT annotations, were also released to allow for algorithmic evaluation prior to the testing phase, which only allowed submission of containerized algorithms for evaluation on hidden hold-out testing data. Quantitative evaluation and ranking were based on the Median Euclidean Error (MEE), Robustness, and the determinant of the Jacobian of the displacement field. The top-ranked methodologies yielded similar performance across all evaluation metrics and shared several methodological commonalities, including pre-alignment, deep neural networks, inverse consistency...
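For concreteness, the landmark-based part of such an evaluation can be computed directly from corresponding point coordinates. A minimal sketch follows; the (N, 3) array shapes and the particular robustness definition (fraction of landmarks whose error the registration reduced) are assumptions, and the challenge's exact formulas may differ:

```python
import numpy as np

def median_euclidean_error(gt, mapped):
    """Median of per-landmark Euclidean distances between
    ground-truth points and their mapped counterparts,
    given as (N, 3) arrays in physical coordinates (mm)."""
    return float(np.median(np.linalg.norm(gt - mapped, axis=1)))

def robustness(initial_err, registered_err):
    """Fraction of landmarks whose error the registration
    reduced relative to the initial (unregistered) error."""
    initial_err = np.asarray(initial_err, dtype=float)
    registered_err = np.asarray(registered_err, dtype=float)
    return float(np.mean(registered_err < initial_err))

# Toy example: two landmarks, one mapped perfectly, one off by 5 mm
gt = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
mapped = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
mee = median_euclidean_error(gt, mapped)   # median of [0, 5] = 2.5
```

Using the median rather than the mean keeps the score from being dominated by a few badly registered landmarks, which is why it pairs naturally with a separate robustness measure.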
Early detection of microaneurysms (MAs), the first sign of Diabetic Retinopathy (DR), is an essential first step in automated detection of DR to prevent vision loss and blindness. This study presents a novel and diffe...
Hyperspectral imagery has emerged as a popular sensing modality for a variety of applications, and sparsity-based methods have been shown to be very effective in dealing with challenges arising from high dimensionality in most...
In most computer vision applications, convolutional neural networks (CNNs) operate on dense image data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open proble...
One of the fundamental challenges in video object segmentation is to find an effective representation of the target and background appearance. The best performing approaches resort to extensive fine-tuning of a convol...
This paper reviews the NTIRE 2020 challenge on real world super-resolution. It focuses on the participating methods and final results. The challenge addresses the real world setting, where paired true high and low-res...
Computational competitions are the standard for benchmarking medical image analysis algorithms, but they typically use small curated test datasets acquired at a few centers, leaving a gap to the reality of diverse multicentric patient data. To this end, the Federated Tumor Segmentation (FeTS) Challenge represents a paradigm for real-world algorithmic performance evaluation. The FeTS challenge is a competition to benchmark (i) federated learning aggregation algorithms and (ii) state-of-the-art segmentation algorithms, across multiple international sites. Weight aggregation and client selection techniques were compared using a multicentric brain tumor dataset in realistic federated learning simulations, yielding benefits for adaptive weight aggregation and efficiency gains through client sampling. Quantitative performance evaluation of state-of-the-art segmentation algorithms on data distributed internationally across 32 institutions yielded good generalization on average, although the worst-case performance revealed data-specific modes of failure. Similar multi-site setups can help validate the real-world utility of healthcare AI algorithms in the future.
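As background, the baseline weight-aggregation scheme that adaptive methods are typically compared against is FedAvg-style weighted averaging. A minimal sketch follows, with model parameters represented as lists of NumPy arrays; this is an illustrative assumption, not the FeTS challenge's implementation:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg-style aggregation: per-parameter weighted average
    across clients, with weights proportional to each client's
    local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    n_params = len(client_params[0])
    return [
        sum(c * p[i] for c, p in zip(coeffs, client_params))
        for i in range(n_params)
    ]

# Toy example: two clients, one parameter tensor each;
# the larger client (3 samples vs. 1) pulls the average toward it
agg = federated_average(
    [[np.array([0.0])], [np.array([2.0])]],
    client_sizes=[1, 3],
)
```

Adaptive aggregation schemes replace the fixed size-proportional coefficients with weights that respond to, e.g., per-client validation performance, which is the axis the challenge benchmarks.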
Automatic aorta segmentation and quantification in thoracic computed tomography (CT) images is important for detection and prevention of aortic diseases. This paper proposes an automatic aorta segmentation algorithm i...