Author affiliations: Georgia Institute of Technology, Georgia; Georgia State University, Georgia; Emory University, Georgia; Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, United States; Mila, Université de Montréal, Montreal, QC, Canada
Publication: arXiv
Year: 2020
Subject: Neurodegenerative diseases
Abstract: Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). However, supervised training is prone to learning from spurious features (shortcut learning), impairing its value in the discovery process. Deep unsupervised and, recently, contrastive self-supervised approaches, not biased toward classification, are better candidates for the task. Their multimodal variants specifically offer additional regularization via modality interactions. This paper introduces a way to exhaustively consider multimodal architectures for a contrastive self-supervised fusion of fMRI and MRI of AD patients and controls. We show that this multimodal fusion yields representations that improve downstream classification results for both modalities. We investigate the fused self-supervised features projected into the brain space and introduce a numerically stable way to do so. Copyright © 2020, The Authors. All rights reserved.
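The contrastive multimodal fusion described in the abstract can be sketched with a CLIP-style symmetric InfoNCE objective over paired fMRI/sMRI embeddings. This is a minimal illustrative sketch, not the paper's actual code: the function names, batch layout, and temperature value are assumptions, and the log-softmax is written in the numerically stable max-subtraction form echoed by the abstract's stability remark.

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax: subtract the max before exponentiating
    # so np.exp never overflows for large similarity logits.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def multimodal_infonce(z_fmri, z_smri, temperature=0.1):
    """Symmetric InfoNCE between L2-normalized fMRI and sMRI embeddings.

    Matched subjects (same row index) are positives; every other pair
    in the batch serves as a negative.
    """
    za = z_fmri / np.linalg.norm(z_fmri, axis=1, keepdims=True)
    zb = z_smri / np.linalg.norm(z_smri, axis=1, keepdims=True)
    logits = za @ zb.T / temperature          # (N, N) cross-modal similarities
    diag = np.arange(len(logits))
    # Cross-entropy in both directions: fMRI -> sMRI and sMRI -> fMRI.
    loss_a = -log_softmax(logits, axis=1)[diag, diag].mean()
    loss_b = -log_softmax(logits, axis=0)[diag, diag].mean()
    return 0.5 * (loss_a + loss_b)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
aligned = multimodal_infonce(z, z)                # matched pairs: low loss
shuffled = multimodal_infonce(z, z[::-1].copy())  # mismatched pairs: higher loss
print(aligned < shuffled)  # True
```

Minimizing this loss pulls each subject's two modality embeddings together while pushing apart embeddings of different subjects, which is the regularizing "modality interaction" the abstract refers to.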