
ON SELF-SUPERVISED MULTIMODAL REPRESENTATION LEARNING: AN APPLICATION TO ALZHEIMER'S DISEASE

Authors: Fedorov, Alex; Wu, Lei; Sylvain, Tristan; Luck, Margaux; DeRamus, Thomas P.; Bleklov, Dmitry; Plis, Sergey M.; Calhoun, Vince D.

Affiliations: Georgia Institute of Technology, Atlanta, GA, United States; Georgia State University, Atlanta, GA, United States; Emory University, Atlanta, GA, United States; Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, United States; Mila, Université de Montréal, Montreal, QC, Canada

Published in: arXiv

Year: 2020


Keywords: Neurodegenerative diseases

Abstract: Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). However, supervised training is prone to learning from spurious features (shortcut learning), impairing its value in the discovery process. Deep unsupervised and, recently, contrastive self-supervised approaches, not biased to classification, are better candidates for the task. Their multimodal options specifically offer additional regularization via modality interactions. This paper introduces a way to exhaustively consider multimodal architectures for a contrastive self-supervised fusion of fMRI and MRI of AD patients and controls. We show that this multimodal fusion results in representations that improve the downstream classification results for both modalities. We investigate the fused self-supervised features projected into the brain space and introduce a numerically stable way to do so. Copyright © 2020, The Authors. All rights reserved.
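To make the abstract's notion of "contrastive self-supervised fusion" concrete, the sketch below shows a standard symmetric cross-modal InfoNCE objective between paired fMRI and structural MRI embeddings. This is an illustrative example of the general technique, not the authors' code; the function names, encoder outputs, and temperature value are assumptions for demonstration only.

# Illustrative sketch (not from the paper): symmetric cross-modal InfoNCE,
# a common building block of contrastive multimodal self-supervised fusion.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_fmri: torch.Tensor,
                         z_smri: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss between paired modality embeddings.

    z_fmri, z_smri: (batch, dim) embeddings from two hypothetical modality
    encoders; row i of each tensor comes from the same subject, so the
    diagonal of the similarity matrix holds the positive pairs.
    """
    z_fmri = F.normalize(z_fmri, dim=1)
    z_smri = F.normalize(z_smri, dim=1)
    logits = z_fmri @ z_smri.t() / temperature            # (batch, batch) cosine similarities
    targets = torch.arange(z_fmri.size(0), device=logits.device)
    loss_f2s = F.cross_entropy(logits, targets)           # fMRI -> sMRI direction
    loss_s2f = F.cross_entropy(logits.t(), targets)       # sMRI -> fMRI direction
    return 0.5 * (loss_f2s + loss_s2f)

# Usage with random stand-in embeddings for a batch of 8 subjects:
loss = cross_modal_info_nce(torch.randn(8, 128), torch.randn(8, 128))

Minimizing this loss pulls each subject's fMRI and MRI embeddings together while pushing apart embeddings of different subjects, which is the regularizing "modality interaction" the abstract refers to.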
