arXiv

Deep symmetric adaptation network for cross-modality medical image segmentation

Authors: Han, Xiaoting; Qi, Lei; Yu, Qian; Zhou, Ziqi; Zheng, Yefeng; Shi, Yinghuan; Gao, Yang

Affiliations: The State Key Laboratory for Novel Software Technology, The National Institute for Healthcare Data Science, The Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, China; Southeast University, China; The School of Data and Computer Science, Shandong Women's University, China; The Tencent Jarvis Lab, China

Publication: arXiv

Year: 2021


Subject: Semantics

Abstract: Unsupervised domain adaptation (UDA) methods have shown promising performance on cross-modality medical image segmentation tasks. These methods typically use a translation network to transform images from the source domain to the target domain, or train the pixel-level classifier using only translated source images and original target images. However, when there is a large domain shift between the source and target domains, we argue that this asymmetric structure cannot fully eliminate the domain gap. In this paper, we present a novel deep symmetric architecture for UDA in medical image segmentation, which consists of a segmentation sub-network and two symmetric source- and target-domain translation sub-networks. Specifically, based on the two translation sub-networks, we introduce a bidirectional alignment scheme via a shared encoder and private decoders to simultaneously align features 1) from the source to the target domain and 2) from the target to the source domain, which helps effectively mitigate the discrepancy between domains. Furthermore, we train the pixel-level classifier of the segmentation sub-network using not only original target images and translated source images, but also original source images and translated target images, which helps fully leverage the semantic information from images with different styles. Extensive experiments demonstrate that our method has remarkable advantages over state-of-the-art methods on both cross-modality Cardiac and BraTS segmentation tasks. Copyright © 2021, The Authors. All rights reserved.
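The bidirectional alignment scheme described above can be sketched in code. The following is a minimal toy illustration, not the authors' implementation: the shared encoder and the two private (per-domain) decoders are stand-ins built from simple linear maps, and all names (`shared_encoder`, `decode_to_source`, `decode_to_target`) and dimensions are illustrative assumptions. It only shows the data flow: each domain's images pass through the one shared encoder, are decoded into the other domain's style, and the segmenter is then fed all four image sets (original and translated, for both domains).

```python
# Toy sketch of the symmetric (bidirectional) translation scheme.
# Linear maps stand in for real convolutional sub-networks; all
# module names and shapes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy flattened-image / feature dimension

# Shared encoder: ONE set of weights used for both domains, so source
# and target images are mapped into a common feature space.
W_enc = rng.standard_normal((D, D))

# Private decoders: one per domain, rendering shared features back
# into that domain's image style.
W_dec_src = rng.standard_normal((D, D))
W_dec_tgt = rng.standard_normal((D, D))

def shared_encoder(x):
    return np.tanh(x @ W_enc)

def decode_to_source(f):
    return f @ W_dec_src

def decode_to_target(f):
    return f @ W_dec_tgt

def translate(x, direction):
    """Bidirectional translation via the shared encoder + a private decoder."""
    f = shared_encoder(x)
    return decode_to_target(f) if direction == "s2t" else decode_to_source(f)

# Toy batches of flattened "images" from each domain.
x_src = rng.standard_normal((4, D))
x_tgt = rng.standard_normal((4, D))

x_src2tgt = translate(x_src, "s2t")  # source image in target style
x_tgt2src = translate(x_tgt, "t2s")  # target image in source style

# The segmentation sub-network is trained on all four image sets:
# original source, translated source, original target, translated target.
segmenter_inputs = np.concatenate([x_src, x_src2tgt, x_tgt, x_tgt2src])
print(segmenter_inputs.shape)  # → (16, 8)
```

The key design point the abstract emphasizes is the asymmetry this removes: a one-directional pipeline only aligns source-to-target, whereas sharing the encoder across both translation directions forces one common feature space for both domains.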
