Multi-domain data are widely leveraged in vision applications to take advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and differing imaging protocols, the availability of images for each domain can vary across data sources in practice, which makes it challenging to build a universal model over a varied set of input data. To tackle this problem, we propose a general approach to complete randomly missing domain(s) of data in real applications. Specifically, we develop a novel multi-domain image completion method that uses a generative adversarial network (GAN) with a representational disentanglement scheme to extract a shared content encoding and separate style encodings across multiple domains. We further show that the representation learned for multi-domain image completion can be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework in which image completion and segmentation share a content encoder. Experiments demonstrate consistent performance improvements on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion, respectively.
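For concreteness, below is a minimal PyTorch sketch of the content/style disentanglement idea described above. The module names (ContentEncoder, StyleEncoder, Decoder), the layer sizes, the additive style injection, and the mean fusion of content codes across available domains are illustrative assumptions; the abstract does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Extracts a domain-shared content feature map from one image."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)  # (B, 128, H/4, W/4)

class StyleEncoder(nn.Module):
    """Extracts a domain-specific style vector from one image."""
    def __init__(self, in_ch=1, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )
    def forward(self, x):
        return self.net(x)  # (B, style_dim)

class Decoder(nn.Module):
    """Generates an image of a target domain from content + style."""
    def __init__(self, out_ch=1, dim=128, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, dim)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, dim // 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim // 2, out_ch, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, content, style):
        # Simple additive style injection at every spatial location.
        s = self.style_proj(style)[:, :, None, None]
        return self.net(content + s)

# Completing a missing domain: fuse the shared content extracted from the
# available domains, then decode with a style code for the missing domain.
enc_c, enc_s, dec = ContentEncoder(), StyleEncoder(), Decoder()
x_t1 = torch.randn(2, 1, 64, 64)   # available modality, e.g. T1
x_t2 = torch.randn(2, 1, 64, 64)   # available modality, e.g. T2
content = torch.stack([enc_c(x_t1), enc_c(x_t2)]).mean(0)  # shared content
x_t1_rec = dec(content, enc_s(x_t1))        # reconstruct an available domain
x_missing = dec(content, torch.randn(2, 8)) # sampled style for a missing one
```

In this reading, a downstream segmentation head would consume the same shared content features, which is what ties the completion and segmentation tasks together in the unified framework.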
ISBN:
(Print) 9783031537660; 9783031537677
Multi-modal MRIs are essential in medical diagnosis; however, missing modalities often occur in clinical practice. Although recent works have attempted to extract modality-invariant representations from the available modalities to perform image completion and enhance segmentation, they neglect the most essential attributes shared across different modalities. In this paper, we propose a unified generative adversarial network (GAN) with pairwise modality-shared feature disentanglement. We develop a multi-pooling feature fusion module to combine features from all available modalities, and then apply a distance loss together with a margin loss to regularize the symmetry of the features. Our model outperforms existing state-of-the-art methods on the missing-modality completion task in terms of generation quality in most cases. We show that the generated images can improve brain tumor segmentation when important modalities are missing, especially in regions that require details from multiple modalities for accurate diagnosis.
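To make the fusion and regularization concrete, here is a hedged PyTorch sketch of a multi-pooling feature fusion module and the two losses. The pooling choices (element-wise max and mean across the modality axis), the 1x1 projection, and the exact loss forms are assumptions for illustration only, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPoolingFusion(nn.Module):
    """Fuses feature maps from all available modalities by concatenating
    pooled statistics taken across the modality axis, then projecting back."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim * 2, dim, kernel_size=1)
    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, one per available modality
        stacked = torch.stack(feats, dim=0)           # (M, B, C, H, W)
        fused = torch.cat([stacked.max(0).values,     # element-wise max pool
                           stacked.mean(0)], dim=1)   # element-wise mean pool
        return self.proj(fused)                       # (B, C, H, W)

def pairwise_distance_loss(feats):
    """Pulls modality-shared features of the same subject together
    by penalizing pairwise distances between modalities."""
    loss, n = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            loss = loss + F.mse_loss(feats[i], feats[j])
            n += 1
    return loss / max(n, 1)

def margin_loss(feats_a, feats_b, margin=1.0):
    """Hinge-style margin: shared features of different subjects should
    stay at least `margin` apart to avoid a collapsed representation."""
    return F.relu(margin - F.mse_loss(feats_a, feats_b))

# Usage sketch: three available modalities, 64-channel feature maps.
feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
fused = MultiPoolingFusion(64)(feats)
l_dist = pairwise_distance_loss(feats)
```

Using both a pull-together distance term and a push-apart margin term is a common way to regularize shared representations without letting them collapse to a trivial constant; how the paper pairs subjects for the margin term is not stated in the abstract.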