ISBN (print): 9783031164460; 9783031164453
Positron emission tomography (PET) is an important medical imaging technique, especially for the diagnosis of brain disorders and cancer. Modern PET scanners are usually combined with computed tomography (CT), where the CT image is used for anatomical localization, PET attenuation correction, and radiotherapy treatment planning. Considering the radiation dose of CT as well as the increasing spatial resolution of PET, there is a growing demand to synthesize the CT image from the PET image (without scanning CT) to reduce the risk of radiation exposure. However, most existing works perform learning-based image synthesis to construct the cross-modality mapping only in the image domain, without considering the projection domain, leading to potential physical inconsistency. To address this problem, we propose a novel PET-CT synthesis framework that exploits dual-domain information (i.e., the image domain and the projection domain). Specifically, we design both an image-domain network and a projection-domain network to jointly learn the high-dimensional mapping from PET to CT. The two domains are connected by a forward projection (FP) and a filtered back projection (FBP). To further help the PET-to-CT synthesis task, we also design a secondary CT-to-PET synthesis task with the same network structure, and combine the two tasks into a bidirectional mapping framework with several closed cycles. More importantly, these cycles serve as cycle-consistency losses that further guide network training toward better synthesis performance. Extensive validation on clinical PET-CT data demonstrates that the proposed PET-CT synthesis framework significantly outperforms state-of-the-art (SOTA) medical image synthesis methods.
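To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch of (1) image-domain cycle-consistency between a PET-to-CT generator and a CT-to-PET generator, and (2) a dual-domain agreement term tying the image-domain output to a projection-domain network through a forward projection. All names here (TinyGenerator, TinyProjNet, toy_forward_project) are illustrative stand-ins, not the paper's actual architecture or operators; in particular, the single-angle projection is a crude substitute for a differentiable multi-angle FP/FBP pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in 2D image-domain generator (the paper's architecture may differ)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TinyProjNet(nn.Module):
    """Stand-in 1D projection-domain network (PET projections -> CT projections)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def toy_forward_project(img):
    """Crude single-angle parallel-beam projection (column sums). A real
    dual-domain framework would use a differentiable multi-angle Radon
    transform (FP) and its filtered back projection (FBP)."""
    return img.sum(dim=-2)

def training_losses(G_pet2ct, G_ct2pet, P_pet2ct, pet, ct):
    # Closed cycles PET -> CT' -> PET'' and CT -> PET' -> CT''.
    ct_fake = G_pet2ct(pet)
    pet_rec = G_ct2pet(ct_fake)
    pet_fake = G_ct2pet(ct)
    ct_rec = G_pet2ct(pet_fake)
    loss_cycle = F.l1_loss(pet_rec, pet) + F.l1_loss(ct_rec, ct)

    # Dual-domain agreement: the projection of the synthesized CT should
    # match the projection-domain network's prediction from the PET projection.
    loss_dual = F.l1_loss(toy_forward_project(ct_fake),
                          P_pet2ct(toy_forward_project(pet)))
    return loss_cycle + loss_dual

# Usage with random tensors standing in for 2D PET/CT slices.
G1, G2, P = TinyGenerator(), TinyGenerator(), TinyProjNet()
pet = torch.rand(4, 1, 128, 128)
ct = torch.rand(4, 1, 128, 128)
training_losses(G1, G2, P, pet, ct).backward()
```

The key property of such a design is that both the cycle term and the dual-domain term are unsupervised with respect to paired targets, so they can be added on top of whatever supervised synthesis loss the framework already uses.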
Recent research on deep-learning methods has shown enormous promise for depth estimation. Although self-supervised estimators alleviate laborious annotation, numerous pixels fail to match their corresponding ones in adjacent frames, and a single re-projection loss function is oversimplified for stereoscopic adjacent-frame constraints. In this work, we constrain primitive inter-frame-supervised depth estimation via multiple bilateral consistencies, which build a bidirectional mapping between adjacent frames from their inherent properties, allowing us to develop pose-consistent and depth-consistent models. For the mismatched pixels in adjacent frames, a cycle-consistency framework reconstructs depth maps as scene images with an additional symmetric re-rendering network. The pose-consistency constraint ensures the reversibility of ego-motion transformations between adjacent frames, and the depth-consistency constraint ensures the continuity of depths across adjacent frames. With the joint optimization of three independent modules (the depth network, the pose network, and the re-rendering network), the proposed framework yields state-of-the-art metrics on the KITTI depth dataset when pre-trained on Cityscapes, with AbsRel, SqRel, RMSE, and RMSE(log) decreased by 6.6%, 12.4%, 0.4%, and 2.2%, respectively.
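The pose-consistency constraint admits a compact illustration: the ego-motion predicted from frame a to frame b, composed with the motion predicted from b back to a, should be close to the identity transform. The sketch below assumes poses are expressed as 4x4 homogeneous matrices; that convention, and the exact penalty form, are illustrative assumptions rather than the paper's interface (a depth-consistency term would analogously compare each frame's depth map against its neighbor's depth warped through the relative pose).

```python
import torch

def pose_consistency_loss(T_ab, T_ba):
    """Penalize deviation of the round-trip transform from the identity.

    T_ab, T_ba: (B, 4, 4) homogeneous ego-motion matrices for a->b and b->a,
    as predicted by a pose network (assumed output format, for illustration).
    """
    B = T_ab.shape[0]
    eye = torch.eye(4, device=T_ab.device).expand(B, 4, 4)
    round_trip = torch.bmm(T_ab, T_ba)  # ~identity if the poses are reversible
    return torch.mean(torch.abs(round_trip - eye))

# Usage: an exact inverse pair gives ~0 loss; a perturbed pair does not.
T_ab = torch.eye(4).unsqueeze(0).repeat(2, 1, 1)
T_ab[:, :3, 3] = torch.tensor([0.1, 0.0, 1.0])  # translate the camera forward
T_ba = torch.linalg.inv(T_ab)                   # exact reverse motion
print(pose_consistency_loss(T_ab, T_ba))                                  # ~0
print(pose_consistency_loss(T_ab, T_ba + 0.01 * torch.randn_like(T_ba)))  # > 0
```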