Author Affiliations: Guilin Univ Elect Technol, Sch Comp & Informat Secur, Guilin 541004, Peoples R China; Southern Med Univ, Guangdong Prov Peoples Hosp, Guangdong Acad Med Sci, Dept Radiol, Guangzhou 510080, Peoples R China; Guangdong Prov Key Lab Artificial Intelligence Med, Guangzhou 510080, Peoples R China; Southern Med Univ, Guangdong Prov Peoples Hosp, Med Res Inst, Guangdong Acad Med Sci, Guangzhou 510080, Peoples R China; South China Univ Technol, Guangzhou Peoples Hosp 1, Sch Med, Dept Radiol, Guangzhou 510080, Peoples R China; Shanxi Med Univ, Shanxi Prov Canc Hosp, Shanxi Hosp Canc Hosp Chinese Acad Med Sci, Dept Radiol, Taiyuan 030013, Peoples R China; Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Dept Thorac Surg, Wuhan 430021, Peoples R China; Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Dept Radiol, Wuhan 430021, Peoples R China; Maastricht Univ, Fac Hlth Med Life Sci, Clin Data Sci, NL-6229 ET Maastricht, Netherlands; Maastricht Univ, GROW Sch Oncol & Reprod, Dept Radiat Oncol, Maastro Med Ctr, NL-6229 ET Maastricht, Netherlands
Publication: EXPERT SYSTEMS WITH APPLICATIONS (Expert Sys Appl)
Year/Volume: 2025, Vol. 269
Core Indexing:
Subject Classification: 1201 [Management - Management Science and Engineering (degrees in Management or Engineering)]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]
Funding: National Natural Science Foundation of China [82272075, 82102034, 82472062, 82372044, 82371954, 82360356, 82001789, 62102103]; Natural Science Foundation of Guangdong Province, China [2024A1515011672]; Guangxi Science and Technology Project [AB24010086, AB21220037]; Key-Area Research and Development Program of Guangdong Province, China [2021B0101420006]; Regional Innovation and Development Joint Fund of National Natural Science Foundation of China [U22A20345]; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application [2022B1212010011]; Natural Science Foundation for Distinguished Young Scholars of Guangdong Province [2023B1515020043]; Applied Basic Research Projects of Shanxi Province, China, Outstanding Youth Foundation [202103021222014]
Keywords: Lung tumor segmentation; Vision transformer; Contrastive learning; Masked image modeling; Self-supervised learning
Abstract: Precise and automatic segmentation of lung tumors is crucial for computer-aided diagnosis and subsequent treatment planning. However, the heterogeneity of lung tumors, which vary in size, shape, and location, combined with the low contrast between tumors and adjacent tissues, significantly complicates accurate segmentation. Furthermore, most supervised segmentation models are limited by the scarcity and lack of diversity of labeled training data. Although various self-supervised learning strategies have been developed for model pre-training with unlabeled data, their relative benefits for the downstream task of lung tumor segmentation on CT scans remain uncertain. To address these challenges, we introduce a robust and label-efficient Transformer-based framework with different self-supervised strategies for lung tumor segmentation. Model training proceeds in two phases. In the pre-training phase, we pre-train the model on a large number of unlabeled CT scans, employing three different pre-training strategies and comparing their impact on the downstream lung tumor segmentation task. In the fine-tuning phase, we use the encoders of the pre-trained models for label-efficient supervised fine-tuning. In addition, we design a surrounding samples-based contrastive learning (SSCL) module at the end of the encoder to enhance feature extraction, especially for tumors with indistinct boundaries. The proposed methods are evaluated on test sets from seven different centers. When only a small amount of labeled data is available, our SimMIM3D-pre-trained model demonstrates superior segmentation performance compared to supervised models on three internal test sets, achieving Dice coefficients of 0.8419, 0.8346, and 0.8282, respectively. It also generalizes well to external test sets, with Dice coefficients of 0.7594, 0.7684, 0.6578, and 0.6621, respectively. Extensive experiments confirm the efficacy of our methodology, demonstrating significant improvements.
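Illustrative note: the abstract refers to two self-supervised pre-training ideas, masked image modeling (SimMIM-style reconstruction of randomly masked 3D patches) and contrastive learning (the SSCL module). The Python/PyTorch sketch below is a minimal, hypothetical illustration of these two ideas only; the encoder/decoder modules, mask ratio, patch size, and the generic InfoNCE loss are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def masked_image_modeling_step(encoder, decoder, volume, mask_ratio=0.6, patch=16):
    # SimMIM-style step (sketch): mask random 3D patches of a CT volume,
    # reconstruct the volume, and compute an L1 loss only on the masked voxels.
    b, c, d, h, w = volume.shape
    gd, gh, gw = d // patch, h // patch, w // patch
    mask = (torch.rand(b, 1, gd, gh, gw, device=volume.device) < mask_ratio).float()
    voxel_mask = F.interpolate(mask, size=(d, h, w), mode="nearest")  # 1 = masked
    masked_input = volume * (1.0 - voxel_mask)        # zero out masked patches
    recon = decoder(encoder(masked_input))            # reconstruct the full volume
    loss = (recon - volume).abs() * voxel_mask
    return loss.sum() / voxel_mask.sum().clamp(min=1.0)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Generic InfoNCE contrastive loss (sketch): pull the anchor and positive
    # embeddings together while pushing the negative embeddings away.
    anchor = F.normalize(anchor, dim=-1)              # (B, D)
    positive = F.normalize(positive, dim=-1)          # (B, D)
    negatives = F.normalize(negatives, dim=-1)        # (B, K, D)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)       # (B, 1)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)   # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)            # positive pair is class 0

In a two-phase workflow like the one described above, a reconstruction loss such as masked_image_modeling_step would drive the unlabeled pre-training phase, after which the pre-trained encoder is fine-tuned with a supervised segmentation loss; a contrastive term like info_nce could be attached at the encoder output, which is roughly the role the abstract describes for the SSCL module.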