arXiv

MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation

Authors: Koleilat, Taha; Asgariandehkordi, Hojat; Rivaz, Hassan; Xiao, Yiming

Affiliations: Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada; Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada

Publication: arXiv

Year: 2024

Subject: Semantic Segmentation

Abstract: Medical image segmentation of anatomical structures and pathology is crucial in modern clinical diagnosis, disease study, and treatment planning. To date, great progress has been made in deep learning-based segmentation techniques, but most methods still lack data efficiency, generalizability, and interactivity. Consequently, developing new, precise segmentation methods that demand fewer labeled datasets is of utmost importance in medical image analysis. Recently, the emergence of foundation models such as CLIP and the Segment-Anything-Model (SAM), with comprehensive cross-domain representation, opened the door for interactive and universal image segmentation. However, exploration of these models for data-efficient medical image segmentation is still limited, but is highly necessary. In this paper, we propose a novel framework, MedCLIP-SAM, that combines CLIP and SAM models to generate segmentations of clinical scans from text prompts in both zero-shot and weakly supervised settings. To achieve this, we employ a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss to fine-tune the BiomedCLIP model, and the recent gScoreCAM to generate prompts that obtain segmentation masks from SAM in a zero-shot setting. Additionally, we explore the use of zero-shot segmentation labels in a weakly supervised paradigm to further improve segmentation quality. In extensive testing on three diverse segmentation tasks and medical image modalities (breast tumor ultrasound, brain tumor MRI, and lung X-ray), our proposed framework demonstrates excellent accuracy. Code is available at https://***/HealthX-Lab/MedCLIP-SAM. © 2024, CC BY-NC-SA.
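The abstract names the DHN-NCE loss but does not spell out its formulation. As a rough illustration, the sketch below is a plausible NumPy rendering of a *decoupled* InfoNCE objective (the positive pair is removed from the denominator) with exponential *hard-negative* weighting, applied symmetrically over image and text embeddings; the function name and the `temperature`/`beta` defaults are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def dhn_nce_loss(img_emb, txt_emb, temperature=0.07, beta=0.15):
    """Sketch of a decoupled, hard-negative-weighted contrastive loss.

    img_emb, txt_emb: (N, D) L2-normalized embeddings of N paired samples.
    The positive pair is excluded from the denominator (decoupling), and
    each negative is re-weighted by exp(beta * logit), so harder (more
    similar) negatives contribute more to the loss.
    """
    sim = img_emb @ txt_emb.T / temperature   # (N, N) similarity logits
    n = sim.shape[0]
    mask = ~np.eye(n, dtype=bool)             # off-diagonal = negatives

    def one_direction(s):
        pos = np.diag(s)                      # logits of the matched pairs
        neg = np.where(mask, np.exp(s), 0.0)  # exp-logits of negatives only
        # hard-negative weights, normalized to mean 1 over the negatives
        w = np.where(mask, np.exp(beta * s), 0.0)
        w = (n - 1) * w / w.sum(axis=1, keepdims=True)
        return -pos + np.log((w * neg).sum(axis=1))

    # symmetric image-to-text and text-to-image terms
    return 0.5 * (one_direction(sim).mean() + one_direction(sim.T).mean())
```

Under this formulation, perfectly aligned image/text pairs drive the loss strongly negative via the positive logit, while the weighting term keeps pressure on the negatives most easily confused with each pair.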
