arXiv

FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis

Authors: Maani, Fadillah; Saeed, Numan; Saleem, Tausifa; Farooq, Zaid; Alasmawi, Hussain; Diehl, Werner; Mohammad, Ameera; Waring, Gareth; Valappi, Saudabi; Bricker, Leanne; Yaqub, Mohammad

Affiliations: Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Abu Dhabi, United Arab Emirates

Publication: arXiv

Year: 2025


Subject: Ultrasonic imaging

Abstract: Foundation models are becoming increasingly effective in the medical domain, offering pre-trained models on large datasets that can be readily adapted for downstream tasks. Despite this progress, fetal ultrasound images remain a challenging domain for foundation models due to their inherent complexity, often requiring substantial additional training and facing limitations due to the scarcity of paired multimodal data. To overcome these challenges, here we introduce FetalCLIP, a vision-language foundation model capable of generating universal representations of fetal ultrasound images. FetalCLIP was pre-trained using a multimodal learning approach on a diverse dataset of 210,035 fetal ultrasound images paired with text. This represents the largest paired dataset of its kind used for foundation model development to date. This unique training approach allows FetalCLIP to effectively learn the intricate anatomical features present in fetal ultrasound images, resulting in robust representations that can be used for a variety of downstream applications. In extensive benchmarking across a range of key fetal ultrasound applications, including classification, gestational age estimation, congenital heart defect (CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all baselines while demonstrating remarkable generalizability and strong performance even with limited labeled data. We plan to release the FetalCLIP model publicly for the benefit of the broader scientific community. © 2025, CC BY-NC-ND.
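The abstract describes pretraining on image-text pairs with a multimodal learning approach; as the model's name suggests, this is in the style of CLIP, which aligns paired image and text embeddings with a symmetric contrastive (InfoNCE) objective. The sketch below illustrates that objective only; the function name, shapes, and temperature value are illustrative assumptions, not FetalCLIP's actual implementation or API.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss, assuming
# one text caption per image in a batch (illustrative, not FetalCLIP's code).
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so the dot product becomes cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (N, N); true pairs on the diagonal
    labels = np.arange(len(img))

    def xent(l):
        # Cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero; shuffling the captions relative to the images drives it up, which is the signal that pulls matching image-text embeddings together during pretraining.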
