
Clinical impact of an explainable machine learning with amino acid PET imaging: application to the diagnosis of aggressive glioma

Authors: Ahrari, Shamimeh; Zaragori, Timothee; Zinsz, Adeline; Hossu, Gabriela; Oster, Julien; Allard, Bastien; Al Mansour, Laure; Bessac, Darejan; Boumedine, Sami; Bund, Caroline; De Leiris, Nicolas; Flaus, Anthime; Guedj, Eric; Kas, Aurelie; Keromnes, Nathalie; Kiraz, Kevin; Kuijper, Fiene Marie; Maitre, Valentine; Querellou, Solene; Stien, Guilhem; Humbert, Olivier; Imbert, Laetitia; Verger, Antoine

Affiliations: Univ Lorraine, INSERM, IADI, U1254, Nancy, France; Univ Lorraine, INSERM U1254 & Nancyclotep Imaging Platform, Nancy, France; Univ Lorraine, CIC 1433 Innovat Technol, Inserm, Nancy, France; Ctr Hosp Reg Univ Nancy, Dept Nucl Med, Nancy, France; Ctr Hosp Valence, Dept Med, Valence, France; Hosp Civils Lyon, Dept Nucl Med, Lyon, France; ICANS, Dept Nucl Med & Mol Imaging, Strasbourg, France; Ctr Antoine Lacassagne, Dept Nucl Med, Nice, France; Univ Strasbourg, ICube, CNRS UMR 7357, Strasbourg, France; Ctr Hosp Univ Grenoble Alpes, Grenoble, France; Univ Grenoble Alpes, INSERM, LRB, Grenoble, France; Lyon Neurosci Res Ctr, CNRS UMR5292, INSERM U1028, Lyon, France; Timone Hosp, Dept Nucl Med, Marseille, France; Aix Marseille Univ, Inst Fresnel, CNRS, Cent Marseille, APHP, CERIMED, Marseille, France; Grp Hop Pitie Salpetriere, Assistance Publ Hop Paris (AP-HP), Dept Internal Med, Paris, France; Sorbonne Univ, Lab Imagerie Biomed, INSERM, CNRS, Paris, France; Univ Western Brittany (UBO), Ctr Hosp Reg Univ Brest (CHRU Brest), INSERM UMR 1304 GETBO, F-29200 Brest, France; Univ Cote d'Azur, INSERM, CNRS, iBV, Nice, France; CHRU Nancy, Hop Brabois, Med Nucl, Allee Morvan, F-54500 Vandoeuvre-les-Nancy, France

Journal: EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING (Eur. J. Nucl. Med. Mol. Imaging)

Year/Volume/Issue: 2025, Vol. 52, No. 6

Pages: 1989-2001


Subject classification: 1002 [Medicine - Clinical Medicine]; 1009 [Medicine - Specialized Medicine]; 10 [Medicine]

Funding: None declared

Keywords: Glioma; Positron emission tomography; Radiomics; Explainable machine learning; Interpretability

Abstract:

Purpose: Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have shown efficiency in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear physicians' assessment of glioma aggressiveness at diagnosis.

Methods: Patients underwent dynamic 6-[F-18]fluoro-L-DOPA PET acquisition. With a 75%/25% split for training (n = 63) and test sets (n = 22), an ensemble ML model was trained using radiomics features extracted from static/dynamic parametric PET images to classify lesion aggressiveness. Three explainable ML methods, Local Interpretable Model-agnostic Explanations (LIME), Anchor, and SHapley Additive exPlanations (SHAP), generated patient-specific explanations. Eighteen physicians from eight institutions evaluated the test samples. During the first phase, physicians analyzed the 22 cases exclusively through magnetic resonance and static/dynamic PET images, acquired within a maximum interval of 30 days. In the second phase, the same physicians reevaluated the same cases (n = 22) using all available data, including the radiomics model predictions and explanations.

Results: Eighty-five patients (54 [39-62] years old, 41 women) were selected. In the second phase, physicians demonstrated a significant improvement in diagnostic accuracy compared to the first phase (0.775 [0.750-0.802] vs. 0.717 [0.694-0.737], p = 0.007). The explainable radiomics model augmented physician agreement, with a 22.72% increase in Fleiss' kappa, and significantly enhanced physician confidence (p < 0.001). Among all physicians, Anchor and SHAP showed efficacy in 75% and 72% of cases, respectively, outperforming LIME (p = 0.001).

Conclusions: Our results highlight the potential of an explainable radiomics model using amino acid PET scans as a diagnostic support to assist physicians in identifying glioma aggressiveness.
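The workflow described in the Methods (a 75%/25% split, a classifier trained on radiomics features, and patient-specific SHAP attributions) can be sketched in miniature. This is a hedged illustration, not the study's pipeline: the data are synthetic stand-ins for radiomics features, a plain logistic regression replaces the paper's ensemble model, and the SHAP values are computed via the closed form that holds exactly for linear models (w_i * (x_i - E[x_i])) rather than with the general SHAP algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 85 patients, 6 radiomics-like features
# (the study extracted features from static/dynamic parametric PET images).
X = rng.normal(size=(85, 6))
w_true = np.array([1.5, -2.0, 0.0, 0.8, 0.0, -0.5])
y = (X @ w_true + rng.normal(scale=0.5, size=85) > 0).astype(float)

# 75%/25% split, mirroring the paper's 63-patient training / 22-patient test sets.
X_tr, X_te, y_tr, y_te = X[:63], X[63:], y[:63], y[63:]

# Minimal logistic regression fitted by gradient descent
# (the study used an ensemble ML model; this is only a sketch).
w, b = np.zeros(6), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    w -= 0.5 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.5 * np.mean(p - y_tr)

# For a linear model, SHAP values reduce to w_i * (x_i - E[x_i]):
# each row gives one test patient's per-feature push toward/away from
# the "aggressive" class, relative to the training-set average.
shap_values = w * (X_te - X_tr.mean(axis=0))
print(shap_values.shape)  # -> (22, 6): one attribution per patient per feature
```

A useful sanity check of this attribution: the per-patient SHAP values sum (plus the base value, the model's score at the training mean) exactly to the patient's logit, which is the additivity property the paper's patient-specific explanations rely on.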
