Author affiliations: Department of Health Informatics and Administration, University of Wisconsin-Milwaukee, Milwaukee, United States; Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI, United States; Medical College of Wisconsin, Clinical and Translational Science Institute of Southeastern Wisconsin, United States; Department of Electrical Engineering and Computer Science, University of Wisconsin–Milwaukee, Milwaukee, WI, United States
Publication: Computer Methods and Programs in Biomedicine Update (Comput. Methods Programs Biomed. Update)
Year/Volume: 2023, Vol. 3
Funding: National Institutes of Health (NIH), grant UL1TR001436; National Institutes of Health (NIH); National Center for Advancing Translational Sciences (NCATS)
Subjects: Artificial intelligence; Bone fracture; Computed tomography; Interpretable machine learning; Text classification
Abstract: Background: Machine learning (ML) has demonstrated success in classifying patients' diagnostic outcomes from free-text clinical notes. However, because of the complexity of machine learning models, interpreting the mechanism behind their classification results remains difficult. Methods: We investigated interpretable representations of text-based machine learning classification models. We built machine learning models to classify temporal bone fractures from 164 temporal bone computed tomography (CT) text reports, using the XGBoost, Support Vector Machine, Logistic Regression, and Random Forest algorithms. To interpret the models, we used two main methodologies: (1) we calculated the average word frequency score (WFS) for keywords, where the word frequency score captures the frequency gap of a keyword between positively and negatively classified cases; and (2) we used Local Interpretable Model-Agnostic Explanations (LIME) to show each word's contribution to the bone fracture classification. Results: In temporal bone fracture classification, the random forest model achieved an average F1-score of 0.93. WFS revealed a difference in keyword usage between fracture and non-fracture cases. Additionally, LIME visualized each keyword's contribution to the classification results. The evaluation of the LIME-based interpretation achieved the highest interpretation accuracy, 0.97. Conclusion: The interpretable text explainer can improve physicians' understanding of machine learning predictions. By providing simple visualizations, our model can increase trust in computerized models and supports more transparent computerized decision-making in clinical settings.
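The two interpretation methods described in the abstract map onto common Python tooling. Below is a minimal sketch, assuming the lime and scikit-learn packages: the two toy reports are illustrative stand-ins for the 164 CT text reports, and the word_frequency_score helper is one plausible reading of the paper's WFS (the exact formula is not given in the abstract); the word-level weights come from LIME's standard LimeTextExplainer API, not the authors' released code.

```python
# Hedged sketch of WFS and LIME-based interpretation for a text classifier.
# The toy reports and the WFS formula are illustrative assumptions,
# not the authors' actual data or implementation.
from lime.lime_text import LimeTextExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Stand-ins for the temporal bone CT text reports (assumption).
reports = [
    "comminuted fracture of the right temporal bone",
    "no acute fracture and mastoid air cells are well aerated",
]
labels = [1, 0]  # 1 = fracture, 0 = no fracture

def word_frequency_score(word, pos_docs, neg_docs):
    """Average per-document frequency gap of `word` between positively
    and negatively classified cases -- one plausible reading of WFS."""
    pos = sum(d.split().count(word) for d in pos_docs) / max(len(pos_docs), 1)
    neg = sum(d.split().count(word) for d in neg_docs) / max(len(neg_docs), 1)
    return pos - neg

# Random forest over TF-IDF features; predict_proba is what LIME perturbs.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(reports, labels)

# Word-level contributions to the classification of a single report.
explainer = LimeTextExplainer(class_names=["no fracture", "fracture"])
explanation = explainer.explain_instance(
    reports[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # [(word, weight), ...]
print(word_frequency_score("fracture", [reports[0]], [reports[1]]))
```

In a notebook setting, the same explanation object can be rendered as highlighted text via explanation.show_in_notebook(), which is the kind of simple word-level visualization the abstract describes for clinicians.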