Author affiliations: Department of Medical Oncology, Harbin Medical University Cancer Hospital, Harbin 150040, China; Department of Endoscope, Harbin Medical University Cancer Hospital, Harbin 150040, China; Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
Publication: Journal of Advanced Research (J. Adv. Res.)
Year/Volume/Issue: 2024
Funding: Harbin Medical University (HMU) and Harbin Institute of Technology (HIT) (IR2021224, AUGA9803503221); Haiyan Foundation of Harbin Medical University Cancer Hospital (JJZD2023-02); National Natural Science Foundation of China (NSFC) (81673024); Fundamental Research Funds for the Provincial Universities of Heilongjiang (2023-KYYWF-0223); Science and Technology Plan Project of Heilongjiang Provincial Health Commission (20230303100250)
Keywords: Attention mechanism; Bronchoscopy; Computer-aided diagnosis; Convolutional neural network; Deep learning
Abstract: Introduction: Bronchoscopy is of great significance in diagnosing and treating respiratory illness. Using deep learning, a diagnostic system for bronchoscopy images can improve the accuracy of tracheal, bronchial, and pulmonary disease diagnoses for physicians and ensure timely pathological or etiological examinations for patients. Improving the diagnostic accuracy of the algorithms remains the key to this technology. Objectives: To address this problem, we proposed a multiscale attention residual network (MARN) for diagnosing lung conditions from bronchoscopic images. The multiscale convolutional block attention module (MCBAM) was designed to enable accurate focus on lesion regions by enhancing spatial and channel features. Gradient-weighted Class Activation Mapping (Grad-CAM) was provided to increase the interpretability of diagnostic results. Methods: We collected 615 cases from Harbin Medical University Cancer Hospital, comprising 2900 images. The dataset was partitioned randomly into training, validation, and test sets to update model parameters, evaluate the model's training performance, select the network architecture and parameters, and estimate the final model. In addition, we compared MARN with other algorithms. Furthermore, three physicians with different qualifications were invited to diagnose the same test images, and their results were compared with those of the model. Results: On the dataset of normal and lesion images, our model achieved an accuracy of 97.76% and an AUC of 99.79%. The model recorded 92.26% accuracy and 96.82% AUC on the dataset of benign and malignant lesion images, and 93.10% accuracy and 99.02% AUC on the dataset of normal, benign, and malignant lesion images. Conclusion: These results demonstrate that our network outperforms other methods in diagnostic performance. The accuracy of our model is roughly the same as that of experienced physicians, while its efficiency is much higher than that of doctors.
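The MCBAM described above builds on CBAM-style attention, which refines a feature map first along the channel axis and then along the spatial axis. The abstract does not give the module's internals, so the following is only a minimal single-scale NumPy sketch of that general pattern; the paper's actual MCBAM additionally uses multiscale convolutions, learned MLP/convolution weights inside each gate, and residual connections, none of which are reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze the spatial dimensions with average and max
    # pooling, then gate each channel. (CBAM passes the pooled vectors
    # through a shared MLP; that learned part is omitted in this sketch.)
    avg = feat.mean(axis=(1, 2))            # (C,)
    mx = feat.max(axis=(1, 2))              # (C,)
    gate = sigmoid(avg + mx)                # (C,) values in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Pool across the channel axis, then gate each spatial location.
    # (CBAM applies a 7x7 convolution to the pooled maps; omitted here.)
    avg = feat.mean(axis=0)                 # (H, W)
    mx = feat.max(axis=0)                   # (H, W)
    gate = sigmoid(avg + mx)                # (H, W) values in (0, 1)
    return feat * gate[None, :, :]

def cbam_block(feat):
    # Channel attention followed by spatial attention, the CBAM ordering.
    return spatial_attention(channel_attention(feat))

x = np.random.rand(8, 16, 16)   # a hypothetical (channels, height, width) map
y = cbam_block(x)               # same shape, attention-weighted features
```

Because both gates are sigmoids, the output keeps the input's shape and every activation is scaled down in proportion to its estimated importance, which is what lets such a module emphasize lesion regions without changing the downstream layer interfaces.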
MARN has great potential for assisting physicians.