Solutions based on Artificial Intelligence techniques have been proposed in the healthcare sector; however, the lack of understanding of their results has been a cause for concern and one of the main barriers to their effective use. To fill this gap in understanding models considered black boxes, Explainable Artificial Intelligence was proposed to explain the relationship between the input data and the predictions made by Artificial Intelligence models. In this study, we present the use of explainability techniques for a black-box machine learning model in breast cancer classification. Breast cancer is the type of cancer that causes the most deaths among women worldwide, and early diagnosis is essential to increase the chances of survival. The study evaluated the explainability of breast tumor predictions made by a Multilayer Perceptron artificial neural network, considering a sample containing data from 164 female patients who underwent Core Biopsy in the southern region of Brazil. The SHAP, LIME, PDP, and ICE methods were used for the global and local explainability of the predictions. The results showed that, in the global assessment, the BI-RADS® 5 ultrasound and mammography attributes were considered the most important for predicting a malignant tumor. A nodule size greater than 2 cm, the presence of a family history, and a palpable nodule were also considered important for the prediction. In the local evaluation, the model correctly classified the tumors considering different patient characteristics. Through these results, the explainability methods provide evidence that the predictive model for breast cancer can be interpreted and understood, overcoming the barrier of the lack of understanding of the results of a black-box model.
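The global-versus-local distinction drawn in the abstract can be sketched with the PDP and ICE methods it names. The snippet below is a minimal illustration, not the study's pipeline: it trains a Multilayer Perceptron on synthetic data standing in for the 164-patient sample (the feature set, hyperparameters, and dataset are assumptions, not the study's real attributes such as BI-RADS category or nodule size), then computes the averaged partial dependence curve (PDP, a global view of one feature's effect) and the per-patient ICE curves (a local view) using scikit-learn.

```python
# Hypothetical sketch of global (PDP) vs. local (ICE) explainability
# for an MLP classifier; synthetic data stands in for the patient sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import partial_dependence

# Synthetic stand-in for the 164-patient dataset (features are assumed,
# not the study's real clinical attributes).
X, y = make_classification(n_samples=164, n_features=6, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# kind="both" returns the averaged partial dependence (PDP, global)
# and one ICE curve per patient (local) for feature 0.
result = partial_dependence(model, X, features=[0], kind="both")
pdp_curve = result["average"][0]      # shape: (n_grid_points,)
ice_curves = result["individual"][0]  # shape: (n_samples, n_grid_points)
print(pdp_curve.shape, ice_curves.shape)
```

By construction, the PDP is the pointwise average of the ICE curves, which is why the two views complement each other: the PDP summarizes the feature's effect across the whole sample, while each ICE curve shows how the prediction for one patient changes as that feature varies.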