Author Affiliations: Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China; Hainan Normal Univ, Coll Informat Sci & Technol, Haikou 571158, Peoples R China; Northern Border Univ, Ctr Sci Res & Entrepreneurship, Ar Ar 73213, Saudi Arabia
Publication: SCIENTIFIC REPORTS (Sci. Rep.)
Year/Volume/Issue: 2025, Vol. 15, Issue 1
Pages: 1-16
Core Indexing:
Funding: Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia [NBU-FFR-2025-1564-01]
Keywords: Skin Lesion Segmentation; Dual Encoder; Vision Transformer (ViT); Convolutional Neural Networks (CNNs); ViT-CNN
Abstract: Skin cancer is a prevalent health concern, and accurate segmentation of skin lesions is crucial for early diagnosis. Existing methods for skin lesion segmentation often face trade-offs between efficiency and feature extraction capability. This paper proposes Dual Skin Segmentation (DuaSkinSeg), a deep-learning model that addresses this gap by using dual encoders for improved performance. DuaSkinSeg leverages a pre-trained MobileNetV2 for efficient local feature extraction. Subsequently, a Vision Transformer-Convolutional Neural Network (ViT-CNN) encoder-decoder architecture extracts higher-level features focusing on long-range dependencies. This approach combines the efficiency of MobileNetV2 with the feature extraction capabilities of the ViT encoder for improved segmentation performance. To evaluate DuaSkinSeg's effectiveness, we conducted experiments on three publicly available benchmark datasets: ISIC 2016, ISIC 2017, and ISIC 2018. The results demonstrate that DuaSkinSeg achieves competitive performance compared to existing methods, highlighting the potential of the dual encoder architecture for accurate skin lesion segmentation.
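To make the dual-encoder idea concrete, the following is a minimal PyTorch sketch in the spirit of the abstract: a pretrained MobileNetV2 branch for local features plus a ViT-style branch for long-range dependencies, fused and decoded to a segmentation mask. The embedding dimension, transformer depth, fusion via 1x1 convolution, and the upsampling decoder are all illustrative assumptions, not the authors' exact DuaSkinSeg design.

```python
# Hypothetical dual-encoder segmentation sketch (not the paper's exact model).
import torch
import torch.nn as nn
import torchvision


class ViTBranch(nn.Module):
    """Patch-embed the image and run a small Transformer encoder."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.grid = img_size // patch                      # 14 x 14 patches
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid ** 2, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        b = x.size(0)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens + self.pos)
        # Reshape the token sequence back to a 2-D feature map.
        return tokens.transpose(1, 2).reshape(b, -1, self.grid, self.grid)


class DualEncoderSeg(nn.Module):
    """MobileNetV2 (local) + ViT (global) encoders, fused, then upsampled."""
    def __init__(self, num_classes=1):
        super().__init__()
        mnet = torchvision.models.mobilenet_v2(weights="DEFAULT")
        self.cnn = mnet.features                    # (B, 1280, 7, 7) at 224 input
        self.vit = ViTBranch()
        self.fuse = nn.Conv2d(1280 + 256, 256, kernel_size=1)
        # Simple decoder: progressive upsampling back to input resolution.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        local_feat = self.cnn(x)                           # (B, 1280, 7, 7)
        global_feat = self.vit(x)                          # (B, 256, 14, 14)
        global_feat = nn.functional.avg_pool2d(global_feat, 2)  # match 7x7
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return self.decoder(fused)                         # (B, 1, 224, 224)


if __name__ == "__main__":
    model = DualEncoderSeg()
    mask_logits = model(torch.randn(1, 3, 224, 224))
    print(mask_logits.shape)                     # torch.Size([1, 1, 224, 224])
```

The design choice this sketch illustrates is the complementary split described in the abstract: the CNN branch captures fine local texture cheaply, while the transformer branch models long-range dependencies across the lesion; concatenation plus a 1x1 convolution is one common (assumed) way to fuse the two feature maps before decoding.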