Author affiliations: Hohai Univ, Coll Informat Sci & Engn, Changzhou 213200, Peoples R China; Hohai Univ, Jiangsu Key Lab Power Transmiss & Distribut Equipm, Changzhou 213200, Peoples R China
Publication: VISUAL COMPUTER (Visual Comput)
Year/Volume/Issue: 2025, Vol. 41, No. 8
Pages: 6271-6297
Core indexing:
Subject classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees may be conferred in Engineering or Science)]
Funding: Jiangsu Provincial Key Research and Development Program
Keywords: Camouflaged object detection; Multi-scale feature extraction; Boundary-aware learning; Convolutional neural network; Multi-guidance
Abstract: Camouflaged object detection (COD) is significantly more challenging than traditional salient object detection (SOD) due to the high intrinsic similarity between camouflaged objects and their backgrounds, as well as complex environmental conditions. Although current deep learning methods achieve remarkable performance across various scenarios, they still face limitations in challenging situations, such as occluded targets or scenes containing multiple targets. Inspired by the human visual process of detecting camouflaged objects, we introduce BGMR-Net, a boundary-guided multi-scale refinement network designed to identify camouflaged objects accurately. Specifically, we propose the Global Information Extraction (GIE) module to expand the receptive field while preserving detailed cues. Additionally, we design the Boundary-Aware (BA) module, which integrates features across all scales and explores local information from neighboring-layer features. Finally, we propose the Multi-information Fusion Dual Stream (MFDS) module, which combines various types of guidance information (i.e., side-output backbone guidance, boundary guidance, neighbor guidance, and global guidance) to generate finer-grained results through a step-by-step refinement process. Extensive experiments on three benchmark datasets demonstrate that our method significantly outperforms 30 competing approaches. Our code is available at https://***/yeqian1961/BGMR-Net.
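For orientation only, the following is a minimal, hypothetical PyTorch sketch of how a boundary-guided multi-scale pipeline of this kind could be wired together. The module names (GIE, BA, MFDS) follow the abstract, but their internals here (parallel dilated convolutions for receptive-field expansion, concatenation-based fusion, a single refinement step) are assumptions for illustration and do not reproduce the paper's actual design.

```python
# Hypothetical sketch of a boundary-guided multi-scale refinement flow.
# Module internals are assumptions, not the implementation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GIE(nn.Module):
    """Global Information Extraction: enlarge the receptive field with
    parallel dilated convolutions while keeping a detail-preserving 1x1
    branch (an assumed realization of the idea in the abstract)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.detail = nn.Conv2d(in_ch, out_ch, 1)                      # fine-detail branch
        self.d2 = nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2)   # wider context
        self.d4 = nn.Conv2d(in_ch, out_ch, 3, padding=4, dilation=4)   # widest context
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 3, padding=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.detail(x), self.d2(x), self.d4(x)], dim=1))

class BA(nn.Module):
    """Boundary-Aware module: combine a feature with its neighboring-scale
    feature and predict a 1-channel boundary map (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.edge = nn.Conv2d(ch, 1, 1)

    def forward(self, feat, neighbor):
        neighbor = F.interpolate(neighbor, size=feat.shape[-2:], mode="bilinear",
                                 align_corners=False)
        f = self.conv(torch.cat([feat, neighbor], dim=1))
        return f, self.edge(f)

class MFDS(nn.Module):
    """Multi-information Fusion Dual Stream: fuse backbone, boundary,
    neighbor, and global guidance into a refined camouflage map
    (assumed single-step version of the step-by-step refinement)."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(3 * ch + 1, ch, 3, padding=1)
        self.pred = nn.Conv2d(ch, 1, 1)

    def forward(self, backbone_feat, boundary_map, neighbor_feat, global_feat):
        h, w = backbone_feat.shape[-2:]
        parts = [backbone_feat,
                 F.interpolate(boundary_map, (h, w), mode="bilinear", align_corners=False),
                 F.interpolate(neighbor_feat, (h, w), mode="bilinear", align_corners=False),
                 F.interpolate(global_feat, (h, w), mode="bilinear", align_corners=False)]
        return self.pred(self.fuse(torch.cat(parts, dim=1)))

if __name__ == "__main__":
    # Toy tensors standing in for two adjacent backbone side outputs.
    f_low = torch.randn(1, 64, 44, 44)
    f_high = torch.randn(1, 64, 22, 22)
    g = GIE(64, 64)(f_high)                    # global context from the deeper feature
    ba_feat, edge = BA(64)(f_low, f_high)      # boundary cue from neighboring scales
    pred = MFDS(64)(f_low, edge, ba_feat, g)   # fused, refined camouflage map
    print(pred.shape)                          # torch.Size([1, 1, 44, 44])
```

In a multi-stage version of this sketch, the MFDS output for one scale would serve as the neighbor or global guidance for the next, coarser-to-finer stage, mirroring the step-by-step refinement the abstract describes.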