Author Affiliations: Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing 100876, Peoples R China; Univ Trento, Dept Informat Engn & Comp Sci, I-38123 Trento, Italy; Tencent Technol, Beijing 100193, Peoples R China; HuaAn Secur Co Ltd, Hefei 230031, Peoples R China
Publication: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (IEEE Trans. Intell. Transp. Syst.)
Year/Volume/Issue: 2025, Vol. 26, No. 2
Pages: 1482-1493
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0814 [Engineering - Civil Engineering]; 0823 [Engineering - Transportation Engineering]
Keywords: Roads; Accuracy; Data augmentation; Uncertainty; Image segmentation; Deep learning; Real-time systems; Laser radar; Data models; Computer science; Road segmentation; evidence learning; RGB-D data augmentation
Abstract: Despite significant progress in RGB-D based road segmentation in recent years, the latest methods cannot achieve both state-of-the-art accuracy and real-time performance, because their high accuracy relies on heavy network structures. We argue that this reliance stems from unsuitable multimodal fusion. Specifically, RGB and depth data in road scenes are each sensitive to different regions, but current RGB-D based road segmentation methods generally combine features within sensitive regions, which preserves false road representations from one of the modalities. Based on these findings, we design an Evidence-based Road Segmentation Method (Evi-RoadSeg) that incorporates prior knowledge of the modality-specific characteristics. First, we abandon the cross-modal fusion operation commonly used in existing multimodal methods. Instead, we collect road evidence from the RGB and depth inputs separately via two low-latency subnetworks, and fuse the road representations of the two subnetworks by taking each modality's evidence as a measure of confidence. Second, we propose an RGB-D data augmentation scheme tailored to road scenes that enhances the unique properties of RGB and depth data; it facilitates learning by adding more sensitive regions to the samples. Finally, the proposed method is evaluated on the widely used KITTI-road, ORFD, and R2D datasets. Our method achieves state-of-the-art accuracy at over 70 FPS, 5× faster than comparable RGB-D methods. Furthermore, extensive experiments show that our method can be deployed on a Jetson Nano 2GB at 8+ FPS. The code will be released at https://***/xuefeng-cvr/Evi-RoadSeg.
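The abstract describes fusing the two subnetworks' outputs by weighting each modality with its own evidence-derived confidence. The record does not give the paper's exact formulation, but the general idea can be sketched under a standard evidential deep learning assumption (non-negative per-class evidence mapped to a Dirichlet, with uncertainty inversely related to total evidence strength); all function and variable names below are illustrative, not the authors' code:

```python
import numpy as np

def evidential_fuse(evi_rgb, evi_depth):
    """Fuse per-pixel road evidence from two modality subnetworks.

    Each input is a non-negative evidence map of shape (H, W, K) for K
    classes (e.g. road / non-road). Under the subjective-logic view of
    evidential learning, evidence e defines a Dirichlet with alpha = e + 1;
    belief mass is e / S and uncertainty is K / S, where S = sum(alpha).
    A modality that is uncertain in a region thus contributes less there.
    """
    def belief_and_uncertainty(evidence):
        alpha = evidence + 1.0
        strength = alpha.sum(axis=-1, keepdims=True)   # Dirichlet strength S
        belief = evidence / strength                   # per-class belief mass
        uncertainty = evidence.shape[-1] / strength    # u = K / S
        return belief, uncertainty

    b_rgb, u_rgb = belief_and_uncertainty(evi_rgb)
    b_depth, u_depth = belief_and_uncertainty(evi_depth)

    # Confidence-weighted combination: trust the modality whose evidence
    # is stronger (lower uncertainty) at each pixel.
    w_rgb = 1.0 - u_rgb
    w_depth = 1.0 - u_depth
    fused = (w_rgb * b_rgb + w_depth * b_depth) / (w_rgb + w_depth + 1e-8)
    return fused.argmax(axis=-1)   # per-pixel class labels
```

For example, at a pixel where the RGB subnetwork produces strong road evidence while the depth subnetwork produces none, the depth branch's weight collapses toward zero and the fused label follows the RGB prediction; this is one simple way to realize "evidence as a measure of confidence," not necessarily the paper's exact fusion rule.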