Progressive LiDAR Adaptation for Road Detection

Authors: Zhe Chen, Jing Zhang, Dacheng Tao

Affiliations: UBTECH Sydney Artificial Intelligence Centre and School of Computer Science, Faculty of Engineering and Information Technologies, University of Sydney; School of Automation, Hangzhou Dianzi University; University of Technology Sydney

Publication: IEEE/CAA Journal of Automatica Sinica

Year/Volume/Issue: 2019, Vol. 6, No. 3

Pages: 693-702

Subject Classification: 12 [Management] 1201 [Management - Management Science and Engineering (degrees awarded in management or engineering)] 08 [Engineering]

Funding: Supported by Australian Research Council Projects (FL-170100117, DP-180103424, IH-180100002) and the National Natural Science Foundation of China (NSFC) (61806062)

Keywords: autonomous driving, computer vision, deep learning, LiDAR processing, road detection

Abstract: Despite rapid developments in visual image-based road detection, robustly identifying road areas in visual images remains challenging due to issues like illumination changes and blurry images. To this end, LiDAR sensor data can be incorporated to improve visual image-based road detection, because LiDAR data is less susceptible to visual noise. However, the main difficulty in introducing LiDAR information into visual image-based road detection is that LiDAR data and its extracted features do not share the same space with the visual data and visual features. Such gaps between spaces may limit the benefits of LiDAR information for road detection. To overcome this issue, we introduce a novel Progressive LiDAR adaptation-aided road detection (PLARD) approach that adapts LiDAR information into visual image-based road detection and improves detection performance. In PLARD, progressive LiDAR adaptation consists of two subsequent modules: 1) data space adaptation, which transforms the LiDAR data to the visual data space to align with the perspective view by applying an altitude difference-based transformation; and 2) feature space adaptation, which adapts LiDAR features to visual features through a cascaded fusion structure. Comprehensive empirical studies on the well-known KITTI road detection benchmark demonstrate that PLARD takes advantage of both the visual and LiDAR information, achieving much more robust road detection even in challenging urban scenes. In particular, PLARD outperforms other state-of-the-art road detection models and currently ranks at the top of the publicly accessible benchmark leaderboard.
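The data space adaptation step described in the abstract projects LiDAR points into the camera's perspective view and derives an altitude-difference representation in which flat road surfaces stand out from obstacles. The sketch below is a minimal illustration of that general idea only; the function names, the neighbourhood used for the altitude differences, and the calibration handling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K, image_shape):
    """Project LiDAR points (N, 3) into the camera image plane and keep
    each hit pixel's altitude (z in the LiDAR frame).

    T_cam_lidar (4x4 extrinsics) and K (3x3 intrinsics) are illustrative
    placeholders for sensor calibration, not values from the paper.
    """
    h, w = image_shape
    # Homogeneous LiDAR coordinates -> camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]
    # Keep only points in front of the camera.
    front = cam[2] > 0
    cam, alt = cam[:, front], points[front, 2]
    # Perspective projection with the camera intrinsics.
    uv = K @ cam
    uv = (uv[:2] / uv[2]).round().astype(int)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    altitude_img = np.zeros((h, w), dtype=np.float32)
    altitude_img[uv[1, valid], uv[0, valid]] = alt[valid]
    return altitude_img

def altitude_difference_image(altitude_img,
                              offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Average absolute altitude difference to a few neighbouring pixels.
    Roughly flat road surfaces yield small differences; kerbs and obstacles
    yield large ones. The neighbourhood and normalisation are illustrative
    choices, not the transformation defined in PLARD.
    """
    diff = np.zeros_like(altitude_img)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(altitude_img, dy, axis=0), dx, axis=1)
        diff += np.abs(altitude_img - shifted)
    return diff / len(offsets)
```

The second module, feature space adaptation via a cascaded fusion structure, operates inside the detection network itself and is not sketched here; the abstract gives only its high-level role of aligning LiDAR features with visual features.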
