Multimodal information fusion for urban scene understanding

Authors: Xu, Philippe; Davoine, Franck; Bordes, Jean-Baptiste; Zhao, Huijing; Denoeux, Thierry

Affiliations: Univ Technol Compiegne, CNRS UMR 7253 Heudiasyc, BP 20529, F-60205 Compiegne, France; Peking Univ, Key Lab Machine Percept (MOE), CNRS LIAMA, Beijing 100871, Peoples R China

Publication: MACHINE VISION AND APPLICATIONS

Year/Volume/Issue: 2016, Vol. 27, No. 3

Pages: 331-349

Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: Ministry of Education of the People's Republic of China (MOE); Ministère de l'Education Nationale, de l'Enseignement Supérieur et de la Recherche (MESR); Labex MS2T; French Ministry of Foreign and European Affairs; French Government; National Agency for Research (26193PE, ANR-11-IDEX-0004-02); ANR-NSFC Sino-French program (61161130528 / ANR-11-IS03-0001); Agence Nationale de la Recherche (ANR) (ANR-11-IS03-0001)

Keywords: Information fusion; Driving scene understanding; Theory of belief functions; Intelligent vehicles; Dempster-Shafer theory; Evidence theory

Abstract: This paper addresses the problem of scene understanding for driver assistance systems. To recognize the large number of objects that may be found on the road, several sensors and decision algorithms have to be used. The proposed approach is based on the representation of all available information in over-segmented image regions. The main novelty of the framework is its capability to incorporate new classes of objects and to include new sensors or detection methods while remaining robust to sensor failures. Several classes such as ground, vegetation or sky are considered, as well as three different sensors. The approach was evaluated on real publicly available urban driving scene data.
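The keywords point to Dempster-Shafer (evidence) theory as the fusion machinery. As a minimal illustration only, and not a reproduction of the paper's actual pipeline, the sketch below shows Dempster's rule of combination for two hypothetical mass functions over the classes mentioned in the abstract (ground, vegetation, sky); the sensor names and mass values are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule: intersect focal sets, accumulate products,
    then renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical evidence from two sensors over {ground, vegetation, sky}
frame = frozenset({"ground", "vegetation", "sky"})
camera = {frozenset({"ground"}): 0.6, frame: 0.4}
lidar = {frozenset({"ground", "vegetation"}): 0.7, frame: 0.3}
fused = dempster_combine(camera, lidar)
# fused assigns 0.6 to {ground}, 0.28 to {ground, vegetation}, 0.12 to the frame
```

Assigning mass to sets of classes (rather than single classes) is what lets this representation stay robust when a sensor can only partially discriminate, which is the property the abstract highlights.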
