In recent years, autonomous driving technology has made significant progress, but the non-cooperative intelligence of autonomous vehicles still faces many technical bottlenecks when confronting urban road autonomous driving challenges. V2I (Vehicle-to-Infrastructure) communication is a potential solution to enable cooperative intelligence between vehicles and infrastructure. In this paper, RGB-PVRCNN, an environment perception framework, is proposed to improve the environmental awareness of autonomous vehicles at intersections by leveraging V2I communication. The framework integrates vision features based on PV-RCNN. The normal distributions transform (NDT) point cloud registration algorithm is deployed both onboard and at the roadside to obtain the position of the autonomous vehicles and to build the local map. Objects detected by the roadside multi-sensor system are sent back to the autonomous vehicles to enhance their perception ability, benefiting path planning and traffic efficiency at the intersection. The field-testing results show that our method can effectively extend the environmental perception ability and range of autonomous vehicles at the intersection, and that it outperforms the PointPillars algorithm and the Voxel R-CNN algorithm in detection accuracy.
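The NDT registration step fits a Gaussian to the reference points in each grid cell and scores a candidate pose by the likelihood of the new scan under those Gaussians. A minimal 2D sketch of that idea is below; the synthetic wall-like point clouds, the cell size, and the brute-force translation search (in place of the Newton-style optimization NDT normally uses) are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def build_ndt_grid(points, cell=1.0):
    # Fit a Gaussian (mean, inverse covariance) to the points in each grid cell.
    buckets = {}
    for p in points:
        buckets.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    grid = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)  # regularize thin cells
            grid[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return grid

def ndt_score(points, grid, cell=1.0):
    # Sum of Gaussian likelihoods of each scan point under its cell's model.
    total = 0.0
    for p in points:
        entry = grid.get(tuple(np.floor(p / cell).astype(int)))
        if entry is not None:
            mu, icov = entry
            d = p - mu
            total += np.exp(-0.5 * d @ icov @ d)
    return total

# Reference map: two perpendicular "walls"; the scan is the map shifted by an
# unknown translation that registration should recover.
rng = np.random.default_rng(0)
wall_x = np.column_stack([rng.uniform(0, 10, 300),
                          0.5 + 0.05 * rng.standard_normal(300)])
wall_y = np.column_stack([0.5 + 0.05 * rng.standard_normal(300),
                          rng.uniform(0, 10, 300)])
ref = np.vstack([wall_x, wall_y])
true_t = np.array([0.3, -0.2])
scan = ref + true_t

grid = build_ndt_grid(ref)
candidates = [np.array([tx, ty])
              for tx in np.arange(-0.5, 0.51, 0.1)
              for ty in np.arange(-0.5, 0.51, 0.1)]
best_t = max(candidates, key=lambda t: ndt_score(scan - t, grid))
```

Here `best_t` recovers the applied shift to within the 0.1 m search resolution; production NDT implementations optimize the pose (rotation included) with Newton's method instead of a grid search.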
This study aims to achieve accurate three-dimensional (3D) localization of multiple objects in a complicated scene using passive imaging. It is challenging, as it requires accurate localization of the objects in all three dimensions given recorded 2D images. An integral imaging system captures the scene from multiple angles and can computationally produce blur-based depth information about the objects in the scene. We propose a method to detect and segment objects in a 3D space using integral-imaging data obtained by a video camera array. Using objects' two-dimensional regions detected via deep learning, we employ local computational integral imaging in the detected objects' depth tubes to estimate the depth positions of the objects along the viewing axis. This method efficiently analyzes object-based blurring characteristics in the 3D environment. Our camera array produces an array of multiple-view videos of the scene, called elemental videos. Thus, the proposed 3D object detection applied to the video frames allows for 3D tracking of the objects, with knowledge of their depth positions, along the video. Results show successful 3D object detection with depth localization in a real-life scene based on passive integral imaging. Such outcomes have not been obtained in previous studies using integral imaging; mainly, the proposed method outperforms them in its ability to detect the depth locations of objects that are in close proximity to each other, regardless of object size. This study may contribute wherever robust 3D object localization with passive imaging is desired, though it requires a camera or lens array imaging apparatus.
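Local computational integral imaging refocuses the elemental images at a series of candidate depths and selects the depth at which the detected object's region is sharpest. A minimal 1D-parallax sketch of that search is below; the synthetic camera array, pinhole disparity model, and variance-based sharpness measure are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "elemental images" from a 1-D camera array: a textured object at
# true depth Z_TRUE produces a per-camera disparity of F * x / Z_TRUE pixels.
H, W, F, Z_TRUE = 16, 128, 16.0, 4.0
base = np.zeros((H, W))
base[:, 40:60] = rng.uniform(0.0, 1.0, size=(1, 20))  # object texture
cam_x = [-2, -1, 0, 1, 2]                             # camera positions
elementals = [np.roll(base, int(round(F * x / Z_TRUE)), axis=1) for x in cam_x]

def refocus(imgs, xs, depth):
    # Shift-and-add reconstruction at a candidate depth: undo each camera's
    # disparity for that depth, then average the elemental images.
    return sum(np.roll(im, -int(round(F * x / depth)), axis=1)
               for im, x in zip(imgs, xs)) / len(imgs)

def roi_sharpness(img, c0=35, c1=65):
    # Variance inside the detected object's 2D region: maximal when the
    # reconstruction is focused at the object's true depth.
    return img[:, c0:c1].var()

depths = np.arange(2.0, 8.5, 0.5)
scores = [roi_sharpness(refocus(elementals, cam_x, d)) for d in depths]
best_depth = depths[int(np.argmax(scores))]  # peaks at Z_TRUE = 4.0
```

Only the detected region is scored, mirroring the paper's idea of restricting the computational refocusing to each object's "depth tube" rather than the whole reconstructed volume.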
In recent years, the popularity of airborne, vehicle-borne, and terrestrial 3D laser scanners has driven the rapid development of 3D point cloud processing methods. 3D laser scanning is non-contact, high-density, high-accuracy, and digital, enabling comprehensive and fast 3D scanning of urban scenes. To address the difficulty of accurately segmenting urban point clouds in complex scenes from 3D laser-scanned data, a technical workflow for accurate and fast semantic segmentation of urban point clouds is proposed. In this study, the point clouds are first denoised; then samples are annotated and sample sets are created, based on the point cloud features of the category targets, using CloudCompare software; next, ShellNet, an end-to-end trainable network, is trained on the urban point cloud samples; finally, the models are evaluated on a test set. The method achieved IoU metrics of 89.83% and 73.74% for semantic segmentation of buildings and rod-like objects, respectively. The visualization results on the test set show that the algorithm is feasible and robust, providing a new approach to semantic segmentation of large-scale urban scenes.
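The per-class IoU metric used above is the ratio of the intersection to the union of the predicted and ground-truth point sets for each class. A minimal sketch follows; the label arrays are toy values for illustration, not the study's data:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-union per class from flat per-point label arrays."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy example: 6 points, 3 classes (e.g. building, rod-like object, other).
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
ious = per_class_iou(pred, gt, 3)  # [1/3, 2/3, 1/2]
```

The same computation, applied per class over all test-set points, yields figures such as the 89.83% (buildings) and 73.74% (rod-like objects) reported above.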