ISBN (digital): 9798350340266
ISBN (print): 9798350340273
In server board assembly tasks, the effect of vision-based robot assembly schemes is not ideal due to the small installation gap and the blocking of vision. Adding force sensors and force controllers can be a good solution to the above problems and increase the flexibility in the assembly process. In this paper, we propose a force control strategy applied to a server board assembly task. The method is divided into two main parts. In the first part, the zero offset of the force sensor and the load gravity are calibrated and compensated, so that the external force on the load is accurately obtained. In the second part, the admittance controller is designed to achieve compliant behavior between the robot and the environment. Finally, the experimental verification of the board insertion is carried out on the experimental platform. Experimental results verify the effectiveness and practicability of the proposed method.
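The two stages described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: all numerical values (mass, damping, stiffness, gravity vector, sensor orientation) are invented for the example, and a simple scalar admittance law with Euler integration stands in for the authors' controller design.

```python
import numpy as np

def compensate_wrench(raw_force, zero_offset, load_mass, g_world, R_sensor):
    """Stage 1: remove the sensor zero offset and the load's gravity
    component (rotated into the sensor frame) to recover the external
    contact force acting on the load."""
    gravity_in_sensor = R_sensor.T @ (load_mass * g_world)
    return raw_force - zero_offset - gravity_in_sensor

def admittance_step(f_ext, x, v, M, B, K, dt):
    """Stage 2: one Euler step of the admittance law
    M*a + B*v + K*x = f_ext, mapping contact force to a compliant
    position offset along one axis."""
    a = (f_ext - B * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

# toy usage: a constant 2 N contact force along the insertion axis;
# the offset settles toward f_ext / K = 0.02 m
x, v = 0.0, 0.0
for _ in range(100):
    x, v = admittance_step(2.0, x, v, M=1.0, B=20.0, K=100.0, dt=0.01)
```

With the illustrative gains above the system is critically damped, so the compliant offset converges smoothly without overshoot; real gains would be tuned to the stiffness of the board-socket contact.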
With the emergence of the COVID-19 pandemic, more and more non-contact mobile disinfection robots have appeared in the medical field, which have made great contributions to the fight against the epidemic. Aiming at th...
Online High-Definition (HD) maps have emerged as the preferred option for autonomous driving, overshadowing their offline counterparts due to flexible update capability and lower maintenance costs. However, contemporary online HD map models embed parameters of visual sensors into training, resulting in a significant decrease in generalization performance when applied to visual sensors with different parameters. Inspired by the inherent potential of Inverse Perspective Mapping (IPM), where camera parameters are decoupled from the training process, we have designed a universal map generation framework, GenMapping. The framework is established with a triadic synergy architecture, including principal and dual auxiliary branches. When faced with a coarse road image with local distortion translated via IPM, the principal branch learns robust global features under the state space models. The two auxiliary branches are a dense perspective branch and a sparse prior branch. The former exploits the correlation information between static and moving objects, whereas the latter introduces the prior knowledge of OpenStreetMap (OSM). The triple-enhanced merging module is crafted to synergistically integrate the unique spatial features from all three branches. To further improve generalization capabilities, a Cross-View Map Learning (CVML) scheme is leveraged to realize joint learning within the common space. Additionally, a Bidirectional Data Augmentation (BiDA) module is introduced to mitigate reliance on datasets concurrently. Extensive experimental results show that the proposed model surpasses current state-of-the-art methods in both semantic mapping and vectorized mapping, while also maintaining a rapid inference speed. Moreover, in cross-dataset experiments, the generalization of semantic mapping is improved by 17.3% in mIoU, while vectorized mapping is improved by 12.1% in mAP. The source code will be publicly available at https://***/lynn-yu/GenMappin
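The camera-decoupling property of IPM that GenMapping builds on can be sketched with a plane-to-image homography. The intrinsics and extrinsics below are invented toy values; the point of the sketch is that they enter only at warp time, outside any learned parameters.

```python
import numpy as np

def ipm_homography(K, R, t):
    """Homography mapping ground-plane coordinates (X, Y, 1) on the
    z = 0 road plane to image pixels. On that plane the projection
    K @ [R | t] reduces to the 3x3 matrix K @ [r1, r2, t]."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def pixel_to_ground(H, u, v):
    """Inverse Perspective Mapping: back-project a pixel onto the road
    plane by inverting the homography and dehomogenizing."""
    g = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return g[:2] / g[2]

# toy camera: 1 m above a flat road, optical axis along the ground normal
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
H = ipm_homography(K, R, t)
origin = pixel_to_ground(H, 320, 240)  # principal point -> plane origin
```

Because a model trained on IPM-warped images consumes only the resulting bird's-eye raster, swapping in a camera with different `K`, `R`, `t` changes the warp, not the network, which is the generalization argument the abstract makes.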
Vision sensors are widely applied in vehicles, robots, and roadside infrastructure. However, due to limitations in hardware cost and system size, camera Field-of-View (FoV) is often restricted and may not provide suff...
Vision-based occupancy prediction, also known as 3D Semantic Scene Completion (SSC), presents a significant challenge in computer vision. Previous methods, confined to onboard processing, struggle with simultaneous ge...
Simultaneous Localization And Mapping (SLAM) has become a crucial aspect in the fields of autonomous driving and robotics. One crucial component of visual SLAM is the Field-of-View (FoV) of the camera, as a larger FoV...
Learning driving policies using an end-to-end network has proven to be a promising solution for autonomous driving. Due to the lack of a benchmark driver behavior dataset that contains both visual and LiDAR data, existing works solely focus on learning driving from visual sensors. Besides, most works are limited to predicting the steering angle and neglect the more challenging vehicle speed control problem. In this paper, we propose a novel end-to-end network, FlowDriveNet, which jointly takes advantage of sequential visual data and LiDAR data to predict steering angle and vehicle speed. The main challenges of this problem are how to efficiently extract driving-related information from images and point clouds, and how to fuse them effectively. To tackle these challenges, we propose the concept of point flow and argue that image optical flow and LiDAR point flow are significant motion cues for driving policy learning. Specifically, we first create an enhanced dataset that consists of images, point clouds and corresponding human driver behaviors. Then, in FlowDriveNet, a deep but efficient visual feature extraction module and a point feature extraction module are utilized to extract spatial features from optical flow and point flow, respectively. Additionally, a novel temporal fusion and prediction module is designed to fuse temporal information from the extracted spatial feature sequences and predict vehicle driving commands. Finally, a series of ablation experiments verifies the importance of optical flow and point flow, and comparison experiments show that our flow-based method outperforms existing image-based approaches on the task of driving policy learning.
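The "point flow" motion cue can be illustrated with a brute-force nearest-neighbour match between two consecutive LiDAR sweeps. This is a didactic stand-in, not the paper's method: FlowDriveNet extracts point features with a learned module, whereas the sketch below just shows the per-point displacement signal on a synthetic rigid scene.

```python
import numpy as np

def point_flow(prev_pts, curr_pts):
    """For each point in the previous sweep, find its nearest neighbour
    in the current sweep and return the displacement vectors (N, 3)."""
    # pairwise squared distances, shape (N_prev, N_curr)
    d2 = ((prev_pts[:, None, :] - curr_pts[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    return curr_pts[nn] - prev_pts

# toy scene: a 5x5x5 grid of points (2 m spacing) translated by
# (0.5, 0, 0) m between sweeps, i.e. a rigidly moving object
xs = np.arange(0.0, 10.0, 2.0)
prev_pts = np.stack(np.meshgrid(xs, xs, xs), axis=-1).reshape(-1, 3)
curr_pts = prev_pts + np.array([0.5, 0.0, 0.0])
flow = point_flow(prev_pts, curr_pts)
# every displacement vector recovers the (0.5, 0, 0) translation
```

Because the translation (0.5 m) is small relative to the 2 m point spacing, the nearest-neighbour match is unambiguous here; real sweeps need a learned or robust matcher, which is exactly what the point feature extraction module provides.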
Boundary equilibrium generative adversarial networks (BEGANs) are an improved version of generative adversarial networks (GANs). In this paper, an improved BEGAN with a skip-connection technique in the generator and the discriminator is ***, an alternative time-scale update rule is adopted to balance the learning rates of the generator and the ***, the performance of the proposed method is quantitatively evaluated by the Fréchet inception distance (FID) and the inception score (IS). The test results show that the performance of the proposed method is better than that of the original BEGAN.
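The boundary-equilibrium mechanism the abstract builds on can be sketched as follows. The `gamma` and `lambda_k` values follow the original BEGAN formulation, but the reconstruction losses fed in below are synthetic constants for illustration, not outputs of a trained autoencoder discriminator.

```python
import numpy as np

def began_update(k, L_real, L_fake, gamma=0.5, lambda_k=0.001):
    """One step of the BEGAN equilibrium term:
         L_D = L_real - k * L_fake        (discriminator loss)
         L_G = L_fake                     (generator loss)
         k  <- k + lambda_k * (gamma * L_real - L_fake), clipped to [0, 1]
    where L(.) is the discriminator's autoencoder reconstruction loss
    and gamma sets the target diversity ratio L_fake / L_real."""
    L_D = L_real - k * L_fake
    L_G = L_fake
    k = np.clip(k + lambda_k * (gamma * L_real - L_fake), 0.0, 1.0)
    # convergence measure commonly used to monitor BEGAN training
    M_global = L_real + abs(gamma * L_real - L_fake)
    return k, L_D, L_G, M_global

# toy run with fixed losses: k drifts toward balancing the two terms
k = 0.0
for _ in range(1000):
    k, L_D, L_G, M_global = began_update(k, L_real=1.0, L_fake=0.2)
```

The alternative time-scale update rule mentioned in the abstract would additionally assign different learning rates to the generator and discriminator; that is orthogonal to the equilibrium term shown here.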
We propose a high-performance glass-plastic hybrid minimalist aspheric panoramic annular lens (ASPAL) to solve several major limitations of the traditional panoramic annular lens (PAL), such as large size, high weight...
Temporal information plays a pivotal role in Bird's-Eye-View (BEV) driving scene understanding, as it can alleviate visual information sparsity. However, the indiscriminate temporal fusion method will cause the b...