
Real-Time Soft Body 3D Proprioception via Deep Vision-Based Sensing

Authors: Wang, Ruoyu; Wang, Shiheng; Du, Songyu; Xiao, Erdong; Yuan, Wenzhen; Feng, Chen

Affiliations: NYU Tandon School of Engineering, Brooklyn, NY 11201, USA; Carnegie Mellon University, Robotics Institute, Pittsburgh, PA 15213, USA

Published in: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)

Year/Volume/Issue: 2020, Vol. 5, No. 2

Pages: 3382-3389


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Keywords: Modeling, control, and learning for soft robots; deep learning in robotics and automation; 3D deep learning

Abstract: Soft bodies made from flexible and deformable materials are popular in many robotics applications, but their proprioceptive sensing has been a long-standing challenge. In other words, there has hardly been a method to measure and model the high-dimensional 3D shapes of soft bodies with internal sensors. We propose a framework to measure the high-resolution 3D shapes of soft bodies in real-time with embedded cameras. The cameras capture visual patterns inside a soft body, and a convolutional neural network (CNN) produces a latent code representing the deformation state, which can then be used to reconstruct the body's 3D shape using another neural network. We test the framework on various soft bodies, such as a Baymax-shaped toy, a latex balloon, and some soft robot fingers, and achieve real-time computation (≤2.5 ms/frame) for robust shape estimation with high precision (≤1% relative error) and high resolution. We believe the method could be applied to soft robotics and human-robot interaction for proprioceptive shape sensing. Our code is available at: https://***/DeepSoRo.
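The abstract outlines a two-stage pipeline: an embedded camera image of the body's interior is encoded by a CNN into a latent deformation code, and a second network decodes that code into the body's 3D shape. Below is a minimal PyTorch sketch of such an encoder/decoder pair; the layer sizes, class names, image resolution, and the point-set output format are illustrative assumptions, not the architecture released by the authors.

```python
# Hypothetical sketch of the camera -> latent code -> 3D shape pipeline
# described in the abstract. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class DeformationEncoder(nn.Module):
    """CNN mapping an internal-camera image to a latent deformation code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

class ShapeDecoder(nn.Module):
    """MLP mapping a latent code to an N x 3 point set for the body surface."""
    def __init__(self, latent_dim=64, num_points=2048):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, code):
        return self.mlp(code).view(-1, self.num_points, 3)

# Usage: one grayscale frame from the embedded camera -> estimated 3D shape.
encoder, decoder = DeformationEncoder(), ShapeDecoder()
frame = torch.rand(1, 1, 128, 128)   # placeholder internal-camera image
points = decoder(encoder(frame))     # (1, 2048, 3) estimated surface points
```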
