
Inverse Kinematics Embedded Network for Robust Patient Anatomy Avatar Reconstruction From Multimodal Data

Authors: Zhou, Tongxi; Chen, Mingcong; Cao, Guanglin; Hu, Jian; Liu, Hongbin

Affiliations: Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China; Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China; City Univ Hong Kong, Dept Biomed Engn, Hong Kong, Peoples R China; Chinese Acad Sci, Hong Kong Inst Sci & Innovat, Ctr Artificial Intelligence & Robot, Hong Kong, Peoples R China; Kings Coll London, Sch Biomed Engn & Imaging Sci, London SE1 7EU, England

Published in: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom.)

Year/Volume/Issue: 2024, Vol. 9, No. 4

Pages: 3395-3402


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Funding: InnoHK Program

Keywords: Image reconstruction; Kinematics; Three-dimensional displays; Image color analysis; Biomedical imaging; Avatars; Solid modeling; Gesture, posture and facial expressions; deep learning for visual perception; modeling and simulating humans; RGB-D perception

Abstract: Patient modelling has a wide range of applications in medicine and healthcare, such as clinical teaching, surgery navigation and automatic robotized scanning. Since patients are typically covered or occluded in medical scenes, directly regressing human meshes from single RGB images is challenging. To this end, we design a deep learning-based patient anatomy reconstruction network from RGB-D images with three key modules: 1) the attention-based multimodal fusion module, 2) the analytical inverse kinematics module and 3) the anatomical layer module. In our pipeline, the color and depth modalities are fully fused by the multimodal attention module to obtain a cover-insensitive feature map. The estimated 3D keypoints, learned from the fused feature, are further converted to patient model parameters through the embedded analytical inverse kinematics module. To capture more detailed patient structures, we also present a parametric anatomy avatar by extending the Skinned Multi-Person Linear Model (SMPL) with internal bone and artery models. Final meshes are driven by the predicted parameters via the anatomical layer module, generating digital twins of patients. Experimental results on the Simultaneously-Collected Multimodal Lying Pose Dataset demonstrate that our approach surpasses state-of-the-art human mesh recovery methods and shows robustness to occlusions.
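To make the pipeline in the abstract concrete, below is a minimal NumPy sketch of two of its stages: an attention-style fusion of color and depth features, and one analytical inverse-kinematics step that converts an estimated 3D keypoint pair into a bone rotation via Rodrigues' formula. The function names, the softmax relevance heuristic, and the single-bone IK are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def _skew(a):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def attention_fuse(color_feat, depth_feat):
    """Toy attention fusion: softmax weights over the two modality feature
    vectors, so an uninformative (e.g. occluded) modality can be down-weighted."""
    stack = np.stack([color_feat, depth_feat])        # (2, C)
    logits = stack.mean(axis=1, keepdims=True)        # crude per-modality relevance score
    w = np.exp(logits) / np.exp(logits).sum(axis=0)   # softmax over the modality axis
    return (w * stack).sum(axis=0)                    # fused feature, shape (C,)

def analytic_ik(parent, child, rest_dir):
    """One analytical IK step: the rotation (Rodrigues' formula) aligning the
    rest-pose bone direction with the observed parent->child keypoint direction."""
    v = child - parent
    v = v / np.linalg.norm(v)
    axis = np.cross(rest_dir, v)
    s = np.linalg.norm(axis)                          # sin(theta)
    c = float(np.dot(rest_dir, v))                    # cos(theta)
    if s < 1e-8:
        if c > 0:                                     # already aligned
            return np.eye(3)
        # antiparallel: rotate 180 degrees about any axis perpendicular to rest_dir
        perp = (np.array([1.0, 0.0, 0.0]) if abs(rest_dir[0]) < 0.9
                else np.array([0.0, 1.0, 0.0]))
        axis = np.cross(rest_dir, perp)
        axis = axis / np.linalg.norm(axis)
        K = _skew(axis)
        return np.eye(3) + 2.0 * K @ K                # Rodrigues with theta = pi
    K = _skew(axis / s)                               # unit rotation axis
    return np.eye(3) + s * K + (1.0 - c) * K @ K      # Rodrigues' rotation formula
```

In a full system, per-bone rotations like these would serve as the pose parameters driving a SMPL-style skinned mesh, which is the role of the paper's anatomical layer module.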
