Details
ISBN (digital): 9781665453653
ISBN (print): 9781665453653
Technologies for generating real-time animated avatars are highly useful in VR/AR animation and entertainment. Most existing approaches, however, depend on time-consuming and costly motion capture. This paper proposes an efficient, lightweight framework for dynamic avatar animation that generates facial expressions, gestures, and torso movements in real time, driven only by monocular camera videos. Specifically, the 3D posture and facial landmarks are computed from the monocular videos using BlazePose keypoints in our proposed framework. A novel adaptor mapping function is then proposed to transform the kinematic topology into the rigid skeletons of avatars. Without depending on expensive motion-capture equipment and without being constrained by skeleton topology, our approach produces avatar animations with a higher level of fidelity. Finally, animations including lip movements, facial expressions, and limb motions are generated in a unified framework, which allows our 3D virtual avatar to act like a real person. We have conducted extensive experiments to demonstrate the efficacy of our approach for real-time avatar-related applications. Our project and software are publicly available for further research or practical use (https://***/xianfei/SysMocap/).
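To make the pipeline above concrete, here is a minimal Python sketch that extracts BlazePose world landmarks from a webcam with MediaPipe and converts one bone's observed direction into a rotation. The `bone_rotation` helper, its rest-direction convention, and the choice of joints are illustrative assumptions; they stand in for, and do not reproduce, the paper's adaptor mapping function.

```python
# Hypothetical sketch: monocular BlazePose keypoints -> a per-bone rotation for a rigged avatar.
# The direction-based retargeting below is a generic stand-in, not the paper's adaptor function.
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def bone_rotation(parent_xyz, child_xyz, rest_dir=np.array([0.0, -1.0, 0.0])):
    """Axis-angle rotation aligning a bone's assumed rest direction with its observed direction."""
    d = np.asarray(child_xyz) - np.asarray(parent_xyz)
    d /= (np.linalg.norm(d) + 1e-8)
    axis = np.cross(rest_dir, d)
    angle = np.arccos(np.clip(np.dot(rest_dir, d), -1.0, 1.0))
    n = np.linalg.norm(axis)
    return (axis / n) * angle if n > 1e-8 else np.zeros(3)

cap = cv2.VideoCapture(0)
with mp_pose.Pose(model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            lm = results.pose_world_landmarks.landmark
            # Example: left upper-arm rotation from shoulder (index 11) to elbow (index 13).
            rot = bone_rotation((lm[11].x, lm[11].y, lm[11].z),
                                (lm[13].x, lm[13].y, lm[13].z))
            # ...apply 'rot' to the avatar's corresponding rigid bone here.
cap.release()
```

A full retargeting step would walk the avatar's bone hierarchy and account for each bone's rest pose and parent rotation, which is the gap the paper's adaptor mapping is designed to close.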
Details
ISBN (print): 9781665418386
We envision a convenient telepresence system available to users anywhere, anytime. Such a system requires displays and sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. To that end, we present a standalone real-time system for the dynamic 3D capture of a person, relying only on cameras embedded into a head-worn device and on Inertial Measurement Units (IMUs) worn on the wrists and ankles. Our prototype system egocentrically reconstructs the wearer's motion via learning-based pose estimation, which fuses inputs from visual and inertial sensors that complement each other, overcoming challenges such as inconsistent limb visibility in head-worn views, as well as pose ambiguity from sparse IMUs. The estimated pose is continuously re-targeted to a prescanned surface model, resulting in a high-fidelity 3D reconstruction. We demonstrate our system by reconstructing various human body movements and show that our visual-inertial learning-based method, which runs in real time, outperforms both visual-only and inertial-only approaches. We have also captured an egocentric visual-inertial 3D human pose dataset, publicly available at https://***/site/youngwooncha/egovip, for training and evaluating similar methods.
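The following is a hypothetical PyTorch sketch of the kind of learning-based fusion the abstract describes: per-frame egocentric visual features are concatenated with readings from four sparse IMUs (wrists and ankles) and regressed to full-body joint rotations. The layer sizes, input encodings, and concatenation-based fusion are assumptions for illustration, not the authors' network.

```python
# Toy visual-inertial fusion regressor (illustrative only).
import torch
import torch.nn as nn

class VisualInertialFusion(nn.Module):
    """Visual limb features + sparse IMU readings -> per-joint axis-angle rotations."""
    def __init__(self, n_visual=64, n_imus=4, n_joints=24):
        super().__init__()
        # Each IMU is assumed to provide an orientation quaternion (4) and an acceleration (3).
        n_inertial = n_imus * 7
        self.encoder = nn.Sequential(
            nn.Linear(n_visual + n_inertial, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_joints * 3)

    def forward(self, visual_feats, imu_readings):
        # visual_feats: (B, n_visual) egocentric limb features, zero-filled when limbs are occluded
        # imu_readings: (B, n_imus * 7) flattened quaternions and accelerations
        x = torch.cat([visual_feats, imu_readings], dim=-1)
        return self.head(self.encoder(x))

model = VisualInertialFusion()
pose = model(torch.zeros(1, 64), torch.zeros(1, 28))  # -> (1, 72) axis-angle body pose
```

Fusing both modalities in one regressor lets the IMU signals cover limbs that leave the head-worn cameras' view, while the visual features help disambiguate poses that sparse IMUs alone cannot.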
Details
ISBN (print): 9781728113777
This work addresses the problem of using real-world data captured from a single viewpoint by a low-cost 360-degree camera to create an immersive and interactive virtual reality scene. We combine different existing state-of-the-art data enhancement methods based on pre-trained deep learning models to quickly and automatically obtain 3D scenes with animated character models from a 360-degree video. We provide details on our implementation and insight into how to adapt existing methods to 360-degree inputs. We also present the results of a user study assessing the extent to which virtual agents generated by this process are perceived as present and engaging.
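One common way to adapt perspective-trained models to 360-degree footage, consistent with (but not necessarily identical to) what the authors describe, is to resample perspective crops from each equirectangular frame before inference. The sketch below shows such a reprojection with NumPy and OpenCV; the function name, field of view, and orientation conventions are illustrative assumptions.

```python
# Hypothetical sketch: sample a perspective view from an equirectangular 360 frame,
# so models pre-trained on ordinary perspective images can be applied to 360-degree input.
import cv2
import numpy as np

def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0, out_size=(512, 512)):
    h, w = out_size
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)
    # Rays through each output pixel in camera coordinates.
    xs, ys = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by yaw (around y) then pitch (around x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T
    # Convert ray directions to longitude/latitude, then to equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    eh, ew = equi.shape[:2]
    map_x = ((lon / np.pi + 1.0) * 0.5 * ew).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * eh).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

# Example: extract four horizontal views to feed a perspective-trained detector.
# views = [equirect_to_perspective(frame, yaw_deg=a) for a in (0, 90, 180, 270)]
```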