Author Affiliations: Dongguk Univ, Grad Sch, Dept of Game Engn, Seoul 100715, South Korea; Dongguk Univ, Dept of Multimedia Engn, Seoul 100715, South Korea
Publication: MULTIMEDIA TOOLS AND APPLICATIONS
Year/Volume/Issue: 2013, Vol. 67, No. 1
Pages: 71-95
Subject Classification: 0808 [Engineering: Electrical Engineering]; 08 [Engineering]; 0835 [Engineering: Software Engineering]; 0812 [Engineering: Computer Science and Technology (degrees awardable in Engineering or Science)]
Keywords: Video conferencing; Programming by demonstration; Maximin Selection Algorithm; Virtual human; Behavior generation
Abstract: Video conferencing provides an environment for multiple users linked on a network to have meetings. Since a large quantity of audio and video data is transferred to multiple users in real time, research into reducing the quantity of data to be transferred has been drawing attention. Such methods extract and transfer only the features of a user from video data and then reconstruct the video conference using virtual humans. The disadvantage of such an approach is that only the positions and features of the hands and head are extracted and reconstructed, whilst the other virtual body parts do not follow the user. In order to enable a virtual human to accurately mimic the entire body of the user in a 3D virtual conference, we examined what features should be extracted to express a user more clearly and how they can be reproduced by a virtual human. This type of 3D video conferencing estimates the user's pose by comparing predefined images with a photographed image of the user and generates a virtual human that takes the estimated pose. However, this requires predefining a diverse set of images for pose estimation, and, moreover, it is difficult to define behaviors that express poses correctly. This paper proposes a framework to automatically generate the pose-images used to estimate a user's pose and the behaviors required to represent a user with a virtual human in a 3D video conference. The method for applying this framework to a 3D video conference on the basis of the automatically generated data is also described. In the experiment, the framework proposed in this paper was implemented on a mobile device, and the generation process of the virtual human's poses and behaviors was verified. Finally, by applying programming by demonstration, we developed a system that can automatically collect the various data necessary for a video conference directly, without any prior knowledge of the video conference system.
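Note: the keywords name a Maximin Selection Algorithm, presumably used to choose a diverse, representative subset of pose-images for pose estimation. The following is a minimal, hypothetical sketch of maximin (farthest-point) selection over pose feature vectors; the feature representation, Euclidean distance metric, and function name are illustrative assumptions and do not reproduce the paper's actual implementation.

import numpy as np

def maximin_select(features, k):
    """Greedily pick k pose indices so that each new pick maximizes its
    minimum distance to the poses already selected (farthest-point sampling)."""
    n = len(features)
    selected = [0]  # arbitrary seed pose
    min_dist = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < min(k, n):
        idx = int(np.argmax(min_dist))      # pose farthest from the current selection
        selected.append(idx)
        d = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, d)  # track distance to the nearest selected pose
    return selected

# Example: 200 captured poses described by 30-dimensional feature vectors; keep 16 diverse ones.
poses = np.random.rand(200, 30)
print(maximin_select(poses, 16))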