Author affiliations: Istituto Italiano di Tecnologia, Pattern Analysis and Computer Vision (PAVIS), I-16163 Genoa, Italy; Gepin SpA, I-00143 Rome, Italy
Publication: Image and Vision Computing
Year/Volume/Issue: 2012, Vol. 30, No. 1
Pages: 26-37
Core indexing:
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology]; 0702 [Science - Physics]
Keywords: Hand detection; Articulated object recognition; Model-based techniques; Geometric constraints; Graph matching; Curve matching
Abstract: Over the past few years there has been growing interest in visual interfaces based on gestures. Using gestures as a means of communicating with a computer can be helpful in applications such as gaming platforms, domotic environments, augmented reality, or sign language interpretation, to name a few. However, a serious bottleneck for such interfaces is the current lack of accurate hand localization systems, which are necessary for tracking (re-)initialization and hand pose understanding. In fact, the human hand is an articulated object with a very large degree of appearance variability, which is difficult to deal with. For instance, recent attempts to solve this problem using machine learning approaches have shown poor generalization over different viewpoints and finger spatial configurations. In this article we present a model-based approach to articulated hand detection that splits this variability problem by separately searching for simple finger models in the input image. A generic finger silhouette is localized in the edge map of the input image by combining curve and graph matching techniques. Cluttered backgrounds and densely textured images, which usually make it hard to compare edge information with silhouette models (e.g., using chamfer distance or voting-based methods), are handled in our approach by simultaneously using connected curves and topological information. Finally, detected fingers are clustered using geometric constraints. Our system is able to localize, in real time, a hand with variable finger configurations in images with complex backgrounds, different lighting conditions, and different positions of the hand with respect to the camera. Experiments with real images and videos and a simple visual interface are presented to validate the proposed method. (C) 2011 Elsevier B.V. All rights reserved.
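The abstract contrasts the paper's curve/graph matching with chamfer-distance silhouette matching, the baseline it says degrades on cluttered edge maps. As background, a minimal sketch of directed chamfer distance over hypothetical 2D point sets (pure NumPy; the point sets and function name are illustrative, not from the paper):

```python
import numpy as np

def chamfer_distance(model_pts, edge_pts):
    """Directed chamfer distance: mean distance from each model
    silhouette point to its nearest image edge point."""
    model = np.asarray(model_pts, dtype=float)  # shape (M, 2)
    edges = np.asarray(edge_pts, dtype=float)   # shape (N, 2)
    # Pairwise Euclidean distances via broadcasting, shape (M, N).
    d = np.linalg.norm(model[:, None, :] - edges[None, :, :], axis=-1)
    # Nearest-edge distance per model point, averaged.
    return d.min(axis=1).mean()

# Toy example: a model aligned with the edges scores 0 (best);
# shifting it 5 pixels raises the score to 5.
edges = np.array([[0, 0], [0, 1], [0, 2], [0, 3]])
aligned = chamfer_distance(edges, edges)           # 0.0
shifted = chamfer_distance(edges + [5, 0], edges)  # 5.0
```

In practice the inner minimization is precomputed once with a distance transform of the edge map rather than by brute-force pairwise distances; spurious edges from clutter still pull the minimum down, which is the weakness the paper's connected-curve and topological matching is designed to avoid.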