Details
ISBN (Print): 9781467357593
Gestures such as nodding and waving are used in day-to-day life, often without our being aware of them, and have become an important part of human communication. In recent years, new methods of Human-Computer Interaction (HCI) have been developed. Some of them are based on interaction with machines through hand, head, facial expressions, voice and touch, while many others remain topics of current research. However, relying on just one of these modalities reduces the accuracy of the whole HCI system and limits the options available to users. The objective of this paper is therefore to use two of the important modes of interaction, hand and head, to control any application running on a computer using computer-vision algorithms. From the input video stream, the hand is segmented and the corresponding gesture is recognized based on the shape and the pattern of movement of the hand. For head-gesture recognition, the head is first detected and an optical-flow method is then used to capture the movement of the head, which is recognized by a finite-state automaton. Using the user interface of the software, an operator can control any interactive application (say, VLC player or an image browser) with hand and head gestures, which are automatically mapped to mouse and keyboard events through the Windows API. The proposed multimodal approach is particularly useful for communicating with computers and other electronic appliances from a distance, where a mouse and keyboard are not convenient to work with.
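The abstract does not give implementation details, so the following is only a minimal sketch, under assumed thresholds and state names, of the head-gesture half of such a pipeline: detect the head with a Haar cascade, track it with Lucas-Kanade optical flow, classify the dominant motion with a small finite-state automaton, and forward the recognized gesture as a keyboard event through the Windows API. The key mapping (arrow keys) and the numeric thresholds are illustrative assumptions, not the authors' design.

```python
# Sketch only: head detection + optical flow + toy finite-state automaton,
# with the result injected as a key press via the Win32 keybd_event API.
import ctypes
import cv2
import numpy as np

VK_RIGHT, VK_LEFT = 0x27, 0x25      # assumed mapping: head right/left -> arrow keys
KEYEVENTF_KEYUP = 0x0002

def send_key(vk):
    """Emit a key press/release pair through the Windows API (Windows only)."""
    ctypes.windll.user32.keybd_event(vk, 0, 0, 0)
    ctypes.windll.user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
prev_gray, points = None, None
state = "IDLE"                      # toy automaton: IDLE -> LEFT/RIGHT -> IDLE

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if points is None:
        # (Re)detect the head and seed feature points inside its bounding box.
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            mask = np.zeros_like(gray)
            mask[y:y + h, x:x + w] = 255
            points = cv2.goodFeaturesToTrack(gray, 50, 0.01, 10, mask=mask)
    else:
        # Track the head points with sparse Lucas-Kanade optical flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
        good_old = points[status.flatten() == 1].reshape(-1, 2)
        if len(good_new) < 10:
            points = None           # lost the head, trigger re-detection
        else:
            dx = float(np.mean(good_new[:, 0] - good_old[:, 0]))  # mean horizontal motion

            # Two-threshold automaton: a large sideways motion fires a gesture,
            # near-zero motion returns to IDLE and re-arms it.
            if state == "IDLE" and abs(dx) > 4.0:
                state = "RIGHT" if dx > 0 else "LEFT"
                send_key(VK_RIGHT if state == "RIGHT" else VK_LEFT)
            elif state != "IDLE" and abs(dx) < 1.0:
                state = "IDLE"
            points = good_new.reshape(-1, 1, 2)

    prev_gray = gray
    cv2.imshow("head gestures", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

Because the gesture is delivered as an ordinary keyboard event, any application that responds to arrow keys (an image browser, a media player) can be driven from a distance without modification, which mirrors the mapping-to-input-events idea described in the abstract.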