Real-Time Hand Tracking and Gesture Recognition for Human-Computer Interaction
Cristina Manresa, Javier Varona, Ramon Mas and Francisco J. Perales
Unidad de Gráficos y Visión por Computador, Departamento de Matemáticas e Informática, Universitat de les Illes Balears, Edificio Anselm Turmeda, Crta. Valldemossa km 7.5, 07122 Palma de Mallorca, España
Received 1 January 2000; revised 1 January 2000; accepted 1 January 2000
Abstract
The proposed work is part of a project that aims at the control of a videogame based on hand gesture recognition. This goal implies the restrictions of real-time response and unconstrained environments. In this paper we present a real-time algorithm to track and recognise hand gestures for interacting with the videogame. This algorithm is based on three main steps: hand segmentation, hand tracking and gesture recognition from hand features. For the hand segmentation step we use the colour cue, due to the characteristic colour values of human skin, its invariant properties and its computational simplicity. To prevent errors from hand segmentation we add a second step, hand tracking. Tracking is performed assuming a constant velocity model and using a pixel labeling approach. From the tracking process we extract several hand features that are fed to a finite state classifier which identifies the hand configuration. The hand can be classified into one of four gesture classes or one of four different movement directions. Finally, using the system's performance evaluation results, we show the usability of the algorithm in a videogame environment.

Key Words: Hand Tracking, Gesture Recognition, Human-Computer Interaction, Perceptual User Interfaces.
1 Introduction
Nowadays, the majority of human-computer interaction (HCI) is based on mechanical devices such as keyboards, mice, joysticks or gamepads. In recent years there has been a growing interest in a class of methods based on computational vision, due to their ability to recognise human gestures in a natural way [1]. These methods take as input the images acquired from a camera or from a stereo pair of cameras. The main goal of these algorithms is to measure the hand configuration at each time instant. To facilitate this process, many gesture recognition applications resort to the use of uniquely coloured gloves or markers on hands or fingers [2]. In addition, using a controlled background makes it possible to localise the hand efficiently, even in real time [3]. These two conditions impose restrictions on the user and on the interface setup. We have specifically avoided solutions that require coloured gloves or markers and a controlled background because of the initial requirements of our application: it must work for different people, without any accessories on them, and in front of unpredictable backgrounds. Our application uses images from a low-cost web camera placed in front of the work area (see Fig. 1), where the recognised gestures act as the input for a computer 3D videogame. Thus, the players, rather than pressing buttons, must use different gestures that our application should recognise. This adds the constraint that the response time must be very fast: users should not perceive a significant delay between the instant they perform a gesture or motion and the instant the computer responds. Therefore, the algorithm must provide real-time performance on a conventional processor. Most of the known hand tracking and recognition algorithms do not meet this requirement and are inappropriate for visual interfaces.

Figure 1: Interactive game application workspace diagram.

Correspondence to: cristina.manresa@uib.es. Recommended for acceptance by ELCVIA. ISSN: 1577-5097. Published by Computer Vision Center / Universitat Autonoma de Barcelona, Barcelona, Spain.
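To illustrate the first step of the pipeline described in the abstract, the following is a minimal sketch of per-pixel skin-colour segmentation. The paper does not specify its colour space or thresholds; the YCbCr conversion (ITU-R BT.601) and the Cb/Cr ranges below are common choices from the skin-detection literature and are assumptions, not the authors' exact method.

```python
# Sketch of skin-colour hand segmentation (step 1 of the pipeline).
# Assumption: YCbCr chroma thresholding; the Cb/Cr ranges are typical
# literature values, not the paper's own parameters.

def rgb_to_ycbcr(r, g, b):
    """Convert an 8-bit RGB triple to YCbCr using ITU-R BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chroma lies in the given ranges.

    Luminance Y is ignored, which gives some invariance to lighting,
    one reason the colour cue is attractive for this task."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def segment(image):
    """Return a binary mask (1 = skin) for an image given as a list of
    rows of (r, g, b) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

# Tiny synthetic frame: one skin-toned pixel, one blue background pixel.
frame = [[(200, 140, 110), (30, 90, 200)]]
mask = segment(frame)  # -> [[1, 0]]
```

In a real system this per-pixel test would be followed by morphological clean-up and blob extraction before the tracking step; the point here is only the computational simplicity that the abstract attributes to the colour cue.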