Static hand gesture recognition (HGR) has recently drawn increasing attention in computer vision and human-computer interaction (HCI) owing to its broad application potential. However, HGR remains challenging because of the large variations in how gestures are performed. In this paper, we present a new framework for static hand gesture recognition. First, the key joints of the hand, including the palm center, the fingertips, and the finger roots, are located. Second, we propose novel, discriminative features, called root-center-angles, that alleviate the influence of gesture variations. Third, we design a distance metric, the finger-length-weighted Mahalanobis distance (FLWMD), to measure the dissimilarity between hand gestures. Experiments demonstrate the accuracy, efficiency, and robustness of the proposed HGR framework.
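The abstract does not spell out how the FLWMD is computed; the sketch below is a minimal illustration, assuming one root-center-angle feature per finger and using the finger lengths as per-dimension weights inside a standard Mahalanobis distance. The function name `flwmd` and the particular weighting scheme are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def flwmd(x, y, cov, finger_lengths):
    """Sketch of a finger-length-weighted Mahalanobis distance.

    x, y           : feature vectors (e.g. one root-center-angle per finger)
    cov            : covariance matrix estimated from training gestures
    finger_lengths : per-finger lengths used as weights (assumption: longer,
                     more reliably detected fingers contribute more)
    """
    w = np.asarray(finger_lengths, dtype=float)
    w = w / w.sum()                                    # normalize the weights
    d = np.sqrt(w) * (np.asarray(x) - np.asarray(y))   # weight each finger's deviation
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))  # Mahalanobis form
```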
Hand gesture recognition has attracted growing interest in computer vision and image processing. Recent work on hand gesture recognition faces two major problems. The first is how to detect and extract the hand region against color-confusing background objects. The second is the high computational cost of fitting a kinematic hand model with up to 27 degrees of freedom. This paper proposes a stable, real-time static hand gesture recognition system. Our contributions are as follows. First, to deal with color-confusing background objects, we exploit RGB-D (RGB-Depth) information, which allows foreground and background objects to be segmented reliably. In addition, we propose a coarse-to-fine model that uses skin color to extract the hand region robustly and accurately. Second, because the principal direction of the hand region is arbitrary, we use principal component analysis (PCA) to estimate the direction and then compensate for it. Finally, to avoid the cost of traditional model-based optimization, we design a fingertip filter that detects extended fingers simply from their distance to the palm center and their contour curvature. The number of extended fingers is then reported as the recognition result. Experiments verify the stability and speed of our algorithm. On a data set captured by the depth camera, our algorithm recognizes the six pre-defined static hand gestures robustly with an average accuracy of about 98.0%. Furthermore, the average processing time per image (at 640×480 resolution) is 37 ms, which makes the method suitable for many real-time applications.
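As one illustration of the direction-compensation step, the sketch below estimates the principal direction of a binary hand mask with PCA on its pixel coordinates. The mask input and the use of the dominant eigenvector's angle are assumptions based on the abstract, not the authors' exact procedure.

```python
import numpy as np

def hand_principal_angle(mask):
    """Estimate the principal direction of a binary hand mask via PCA.

    mask : 2-D boolean/uint8 array where nonzero pixels belong to the hand.
    Returns the angle (radians) of the first principal axis.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                    # center the pixel coordinates
    cov = np.cov(pts, rowvar=False)            # 2x2 covariance of the point cloud
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue
    return float(np.arctan2(major[1], major[0]))
```

The returned angle could then be compensated by rotating the hand mask into a canonical orientation before the fingertip filter is applied.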