Gradient Feature based Static Sign Language Recognition

Authors

  • Prasad MM Department of Studies in Electronics, Post Graduate Centre, University of Mysore, Hassan, India

DOI:

https://doi.org/10.26438/ijcse/v6i12.531534

Keywords:

Sign Language Recognition System, American Sign Language, Static Sign Language, Gradient Features

Abstract

This paper presents the design of a gradient feature based static sign language recognition system. Sign languages are gestures used by hearing- and speech-impaired people for communication, and they are classified as static, dynamic, or a combination of both. In static sign languages, still hand postures convey information; in dynamic sign languages, sequences of hand postures convey information. In the present work, a computer vision based static sign language recognition system is designed for the American Sign Language alphabet. The images representing the static sign language alphabet are grouped into training and test sets. The training images are preprocessed, and gradient magnitude and gradient direction features are extracted from them; these features are used to train the recognition system. The test images undergo the same preprocessing and feature extraction, and their features are used to evaluate the designed system. A nearest neighbor classifier is used to classify the static sign language hand gestures. Independent experiments evaluate the gradient magnitude and gradient direction features separately, yielding average recognition accuracies of 95.4% for the magnitude feature and 80.3% for the direction feature.
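The pipeline described above (gradient magnitude and direction features fed to a nearest neighbor classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the preprocessing steps, gradient operator, and feature pooling used by the author are not specified here, so central-difference gradients over a raw grayscale image and a flattened feature vector are assumptions.

```python
import numpy as np

def gradient_features(img):
    """Return (magnitude, direction) feature vectors for a 2-D
    grayscale image. Central-difference gradients and simple
    flattening are illustrative assumptions."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)            # derivatives along rows, columns
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    direction = np.arctan2(gy, gx)       # angle in radians
    return magnitude.ravel(), direction.ravel()

def nearest_neighbor(train_feats, train_labels, test_feat):
    """Classify one test feature vector by 1-nearest-neighbor
    (Euclidean distance) against the training feature matrix."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In use, one feature matrix would be built from the magnitude vectors and another from the direction vectors, so the two features can be evaluated in independent experiments as the abstract describes.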

References

[1] Mahmoud Elmezain, Ayoub Al-Hamadi, Omer Rashid, Bernd Michaelis, “Posture and Gesture Recognition for Human-Computer Interaction”, Advanced Technologies, Kankesu Jayanthakumaran (Ed.), InTech Publisher, pp. 415-440, 2009.

[2] Richard Bowden, Andrew Zisserman, Timor Kadir, Mike Brady, “Vision based Interpretation of Natural Sign Languages”, In the Proceedings of the 2003 Int. Conf. on Computer Vision Systems, pp. 391-401, 2003.

[3] Prashan Premaratne, “Human Computer Interaction using Hand Gestures,” Springer, 2014.

[4] D. Karthikeyan, G. Muthulakshmi, “English Letters Finger Spelling Sign Language Recognition System”, Int. Jl. of Engineering Trends and Technology, Vol. 10, No. 7, pp. 334-339, 2014.

[5] Twinkel Verma, S.M. Kataria, “Hand Gesture Recognition Techniques”, Int. Research Jl. of Engineering and Technology, Vol. 03, Issue 4, pp. 2316-2319, 2016.

[6] R.M. McGuire, J. Hernandez-Rebollar, T. Starner, V. Henderson, H. Brashear, D.S. Ross, “Towards a One-way American Sign Language Translator”, In the Proceedings of IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 620-625, 2004.

[7] Qi Wang, Xilin Chen, Liang-Guo Zhang, Chunli Wang, Wen Gao, “Viewpoint Invariant Sign Language Recognition”, Computer Vision and Image Understanding, Vol. 108, pp. 87-97, 2007.

[8] Nancy, Gianetan Singh Selhan, “An Analysis of Hand Gesture Technique using Finger Movement Detection based on Color Marker”, Int. Jl. of Computer Science and Communication, Vol. 3, No. 1, pp. 129-133, 2012.

[9] https://www.nidcd.nih.gov/sites/default/files/Content%20Images/NIDCD-ASL-hands-2014.jpg

[10] https://en.wikipedia.org/w/index.php?title=American_manual_alphabet&oldid=873597841

[11] https://www.kaggle.com/grassknoted/asl-alphabet/data

[12] Gonzalez R.C., Woods R.E., “Digital Image Processing”, 3rd Edn., Pearson Education, Inc., 2013.

Published

2018-12-31
How to Cite

[1]
M. M. Prasad, “Gradient Feature based Static Sign Language Recognition”, Int. J. Comp. Sci. Eng., vol. 6, no. 12, pp. 531–534, Dec. 2018.

Section

Research Article