Interpretation of Indian Sign Language through Video Streaming
Keywords:
Neural Networks, Video Processing, Indian Sign Language
Abstract
Sign language is the primary means of communication for the deaf and speech-impaired community. However, it is rarely learnt by the general public, which makes it difficult for signers to communicate with the wider population. Several recognition methods and techniques have been developed for American Sign Language. This paper proposes an interpretation technique for Indian Sign Language, which is equally complex and conveys messages through several parts of the body, including hand orientation, palm movement and fingertip positions. The proposed technique takes a live video stream of gestures and converts it into an equivalent English sentence. The solution consists of frame extraction, segmentation and refinement of the extracted images, feature extraction, and training of a neural network. Because the individual methods differ in accuracy and efficiency, training the network to classify each sign correctly is of the utmost importance.
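The first two stages of such a pipeline can be illustrated with a minimal sketch, assuming OpenCV is used for video handling and that skin regions are isolated in the YCbCr colour space; the file name, frame step and threshold values below are illustrative assumptions, not the authors' exact parameters.

```python
# Sketch of frame extraction and YCbCr skin segmentation (illustrative only).
import cv2
import numpy as np


def extract_frames(video_path, step=5):
    """Read a video and return every `step`-th frame."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames


def segment_skin_ycbcr(frame):
    """Return a binary mask of skin-coloured pixels in the YCbCr space."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    # Commonly used Cr/Cb ranges for skin; these bounds are assumptions.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Refine the mask with morphological opening to remove small noise blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


if __name__ == "__main__":
    for frame in extract_frames("gesture_clip.mp4"):  # hypothetical input clip
        mask = segment_skin_ycbcr(frame)
        # Feature extraction and the neural-network classifier would follow here.
```

Feature vectors computed from such masks would then be fed to the neural network for sign classification and sentence assembly.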
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, under the condition that the authors are given credit and that in the event of reuse or distribution, the terms of this license are made clear.
