Sign and Voice Translation using Machine Learning and Computer Vision
DOI: https://doi.org/10.26438/ijcse/v11i4.713
Keywords: Computer Vision, Sign Language Recognition, Hand Gesture Recognition, Feature Extraction
Abstract
Sign and voice translation is a critical tool for individuals who cannot hear or speak, and for those who speak different languages. Machine learning techniques are increasingly used to improve the accuracy and efficiency of sign and voice translation systems. These systems use machine learning models to analyze and interpret sign language or speech and translate it into written or spoken language. The models recognize patterns in sign language gestures or speech and convert them into text or speech output; their accuracy depends on the quality of the training data and the complexity of the model architecture. Recent advances in machine learning have improved the performance of sign and voice translation systems, enabling them to recognize more complex gestures and accents. Overall, the use of machine learning in sign and voice translation has the potential to improve the accessibility of information and communication for individuals who are deaf or hard of hearing, or for those who speak different languages. However, there is still much room for improvement, and ongoing research and development are needed to optimize the performance of these systems.
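To make the gesture-to-text step concrete, the sketch below shows one common realization of such a pipeline: per-frame hand landmarks are extracted with MediaPipe Hands and classified with a k-nearest-neighbours model. This is a minimal illustrative sketch under stated assumptions, not the system described here; the training files gesture_features.npy and gesture_labels.npy, the label vocabulary, and the k-NN classifier are hypothetical placeholders.

# Minimal sketch of a landmark-based gesture classifier (illustrative only).
# Assumes: pip install mediapipe opencv-python scikit-learn numpy
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

mp_hands = mp.solutions.hands

def extract_landmarks(bgr_frame, hands):
    """Return a flat (63,) vector of x/y/z hand landmarks, or None if no hand is found."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark  # 21 landmarks per detected hand
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# Hypothetical training data collected offline: one 63-dim vector per sample,
# labelled with gesture names such as "hello" or "thanks".
X_train = np.load("gesture_features.npy")   # shape (n_samples, 63)
y_train = np.load("gesture_labels.npy")     # shape (n_samples,)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

with mp_hands.Hands(static_image_mode=False, max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        features = extract_landmarks(frame, hands)
        if features is not None:
            gesture = clf.predict([features])[0]
            cv2.putText(frame, str(gesture), (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("sign-to-text sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

In a full system, the per-frame k-NN step would typically be replaced by a sequence model (for example an LSTM or a 3D CNN) so that dynamic gestures spanning many frames can be recognized, and the predicted tokens would be passed to a text-to-speech engine to produce the voice output.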
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, under the condition that the authors are given credit and that, in the event of reuse or distribution, the terms of this license are made clear.
