According to the World Federation of the Deaf (2025), over 70 million deaf individuals worldwide use sign language as their primary form of communication. Unfortunately, not all caregivers or family members are fluent in sign language, which can create significant communication barriers.
To address this challenge, a team of students from the student branch at SRM Institute of Science and Technology, Tiruchirappalli Campus, in collaboration with Dolours High School for the Deaf, developed a wearable device that converts sign language into audio and text.
The system integrates AI, machine learning, and computer vision to recognize and interpret sign language in real time. Deep learning models, such as convolutional neural networks (CNNs), are trained on sign language datasets for accurate recognition. Image processing techniques such as edge detection, segmentation, and contour analysis help isolate hand movements from video feeds.
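To illustrate the preprocessing stages mentioned above, here is a minimal, self-contained sketch of threshold-based segmentation followed by a simple boundary (edge) detector on a toy grayscale frame. This is not the team's actual pipeline: a real system would use a library such as OpenCV and a trained CNN, and all function names and values here are hypothetical.

```python
# Hedged sketch: segment a bright "hand" region from a dark background,
# then mark the boundary of the resulting mask. Pure Python for clarity.

def segment(frame, threshold=128):
    """Binary mask: 1 where pixel intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

def outline(mask):
    """Mark pixels where the mask value changes to the right or below."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = mask[y][x + 1] if x + 1 < w else mask[y][x]
            below = mask[y + 1][x] if y + 1 < h else mask[y][x]
            if mask[y][x] != right or mask[y][x] != below:
                out[y][x] = 1
    return out

# Toy 4x4 "frame": a bright 2x2 region stands in for the hand.
frame = [
    [10, 10, 10, 10],
    [10, 200, 200, 10],
    [10, 200, 200, 10],
    [10, 10, 10, 10],
]
mask = segment(frame)
edges = outline(mask)
```

In a deployed system these steps would run per video frame, and the isolated hand region would then be passed to the CNN classifier for sign recognition.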
This innovative solution aims to promote inclusivity by bridging the communication gap between sign language users and non-signers.
This project was made possible by $986.00 in funding from EPICS in IEEE and the Jon C. Taenzer Memorial Fund.
