FAU Researchers Develop AI System for Real-Time American Sign Language Interpretation

FAU’s groundbreaking study leverages AI to interpret American Sign Language in real time, enabling more inclusive communication for the deaf and hard-of-hearing. The study achieved remarkable accuracy rates, showcasing the potential for practical, real-time applications.

Researchers at Florida Atlantic University (FAU) have achieved a significant breakthrough in enhancing communication accessibility for individuals who are deaf or hard of hearing. By leveraging the power of artificial intelligence, the research team has developed a pioneering system capable of interpreting American Sign Language (ASL) gestures in real time.

Bader Alsharif, the first author of the study and a doctoral candidate in FAU’s Department of Electrical Engineering and Computer Science, emphasized the innovation behind their approach.

“Combining MediaPipe and YOLOv8, along with fine-tuning hyperparameters for the best accuracy, represents a groundbreaking and innovative approach,” he said in a news release. “This method hasn’t been explored in previous research, making it a new and promising direction for future advancements.”

The study, published in the Elsevier journal Franklin Open, involved the creation of a custom dataset of 29,820 static images of ASL hand gestures. Each image was meticulously annotated with 21 key landmarks on the hand using MediaPipe, providing detailed spatial information about the hand’s structure and position. The integration of these annotations significantly enhanced the precision of YOLOv8, the deep learning model employed in the research.
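As a rough illustration of that annotation step, the sketch below uses MediaPipe’s Hands solution to extract the 21 landmarks from a single static image. The image path and any downstream formatting are placeholders, since the team’s exact pipeline is not detailed in the release.

```python
# Minimal sketch: extracting the 21 MediaPipe hand landmarks from one
# static image. The image path is a placeholder; the study's exact
# annotation pipeline is not described in this article.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# static_image_mode=True suits a dataset of still photos (no video tracking).
with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    image = cv2.imread("asl_gesture.jpg")  # hypothetical input image
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        hand = results.multi_hand_landmarks[0]
        # Each of the 21 landmarks carries normalized x, y, z coordinates.
        landmarks = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
        print(f"extracted {len(landmarks)} landmarks")  # -> 21
```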

By leveraging this detailed hand pose information, the system achieved remarkable results. The model reached an accuracy of 98%, identified gestures correctly 98% of the time (recall), and attained an overall performance score (F1 score) of 99%. It also recorded a mean Average Precision (mAP) of 98% and a stricter mAP50-95 score of 93%, highlighting its high reliability and precision.
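For context on these figures, the short sketch below shows how precision, recall, and F1 are conventionally computed for a detector; the counts are illustrative placeholders, not the study’s data. (mAP50-95 goes further, averaging precision over intersection-over-union thresholds from 0.50 to 0.95, a stricter test than mAP at a single threshold.)

```python
# Illustrative only: how the standard detection metrics relate.
# The counts below are made up, not taken from the FAU study.
tp, fp, fn = 96, 2, 2  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # fraction of predicted gestures that were right
recall = tp / (tp + fn)     # fraction of actual gestures that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"precision={precision:.2%} recall={recall:.2%} F1={f1:.2%}")
```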

“Our research demonstrates the potential of combining advanced object detection algorithms with landmark tracking for real-time gesture recognition, offering a reliable solution for American Sign Language interpretation,” added co-author Mohammad Ilyas, a professor in FAU’s Department of Electrical Engineering and Computer Science. “The success of this model is largely due to the careful integration of transfer learning, meticulous dataset creation and precise tuning of hyperparameters.”

This innovative system represents a crucial leap forward in assistive technology, as it can significantly reduce communication barriers. The model’s ability to maintain high recognition rates even under varying hand positions underscores its adaptability in diverse settings.

Looking to the future, the research team aims to expand the dataset to include a wider range of hand shapes and gestures, further enhancing the model’s accuracy. Additionally, optimizing the model for deployment on edge devices will help preserve its real-time performance in resource-constrained environments, making it accessible for everyday use.
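Although the article does not say which toolchain the team plans to use, a typical route to edge deployment for a YOLOv8 model is an export through the Ultralytics API, sketched below with a placeholder checkpoint.

```python
# Sketch of a common edge-deployment path for YOLOv8; the checkpoint name
# is a placeholder, and the team's actual export target is not specified.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # hypothetical trained weights
# ONNX is a widely supported interchange format for constrained hardware;
# TensorRT, CoreML, or TFLite targets use the same export call.
model.export(format="onnx")
```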

Stella Batalama, dean of FAU’s College of Engineering and Computer Science, highlighted the societal impact of this research.

“By improving American Sign Language recognition, this work contributes to creating tools that can enhance communication for the deaf and hard-of-hearing community,” she said in the news release. “The model’s ability to reliably interpret gestures opens the door to more inclusive solutions that support accessibility, making daily interactions — whether in education, healthcare, or social settings — more seamless and effective for individuals who rely on sign language. This progress holds great promise for fostering a more inclusive society where communication barriers are reduced.”

The study also involved contributions from Easa Alalwany, a recent doctoral graduate of FAU’s College of Engineering and Computer Science and an assistant professor at Taibah University in Saudi Arabia.