Abstract
Background
Sign language is an essential means of communication for hearing-impaired individuals.
Objective
We aimed to develop an American Sign Language recognition dataset and use it in a neural network-based deep learning model that interprets sign language gestures and hand poses into natural language.
Methods
In this study, we developed a dataset and a Convolutional Neural Network (CNN)-based sign language interface system that interprets sign language gestures and hand poses into natural language. The CNN enhances the predictability of the American Sign Language alphabet (ASLA). This research establishes a new ASLA dataset that takes varying conditions, such as lighting and distance, into account.
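The abstract does not specify the network's layer configuration, so the following is only a minimal Keras sketch of a CNN classifier for the 26-letter ASL alphabet; the input resolution, layer sizes, dropout rate, and optimizer are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a CNN classifier for the ASL alphabet (26 classes).
# The 64x64 RGB input, filter counts, and optimizer below are assumptions;
# the abstract does not describe the actual architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_cnn(input_shape=(64, 64, 3), num_classes=26):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Stacked convolution/pooling blocks extract hand-shape features.
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Dense head maps the extracted features to one of 26 letter classes.
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_asl_cnn()
model.summary()
```

In such a setup, `model.fit` on labeled hand-pose images and `model.evaluate` on a held-out split would yield the kind of accuracy and loss figures reported in the Results.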
Results
The dataset created in this study is a new addition to the field of sign language recognition (SLR) and may be used to develop SLR systems. Furthermore, we compared the results on our dataset with those on two datasets from other studies. Although the other datasets have invariant scene conditions, the proposed CNN model demonstrated high accuracy on all tested datasets. Despite the varied conditions and larger volume of the new dataset, the model achieved 99.38% accuracy with excellent prediction and a small loss (0.0250).
Conclusions
The proposed system may be considered a promising solution for medical applications that use deep learning, offering superior accuracy. Moreover, because our dataset was created under variable conditions, it broadens the contributions, comparisons, results, and conclusions available in the field of SLR and may enhance such systems.
| Original language | English |
| --- | --- |
| Article number | 100048 |
| Journal | Computer Methods and Programs in Biomedicine Update |
| Volume | 2 |
| Publication status | Published - 1 Jan 2022 |