A real-time approach to recognition of Turkish sign language by using convolutional neural networks
Abstract
Sign language is a form of visual communication used by people with hearing impairments to express themselves. The main purpose of this study is to make life easier for these people. In this study, a data set of 3,200 RGB images was created for 32 classes (32 static words), collected from three different people. Data augmentation was applied to the data set, increasing the number of images from 3,200 to 19,200 (600 per class). A 10-layer convolutional neural network (CNN) model was built to classify the signs, and the VGG16, Inception, and ResNet deep network architectures were applied via transfer learning. In addition, the signs were classified with support vector machines and the k-nearest neighbor method, two traditional machine learning approaches, using features extracted from the last layer of the CNN. The most successful method was determined by comparing the results in terms of time and performance. An interface was also developed that translates the static words of Turkish sign language (TSL) into written language in real time. The designed real-time system was evaluated on its success in recognizing static TSL words and printing its predictions on the computer screen.
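The hybrid step described above, classifying CNN last-layer features with a traditional method such as k-nearest neighbor, can be sketched as follows. This is a minimal illustration only: the feature vectors, cluster parameters, and the choice k=3 are placeholders, not the study's actual data or settings.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a query feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]

# Stand-ins for features taken from the CNN's last layer:
# two well-separated synthetic clusters representing two sign classes.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (10, 64)),
                   rng.normal(1.0, 0.1, (10, 64))])
labels = np.array([0] * 10 + [1] * 10)

query = rng.normal(1.0, 0.1, 64)   # a vector drawn near class 1
print(knn_predict(feats, labels, query, k=3))  # → 1
```

In the paper's pipeline, `feats` would instead hold the activations of the trained CNN's last layer for each training image, so the distance comparison operates in the learned feature space rather than on raw pixels.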
Keywords
Image processing, Deep learning, Turkish sign language recognition, Convolutional neural network