Hand and Pose-Based Feature Selection for Zero-Shot Sign Language Recognition
Date
2024-08-22
Publisher
IEEE ACCESS
Abstract
Sign language serves as an indispensable means of communication for a significant portion of society. A major challenge in advancing automatic sign language recognition is the difficulty of obtaining suitable training data for every sign in a supervised learning setting. This difficulty stems from the labor-intensive process of labeling signs and the limited number of skilled annotators available for the task. This work introduces a new approach to the problem of Zero-Shot Sign Language Recognition (ZSSLR). We model hand and body landmark data streams extracted from the signer. Based on these extracted and modeled features, we employ a data grading approach to produce visual embeddings via a self-attention mechanism. We combine these visual embeddings with textual sign description features in the Zero-Shot Learning (ZSL) setting. We evaluate the efficacy of our method on two proposed ZSL benchmarks.
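
The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: self-attention over extracted hand/pose landmark streams yields a visual embedding, which is matched against textual sign-description embeddings for zero-shot classification. All module names, dimensions, and the cosine-similarity scoring are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LandmarkEncoder(nn.Module):
    # Encode a sequence of hand/body landmark frames with self-attention.
    def __init__(self, landmark_dim=150, embed_dim=256, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(landmark_dim, embed_dim)   # frame-wise projection
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                                # x: (batch, frames, landmark_dim)
        h = self.proj(x)
        attn_out, _ = self.attn(h, h, h)                 # self-attention over the temporal stream
        h = self.norm(h + attn_out)
        return h.mean(dim=1)                             # pooled visual embedding

class ZeroShotSignClassifier(nn.Module):
    # Score a clip's visual embedding against textual sign-description embeddings.
    def __init__(self, landmark_dim=150, embed_dim=256, text_dim=300):
        super().__init__()
        self.visual = LandmarkEncoder(landmark_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)  # map descriptions into the joint space

    def forward(self, landmarks, class_text_embeddings):
        v = F.normalize(self.visual(landmarks), dim=-1)                  # (batch, embed_dim)
        t = F.normalize(self.text_proj(class_text_embeddings), dim=-1)   # (classes, embed_dim)
        return v @ t.T                                   # cosine-similarity logits over classes

# Usage: unseen sign classes are recognized purely from their description vectors.
model = ZeroShotSignClassifier()
video_landmarks = torch.randn(2, 64, 150)    # 2 clips, 64 frames, 150 landmark coordinates
unseen_class_texts = torch.randn(5, 300)     # 5 unseen signs, 300-d description embeddings
logits = model(video_landmarks, unseen_class_texts)
predictions = logits.argmax(dim=1)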
Keywords
Zero-shot learning, Zero-shot sign language recognition, Sign language recognition, Assistive technologies, Feature extraction, Sign language, Visualization, Streams, Semantics, Long short-term memory