Authors: Ozcan, Giray Sercan; Bilge, Yunus Can; Sumer, Emre
Date available: 2025-04-28
Date issued: 2024-08-22
ISSN: 2169-3536
URI: https://hdl.handle.net/11727/12902
Language: en-US
Title: Hand and Pose-Based Feature Selection for Zero-Shot Sign Language Recognition
Type: Article
Keywords: Zero-shot learning; Zero-shot sign language recognition; Sign language recognition; Assistive technologies; Feature extraction; Sign language; Visualization; Streams; Semantics; Long short-term memory
Identifiers: WOS: 001288428400001; WOS: 001293369500002; Scopus: 2-s2.0-85200249770; Scopus: 2-s2.0-85195107649

Abstract: Sign language is an indispensable means of interaction for a portion of society, offering a unique way of communication. A significant challenge in advancing automatic sign language recognition is the difficulty of obtaining suitable training data for every sign in supervised learning, which stems from the complex process of labeling signs and the limited number of skilled annotators available for the task. This work introduces a new approach to the problem of Zero-Shot Sign Language Recognition (ZSSLR). We model hand and body landmark data streams extracted from the signer and, based on these features, employ a data grading approach to produce visual embeddings with a self-attention mechanism. We combine textual sign description features with these visual embeddings in the Zero-Shot Learning (ZSL) setting and assess the efficacy of our method on two proposed ZSL benchmarks.
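
The pipeline sketched in the abstract (landmark streams, a self-attention visual embedding, and matching against textual sign descriptions for unseen classes) can be illustrated with a minimal sketch. This is not the authors' implementation: the landmark dimensionality, embedding size, and all names below are illustrative assumptions, and the paper's data grading step is omitted.

# Minimal sketch (assumed names and sizes, not the paper's code) of zero-shot
# sign classification from landmark streams: a self-attention encoder pools a
# sequence of hand/pose keypoints into a visual embedding, which is scored
# against text embeddings of sign descriptions for unseen classes.
import torch
import torch.nn as nn

D_LANDMARK = 2 * 21 * 3 + 33 * 3   # e.g. two 21-point hands + a 33-point pose, (x, y, z); assumed
D_MODEL = 256                      # shared visual/text embedding size; assumed

class LandmarkEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_LANDMARK, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                    # x: (batch, frames, D_LANDMARK)
        h = self.encoder(self.proj(x))       # self-attention over frames
        return h.mean(dim=1)                 # pooled visual embedding

def zero_shot_scores(visual_emb, text_embs):
    # Cosine similarity between visual embeddings and the text embeddings of
    # unseen sign descriptions; argmax over classes gives the prediction.
    v = nn.functional.normalize(visual_emb, dim=-1)
    t = nn.functional.normalize(text_embs, dim=-1)
    return v @ t.T                           # (batch, num_unseen_classes)

# Toy usage: 8 clips of 60 frames scored against 10 unseen sign classes.
clips = torch.randn(8, 60, D_LANDMARK)
texts = torch.randn(10, D_MODEL)             # stand-in for description features
scores = zero_shot_scores(LandmarkEncoder()(clips), texts)
pred = scores.argmax(dim=1)

Because the visual and textual features live in one shared space, classes unseen during training can be recognized purely from their descriptions, which is the core of the ZSL setting the abstract describes.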