Objective

As the mentor for this project, I guided the development of a solution that addresses communication challenges faced by individuals with speech disabilities. The project centers on a sign-language-to-speech converter device, offering a more efficient and versatile means of communication.

Technology Stack

  • Sony Spresense
  • nRF24L01+ Network
  • Sensor Fusion
  • Deep Learning

Key Features

The sign-language-to-speech converter device is designed to enhance communication for individuals with speech disabilities. Key features include:
  • Sign Language Decoding: Utilizes finger movements and angular hand positions for decoding signs into specific words.
  • Custom Voice Set: Feeds decoded words to a speaker with a custom voice set, improving speech efficiency and accuracy.
  • Personalized Calibration: The system is tuned and calibrated to an individual’s hand and finger movements, enhancing accuracy and efficiency.
  • Input Sensors: Flex sensors and Inertial Measurement Units (IMU) capture and process input signals from sign language gestures.
  • Gesture Tracking: IMU data is used to predict hand angles, enabling accurate tracking of gestures.
  • Mobile and Unrestrictive: Battery-powered design ensures mobility and unrestricted use in various communication scenarios.
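To illustrate the sensor-fusion step described above, here is a minimal sketch of a complementary filter, a common way to estimate a hand's tilt angle by fusing gyroscope rates (responsive but drift-prone) with accelerometer tilt (stable but noisy). The function names, sample rate, and blend factor are illustrative assumptions, not the project's actual implementation.

```python
import math

def accel_tilt(ax, az):
    """Tilt angle in degrees derived from accelerometer gravity components (illustrative)."""
    return math.degrees(math.atan2(ax, az))

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend integrated gyro rate with accelerometer tilt.

    alpha weights the fast gyro path; (1 - alpha) slowly pulls the
    estimate toward the drift-free accelerometer angle.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate a hand held steady at a 10-degree tilt: the gyro reads ~0 deg/s
# while the accelerometer reports the gravity components of that tilt.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(
        angle,
        gyro_rate=0.0,
        accel_angle=accel_tilt(math.sin(math.radians(10)),
                               math.cos(math.radians(10))),
        dt=0.01,  # 100 Hz update rate (assumed)
    )
# angle converges toward 10 degrees as the accelerometer term accumulates
```

In practice the fused angles would be combined with the flex-sensor readings to form the feature vector fed to the deep-learning decoder.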

By combining the Sony Spresense, an nRF24L01+ wireless network, sensor fusion, and deep learning, the project represents a significant advancement in assistive technology, offering a streamlined and effective communication tool for individuals with speech disabilities.
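Since the nRF24L01+ limits each payload to 32 bytes, the glove's sensor samples must fit a compact binary layout before transmission. The sketch below shows one plausible packing, assuming five flex sensors as 16-bit values and three IMU angles as 32-bit floats; the exact field layout is an assumption, not the project's actual protocol.

```python
import struct

# Hypothetical payload: 5 flex readings (uint16) + 3 IMU angles (float32)
# = 22 bytes, comfortably inside the nRF24L01+ 32-byte payload limit.
PACKET_FMT = "<5H3f"  # little-endian

def pack_sample(flex, angles):
    """Serialize one sensor sample into a radio payload."""
    assert struct.calcsize(PACKET_FMT) <= 32, "payload exceeds nRF24L01+ limit"
    return struct.pack(PACKET_FMT, *flex, *angles)

def unpack_sample(payload):
    """Recover flex readings and IMU angles from a received payload."""
    vals = struct.unpack(PACKET_FMT, payload)
    return list(vals[:5]), list(vals[5:])
```

A packing like this keeps the radio link simple: the glove transmits fixed-size samples, and the receiver unpacks them directly into the feature vector for decoding.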

Quick Links