# GestureSpeak

An efficient way to communicate in sign language for those who need it, powered by machine intelligence.

## Table of Contents

- [Introduction](#introduction)
- [Features](#features)
- [Usage](#usage)
- [Contributor](#contributor)
- [License](#license)

## Introduction

Our approach leverages n-gram modeling and Knowledge Distillation to improve both performance and accuracy. The system combines these techniques to capture the temporal dependencies between gestures while reducing the computational burden typically associated with deep learning models. We use variants of the EfficientNetV2 and MobileNetV3 architectures, pre-trained on the ImageNet dataset, to balance efficiency and accuracy.
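As a concrete illustration, below is a minimal sketch of how this kind of teacher-student distillation could be set up with PyTorch and torchvision, with an EfficientNetV2 teacher and a MobileNetV3 student. The specific model variants, layer indices, temperature, and loss weighting are assumptions for illustration, not the project's exact configuration.

```python
# Minimal knowledge-distillation sketch (illustrative, not the project's exact setup).
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 26  # ASL alphabet letters

# Teacher: EfficientNetV2-S pre-trained on ImageNet (in practice it would first
# be fine-tuned on the sign-language data); student: MobileNetV3-Small.
teacher = models.efficientnet_v2_s(weights="IMAGENET1K_V1")
teacher.classifier[1] = torch.nn.Linear(teacher.classifier[1].in_features, NUM_CLASSES)
teacher.eval()  # teacher is frozen during distillation

student = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
student.classifier[3] = torch.nn.Linear(student.classifier[3].in_features, NUM_CLASSES)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft-target KL divergence with the usual cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def train_step(images, labels):
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```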

## Features

- **Real-time Translation**: converts sign language gestures into text or spoken words in real time.
- **Fast Inference**: by pairing EfficientNet backbones with Knowledge Distillation, the system runs faster than traditional CNN-based recognition pipelines.
- **High Accuracy**: advanced machine learning techniques keep gesture recognition accuracy high.
- **American Sign Language Support**: focuses on the ASL alphabet (A, B, C, and so on).

## Usage

To set up GestureSpeak locally, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/YapWH/Understand-What-You-See.git
   cd GestureSpeak
   ```

2. Run the real-time validation script (a hypothetical sketch of such a script appears below):

   ```bash
   python real-time.py
   ```
    
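The contents of `real-time.py` are not reproduced here; the block below is only a rough, hypothetical sketch of what a webcam-based recognition loop for the ASL alphabet could look like, assuming OpenCV for capture and a distilled MobileNetV3 student checkpoint named `student.pt` (the file name, preprocessing, and model choice are illustrative assumptions, not the repository's actual script).

```python
# Hypothetical real-time recognition loop (not the actual real-time.py).
import cv2
import torch
import torch.nn.functional as F
from torchvision import models

LETTERS = [chr(ord("A") + i) for i in range(26)]  # ASL alphabet classes

# Rebuild the student architecture and load assumed weights.
model = models.mobilenet_v3_small(num_classes=len(LETTERS))
model.load_state_dict(torch.load("student.pt", map_location="cpu"))  # assumed checkpoint name
model.eval()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Simplified preprocessing: resize, BGR -> RGB, scale to [0, 1].
    img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)[0]
    letter = LETTERS[int(probs.argmax())]
    # Overlay the predicted letter on the live frame.
    cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("GestureSpeak", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```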

## Contributor

## License

This project is licensed under the MIT License. See the LICENSE file for more details.