Handy Mandy

Webcam Gesture Recognition Model Instructions

This document provides instructions for setting up and running the gesture recognition model on your webcam.

Prerequisites

A working Python 3 installation with pip. All remaining Python dependencies are listed in requirements.txt and installed in the steps below.

Instructions for running the model

(Optional) It is recommended to install all Python dependencies in a virtual environment¹, but you can skip this step.

Windows PowerShell:

python -m venv myenv
.\myenv\Scripts\Activate.ps1
cd "myenv"

macOS/Linux bash:

python3 -m venv myenv
source myenv/bin/activate
cd "myenv"
  1. Unzip the folder into the virtual environment


  2. Install all dependencies

Windows PowerShell:

cd "Handy Mandy"
pip install -r .\requirements.txt

macOS/Linux bash:

cd "Handy Mandy"
pip install -r ./requirements.txt
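
A quick way to confirm the install worked is to try importing the main library from the activated environment. This is only a sketch: OpenCV is assumed because webcam.py calls cv2; adjust the check to whatever requirements.txt actually lists.

# Sanity check: this import should succeed once the requirements are installed.
import cv2
print("OpenCV version:", cv2.__version__)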
  3. Adjust the settings on the model

Webcam dimensions (Line 7, webcam.py):

wCam, hCam = 1280, 720
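
For context, here is a minimal sketch of how these dimensions are commonly applied when opening a camera with OpenCV; the exact code in webcam.py may differ.

import cv2

# Hypothetical sketch: request a 1280x720 capture from the default webcam (index 0).
wCam, hCam = 1280, 720
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, wCam)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, hCam)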
  4. Run the model (Note: this will not ask for permission before accessing your webcam!)

Windows PowerShell:

python .\webcam.py

macOS/Linux bash:

python3 ./webcam.py
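
If you want to confirm a camera is reachable before launching the model, a small OpenCV check like the following can help (camera index 0 is an assumption; change it if you have several cameras):

import cv2

# Try to open the default webcam and report whether it is available.
cap = cv2.VideoCapture(0)
print("Webcam available:", cap.isOpened())
cap.release()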
  5. If the left/right hand is not accurately determined, check the following setting in the model:

Comment out Line 24 of webcam.py:

img = cv2.flip(img, 1)
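
For background, handedness detectors such as MediaPipe Hands label Left/Right assuming a mirrored (selfie-style) image, so flipping the frame horizontally swaps the labels. The sketch below illustrates the idea; it assumes webcam.py uses MediaPipe Hands, which may not match the actual implementation.

import cv2
import mediapipe as mp

# Sketch: print the detected handedness for a single frame, with the flip disabled.
hands = mp.solutions.hands.Hands(max_num_hands=2)
cap = cv2.VideoCapture(0)
ok, img = cap.read()
if ok:
    # img = cv2.flip(img, 1)  # mirroring the frame swaps which hand reads as Left/Right
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if results.multi_handedness:
        for hand in results.multi_handedness:
            print(hand.classification[0].label)  # "Left" or "Right"
cap.release()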

Footnotes

  1. Best practice is to use virtual environments to manage dependency conflicts between projects.

About

Walking humanoid robot with AI gesture recognition capabilities, powered by ESP32
