How to generate "the preprocessed TED dataset" #62
To generate "the preprocessed TED dataset" on new videos, is it enough to execute the commands in https://github.com/youngwoo-yoon/youtube-gesture-dataset up to make_ted_dataset.py?
Yes, that's correct. You need to review the results of each step to make sure the scripts work well with the new videos.
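For example, one quick way to review the pose-estimation step on a new video is to overlay the extracted 2D keypoints on the frames. This is only a minimal sketch: the JSON layout (one list of per-frame keypoint triples) and the file paths are assumptions for illustration, not the pipeline's actual output format.

```python
import json
import cv2

def review_skeletons(video_path, skeleton_json_path, stride=30):
    """Step through a video and draw the extracted 2D keypoints for eyeballing."""
    with open(skeleton_json_path) as f:
        frames = json.load(f)  # assumed: a list of per-frame (x, y, confidence) triples
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, img = cap.read()
        if not ok:
            break
        if idx % stride == 0 and idx < len(frames):
            for x, y, conf in frames[idx]:
                if conf > 0.1:  # skip low-confidence keypoints
                    cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)
            cv2.imshow('skeleton check', img)
            if cv2.waitKey(0) == 27:  # press ESC to stop reviewing
                break
        idx += 1
    cap.release()
    cv2.destroyAllWindows()
```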
It seems https://github.com/youngwoo-yoon/youtube-gesture-dataset cannot generate the ted_db. I checked make_ted_dataset.py, and it only stores a limited set of key-values. I also checked data_preprocessor.py in Gesture-Generation-from-Trimodal-Context and found that it needs additional key-values (including 'skeleton_3d', 'audio_raw', and 'audio_feat'; see the reply below). As you can see, they do not match. The files under the ted_db folder also do not look like the files that make_ted_dataset.py saves. So I doubt whether https://github.com/youngwoo-yoon/youtube-gesture-dataset can generate the correct files like ted_db.
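One way to check the mismatch concretely is to dump the keys of a sample entry from the downloaded ted_db. A minimal sketch, assuming the entries are pyarrow-serialized dicts as in the trimodal repo's data loader (pyarrow.deserialize only exists in older pyarrow releases) and that the LMDB sits at the path below, which you should adjust to your setup:

```python
import lmdb
import pyarrow

# Path is an assumption; point it at wherever your ted_db LMDB actually lives.
env = lmdb.open('data/ted_dataset/lmdb_train', readonly=True, lock=False)
with env.begin() as txn:
    for key, value in txn.cursor():
        video = pyarrow.deserialize(value)  # entries assumed pyarrow-serialized dicts
        print('video-level keys:', sorted(video.keys()))
        if video.get('clips'):
            print('clip-level keys:', sorted(video['clips'][0].keys()))
        break  # one sample entry is enough for the comparison
```

Running the same kind of key dump on the pickle produced by make_ted_dataset.py then shows exactly which keys are missing.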
Sorry for the confusion. Those are similar but not exactly the same. You need to update the code to get the missing information: run a 3D pose estimator to obtain 'skeleton_3d', while the other keys, 'audio_raw' and the frame numbers, are straightforward. 'audio_feat' was not used in the final model of the trimodal paper.
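To make that concrete, below is a hedged sketch of how each clip produced by make_ted_dataset.py could be augmented with the missing keys. lift_to_3d is a hypothetical stand-in for whatever 3D pose estimator you choose, and the field names ('skeletons', 'start_frame_no', 'end_frame_no'), key spelling, and sampling rate are assumptions to verify against data_preprocessor.py:

```python
import librosa

AUDIO_SR = 16000  # assumed sampling rate; verify against data_preprocessor.py


def lift_to_3d(skeletons_2d):
    """Hypothetical placeholder: plug in a real 2D-to-3D pose lifting model here."""
    raise NotImplementedError


def augment_clip(clip, wav_path, fps=30):
    """Add the keys that data_preprocessor.py expects to one clip dict."""
    start_f = clip['start_frame_no']  # field names assumed
    end_f = clip['end_frame_no']
    # 'skeleton_3d': run a 3D pose estimator on the stored 2D skeletons
    # (check the exact key spelling in data_preprocessor.py).
    clip['skeleton_3d'] = lift_to_3d(clip['skeletons'])
    # 'audio_raw': the raw waveform covering the clip's time span.
    audio, _ = librosa.load(wav_path, sr=AUDIO_SR, mono=True,
                            offset=start_f / fps,
                            duration=(end_f - start_f) / fps)
    clip['audio_raw'] = audio
    # 'audio_feat' can be left out; it was not used in the final trimodal model.
    return clip
```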