Fine-Tuning ljspeech-neuralhmm #2830
umairjamali
started this conversation in
General
I am fine-tuning ljspeech-neuralhmm on my own dataset and am getting an avg_loss between -3 and -4.
The learning rate is the default, i.e. 0.001.
The GPU is an Nvidia A100 with 80 GB of memory.
The dataset is split 80:10:10 into training, validation, and test sets.
All other parameters are set to their defaults.
How can I tune the hyperparameters to get the best possible results? I don't know of any programs or tools for this.
What should my learning rate, batch_size, etc. be?
Are there any other parameters I should try adjusting?
Any other suggestions?
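In case it helps frame the question: one generic way to explore learning rate and batch size, absent a dedicated tool, is a small grid search over short fine-tuning runs. This is a minimal sketch in plain Python; `grid_search` and `evaluate` are hypothetical names, not part of Coqui TTS, and `evaluate` would wrap a short fine-tuning run that returns the validation loss for one configuration.

```python
from itertools import product

def grid_search(evaluate, learning_rates, batch_sizes):
    """Try every (lr, batch_size) pair and keep the one with the
    lowest validation loss. `evaluate` is a hypothetical callable
    that runs a short fine-tuning trial and returns its val loss."""
    best = None
    for lr, bs in product(learning_rates, batch_sizes):
        loss = evaluate(lr, bs)
        if best is None or loss < best[0]:
            best = (loss, lr, bs)
    return best  # (best_loss, best_lr, best_batch_size)

if __name__ == "__main__":
    # Dummy stand-in for a real trial, just to show the mechanics:
    # here lower lr and larger batch happen to give lower "loss".
    dummy = lambda lr, bs: lr * 10 - bs * 0.001
    print(grid_search(dummy, [1e-3, 3e-4, 1e-4], [16, 32]))
```

Each trial only needs to run long enough for the validation loss curves to separate; the winning pair can then be used for the full fine-tuning run.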