Thank you for open-sourcing this awesome work. I'm trying to reproduce the results in the paper, but after 870 epochs of training (far more than the 160 epochs reported in the paper), I only got PESQ: 1.95 ± 0.69. Is there anything I'm missing? I ran the code with all the default settings.
I trained the model with a batch size of 8 and a gradient accumulation step of 4 to approximate the effective batch size of 32 across 4 GPUs described in the paper. All other settings were left at their defaults, and I got PESQ: 2.76 ± 0.66. May I ask whether you have tested with the pre-trained weights? I evaluated the pretrained weights on the VB-DMD dataset and got PESQ: 2.91 ± 0.63.
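For anyone on a single GPU wanting to try the same trick: the idea is to accumulate gradients over several micro-batches and only step the optimizer once per group, so micro-batch 8 with 4 accumulation steps behaves like batch 32. Below is a minimal, framework-free sketch of that logic (all names here are illustrative, not from this repo; the repo itself presumably does this inside its PyTorch training loop):

```python
# Sketch of gradient accumulation on a toy scalar model: loss = (w*x - y)^2.
# Gradients from `accum_steps` micro-batches are averaged into `grad`, and the
# parameter `w` is updated only once per group of micro-batches, mimicking a
# larger effective batch size. Names (`run`, `accum_steps`) are hypothetical.

def run(data, lr=0.1, accum_steps=4):
    """Train scalar weight w on (x, y) pairs with gradient accumulation.

    Returns the final weight and the number of optimizer steps taken.
    """
    w = 0.0
    grad = 0.0          # accumulated (averaged) gradient buffer
    steps = 0
    for i, (x, y) in enumerate(data, 1):
        # micro-batch gradient of (w*x - y)^2 w.r.t. w, pre-divided by
        # accum_steps so the buffer holds the group-averaged gradient
        grad += 2.0 * x * (w * x - y) / accum_steps
        if i % accum_steps == 0:
            w -= lr * grad  # one optimizer step per accum_steps micro-batches
            grad = 0.0      # equivalent of optimizer.zero_grad()
            steps += 1
    return w, steps
```

With 8 samples and `accum_steps=4`, the loop performs only 2 optimizer steps, each using the average gradient of 4 micro-batches; in PyTorch the same effect comes from scaling the loss by `1/accum_steps` before `backward()` and calling `optimizer.step()` every fourth iteration.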