Random results when running inference on 2 similar textual records #13
Comments
And I checked the tensors after embedding, which are all the same. Something happens in evaluation.
I was also facing the same issue. Once you've loaded the model, put it in eval mode.
Even with eval mode enabled and a fixed random seed, it gives different inferences for single and batch prediction of the same textual pair. Example:
Probabilities: tensor([0.9999, 0.0167, 0.9953, 0.9995, 0.2451, 0.9998])
Now, predict only the first 2 pairs:
Probabilities: tensor([0.9999, 0.0159])
Now, predict only the second pair:
Probability: tensor([0.0114])
I don't know how to fix this. What is the problem here?
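The instability described in this thread matches dropout still being active at inference time. A minimal PyTorch sketch (a bare `nn.Dropout` layer stands in for the fine-tuned model, which is an assumption for illustration) showing that train-mode forward passes disagree while eval-mode passes are bitwise identical:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the fine-tuned network: only the dropout layer matters here.
drop = nn.Dropout(p=0.1)
x = torch.randn(1, 256)

# Train mode: dropout randomly zeroes activations, so two forward passes
# on the same input give different tensors.
drop.train()
a = drop(x)
b = drop(x)

# Eval mode: dropout is a no-op, so repeated passes are bitwise identical.
drop.eval()
with torch.no_grad():
    c = drop(x)
    d = drop(x)

print(torch.equal(a, b))  # almost surely False in train mode
print(torch.equal(c, d))  # True in eval mode
```

If eval mode is set and the scores still drift, the dropout layers have not actually been switched off (for example, a custom `forward` that calls `F.dropout` without passing `training=self.training` will ignore eval mode).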
Hi,
I fine-tuned this model and ran into a curious issue when running inference on 2 similar textual records, for example:
[["PROTEXASE THERAPEUTICS", "PROTEXASE THERAPEUTICS, INC."]]
I tested it 10 times, which produced 10 different prediction scores:
[[0.6435145641728891, 0.35648543582711095],
[0.44686401791372865, 0.5531359820862713],
[0.4768868842385305, 0.5231131157614696],
[0.5165862730694053, 0.48341372693059464],
[0.45327667255908927, 0.5467233274409107],
[0.5023805971114581, 0.497619402888542],
[0.7547820757051401, 0.24521792429486],
[0.02058591123972741, 0.9794140887602728],
[0.7257732298308167, 0.27422677016918334],
[0.30100721825239946, 0.6989927817476006]]
These results are quite unstable.
I checked `from snippext.model import MultiTaskNet` and changed dropout=0.1 to 0, but the random results still happen. In fact, the dropout layer should not even be active during inference.
I find this issue baffling; I might be missing something important. What is the problem?
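Changing dropout=0.1 to 0 in the constructor only affects newly built layers; the usual fix is to call `model.eval()` (and wrap inference in `torch.no_grad()`) after loading the checkpoint. A sketch with a hypothetical stand-in network (the real MultiTaskNet from snippext is not needed for the demonstration):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the fine-tuned network; any nn.Module behaves the same way.
model = nn.Sequential(nn.Linear(16, 16), nn.Dropout(0.1), nn.ReLU(), nn.Linear(16, 2))

model.eval()  # recursively sets training=False on every submodule

# Sanity check: no dropout module should still be in training mode.
still_training = [m for m in model.modules()
                  if isinstance(m, nn.Dropout) and m.training]
print(len(still_training))  # 0 when eval mode propagated

# With eval mode and no_grad, repeated predictions on the same input agree exactly.
x = torch.randn(1, 16)
with torch.no_grad():
    p1 = torch.softmax(model(x), dim=1)
    p2 = torch.softmax(model(x), dim=1)
print(torch.equal(p1, p2))  # True: scores no longer fluctuate between runs
```

The same check can be run on the actual fine-tuned model right before prediction; if any dropout module still reports `training=True`, that explains the 10 different scores.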