A strange phenomenon about quiver sampler #122
Comments
Well, this is strange.
What is your sampler mode?
UVA. But if I remember correctly, GPU mode shows the same phenomenon.
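For context, the mode here refers to the `mode` argument of the sampler. A minimal sketch, assuming the `quiver.pyg.GraphSageSampler` constructor used in the torch-quiver examples; `data` is a placeholder for a PyG graph such as Reddit:

```python
import quiver

# Build the CSR topology from a PyG edge_index.
csr_topo = quiver.CSRTopo(data.edge_index)

# 'UVA' samples from pinned host memory via zero-copy access;
# 'GPU' keeps the graph topology in device memory.
uva_sampler = quiver.pyg.GraphSageSampler(csr_topo, sizes=[25, 10], device=0, mode='UVA')
gpu_sampler = quiver.pyg.GraphSageSampler(csr_topo, sizes=[25, 10], device=0, mode='GPU')
```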
That's odd, let me check it.
Hi, have you figured out the reason?
Hi, I would like to ask how you run Quiver. I installed torch-quiver 0.1.0 following https://github.com/quiver-team/torch-quiver/blob/main/docker/README.md
Hi, thank you for your wonderful open-source work on Quiver. Recently I was looking into the sampler module and ran into a quite strange phenomenon, shown in the attached screenshot. I was benchmarking the Quiver sampler speed against the sampling-plus-training speed.
Machine 1: Tesla V100-SXM2-16GB, Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz, 20 physical CPU cores.
Model: GraphSage, batch_size=128, samples=[25,10], Reddit.
It seems very strange: the time taken by sampling plus model training is less than the time taken by sampling alone.
I tried several things to figure out the reason (DataLoader settings, CUDA streams, etc.), but I am still confused. Could you help me?
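For reference, here is a minimal sketch of how the two cases can be timed. It follows the torch-quiver Reddit example (`quiver.CSRTopo`, `quiver.pyg.GraphSageSampler` in UVA mode); the dataset path, the two-layer SAGE model, and the learning rate are placeholders rather than the exact script used above.

```python
import time

import torch
import torch.nn.functional as F
import quiver
from torch_geometric.datasets import Reddit
from torch_geometric.nn import SAGEConv

device = torch.device('cuda:0')

# Dataset and seed nodes (path is a placeholder).
dataset = Reddit('/data/Reddit')
data = dataset[0]
train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)

# Quiver sampler in UVA mode, sizes=[25, 10], batch_size=128 as in the test.
csr_topo = quiver.CSRTopo(data.edge_index)
sampler = quiver.pyg.GraphSageSampler(csr_topo, sizes=[25, 10], device=0, mode='UVA')
loader = torch.utils.data.DataLoader(train_idx, batch_size=128,
                                     shuffle=True, drop_last=True)

# Keep features and labels on the GPU for simplicity (Reddit fits in 16 GB).
x, y = data.x.to(device), data.y.squeeze().to(device)

class SAGE(torch.nn.Module):
    def __init__(self, in_c, hid_c, out_c):
        super().__init__()
        self.convs = torch.nn.ModuleList([SAGEConv(in_c, hid_c),
                                          SAGEConv(hid_c, out_c)])

    def forward(self, x, adjs):
        # adjs follow the PyG NeighborSampler layout: (edge_index, e_id, size).
        for i, (edge_index, _, size) in enumerate(adjs):
            x_target = x[:size[1]]
            x = self.convs[i]((x, x_target), edge_index)
            if i == 0:
                x = F.relu(x)
        return x

model = SAGE(dataset.num_features, 256, dataset.num_classes).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

def run_epoch(train: bool) -> float:
    """Time one epoch: sampling only, or sampling plus training."""
    torch.cuda.synchronize()
    t0 = time.time()
    for seeds in loader:
        n_id, batch_size, adjs = sampler.sample(seeds)
        if train:
            n_id = n_id.to(device)
            adjs = [adj.to(device) for adj in adjs]
            optimizer.zero_grad()
            out = model(x[n_id], adjs)
            loss = F.cross_entropy(out, y[n_id[:batch_size]])
            loss.backward()
            optimizer.step()
    torch.cuda.synchronize()
    return time.time() - t0

print(f'sampling only      : {run_epoch(train=False):.2f}s')
print(f'sampling + training: {run_epoch(train=True):.2f}s')
```

The explicit `torch.cuda.synchronize()` calls matter here: without them, asynchronous GPU work can make one loop appear faster or slower than it really is.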