In the inference part, we should use the selected image to generate the latent z value, as shown in 'z_base = graph.generate_z(dataset[base_index]["x"])'.
But in glow/models.py, the input is repeated to (B, 1, 1, 1).
I don't understand why, and this repeat operation needs too much GPU memory, so I always run out of memory.
Could you help me?
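For reference, here is a minimal sketch of the pattern being described; the function name, the forward signature, and the returned tuple are assumptions for illustration, not the repository's exact code:

```python
import torch

def generate_z_sketch(graph, x, batch_size):
    # Sketch of the pattern described above (not the repo's exact code):
    # a single image x of shape (C, H, W) is tiled into a full batch
    # before the forward pass, so activation memory grows linearly with
    # batch_size even though only one latent is actually needed.
    x = x.unsqueeze(0).repeat(batch_size, 1, 1, 1)  # (B, C, H, W)
    with torch.no_grad():
        z, _, _ = graph(x)  # forward signature is an assumption
    return z[0]             # only the first copy of the latent is kept
```

Under this pattern, the memory used by the single forward pass scales with batch_size, even though only z[0] is ever used.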
Did "Out of memory" happen in inference phase? What's the batch size and the GPU memory size of your device?
Yep:
In the training phase I set batch_size=48, so each device gets 12 samples, which takes about 4 GB on my TITAN V (12 GB).
In the inference phase, the 'generate_z' operator repeats the input to (B, 1, 1, 1) with B = Train.batch_size (48), so a single device has to hold the whole batch of 48 samples: about 4 GB x 4 = 16 GB > 12 GB. So it runs out of memory.
You can use a small batch size in the inference phase. It doesn't need to be the same as in the training phase.
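For example, something along these lines encodes a single image with an effective batch size of 1 (a sketch only; 'graph', 'dataset', and 'base_index' follow the usage shown in the issue, and the forward signature is an assumption):

```python
import torch

# Sketch of the suggestion above, not the repository's actual API:
# encode one image with an effective batch size of 1 instead of
# tiling it to Train.batch_size.
with torch.no_grad():                          # inference only, no grad buffers
    x = dataset[base_index]["x"].unsqueeze(0)  # (1, C, H, W): a batch of one
    z_base, _, _ = graph(x)                    # single forward pass with B = 1
z_base = z_base.squeeze(0)                     # drop the batch dimension
```

With B = 1, the forward pass uses roughly 1/48 of the activation memory of the tiled version, well within a 12 GB card.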