
[encoder part problem] generate_z() #19

Open

emigmo opened this issue Jan 5, 2020 · 3 comments

Comments

@emigmo

emigmo commented Jan 5, 2020

In the inference part, we should use the selected image to generate the latent value z,

as shown in `z_base = graph.generate_z(dataset[base_index]["x"])`.

But in glow/models.py, the input is repeated to (B, 1, 1, 1).

I don't understand why; this operation needs too much GPU memory, and I always run out of memory.

Could you help me?

@chaiyujin (Owner)

Did the "out of memory" error happen in the inference phase?
What's the batch size and the GPU memory size of your device?

@emigmo (Author)

emigmo commented Jan 7, 2020

Yep:

In the training phase, I set batch_size=48, and each device is assigned 12 samples, taking about 4 GB on my TITAN V (12 GB).

In the inference phase, the generate_z operator repeats the input to (B, 1, 1, 1), where B = Train.batch_size (48). That would take 4 GB × 4 > 12 GB, so it runs out of memory.
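The memory estimate above can be checked with a quick back-of-the-envelope sketch. The numbers come from this thread; the assumption (activation memory scales roughly linearly with batch size) and the variable names are mine, not part of the repository:

```python
# Back-of-the-envelope memory check, assuming activation memory
# scales roughly linearly with the batch size.

TRAIN_SAMPLES_PER_DEVICE = 12   # batch_size=48 split across 4 GPUs
TRAIN_MEMORY_GB = 4.0           # observed per-device usage during training
DEVICE_MEMORY_GB = 12.0         # TITAN V

gb_per_sample = TRAIN_MEMORY_GB / TRAIN_SAMPLES_PER_DEVICE

# generate_z repeats the single image to the full training batch size:
inference_batch = 48
inference_memory_gb = gb_per_sample * inference_batch

print(inference_memory_gb)                      # 16.0
print(inference_memory_gb > DEVICE_MEMORY_GB)   # True -> out of memory
```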

@chaiyujin (Owner)

You can use a small batch size in the inference phase. It doesn't have to be the same as in the training phase.
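A minimal sketch of that suggestion, independent of the Glow code (`infer_in_batches` and `infer_fn` are hypothetical names, not functions from this repository):

```python
def infer_in_batches(samples, infer_fn, batch_size=4):
    """Run infer_fn over samples in small chunks, so peak memory
    stays proportional to batch_size rather than len(samples)."""
    results = []
    for start in range(0, len(samples), batch_size):
        results.extend(infer_fn(samples[start:start + batch_size]))
    return results

# Toy stand-in for a per-batch inference call: squares each "image".
zs = infer_in_batches(list(range(10)), lambda batch: [x * x for x in batch])
print(zs)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In the real setting, `infer_fn` would be the model's forward pass wrapped in `torch.no_grad()`, called with a batch size small enough to fit in GPU memory.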
