
[BUG] - Restorer does not work with distributed training #58

Open
EmanueleGhelfi opened this issue Mar 30, 2020 · 0 comments · May be fixed by #56
Labels
bug Something isn't working

Comments

@EmanueleGhelfi
Contributor

Describe the bug
Run examples/gan/pix2pix_facades_multi_gpu.py in a multi-GPU scenario. If you try to restore the training once it has finished, you get an error caused by wrong input shapes.
This happens because in a multi-GPU scenario the batch size is updated based on the number of devices.
The fix is simply to move the call to build_or_restore after the batch size update, as sketched below.
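A minimal sketch of the ordering issue (the build_or_restore helper and the model below are placeholders, not the actual code in pix2pix_facades_multi_gpu.py):

```python
import tensorflow as tf


def build_or_restore(batch_size: int) -> tf.keras.Model:
    """Placeholder: build a model whose input shape depends on the batch size,
    then (hypothetically) restore its weights from a checkpoint."""
    inputs = tf.keras.Input(shape=(256, 256, 3), batch_size=batch_size)
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same")(inputs)
    return tf.keras.Model(inputs, outputs)


strategy = tf.distribute.MirroredStrategy()
batch_size_per_replica = 2

# WRONG: restoring here uses the per-replica batch size, so the restored
# model expects a smaller input batch than the one actually fed during
# distributed training, and restoring fails with a shape mismatch.
# model = build_or_restore(batch_size_per_replica)

# The batch size must first be scaled by the number of devices ...
global_batch_size = batch_size_per_replica * strategy.num_replicas_in_sync

# ... and only then should the models be built or restored, so that their
# input shapes match the global batch size used for distributed training.
model = build_or_restore(global_batch_size)
```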

Expected behavior
The restorer should restore the models.

Code to reproduce the issue
examples/gan/pix2pix_facades_multi_gpu.py

EmanueleGhelfi linked a pull request on Apr 1, 2020 that will close this issue.
mr-ubik added the bug label on Apr 7, 2020.
mr-ubik changed the title from "[BUG/PERFORMANCE] - Restorer does not work with distributed training" to "[BUG] - Restorer does not work with distributed training" on Apr 7, 2020.

2 participants