Do not call clearState on model while training
Calling clearState() seems to cause issues that, after 4-5 days of debugging, I haven't been able to fix. See, for example: torch/nn#1141 and torch/cunn#441. Further, it's unclear to me whether `getParameters` and memory management in general work well when a call to `clearState` can destroy modules (and therefore weight tensors).

The easiest solution to all of this is simply to never call clearState on the model while it is training. When saving the model, we create a copy of it on the CPU, call clearState on this CPU copy, and save that copy to disk.
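A minimal sketch of that save path, assuming a GPU-resident model and the standard `nn` module API; the helper name `saveCheckpoint` and the checkpoint path are illustrative, not taken from this repository:

```lua
require 'nn'
require 'cunn'  -- assumes the model lives on the GPU during training

-- Illustrative helper: never calls clearState on the training model itself.
local function saveCheckpoint(model, path)
   -- Clone the model, move the clone to the CPU, and clear only the clone's
   -- intermediate buffers (output/gradInput). The original GPU model keeps
   -- all of its state and continues training untouched.
   local cpuCopy = model:clone():float():clearState()
   torch.save(path, cpuCopy)
end

-- Usage: saveCheckpoint(model, 'model_latest.t7')
```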