Right now, the model loader reconstructs the model from the state_dict by inspecting the size of lstm.hh. That is not awful, but it is fragile: it breaks if the architecture changes (e.g. to two layers) or if the parameter names shift slightly (e.g. lstm to rnn).
This could definitely be improved, even though it is not an issue for the current models.
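One way to make the shape-based reconstruction a bit more robust is to derive all hyperparameters from the state_dict's tensor shapes and standard PyTorch LSTM naming (`weight_hh_l{k}` has shape `(4*hidden_size, hidden_size)`), rather than hard-coding a single key. This is only a sketch: the shapes below are plain tuples standing in for tensor `.shape` values, and the key names are assumed to follow `torch.nn.LSTM`'s convention.

```python
def infer_lstm_config(state_dict_shapes):
    """Infer (input_size, hidden_size, num_layers) from parameter shapes.

    state_dict_shapes maps parameter names to shape tuples, mirroring
    {k: v.shape for k, v in state_dict.items()} on a real model.
    """
    # One weight_hh_l{k} entry per LSTM layer.
    hh_keys = sorted(k for k in state_dict_shapes if "weight_hh_l" in k)
    if not hh_keys:
        raise ValueError("no LSTM weight_hh_l* parameters found")
    num_layers = len(hh_keys)
    # weight_hh_l0 has shape (4*hidden_size, hidden_size).
    hidden_size = state_dict_shapes[hh_keys[0]][1]
    # weight_ih_l0 has shape (4*hidden_size, input_size).
    ih_key = hh_keys[0].replace("weight_hh", "weight_ih")
    input_size = state_dict_shapes[ih_key][1]
    return input_size, hidden_size, num_layers

# Example shapes for a single-layer LSTM with input_size=64, hidden_size=128.
shapes = {
    "lstm.weight_ih_l0": (512, 64),   # (4*hidden, input)
    "lstm.weight_hh_l0": (512, 128),  # (4*hidden, hidden)
    "lstm.bias_ih_l0": (512,),
    "lstm.bias_hh_l0": (512,),
}
print(infer_lstm_config(shapes))  # (64, 128, 1)
```

Because it matches on the `weight_hh`/`weight_ih` suffixes rather than a fixed prefix, a rename from `lstm` to `rnn` or an extra layer would still be handled.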
(The most generic solution would be to save the model object directly (as a "binary") rather than a state_dict, which makes the model instantly torch.load-able. However, that introduces dependencies on the exact RNN implementation, the location of its class definition, the torch version, etc., which is why it is not the recommended way to save.)
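A middle ground between pickling the whole model and guessing from tensor shapes is to persist the constructor arguments alongside the weights, so the loader can rebuild the model without depending on the class's module path. With torch this would be `torch.save`/`torch.load` on the same dict; plain pickle is used here only to keep the sketch dependency-free, and `RNNModel` in the comment is a hypothetical model class.

```python
import io
import pickle

def save_checkpoint(config, state_dict):
    """Bundle constructor kwargs and weights into one serialized blob."""
    buf = io.BytesIO()
    pickle.dump({"config": config, "state_dict": state_dict}, buf)
    return buf.getvalue()

def load_checkpoint(blob):
    """Recover the config and weights; the caller rebuilds the model."""
    ckpt = pickle.loads(blob)
    # With a real model class (hypothetical name):
    #   model = RNNModel(**ckpt["config"])
    #   model.load_state_dict(ckpt["state_dict"])
    return ckpt["config"], ckpt["state_dict"]

blob = save_checkpoint(
    {"input_size": 64, "hidden_size": 128, "num_layers": 1},
    {"lstm.weight_hh_l0": "..."},  # placeholder for real tensors
)
config, sd = load_checkpoint(blob)
print(config["hidden_size"])  # 128
```

This keeps checkpoints loadable across renames and refactors: only the config keys need to stay in sync with the constructor signature, not the parameter names or tensor shapes.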
See #38 (comment)