Feat/onnx support #2620
base: master
Conversation
Codecov Report: coverage decreased by 0.18% (94.23% → 94.06%).

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #2620      +/-   ##
==========================================
- Coverage   94.23%   94.06%   -0.18%
==========================================
  Files         141      141
  Lines       15509    15534      +25
==========================================
- Hits        14615    14612       -3
- Misses        894      922      +28
```

View full report in Codecov by Sentry.
Very cool feature, thanks a lot @madtoinou 🚀 Looks like we're already close to merging :)
I had a couple of suggestions regarding the example and the export logic, where I see potential to simplify things further.
@@ -350,6 +351,87 @@

```python
model_finetune = SomeTorchForecastingModel(...,  # use identical parameters & values
model_finetune.load_weights("/your/path/to/save/model.pt")
```

#### Exporting model to ONNX format for inference
reading this I think again about how nice it would be if all our Datasets returned feature / target arrays at fixed positions (e.g. all return a tuple of past target, past cov, historic future cov, future cov, static cov, ... even if they do not support all covariate types) :D
Yeah, definitely, it would make a lot of things more intuitive and easier to tweak for users.
Checklist before merging this PR:
Fixes #553, fixes #2617.
Summary
- Refactored the `_process_input_batch` and `_get_batch_prediction()` methods to make the behavior uniform across PLModules
- Added a `to_onnx` method to the `TorchForecastingModel` class, taking care of creating the `input_names` and `input_sample` based on the model attributes

Other Information
The consistency of the forecasts was tested in a notebook for all the possible configurations. I did not implement unit tests, as onnx should remain an optional dependency. Let me know what you think.