Replies: 2 comments 6 replies
-
@TKassis thank you Timothy!
-
@TKassis I'd be super interested to see when and how things fail with the iTransformer setup. Did you share your findings anywhere, or do you plan to in the future?
-
We've done a lot of work using this approach in the past (encoding individual series as tokens). It works very well on held-out data that is within the training distribution, but it fails catastrophically at generalizing to even slight distribution shifts. We found that conventional ViT-style tokenization (treating each series as a channel and then patchifying along time) gives worse performance on held-out test sets but generalizes much better.
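To make the contrast concrete, here is a minimal numpy sketch of the two tokenization schemes being compared. All shapes, names, and the use of a plain random linear embedding are illustrative assumptions, not the commenter's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 7 series, 96 time steps.
n_series, seq_len, d_model, patch_len = 7, 96, 64, 16
x = rng.standard_normal((n_series, seq_len))

# Scheme 1 (iTransformer-style): embed each WHOLE series as one token,
# so the attention sequence length equals the number of series.
w_series = rng.standard_normal((seq_len, d_model))  # assumed linear embedding
tokens_series = x @ w_series                        # shape (n_series, d_model)

# Scheme 2 (ViT/patch-style): treat each series as a channel, split it into
# patches along time, and embed each patch as a token.
n_patches = seq_len // patch_len
patches = x.reshape(n_series, n_patches, patch_len)
w_patch = rng.standard_normal((patch_len, d_model))  # assumed linear embedding
tokens_patch = patches @ w_patch  # shape (n_series, n_patches, d_model)

print(tokens_series.shape)  # (7, 64)
print(tokens_patch.shape)   # (7, 6, 64)
```

The key structural difference is the token count: scheme 1 yields one token per series, while scheme 2 yields one token per patch per channel, which is what changes how the model can (or cannot) generalize across series.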