forked from llvm/torch-mlir
Bump to upstream 532d297c #197
Merged
Conversation
…lvm#3028) Although we provide a wheel package for Python 3.8, it may actually throw the following exception: `TypeError: 'type' object is not subscriptable`
Two modifications:
1. `torch.int64` is enum value 4 in `TORCH_DTYPE_TO_INT`.
2. Add `int32` support.
nod-ai/SHARK-ModelDev#267 --------- Authored-by: [email protected] Co-authored-by: Vivek Khandelwal <[email protected]>
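The Python 3.8 failure mentioned above is the standard PEP 585 pitfall: built-in types such as `list` only became subscriptable in Python 3.9. A minimal sketch of the 3.8-compatible spelling (the helper function is hypothetical, purely to show the annotation style):

```python
from typing import List

# On Python 3.8, an annotation like `list[int]` evaluated at runtime raises
# `TypeError: 'type' object is not subscriptable`; built-in generics (PEP 585)
# only arrived in Python 3.9. `typing.List[int]` works on 3.8, as does
# deferring annotation evaluation with `from __future__ import annotations`.

def ints_to_strs(xs: List[int]) -> List[str]:
    # Hypothetical helper to demonstrate the annotation style.
    return [str(x) for x in xs]

print(ints_to_strs([1, 2, 3]))  # ['1', '2', '3']
```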
…llvm#3037) stablehlo is a better boundary between the frontend compiler and the backend compiler than chlo.
The only difference between version 7 and newer versions is support for different data types. We should allow this pattern to match as early as 7. Earlier versions have a more manual broadcast specification through attributes, so I did not include those versions. See: [onnx.Div docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
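For reference, the opset-7-and-later behavior the pattern now matches is plain numpy-style (multidirectional) broadcasting, which can be sketched as:

```python
import numpy as np

# Since opset 7, onnx.Div broadcasts its operands numpy-style; earlier
# opsets instead took explicit `axis`/`broadcast` attributes, which this
# pattern deliberately does not handle.
a = np.arange(6.0).reshape(2, 3)   # shape (2, 3)
b = np.array([1.0, 2.0, 4.0])      # shape (3,), broadcasts across rows
print(a / b)
```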
Added support for dynamic shapes in the `flattenusingints` op in the tosa dialect. As a result, some Argmax tests pass. This PR fixes llvm#3004. The following tests pass after this PR:
```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
Lift this from 2-dim only to n-dim for n>=2
Set PyTorch and TorchVision version to nightly release 2024-03-18. Signed-Off By: Vivek Khandelwal <[email protected]>
This adds support for converting DynamicQuantizeLinear from torch-onnx to torch. I could not get an e2e test to pass, since there seems to be some issues with uint8 casting somewhere lower in the pipeline. For example compiling with IREE for llvm-cpu, I would get either the correct zero point (if zp < 128) or the correct zero-point minus 256 (if zp >= 128). The output tensor seems to always return a tensor of zeros, which also occurs when running uint8 examples through QuantizeLinear. Edit: the first problem can be resolved by casting the output back to uint8 on output, the second problem is resolved with PR llvm#3018
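The zero-point symptom described above is consistent with a uint8 value being reinterpreted as signed int8 somewhere lower in the pipeline; a minimal sketch of that reinterpretation (an illustration of the observed behavior, not the actual pipeline code):

```python
# Sketch of the observed wraparound: a uint8 zero point read back through a
# signed 8-bit view is unchanged below 128 and shifted by -256 at 128+.
def reinterpret_as_int8(zp: int) -> int:
    assert 0 <= zp <= 255
    return zp - 256 if zp >= 128 else zp

print(reinterpret_as_int8(100))  # 100: zp < 128, correct zero point
print(reinterpret_as_int8(200))  # -56: zp >= 128, zero point minus 256
```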
…supplied to the loop block (llvm#3040) Co-authored-by: Rob Suderman <[email protected]> Co-authored-by: Xida Ren <[email protected]>
`SExtValue` was used for `int` and `uint` clamp values. This caused the result to always be output as zero.
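The underlying mistake is the classic sign-extension error: reading an unsigned clamp bound such as 255 through a sign-extending accessor yields -1, which can collapse the clamp range. An 8-bit Python sketch of the two extensions (illustrative only):

```python
# Illustration: sign-extension vs zero-extension of the same 8-bit pattern.
def sext8(bits: int) -> int:
    # Interpret the low 8 bits as a signed value (like getSExtValue).
    return bits - 256 if bits & 0x80 else bits

def zext8(bits: int) -> int:
    # Interpret the low 8 bits as an unsigned value (like getZExtValue).
    return bits & 0xFF

print(sext8(0xFF), zext8(0xFF))  # -1 255
```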
…#3045) At some point, this op became kwarg-only instead of arg/kwarg. Discovered when upgrading to PyTorch 2.3. Also adds a test as this was untested in-tree (was caught out of tree).
Signed-off-by: Gaurav Shukla <[email protected]>
This PR adds support for onnx.LogSoftmax both for old versions (<13, with axis >=0), and new versions (13).
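Numerically, log-softmax follows the standard identity log_softmax(x) = x - logsumexp(x) along the given axis; a reference computation in its numerically stable form:

```python
import numpy as np

# Reference log-softmax along `axis`, written in the stable
# x - max - log(sum(exp(x - max))) form.
def log_softmax(x: np.ndarray, axis: int) -> np.ndarray:
    m = x.max(axis=axis, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

x = np.array([[1.0, 2.0, 3.0]])
print(log_softmax(x, axis=1))
```

Exponentiating the result recovers a proper softmax (rows summing to 1), which is a convenient sanity check for any lowering.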
The previous conversions for AtenAdaptiveAvgPool1dOp and AtenAdaptiveMaxPool2dOp are refactored into a general templated conversion that works for all of the AtenAdaptive...PoolNdOp's. New support is added for the following ops: 1. AtenAdaptiveMaxPool1d 2. AtenAdaptiveMaxPool3d 3. AtenAdaptiveAvgPool3d Support is also provided for passing inputs without batch dimensions. For example, applying adaptive_avg_pool2d to an input tensor of rank 3. After [pytorch #118162](pytorch/pytorch#118162) gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with return_indices (which will pass with that upstream fix). --------- Co-authored-by: James Newling <[email protected]>
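The common thread a templated conversion can exploit is that all adaptive pools derive a per-output-element window from the input and output sizes. A minimal 1-d sketch, assuming the usual PyTorch window convention (start = floor(i*in/out), end = ceil((i+1)*in/out)):

```python
import math

# Sketch of adaptive average pooling in 1-d using the PyTorch-style
# window formulas; each output element averages its own input window.
def adaptive_avg_pool1d(xs, out_size):
    n = len(xs)
    out = []
    for i in range(out_size):
        lo = (i * n) // out_size
        hi = math.ceil((i + 1) * n / out_size)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

print(adaptive_avg_pool1d([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 3.5]
```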
This PR adds lowering of diag_embed to the linalg dialect. Tracked in nod-ai/SHARK-ModelDev#288 --------- Co-authored-by: sachink <[email protected]>
This commit adds the OnnxToTorch lowering for the Mish, Softplus, HardSwish, Trilu, and ThresholdedRelu ops. Signed-Off By: Vivek Khandelwal <[email protected]>
This is a partial landing of llvm#3046 while waiting for an upstream change for the rest of it.
…es (llvm#3055) Reshaping tensors depends on directly matching individual dimensions to their corresponding dims in the `torch.view` reshape dimensions. This involves decoupling dynamic dimensions from their static counterparts and supporting cleanup / canonicalization.
This commit creates a build script used by e2eshark test suite CI
* Also adds the basic scaffolding for handling more of these, which will be needed for cond, while, etc. * Refactors some of the support in the generic OpOverload emitter so it can be shared with these other special forms. This has been on my list for a while, but it just so happens that as part of upgrading to PyTorch 2.3 and a pure upstream flow in Turbine, we were using a feature that required integration with auto_functionalized. This is perhaps the "weirdest" of the higher-order ops and a poor place to start, but needs must. We have testing for this in Turbine. Full support in Turbine has an entire custom ops facility. I've reduced this down to a unit test in torch-mlir.
Support folding with a literal vtensor. Changed this to a canonicalization because the pattern creates a new op.
…lvm#3065) llvm#3055 adds `lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp`, which depends on `torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h`. ``` ERROR: /root/.cache/bazel/_bazel_root/b89349c08f7224396763d14fe35cba11/external/torch-mlir/BUILD.bazel:170:11: Compiling lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp failed: (Exit 1): clang failed: error executing command /usr/lib/llvm-16/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -Wunused-but-set-parameter -Wno-free-nonheap-object -fcolor-diagnostics -fno-omit-frame-pointer ... (remaining 101 arguments skipped) Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging external/torch-mlir/lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp:18:10: fatal error: 'torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h' file not found #include "torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h" ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1 error generated. Target @torch-mlir//:torch-mlir-opt failed to build ``` This PR adds the dependency and brings bazel builds back to green. CI: https://github.com/sjain-stanford/torch-mlir/actions/runs/8445558053/job/23132941876
Co-authored-by: Xida Ren <[email protected]>
…nversion (llvm#3053) Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were failing due to numerics. This was as a result of passing -1 as the kernel size in the lowering for the corresponding onnx op GlobalAveragePool.
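GlobalAveragePool averages over the entire spatial extent, so the effective kernel size must equal the input's spatial dimensions rather than a -1 placeholder. A reference computation over an NCHW tensor:

```python
import numpy as np

# GlobalAveragePool over NCHW is an average with kernel size equal to the
# full spatial dims (H, W), leaving a 1x1 spatial output per channel.
x = np.arange(2 * 3 * 2 * 2, dtype=np.float64).reshape(2, 3, 2, 2)
out = x.mean(axis=(2, 3), keepdims=True)
print(out.shape)  # (2, 3, 1, 1)
```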
Are you going to disable the torch-nightly tests in CI here, or do something else about the failing CI?
The next available version is already 20240408, so this would only be temporary.
AtenInstanceNormModule_basic among others is XPASS
Set PyTorch and TorchVision version to nightly release 2024-04-01. Signed-Off By: Vivek Khandelwal <[email protected]>
Set PyTorch and TorchVision version to nightly release 2024-04-08. Signed-Off By: Vivek Khandelwal <[email protected]>
…orentin.merge_532d297c
TinaAMD approved these changes on Jul 30, 2024.
LGTM
Should pass tests. Merge done in several steps:
c51e2130f269615e0418ca421e5976f94022daee
fe59f1ee0d052086da5dfc10445eda4161e99151
9ae33e482ef02ff4bc345463128901a514480313
5f325749f98f58e6c9b7b861cd91c8ba9e56476a
c19fc9ba4767f4212f980002e6d3cef9cccec47b
532d297c46b13066da851af6eca53b91ff7a13c0
Next commit in line is:
The bumped LLVM commit is: