
Bump to upstream 532d297c #197

Merged
60 commits merged on Jul 30, 2024

Conversation

@cferry-AMD commented Jul 12, 2024

Should pass tests. The merge was done in several steps:

  • c51e2130f269615e0418ca421e5976f94022daee
  • fe59f1ee0d052086da5dfc10445eda4161e99151
  • 9ae33e482ef02ff4bc345463128901a514480313
  • 5f325749f98f58e6c9b7b861cd91c8ba9e56476a
  • c19fc9ba4767f4212f980002e6d3cef9cccec47b
  • 532d297c46b13066da851af6eca53b91ff7a13c0

Next commit in line is:

commit ec4cb8be446da8c118675ff7db2ea279ce3effe1
Author: Rob Suderman <[email protected]>
Date:   Mon Apr 1 16:34:59 2024 -0700

    Bump LLVM to llvm/llvm-project@0030fc4ac74a9ce645adb9d59e108da4d4d11818 (#3079)
    
    Co-authored-by: Peiming Liu <[email protected]>

The bumped LLVM commit is:

commit 0030fc4ac74a9ce645adb9d59e108da4d4d11818
Author: Rob Suderman <[email protected]>
Date:   Fri Mar 29 15:04:40 2024 -0700

    [mlir]Fix dialect conversion drop uses (#86991)
    
    Before deleting the block we need to drop uses to the surrounding args.
    If this is not performed dialect conversion failures can result in a
    failure to remove args (despite the block having no remaining uses).

penguin-wwy and others added 30 commits March 15, 2024 08:29
…lvm#3028)

Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`

Two modifications (see the sketch below):
1. torch.int64 is enum value 4 in TORCH_DTYPE_TO_INT
2. add int32 support
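A minimal sketch of where that exception typically comes from, assuming the cause is a PEP 585 style subscripted builtin evaluated at runtime on Python 3.8; the mapping name below is illustrative, not the actual torch-mlir symbol:

```python
import sys

# On Python 3.8, subscripting builtin types at runtime raises:
#   TypeError: 'type' object is not subscriptable
# because PEP 585 generics (dict[str, int], list[int], ...) only arrived in 3.9.
if sys.version_info >= (3, 9):
    DtypeMap = dict[str, int]          # fine on 3.9+
else:
    from typing import Dict            # 3.8-compatible spelling
    DtypeMap = Dict[str, int]

# Illustrative mapping only; per the commit message, torch.int64 maps to
# enum value 4 in TORCH_DTYPE_TO_INT, and int32 support was added alongside.
EXAMPLE_DTYPE_TO_INT: DtypeMap = {"torch.int64": 4}
print(EXAMPLE_DTYPE_TO_INT)
```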
…llvm#3037)

The rationale is that StableHLO is a better boundary between the frontend
compiler and the backend compiler than CHLO.
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
Added support for dynamic shapes in the `flattenusingints` op in the tosa
dialect. As a result, some Argmax tests now pass.
This PR fixes issue llvm#3004.

The following tests pass after this PR:
```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
Lift this from 2-dim only to n-dim for n>=2
Set PyTorch and TorchVision version to nightly release 2024-03-18.

Signed-Off By: Vivek Khandelwal <[email protected]>
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example, when
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128),
as sketched below. The output also always seems to be a tensor of zeros,
which likewise occurs when running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the output back to
uint8 on output; the second problem is resolved by PR llvm#3018.
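A minimal numeric sketch of the zero-point symptom described above, assuming the off-by-256 comes from a uint8 value being reinterpreted as signed somewhere lower in the pipeline; this is illustration only, not the actual IREE code path:

```python
import numpy as np

def reinterpret_u8_as_i8(zp: int) -> int:
    """Read the bits of an unsigned 8-bit zero point as a signed 8-bit value."""
    return int(np.array([zp], dtype=np.uint8).view(np.int8)[0])

print(reinterpret_u8_as_i8(100))   # 100: zp < 128 survives unchanged ("correct zero point")
print(reinterpret_u8_as_i8(200))   # -56: zp >= 128 comes out as zp - 256

# Casting back to uint8 on output recovers the intended value, matching the
# workaround mentioned in the "Edit" above.
print(int(np.array([-56], dtype=np.int8).view(np.uint8)[0]))   # 200
```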
…supplied to the loop block (llvm#3040)

Co-authored-by: Rob Suderman <[email protected]>
Co-authored-by: Xida Ren <[email protected]>
SExtValue was used for both `int` and `uint` clamp values. This caused the
result to always be output as `zero` (see the sketch below).
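A hedged, plain-Python sketch of why sign-extending an unsigned clamp bound collapses the result to zero; the helpers below mimic SExtValue/ZExtValue-style extraction and are illustrative, not the MLIR code itself:

```python
def sext_from_8bit(value: int) -> int:
    """Sign-extend the low 8 bits, as an SExtValue-style accessor would."""
    value &= 0xFF
    return value - 256 if value >= 128 else value

def zext_from_8bit(value: int) -> int:
    """Zero-extend the low 8 bits, as is appropriate for an unsigned bound."""
    return value & 0xFF

x = 42
# An unsigned max bound of 255 sign-extends to -1, so clamp(x, 0, -1) is always 0.
print(max(0, min(x, sext_from_8bit(255))))   # 0   (the "always zero" symptom)
# Zero-extending keeps the bound at 255 and the clamp behaves as intended.
print(max(0, min(x, zext_from_8bit(255))))   # 42
```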
…#3045)

At some point, this op became kwarg-only instead of arg/kwarg.
Discovered when upgrading to PyTorch 2.3.

Also adds a test as this was untested in-tree (was caught out of tree).
This PR adds support for onnx.LogSoftmax both for old versions (<13,
with axis >=0), and new versions (13).
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOp's.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions.
For example, applying adaptive_avg_pool2d to an input tensor of rank 3.
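For illustration, a small eager-mode PyTorch snippet of the unbatched rank-3 case mentioned above; it only shows the behavior the conversion is expected to match, not the lowering itself:

```python
import torch
import torch.nn.functional as F

# Rank-3 input: (channels, height, width), i.e. no batch dimension.
x = torch.randn(3, 16, 16)
print(F.adaptive_avg_pool2d(x, output_size=(4, 4)).shape)    # torch.Size([3, 4, 4])

# The usual batched rank-4 case, for comparison.
xb = torch.randn(2, 3, 16, 16)
print(F.adaptive_avg_pool2d(xb, (4, 4)).shape)               # torch.Size([2, 3, 4, 4])
```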

After [pytorch #118162](pytorch/pytorch#118162)
gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <[email protected]>
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in nod-ai/SHARK-ModelDev#288

---------

Co-authored-by: sachink <[email protected]>
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.

Signed-Off By: Vivek Khandelwal <[email protected]>
This is a partial landing of llvm#3046 while waiting for an upstream change
for the rest of it.
…es (llvm#3055)

Reshaping tensors depends on directly matching individual dimensions to
their corresponding dim in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization (see the example below).
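For context, a tiny eager-mode example of the dimension matching described above, where each `torch.view` target dim either lines up with a source dim or is left for the framework to infer; this only illustrates the op's semantics, not the ScalarizeShapes rewrite itself:

```python
import torch

x = torch.randn(4, 3, 5)

# Each target dim can be matched against the source dims (4 -> 4, 3*5 -> 15).
y = x.view(4, 15)

# A -1 plays the role of a dynamic dimension: when the surrounding dims are
# static, it can be resolved to a constant (here 15) and canonicalized away.
z = x.view(4, -1)
print(y.shape, z.shape)   # torch.Size([4, 15]) torch.Size([4, 15])
```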
This commit creates a build script used by the e2eshark test suite CI.
* Also adds the basic scaffolding for handling more of these, which will
be needed for cond, while, etc.
* Refactors some of the support in the generic OpOverload emitter so it
can be shared with these other special forms.

This has been on my list for a while, but it just so happens that as
part of upgrading to PyTorch 2.3 and a pure upstream flow in Turbine, we
were using a feature that required integration with auto_functionalized.
This is perhaps the "weirdest" of the higher-order ops and a poor place
to start, but needs must. We have testing for this in Turbine.

Full support in Turbine has an entire custom ops facility. I've reduced
this down to a unit test in torch-mlir.
Support folding with a literal vtensor.
Change it to a canonicalization, because this pattern will create a new op.
…lvm#3065)

llvm#3055 adds
`lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp`, which depends on
`torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h`.
```
ERROR: /root/.cache/bazel/_bazel_root/b89349c08f7224396763d14fe35cba11/external/torch-mlir/BUILD.bazel:170:11: Compiling lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp failed: (Exit 1): clang failed: error executing command /usr/lib/llvm-16/bin/clang -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -Wunused-but-set-parameter -Wno-free-nonheap-object -fcolor-diagnostics -fno-omit-frame-pointer ... (remaining 101 arguments skipped)

Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
external/torch-mlir/lib/Dialect/Torch/Transforms/ScalarizeShapes.cpp:18:10: fatal error: 'torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h' file not found
#include "torch-mlir/Dialect/TorchConversion/Transforms/BackendTypeConversion.h"
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Target @torch-mlir//:torch-mlir-opt failed to build
```

This PR adds the dependency and brings bazel builds back to green.

CI:
https://github.com/sjain-stanford/torch-mlir/actions/runs/8445558053/job/23132941876
…nversion (llvm#3053)

Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were
failing due to numerics. This was a result of passing -1 as the
kernel size in the lowering for the corresponding ONNX op,
GlobalAveragePool (see the sketch below).
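A small numeric sketch of what the kernel size should be, assuming the intended semantics: GlobalAveragePool averages over the full spatial extent, so the pooling kernel must equal the input's spatial dims rather than a placeholder like -1. Eager PyTorch is used here purely for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 7, 5)

# GlobalAveragePool is a mean over the spatial dimensions...
ref = x.mean(dim=(2, 3), keepdim=True)

# ...which avg_pool2d reproduces only when the kernel covers the whole feature map.
pooled = F.avg_pool2d(x, kernel_size=(x.shape[2], x.shape[3]))
print(torch.allclose(ref, pooled))   # True
```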
@mgehre-amd (Collaborator)

Are you going to disable the torch-nightly tests in CI here, or do something else about the failing CI?

@mgehre-amd (Collaborator)

The next available version is already 20240408, so this would only be temporary.

@josel-amd force-pushed the corentin.merge_532d297c branch 10 times, most recently from 7a89a27 to 5518ef6 on July 29, 2024 16:19
@josel-amd force-pushed the corentin.merge_532d297c branch from 5518ef6 to 3882a03 on July 29, 2024 16:20
AtenInstanceNormModule_basic among others is XPASS
@josel-amd force-pushed the corentin.merge_532d297c branch from 3882a03 to 71f1f9e on July 29, 2024 16:22
josel-amd and others added 5 commits July 29, 2024 17:22
Set PyTorch and TorchVision version to nightly release 2024-04-01.

Signed-Off By: Vivek Khandelwal <[email protected]>
Set PyTorch and TorchVision version to nightly release 2024-04-08.

Signed-Off By: Vivek Khandelwal <[email protected]>
@josel-amd marked this pull request as ready for review July 30, 2024 09:59
@josel-amd requested a review from TinaAMD July 30, 2024 09:59
@TinaAMD (Collaborator) left a comment:

LGTM

@josel-amd merged commit 730c850 into feature/backport_ea1_ops on Jul 30, 2024
3 checks passed
@josel-amd deleted the corentin.merge_532d297c branch on July 30, 2024 10:27