
[AutoBump] Merge with 0913b967 (Nov 22) (120) #510

Merged 27 commits into feature/backport_ea1_ops from bump_to_0913b967 on Feb 6, 2025

Conversation

mgehre-amd (Collaborator)

No description provided.

yyp0 and others added 14 commits November 18, 2024 10:31
…shape calculation of `upsample_nearest2d`) (llvm#3764)

As per title. See also
[PR](llvm#3750) for
`torch.aten.mul.float_int`.

---------

Co-authored-by: zjgarvey <[email protected]>
…lvm#3887)

Addresses a bug when trying to materialize a non-fp64 attribute as a constant
float op in the scalarize-shapes pass.
Fixes nod-ai/SHARK-ModelDev#888

If stash_type differs from input_dtype/result_dtype (see the sketch below):
1. Convert `x` to stash_type.
2. Compute mean and var in stash_type, since `x` is already in stash_type.
3. Convert back to result_dtype before the stage-two calculation.
4. Convert the mean and var outputs if mean_dtype/var_dtype differ from
stash_type.

e2e test added in nod-ai/SHARK-TestSuite#399
TODO: support multiple batches and classes
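
A minimal PyTorch sketch of the dtype handling above, assuming the op follows
ONNX LayerNormalization `stash_type` semantics; the helper name is made up and
this is not the actual lowering code:

```python
import torch

def layer_norm_with_stash_type(x, weight, bias, eps=1e-5,
                               stash_type=torch.float32):
    # Illustrative sketch of the four steps above, not the lowering code.
    result_dtype = x.dtype
    # 1. Convert x to stash_type.
    x_s = x.to(stash_type)
    # 2. Compute mean and var in stash_type.
    mean = x_s.mean(dim=-1, keepdim=True)
    var = ((x_s - mean) ** 2).mean(dim=-1, keepdim=True)
    normalized = (x_s - mean) * torch.rsqrt(var + eps)
    # 3. Convert back to result_dtype before the stage-two (scale/bias) step.
    y = normalized.to(result_dtype) * weight + bias
    # 4. Keep the mean/inv-std outputs in stash_type (convert if they differ).
    inv_std_dev = torch.rsqrt(var + eps)
    return y, mean, inv_std_dev

# e.g. an fp16 input with fp32 stash_type:
y, m, s = layer_norm_with_stash_type(torch.randn(2, 8, dtype=torch.float16),
                                     torch.ones(8, dtype=torch.float16),
                                     torch.zeros(8, dtype=torch.float16))
```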
…rsion to Arith. (llvm#3750)

A folder is required to simplify the shape calculation of
`torch.aten.__interpolate.size_list_scale_list` (see the sketch after this
commit message):

https://github.com/llvm/torch-mlir/blob/5eab669c4ab0c3aab3dab5b95d0172ab0a8395b8/lib/Dialect/Torch/Transforms/AbstractInterpLibrary.cpp#L6900-L6907

(I've re-run `build_tools/update_abstract_interp_lib.sh`)

---------

Co-authored-by: zjgarvey <[email protected]>
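
A hedged plain-Python illustration of the pattern (not the generated shape
function linked above): the output size of an interpolate-style op mixes float
and int scalar arithmetic, and only once the `float * int` multiply and the
float-to-int cast can fold does the whole expression collapse to a constant
during shape simplification.

```python
def interpolate_output_size(input_size: int, scale: float) -> int:
    # Illustrative only: mirrors the mixed float/int arithmetic in the shape
    # function; folding the multiply and the cast is what lets this become a
    # constant when both operands are known.
    return int(float(input_size) * scale)

assert interpolate_output_size(16, 2.0) == 32
```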
- Add `AtenFftRfftOp` to the Torch dialect.
- Add conversion of `AtenFftRfftOp` to Linalg, using one `linalg.matmul`
per output component (real and imaginary); computing the DFT this way is
_O(n^2)_ (see the sketch after this list).
- Add decomposition of `AtenFftRfftOp` into Torch-level ops (same
paradigm as above).
- Add unit and end-to-end tests.
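
As a sanity check on the per-component matmul scheme, here is a hedged O(n^2)
reference in PyTorch; the helper name is made up and this is not the Linalg
lowering itself:

```python
import math
import torch

def rfft_via_matmul(x: torch.Tensor) -> torch.Tensor:
    # Naive O(n^2) DFT: one real-valued matmul per output component.
    n = x.shape[-1]
    k = torch.arange(n // 2 + 1, dtype=x.dtype)
    j = torch.arange(n, dtype=x.dtype)
    angle = 2.0 * math.pi * torch.outer(j, k) / n   # shape (n, n//2 + 1)
    real = x @ torch.cos(angle)        # matmul for the real component
    imag = x @ (-torch.sin(angle))     # matmul for the imaginary component
    return torch.complex(real, imag)

x = torch.randn(4, 16)
assert torch.allclose(rfft_via_matmul(x), torch.fft.rfft(x), atol=1e-4)
```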
Fixes `onnx.Pow(float,int)` and `Pow(int,float)` accuracy. Torch uses
`double` internally to compute pow when one argument is an integer and the
other is floating point (due to C++ promotion rules).

This PR keeps `onnx.Pow(int,int)` as is, which still produces numeric
mismatches for values that overflow: torch uses a pure-integer
implementation, whereas torch-mlir currently maps it to `Pow(float,float)`.
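
A minimal numeric sketch of the gap (illustrative only, not the lowering):
the same pow evaluated in double, as torch does for mixed int/float operands
per the description above, versus in float32.

```python
import torch

base, exponent = 10, 7.3  # Pow(int, float)

# Promote the int operand and compute in double (what torch does here).
ref = torch.tensor(base, dtype=torch.float64) ** exponent

# What a lowering gets if it computes the same pow in float32 instead.
f32 = torch.tensor(base, dtype=torch.float32) ** exponent

print(ref.item())                              # ≈ 19952623.1497
print(f32.double().item())                     # rounded through float32
print(abs(ref.item() - f32.double().item()))   # small but nonzero mismatch
```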
…isc (llvm#3886)

- Add Torch to TOSA lowering for the following ops:
  + torch.aten.upsample_nearest2d
  + torch.aten.upsample_nearest2d.vec
  + torch.aten.outer
  + torch.prims.split_dim
- Add Tanh approximation mode for GELU lowering (see the sketch after this
commit message)
- Add support for different types in compare ops
- Add support for different input and output types in the linalg vector norm
lowering
- Update xfail with new e2e results
- Add new LIT tests to basic.mlir


Change-Id: I7b1d44d94319cf94fcc9d234cc07708ef9ce321e

Signed-off-by: Justin Ngo <[email protected]>
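
For reference, the Tanh approximation mentioned above is the standard
`approximate="tanh"` GELU formula; a quick PyTorch sketch (not the TOSA
lowering) showing how it relates to the exact, erf-based GELU:

```python
import math
import torch

def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
    # Standard tanh-approximation formula for GELU; reference only.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi)
                                       * (x + 0.044715 * x.pow(3))))

x = torch.randn(8)
# Matches PyTorch's approximate="tanh" mode up to float32 rounding...
assert torch.allclose(gelu_tanh(x),
                      torch.nn.functional.gelu(x, approximate="tanh"),
                      atol=1e-5)
# ...but only approximates the exact erf-based GELU.
print((gelu_tanh(x) - torch.nn.functional.gelu(x)).abs().max())
```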
Base automatically changed from bump_to_95f77817 to bump_to_b6f04fa3 February 4, 2025 14:10
Base automatically changed from bump_to_b6f04fa3 to feature/backport_ea1_ops February 4, 2025 14:27
mgehre-amd requested a review from jorickert February 5, 2025 08:10
mgehre-amd enabled auto-merge February 5, 2025 08:10
[AutoBump] Merge with fixes of 8711d3e (Dec 02) (126)
mgehre-amd requested a review from cferry-AMD February 5, 2025 20:49
mgehre-amd merged commit e60e026 into feature/backport_ea1_ops Feb 6, 2025
4 checks passed
mgehre-amd deleted the bump_to_0913b967 branch February 6, 2025 07:13
Labels: None yet
Projects: None yet

7 participants