[AutoBump] Merge with 05929f91 (May 27) (46) #279

Merged

mgehre-amd merged 9 commits into bump_to_297c2709 from bump_to_05929f91 on Sep 9, 2024

Conversation

mgehre-amd (Collaborator)

No description provided.

angelz913 and others added 9 commits May 22, 2024 17:19
This PR fixes the bugs for `Torch::AtenOneHotOp` by:

1) Using `Torch::kUnknownSize` as the default value for `numClasses` in
   the pattern-matching stage of `DecomposeAtenOneHotOp` (see the sketch
   after this description)
2) Adding `AtenIntScalarOp` to the patterns in `TorchToArith`
3) Handling both `int` and `float` types for the `off` and `on` values in
   the `TorchOnnxToTorch` conversion

It also includes:

1) A new test in `TorchToArith/basic.mlir`, for `torch.aten.Int.Scalar`,
and
2) A new test in `decompose-complex-ops.mlir`, for `torch.aten.one_hot`

**Dependencies**

This PR depends on llvm#3334.
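
Below is a minimal sketch of the idea in item 1, assuming torch-mlir's `matchPattern`/`m_TorchConstantInt` helpers and the `Torch::kUnknownSize` constant; the `getNumClasses()` accessor and the surrounding pattern boilerplate are assumptions, not the actual patch:

```cpp
// Hypothetical fragment from a DecomposeAtenOneHotOp-style rewrite (not the
// actual patch): default numClasses to Torch::kUnknownSize instead of
// rejecting the op when num_classes is not a compile-time constant.
int64_t numClasses = Torch::kUnknownSize;
if (!matchPattern(op.getNumClasses(), m_TorchConstantInt(&numClasses))) {
  // numClasses stays kUnknownSize; the decomposition can still build a
  // result type with an unknown dimension rather than failing the match.
}
```
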
`std::accumulate` needs a 64-bit init value to perform 64-bit arithmetic on a list of integers.

Signed-off-by: Gaurav Shukla <[email protected]>
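
For reference, a self-contained illustration of the pitfall (the container name and values are made up): `std::accumulate` deduces its accumulator type from the init value, so a plain `int` init silently narrows each partial result to 32 bits even when the elements are `int64_t`:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  std::vector<int64_t> dims = {1 << 20, 1 << 20, 16};

  // Init value `1` is an int, so the accumulator is int: each partial
  // product is narrowed back to 32 bits and the result is wrong.
  int64_t wrong = std::accumulate(dims.begin(), dims.end(), 1,
                                  std::multiplies<int64_t>());

  // A 64-bit init value keeps the whole accumulation in int64_t.
  int64_t right = std::accumulate(dims.begin(), dims.end(),
                                  static_cast<int64_t>(1),
                                  std::multiplies<int64_t>());

  std::cout << "int init:     " << wrong << "\n";  // 0 on typical platforms
  std::cout << "int64_t init: " << right << "\n";  // 17592186044416
}
```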

* Implement detailed lowering template patterns
  `ConvertAtenReduceAllDimsOp` and `ConvertAtenReduceKeepDimOp` (see the
  skeleton sketch below)
* Support `aten.amin`'s lowering
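
A minimal skeleton, assuming upstream MLIR's `OpConversionPattern` API, of how a single templated pattern can be shared by several reduce-op conversions; the body is a placeholder, not the actual `ConvertAtenReduceAllDimsOp`/`ConvertAtenReduceKeepDimOp` implementation:

```cpp
#include "mlir/Transforms/DialectConversion.h"

namespace {
// Hypothetical skeleton: one pattern templated over the aten op it converts,
// so the same matchAndRewrite scaffolding is reused across reduction ops.
template <typename AtenOpT>
class ConvertAtenReduceOp : public mlir::OpConversionPattern<AtenOpT> {
public:
  using mlir::OpConversionPattern<AtenOpT>::OpConversionPattern;
  using OpAdaptor = typename AtenOpT::Adaptor;

  mlir::LogicalResult
  matchAndRewrite(AtenOpT op, OpAdaptor adaptor,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    // A real specialization would read the reduction dims and the keepdim
    // flag from the op, then emit the target-dialect reduction here.
    return rewriter.notifyMatchFailure(op, "skeleton only");
  }
};
} // namespace
```
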
* Make `python3 e2e_testing/main.py -v` print intermediate IR
Base automatically changed from bump_to_f4bfe3f9 to bump_to_297c2709 September 9, 2024 14:43
An error occurred while trying to automatically change base from bump_to_f4bfe3f9 to bump_to_297c2709 September 9, 2024 14:43
@mgehre-amd mgehre-amd merged commit 5d648ed into bump_to_297c2709 Sep 9, 2024
4 checks passed
@mgehre-amd mgehre-amd deleted the bump_to_05929f91 branch September 9, 2024 14:43
6 participants