Relax restrictions when inserting ttnn.to_layout and ttnn.{to,from}_device ops #322

Open · wants to merge 65 commits into main

Commits (65)
650152e
Relax restrictions for inserting `ttnn.to_layout` and `ttnn.{to,from}…
kevinwuTT Oct 16, 2024
7a1e9d6
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 16, 2024
db1c3c9
Restrict reshape for tensors with rank > 4
kevinwuTT Oct 17, 2024
fcc42ff
Fix restrictions to reshape again
kevinwuTT Oct 17, 2024
a6873b8
Consider reshape to 1-D
kevinwuTT Oct 17, 2024
b236038
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 18, 2024
7567f38
Keep util helper function names consistent
kevinwuTT Oct 21, 2024
9c99eba
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 22, 2024
61e3074
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 22, 2024
3f722c1
Fix reporting of device related ttnn ops
kevinwuTT Oct 22, 2024
fb16e96
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 22, 2024
723dc62
Remove a reshape input variant that has a workaround
kevinwuTT Oct 22, 2024
3f2bfa2
Remove check for torch.fx.Node since PsuedoNode is being used for tt_…
kevinwuTT Oct 23, 2024
86619a0
Revise some restrictions and interactions
kevinwuTT Oct 23, 2024
4d6f5cc
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 24, 2024
0c3de8e
Revise reshape restrictions
kevinwuTT Oct 24, 2024
6bc8976
Fix slice
kevinwuTT Oct 31, 2024
e99cb00
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Oct 31, 2024
f7676e8
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 4, 2024
b3f0a2b
Add some inputs to blacklist for view
kevinwuTT Nov 5, 2024
6455d2c
Fix data movement with different layouts and host/device requirements…
jerrysky3 Nov 1, 2024
0aa9b3c
Remove the blocklist related to issue #358
swimdi Nov 7, 2024
67ec52f
Move aten.add.Tensor restricted from to_tt_guard_autogen to to_tt_pass
swimdi Nov 7, 2024
2b6d028
Add tests/pattern/test_vilt.py
swimdi Nov 7, 2024
fe4e4bf
Add lowering to ttnn.div and other guards
kevinwuTT Nov 7, 2024
31aac9d
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 7, 2024
4dac2c9
Merge branch 'stage1_swimdi_add' into kw/layout_device_ops
kevinwuTT Nov 7, 2024
6a6c7be
Fix data movement with different layouts and host/device requirements…
jerrysky3 Nov 1, 2024
0444cd0
Remove the blocklist related to issue #358
swimdi Nov 7, 2024
4910029
Move aten.add.Tensor restricted from to_tt_guard_autogen to to_tt_pass
swimdi Nov 7, 2024
c7c22dd
Add tests/pattern/test_vilt.py
swimdi Nov 7, 2024
39c812b
Not calculate_accuracy speecht5-tts in confest
swimdi Nov 8, 2024
3e6463c
Add tests/pattern/test_retinanet_pattern.py
swimdi Nov 8, 2024
1b68c5a
Relax more restrictions
kevinwuTT Nov 8, 2024
d21f1df
Merge branch 'stage1_swimdi_add' of github.com:tenstorrent/pytorch2.0…
kevinwuTT Nov 8, 2024
6f7878c
Merge branch 'stage1_swimdi_add' into kw/layout_device_ops
kevinwuTT Nov 8, 2024
5003605
Force TILE_LAYOUT and force fallback for lowerings that do not suppor…
kevinwuTT Nov 9, 2024
00c9b67
Unqueeze reshape does not support 5D
kevinwuTT Nov 11, 2024
6979151
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 11, 2024
449f170
Fix unsqueeze for 4D inputs
kevinwuTT Nov 11, 2024
fe33687
Clean up
kevinwuTT Nov 12, 2024
e03cd9b
Remove remaining reshape restrictions
kevinwuTT Nov 12, 2024
f8bc7db
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 12, 2024
6b3c2dc
Add blacklist for xglm and reshape to 1D
kevinwuTT Nov 12, 2024
5d30722
not rm autogen/all/conv
swimdi Nov 13, 2024
1a8786a
remove original_input_varations of to_torch and aten._to_copy because…
swimdi Nov 13, 2024
bca86b2
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 13, 2024
2836c83
Merge remote-tracking branch 'origin/fix-docs' into kw/layout_device_ops
kevinwuTT Nov 13, 2024
e9cd2a9
Optimize to_copy
kevinwuTT Nov 13, 2024
8a8fe5e
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 14, 2024
f2dfdd7
Rework after node layout change to mirror before node. Handle issues …
kevinwuTT Nov 15, 2024
b70b754
Include reshape when finding user of current target
kevinwuTT Nov 15, 2024
a455ffd
Use output_size for aten slice
kevinwuTT Nov 15, 2024
df6ce56
Handle input aliasing for get_attr nodes
kevinwuTT Nov 19, 2024
ab485dc
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 19, 2024
7cac473
Lower to fallback if input > 4D for aten.select
kevinwuTT Nov 19, 2024
62db956
Skip testing avg pool for now
kevinwuTT Nov 19, 2024
de003cc
Insert clone nodes after each get_attr instead of end
kevinwuTT Nov 19, 2024
7f98bd6
get_attr can non FakeTensor types. Only add clone to these
kevinwuTT Nov 19, 2024
d2862f5
Convert fallback unsafe_view to reshape
kevinwuTT Nov 20, 2024
c7bcb51
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 21, 2024
30e7f49
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 21, 2024
b397236
Disable reshaping or unsqueezing for outputs > 4D for now
kevinwuTT Nov 21, 2024
f3303b6
Last cleanup
kevinwuTT Nov 23, 2024
10f381a
Merge branch 'main' into kw/layout_device_ops
kevinwuTT Nov 23, 2024
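For context on what the title means by layout and device ops: in this backend, tensors cross the torch/ttnn boundary via ttnn.from_torch and ttnn.to_torch, hop on and off the device via ttnn.to_device and ttnn.from_device, and are converted to TILE_LAYOUT via ttnn.to_layout before most compute ops. The sketch below hand-writes that wrapping around a single ttnn.add to illustrate the ops whose insertion rules this PR relaxes; the function is illustrative only, not code from this PR.

# Illustrative sketch, not the compiler pass: the kind of data movement
# the pass inserts around a lowered op.
import torch
import ttnn

def run_lowered_add(device, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Host torch tensors -> ttnn host tensors.
    a_tt = ttnn.from_torch(a, dtype=ttnn.bfloat16)
    b_tt = ttnn.from_torch(b, dtype=ttnn.bfloat16)
    # Most ttnn compute ops expect TILE_LAYOUT rather than ROW_MAJOR.
    a_tt = ttnn.to_layout(a_tt, ttnn.TILE_LAYOUT)
    b_tt = ttnn.to_layout(b_tt, ttnn.TILE_LAYOUT)
    # Move operands onto the device, run the op, then come back to host.
    out_tt = ttnn.add(ttnn.to_device(a_tt, device), ttnn.to_device(b_tt, device))
    return ttnn.to_torch(ttnn.from_device(out_tt))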
7 changes: 1 addition & 6 deletions tests/lowering/creation/test_clone.py
@@ -38,9 +38,6 @@ def test_clone_from_arg(device, input_shapes):
result_after = m.forward(*inputs)
option._out_fx_graphs[0].print_tabular()

# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
assert [node.target for node in nodes].count(torch_ttnn.target_wrappers.clone) == 1
# Check inference result
assert torch.allclose(result_before, result_after)

@@ -63,8 +60,6 @@ def test_clone_from_node(device, input_shapes):
# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(torch_ttnn.target_wrappers.clone) == 1
clone_arg_0 = nodes[target.index(torch_ttnn.target_wrappers.clone)].args[0].target
assert isinstance(clone_arg_0, ttnn.decorators.FastOperation) or isinstance(clone_arg_0, ttnn.decorators.Operation)
assert target.count("call_function") == 0
# Check inference result
assert torch.allclose(result_before, result_after)
44 changes: 28 additions & 16 deletions tests/lowering/creation/test_to_copy.py
@@ -18,8 +18,8 @@ class ToCopyWithOpAfterModule(torch.nn.Module):
def __init__(self):
super().__init__()

def forward(self, x):
to = x.to(torch.bfloat16)
def forward(self, x, dtype):
to = x.to(dtype)
return torch.add(to, to)


@@ -52,23 +52,29 @@ def test_to_copy(device, input_shapes):
# If there is a ttnn.from_torch that follows aten._to_copy and is casting to bfloat16, then convert.
@pytest.mark.parametrize(
"input_shapes",
[[(4, 4)]],
[(4, 4)],
)
def test_to_copy_with_op_after(device, input_shapes):
@pytest.mark.parametrize("dtype", ((torch.bfloat16), (torch.int64)))
def test_to_copy_with_op_after(device, input_shapes, dtype):
m = ToCopyWithOpAfterModule()
inputs = [torch.rand(shape) for shape in input_shapes]
result_before = m.forward(*inputs)
inputs = torch.rand(input_shapes)
result_before = m.forward(inputs, dtype)
option = torch_ttnn.TorchTtnnOption(device=device)
option.gen_graphviz = True
# The compilation is lazy, so we need to run forward once to trigger the compilation
m = torch.compile(m, backend=torch_ttnn.backend, options=option)
result_after = m.forward(*inputs)
result_after = m.forward(inputs, dtype)
option._out_fx_graphs[0].print_tabular()

# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(torch.ops.aten._to_copy.default) == 0
# try_add_data_move_out: ttnn.to_torch will be followed by a to_copy
if dtype == torch.bfloat16:
count = 0
else:
count = 2
assert target.count(torch.ops.aten._to_copy.default) == count
assert target.count(ttnn.add) == 1
# Check inference result
assert torch.allclose(result_before, result_after, rtol=0.2)
@@ -78,9 +84,9 @@ class ToCopyViewModule(torch.nn.Module):
def __init__(self):
super().__init__()

def forward(self, x, y, target_shape):
def forward(self, x, y, target_shape, dtype):
view = torch.ops.aten.view.default(x, target_shape)
_to_copy = torch.ops.aten._to_copy.default(view, dtype=torch.bfloat16)
_to_copy = torch.ops.aten._to_copy.default(view, dtype=dtype)
abs = torch.abs(y)
return torch.add(_to_copy, abs)

@@ -92,9 +98,9 @@ class ToCopyExpand(torch.nn.Module):
def __init__(self):
super().__init__()

def forward(self, x, y, target_shape):
def forward(self, x, y, target_shape, dtype):
expand = torch.ops.aten.expand.default(x, target_shape)
_to_copy = torch.ops.aten._to_copy.default(expand, dtype=torch.bfloat16)
_to_copy = torch.ops.aten._to_copy.default(expand, dtype=dtype)
abs = torch.abs(y)
return torch.add(_to_copy, abs)

@@ -109,23 +115,29 @@ def input_shapes(self):
(ToCopyExpand(), torch_ttnn.target_wrappers.repeat),
],
)
def test_reshape_test1(device, module, ttnn_op):
@pytest.mark.parametrize("dtype", ((torch.bfloat16), (torch.int64)))
def test_reshape_test1(device, module, ttnn_op, dtype):
m = module
input_shape1, input_shape2, target_shape = m.input_shapes()
x = torch.rand(input_shape1, dtype=torch.bfloat16)
y = torch.rand(input_shape2, dtype=torch.bfloat16)
result_before = m.forward(x, y, target_shape)
result_before = m.forward(x, y, target_shape, dtype)
option = torch_ttnn.TorchTtnnOption(device=device)
option.gen_graphviz = True
# The compilation is lazy, so we need to run forward once to trigger the compilation
m = torch.compile(m, backend=torch_ttnn.backend, options=option)
result_after = m.forward(x, y, target_shape)
result_after = m.forward(x, y, target_shape, dtype)
option._out_fx_graphs[0].print_tabular()

# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(torch.ops.aten._to_copy.default) == 0
# try_add_data_move_out: ttnn.to_torch will be followed by a to_copy
if dtype == torch.bfloat16:
count = 0
else:
count = 2
assert target.count(torch.ops.aten._to_copy.default) == count
assert [node.target for node in nodes].count(ttnn_op) == 1
# Check inference result
assert_with_pcc(result_before, result_after, 0.99)
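The dtype parametrization added above encodes a boundary rule: a cast to bfloat16 folds into the ttnn graph, while other dtypes leave aten._to_copy nodes at the edges. A tiny sketch mirroring the tests' expected counts; the two-node reading (one cast surviving on the way in, one appended after ttnn.to_torch by try_add_data_move_out) is my interpretation of the test comments:

import torch

def expected_to_copy_count(dtype: torch.dtype) -> int:
    # bfloat16 casts are absorbed by ttnn.from_torch; for other dtypes one
    # aten._to_copy survives on the way in and try_add_data_move_out adds
    # another after ttnn.to_torch on the way out.
    return 0 if dtype == torch.bfloat16 else 2

assert expected_to_copy_count(torch.bfloat16) == 0
assert expected_to_copy_count(torch.int64) == 2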
37 changes: 16 additions & 21 deletions tests/lowering/eltwise/binary/test_div.py
@@ -13,19 +13,21 @@ def forward(self, numerator, denominator):
return torch.div(numerator, denominator)


# ttnn.div does not support broadcasting some combination of shapes. Fallback to reciprocal and multiply.
@pytest.mark.parametrize(
"input_shapes",
"input_shapes, use_ttnn_div",
(
((32, 32), (32, 32)),
((64,), (32, 64)),
((64, 32), (64, 1)),
(((32, 32), (32, 32)), True),
(((64,), (32, 64)), False),
(((64, 32), (64, 1)), False),
pytest.param(
((64, 1), (1, 64)),
False,
marks=pytest.mark.xfail(reason="broadcasting issues (#64)"),
),
),
)
def test_div(device, input_shapes):
def test_div(device, input_shapes, use_ttnn_div):
m = DivModule()
inputs = [torch.randint(1, 15, shape).to(torch.bfloat16) for shape in input_shapes]
result_before = m.forward(*inputs)
@@ -39,18 +41,21 @@ def test_div(device, input_shapes):
# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(ttnn.reciprocal) == 1
assert target.count(ttnn.mul) == 1
assert target.index(ttnn.reciprocal) < target.index(ttnn.mul)
assert nodes[target.index(ttnn.mul)].args[1].target == ttnn.reciprocal
if use_ttnn_div:
assert target.count(ttnn.div) == 1
else:
assert target.count(ttnn.reciprocal) == 1
assert target.count(ttnn.mul) == 1
assert target.index(ttnn.reciprocal) < target.index(ttnn.mul)
assert nodes[target.index(ttnn.mul)].args[1].target == ttnn.reciprocal

# Check inference result
assert_with_pcc(result_before, result_after)


@pytest.mark.parametrize(
"input_shapes",
[[(4, 4)], [(32, 32)]],
[[(4, 4)], [(32, 32)], [(1, 197, 1024)]],
)
def test_div_scalar_denom(device, input_shapes):
m = DivModule()
@@ -66,15 +71,5 @@ def test_div_scalar_denom(device, input_shapes):
# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(ttnn.full) == 1
assert target.count(ttnn.reciprocal) == 1
assert target.count(ttnn.mul) == 1
assert target.index(ttnn.full) < target.index(ttnn.reciprocal)
assert target.index(ttnn.reciprocal) < target.index(ttnn.mul)
assert nodes[target.index(ttnn.mul)].args[1].target == ttnn.reciprocal
# Intermediate node meta check if preserved
for node in nodes:
if node.target == ttnn.full or node.target == ttnn.reciprocal:
assert node.meta["val"].size() == input_shapes[0]
# Check inference result
assert target.count(ttnn.div) == 1
assert_with_pcc(result_before, result_after)
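The reworked assertions distinguish two lowerings: shape pairs that ttnn.div can broadcast lower directly to ttnn.div, and the rest fall back to reciprocal-then-multiply. A torch-only sketch of the fallback algebra the graph checks look for (tolerance chosen loosely for bfloat16 rounding):

import torch

def div_via_reciprocal(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Mirrors the lowered graph order: reciprocal first, then multiply,
    # which is why the test asserts reciprocal's index precedes mul's.
    return a * torch.reciprocal(b)

a = torch.randint(1, 15, (64, 32)).to(torch.bfloat16)
b = torch.randint(1, 15, (64, 1)).to(torch.bfloat16)
assert torch.allclose(div_via_reciprocal(a, b), a / b, rtol=0.05)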
10 changes: 3 additions & 7 deletions tests/lowering/eltwise/binary/test_sub.py
@@ -118,12 +118,8 @@ def test_rsub_scalar(device, input_shapes):
# Check the graph has been rewritten and contains ttnn ops
nodes = list(option._out_fx_graphs[0].nodes)
target = [node.target for node in nodes]
assert target.count(ttnn.full) == 1
assert target.count(ttnn.sub) == 1
assert target.index(ttnn.full) < target.index(ttnn.sub)
# Intermediate node meta check if preserved
for node in nodes:
if node.target == ttnn.full:
assert node.meta["val"].size() == input_shapes[0]
assert target.count(ttnn.neg) == 1
assert target.count(ttnn.add) == 1
assert target.index(ttnn.neg) < target.index(ttnn.add)
# Check inference result
assert_with_pcc(result_before, result_after, 0.998)
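The rsub test now expects neg followed by add instead of materializing a constant with ttnn.full and subtracting. A minimal torch sketch of the identity behind the rewrite (scalar chosen arbitrarily):

import torch

def rsub_scalar_via_neg_add(x: torch.Tensor, scalar: float) -> torch.Tensor:
    # scalar - x == (-x) + scalar, so no full-sized constant tensor is needed;
    # the test asserts neg appears before add in the rewritten graph.
    return torch.neg(x) + scalar

x = torch.rand(4, 4)
assert torch.allclose(rsub_scalar_via_neg_add(x, 5.0), torch.rsub(x, 5.0))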
1 change: 1 addition & 0 deletions tests/lowering/eltwise/unary/test_remainder.py
@@ -12,6 +12,7 @@ def forward(self, input, mod):
return input % mod


@pytest.mark.skip_platform("grayskull")
@pytest.mark.parametrize(
"input_shape, mod, converted",
[
1 change: 1 addition & 0 deletions tests/lowering/pool/test_avg_pool_2d.py
@@ -14,6 +14,7 @@ def forward(self, input):
return torch._adaptive_avg_pool2d(input, (1, 1))


@pytest.mark.skip()
@pytest.mark.parametrize(
"input_shapes",
[(1, 2048, 7, 7)],
4 changes: 4 additions & 0 deletions tests/lowering/tensor_manipulation/test_slice.py
@@ -103,6 +103,10 @@ def forward(self, input, dim, start, end):
((1, 4251, 192), 0, 0, END_MAX),
((1, 4251, 192), 1, -100, END_MAX),
((1, 4251, 192), 1, 1, -100),
# Hardnet (train)
((1, 782, 7, 7), 1, 0, 160),
# Clip
((1, 77), 1, 0, 7),
),
)
def test_aten_slice(device, input_shape, dim, start, end, module):
12 changes: 6 additions & 6 deletions tests/lowering/tensor_manipulation/test_squeeze.py
@@ -17,8 +17,8 @@ def forward(self, input, dim):
[
((1, 32, 16), 0),
((1, 256, 1), -1),
((33, 44, 1, 32, 16), 1),
((33, 44, 1, 32, 16), 2),
pytest.param((33, 44, 1, 32, 16), 1, marks=pytest.mark.xfail(reason="Cannot reshape from 5D to 4D.")),
pytest.param((33, 44, 1, 32, 16), 2, marks=pytest.mark.xfail(reason="Cannot reshape from 5D to 4D.")),
],
)
def test_squeeze_dim(device, input_shape, dim):
@@ -53,10 +53,10 @@ def forward(self, input):
@pytest.mark.parametrize(
"input_shape",
[
((64, 1, 32, 16, 1, 32, 32)),
((1, 1, 55, 23, 44, 32, 32)),
((22, 1, 55, 23, 44, 32, 1)),
((1, 1, 55, 1, 1, 1, 1)),
pytest.param((64, 1, 32, 16, 1, 32, 32), marks=pytest.mark.xfail(reason="Does not support TILE_LAYOUT.")),
pytest.param((1, 1, 55, 23, 44, 32, 32), marks=pytest.mark.xfail(reason="Does not support TILE_LAYOUT.")),
pytest.param((22, 1, 55, 23, 44, 32, 1), marks=pytest.mark.xfail(reason="Does not support TILE_LAYOUT.")),
pytest.param((1, 1, 55, 1, 1, 1, 1), marks=pytest.mark.xfail(reason="Does not support TILE_LAYOUT.")),
],
)
def test_squeeze_none_dim(device, input_shape):
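The new squeeze xfails follow from rank limits: each marked case squeezes a rank-5 (or rank-7) input, and the stated reasons point at reshapes beyond 4D and missing TILE_LAYOUT support. A torch-only look at the shapes involved, under my reading of the xfail reasons:

import torch

x = torch.rand(33, 44, 1, 32, 16)   # rank-5 input from the xfailed params
y = torch.squeeze(x, 2)             # drops the size-1 dim
assert y.shape == (33, 44, 32, 16)  # rank-4 result, reached via a 5D reshape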
1 change: 1 addition & 0 deletions tests/lowering/tensor_manipulation/test_transpose.py
@@ -20,6 +20,7 @@ def forward(self, x, dim0, dim1):
# If not, this runtime error will be thrown:
# RuntimeError: TT_FATAL @ ../tt_metal/impl/buffers/buffer.cpp:41: page_size % sizeof(uint32_t) == 0
((5, 3, 2), 0, 2),
((1, 4150, 192), 1, 2),
((5, 3, 1), 0, 2),
((5, 3, 1), 1, 2),
((5, 3, 1), 0, 1),
21 changes: 19 additions & 2 deletions tests/lowering/tensor_manipulation/test_unsqueeze.py
@@ -17,7 +17,18 @@ def forward(self, x, y):

@pytest.mark.parametrize(
"input_shape, dim",
[((5, 2, 4, 3), 1)],
[
pytest.param(
(5, 2, 4, 3),
1,
marks=pytest.mark.xfail(reason="Fails if output is > 4D, using TILE_LAYOUT, and W dim is >= 32."),
),
pytest.param(
(50, 1, 3, 1024),
0,
marks=pytest.mark.xfail(reason="Fails if output is > 4D, using TILE_LAYOUT, and W dim is >= 32."),
),
],
)
def test_unsqueeze1(device, input_shape, dim):
mod = UnsqueezeModule()
@@ -64,7 +75,13 @@ def test_unsqueeze2(device, input_shape, dim):

@pytest.mark.parametrize(
"input_shape, dim",
[((5, 2, 4, 3), -2)],
[
pytest.param(
(5, 2, 4, 3),
-2,
marks=pytest.mark.xfail(reason="Fails if output is > 4D, using TILE_LAYOUT, and W dim is >= 32."),
)
],
)
def test_unsqueeze3(device, input_shape, dim):
mod = UnsqueezeModule()
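Likewise for unsqueeze: every xfailed case above produces a rank-5 output, which the xfail reason ties to TILE_LAYOUT and a wide last dimension. A quick shape check under that reading:

import torch

for shape, dim in [((5, 2, 4, 3), 1), ((50, 1, 3, 1024), 0), ((5, 2, 4, 3), -2)]:
    out = torch.unsqueeze(torch.empty(shape), dim)
    assert out.dim() == 5  # all three land past the 4D limit the reason describes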
26 changes: 26 additions & 0 deletions tests/lowering/tensor_manipulation/test_view.py
@@ -80,6 +80,32 @@ def forward(self, x, new_shape):
((256, 4096), (1, 256, 4096)),
((1, 32, 16, 96), (1, 32, 1536)),
((1, 192, 4150), (1, 192, 50, 83)),
((1, 100, 192), (100, 192)),
((1, 1445, 192), (1, 1445, 3, 64)),
((1, 1445, 192), (1445, 192)),
((1, 1445, 3, 64), (1, 1445, 192)),
((1, 1445, 768), (1445, 768)),
((1, 192, 32, 42), (1, 192, 1344)),
((1, 3, 1445, 1445), (3, 1445, 1445)),
((1, 3, 1445, 64), (3, 1445, 64)),
((1, 3, 64, 1445), (3, 64, 1445)),
((100, 192), (1, 100, 192)),
((100, 4), (1, 100, 4)),
((100, 92), (1, 100, 92)),
((1445, 192), (1, 1445, 192)),
((1445, 768), (1, 1445, 768)),
((192), (1, 192, 1, 1)),
((1), (1, 1, 1, 1)),
((3, 1445, 1445), (1, 3, 1445, 1445)),
((3, 1445, 64), (1, 3, 1445, 64)),
((32), (1, 1, 32, 1)),
((42), (1, 1, 1, 42)),
pytest.param(
(1, 10),
(10,),
marks=pytest.mark.xfail(reason="Does not support TILE_LAYOUT."),
),
],
)
def test_reshape(device, input_shape, new_shape, module):
4 changes: 0 additions & 4 deletions tools/collect_metrics.py
@@ -221,8 +221,6 @@ def _join_br(str_list: list):
opname = op["opname"]
inputs = _join_br(op["inputs"])
self[opname][inputs]
if opname == "aten.cat.default":
print(op)
# If exist, map converted ops to the original op
if compiled_schema_metrics:
# Hold ops that require revisiting the original dict to determine the status
@@ -232,8 +230,6 @@ def _join_br(str_list: list):
opname = op["opname"]
original_opname = op["original_inputs"]["opname"]
original_inputs = _join_br(op["original_inputs"]["inputs"])
if opname == "aten.cat.default":
print(op)
# NOTE(kevinwuTT): Some ttnn ops are wrapped, so they have no `ttnn` prefix. Should this be more strict?
if opname != original_opname:
# Some aten ops are converted to other aten ops