Uplift third_party/tt-mlir to 2025-03-01 #373

Open · wants to merge 1 commit into base: main

Conversation

vmilosevic (Contributor)

This PR uplifts the third_party/tt-mlir submodule to the latest version.

@vmilosevic vmilosevic enabled auto-merge (squash) February 26, 2025 01:30

TT-Torch Tests (pytest): 435 ran, 426 passed ☑️, 7 skipped ⚠️, 2 failed ❌

Failed:
test_conv2d.test_conv2d[dtype1-1-64-3-256-256-7-7-2-2-3] ❌ failure
test_conv2d.test_conv2d[dtype1-1-128-64-128-128-2-2-2-2-0] ❌ failure

codecov-commenter commented Feb 26, 2025

❌ 2 Tests Failed:

Tests completed: 428 | Failed: 2 | Passed: 426 | Skipped: 7
Full list of 2 ❄️ flaky tests:
tests.torch.test_conv2d::test_conv2d[dtype1-1-128-64-128-128-2-2-2-2-0]

Flake rate in main: 42.86% (Passed 4 times, Failed 3 times)

Stack Traces | 0.459s run time
batch_size = 1, output_channels = 128, input_channels = 64, input_height = 128
input_width = 128, filter_height = 2, filter_width = 2, stride_h = 2
stride_w = 2, padding = 0, dtype = torch.float32

    @pytest.mark.parametrize(
        "batch_size, output_channels, input_channels, input_height, input_width, filter_height, filter_width, stride_h, stride_w, padding",
        ((1, 64, 3, 256, 256, 7, 7, 2, 2, 3), (1, 128, 64, 128, 128, 2, 2, 2, 2, 0)),
    )
    @pytest.mark.parametrize("dtype", [torch.bfloat16, torch.float32])
    def test_conv2d(
        batch_size,
        output_channels,
        input_channels,
        input_height,
        input_width,
        filter_height,
        filter_width,
        stride_h,
        stride_w,
        padding,
        dtype,
    ):
        class Basic(nn.Module):
            def __init__(self):
                super().__init__()
                self.conv2d = nn.Conv2d(
                    input_channels,
                    output_channels,
                    kernel_size=(filter_height, filter_width),
                    stride=(stride_h, stride_w),
                    padding=padding,
                    bias=False,
                    dtype=dtype,
                )
    
            def forward(self, x):
                return self.conv2d(x)
    
>       verify_module(
            Basic(),
            input_shapes=[(batch_size, input_channels, input_height, input_width)],
            input_data_types=[dtype],
            required_atol=10,
        )

tests/torch/test_conv2d.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tt_torch/tools/verify.py:252: in verify_module
    _verify_torch_module(
tt_torch/tools/verify.py:146: in _verify_torch_module
    ret = tt_mod(*inputs)
.../venv/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
.../venv/lib/python3.11.../torch/_dynamo/eval_frame.py:465: in _fn
    return fn(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
tests/torch/test_conv2d.py:44: in forward
    def forward(self, x):
.../venv/lib/python3.11.../torch/_dynamo/eval_frame.py:632: in _fn
    return fn(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tt_torch.dynamo.backend.Executor object at 0x7fb440b68610>
inputs = (Parameter containing:
tensor([[[[-0.0005,  0.0335],
          [-0.0514, -0.0460]],

         [[-0.0241,  0.0168],
   ...  0.1049,  ...,  0.4944,  0.0473, -0.0291],
          [-0.1079, -0.3563, -0.4745,  ..., -0.4814, -0.4681,  0.4106]]]]))
new_inputs = (Parameter containing:
tensor([[[[-0.0005,  0.0335],
          [-0.0514, -0.0460]],

         [[-0.0241,  0.0168],
   ...  0.1049,  ...,  0.4944,  0.0473, -0.0291],
          [-0.1079, -0.3563, -0.4745,  ..., -0.4814, -0.4681,  0.4106]]]]))
input = tensor([[[[-0.1674,  0.2599, -0.3615,  ...,  0.2775, -0.2959,  0.0303],
          [-0.2474,  0.1794,  0.4380,  ..., -0...,  0.1049,  ...,  0.4944,  0.0473, -0.0291],
          [-0.1079, -0.3563, -0.4745,  ..., -0.4814, -0.4681,  0.4106]]]])
input_type = torch.float32

    def __call__(self, *inputs):
        new_inputs = ()
        for input in inputs:
            # Handle scalar inputs.
            if not hasattr(input, "dtype"):
                assert (
                    type(input) is not bool
                ), "Conversion for scalar boolean is not supported."
                new_inputs = new_inputs + ((input),)
                continue
    
            # Apply type conversion if required.
            input_type = input.dtype
            if input_type in self.type_conversion.keys():
                new_inputs = new_inputs + (
                    (input.to(dtype=self.type_conversion[input_type])),
                )
                continue
    
            # No conversion required.
            new_inputs = new_inputs + ((input),)
    
        inputs = new_inputs
    
        if self.compiler_config.compile_depth == CompileDepth.EXECUTE:
            assert self.binary is not None, "Binary must be set for EXECUTE mode"
>           return tt_mlir.run(inputs + self.graph_constants, self.binary)
E           RuntimeError: std::get: wrong index for variant

tt_torch/dynamo/backend.py:558: RuntimeError
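For reference, the failing parametrization from this trace can be exercised in plain PyTorch. This is a minimal CPU-only sketch (no tt_mlir backend, no verify_module harness), so it demonstrates the failing configuration only, not the RuntimeError raised inside tt_mlir.run:

```python
# Minimal plain-PyTorch repro of the flaky parametrization
# (dtype1-1-128-64-128-128-2-2-2-2-0), run on CPU without the
# tt_mlir backend or the verify_module harness.
import torch
import torch.nn as nn

conv = nn.Conv2d(
    in_channels=64,
    out_channels=128,
    kernel_size=(2, 2),
    stride=(2, 2),
    padding=0,
    bias=False,
    dtype=torch.float32,
)
x = torch.randn(1, 64, 128, 128, dtype=torch.float32)
y = conv(x)
print(tuple(y.shape))  # (1, 128, 64, 64)
```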
tests.torch.test_conv2d::test_conv2d[dtype1-1-64-3-256-256-7-7-2-2-3]

Flake rate in main: 42.86% (Passed 4 times, Failed 3 times)

Stack Traces | 1.01s run time
batch_size = 1, output_channels = 64, input_channels = 3, input_height = 256
input_width = 256, filter_height = 7, filter_width = 7, stride_h = 2
stride_w = 2, padding = 3, dtype = torch.float32

    @pytest.mark.parametrize(
        "batch_size, output_channels, input_channels, input_height, input_width, filter_height, filter_width, stride_h, stride_w, padding",
        ((1, 64, 3, 256, 256, 7, 7, 2, 2, 3), (1, 128, 64, 128, 128, 2, 2, 2, 2, 0)),
    )
    @pytest.mark.parametrize("dtype", [torch.bfloat16, torch.float32])
    def test_conv2d(
        batch_size,
        output_channels,
        input_channels,
        input_height,
        input_width,
        filter_height,
        filter_width,
        stride_h,
        stride_w,
        padding,
        dtype,
    ):
        class Basic(nn.Module):
            def __init__(self):
                super().__init__()
                self.conv2d = nn.Conv2d(
                    input_channels,
                    output_channels,
                    kernel_size=(filter_height, filter_width),
                    stride=(stride_h, stride_w),
                    padding=padding,
                    bias=False,
                    dtype=dtype,
                )
    
            def forward(self, x):
                return self.conv2d(x)
    
>       verify_module(
            Basic(),
            input_shapes=[(batch_size, input_channels, input_height, input_width)],
            input_data_types=[dtype],
            required_atol=10,
        )

tests/torch/test_conv2d.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
tt_torch/tools/verify.py:252: in verify_module
    _verify_torch_module(
tt_torch/tools/verify.py:146: in _verify_torch_module
    ret = tt_mod(*inputs)
.../venv/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
.../venv/lib/python3.11.../torch/_dynamo/eval_frame.py:465: in _fn
    return fn(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
.../venv/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
tests/torch/test_conv2d.py:44: in forward
    def forward(self, x):
.../venv/lib/python3.11.../torch/_dynamo/eval_frame.py:632: in _fn
    return fn(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tt_torch.dynamo.backend.Executor object at 0x7fb443568250>
inputs = (Parameter containing:
tensor([[[[-0.0006,  0.0442, -0.0679,  ..., -0.0318,  0.0221, -0.0016],
          [ 0.0654, -0.... -0.4588,  ..., -0.3514, -0.4965,  0.3036],
          [ 0.3726, -0.2234,  0.2158,  ..., -0.2494, -0.0489, -0.4841]]]]))
new_inputs = (Parameter containing:
tensor([[[[-0.0006,  0.0442, -0.0679,  ..., -0.0318,  0.0221, -0.0016],
          [ 0.0654, -0.... -0.4588,  ..., -0.3514, -0.4965,  0.3036],
          [ 0.3726, -0.2234,  0.2158,  ..., -0.2494, -0.0489, -0.4841]]]]))
input = tensor([[[[-0.3725, -0.3886,  0.4439,  ..., -0.4821,  0.4407, -0.2102],
          [-0.0409, -0.3014, -0.1411,  ..., -0..., -0.4588,  ..., -0.3514, -0.4965,  0.3036],
          [ 0.3726, -0.2234,  0.2158,  ..., -0.2494, -0.0489, -0.4841]]]])
input_type = torch.float32

    def __call__(self, *inputs):
        new_inputs = ()
        for input in inputs:
            # Handle scalar inputs.
            if not hasattr(input, "dtype"):
                assert (
                    type(input) is not bool
                ), "Conversion for scalar boolean is not supported."
                new_inputs = new_inputs + ((input),)
                continue
    
            # Apply type conversion if required.
            input_type = input.dtype
            if input_type in self.type_conversion.keys():
                new_inputs = new_inputs + (
                    (input.to(dtype=self.type_conversion[input_type])),
                )
                continue
    
            # No conversion required.
            new_inputs = new_inputs + ((input),)
    
        inputs = new_inputs
    
        if self.compiler_config.compile_depth == CompileDepth.EXECUTE:
            assert self.binary is not None, "Binary must be set for EXECUTE mode"
>           return tt_mlir.run(inputs + self.graph_constants, self.binary)
E           RuntimeError: std::get: wrong index for variant

tt_torch/dynamo/backend.py:558: RuntimeError
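The tracebacks above show Executor.__call__ remapping input dtypes via a type_conversion table before dispatching to tt_mlir.run. A standalone sketch of that pattern follows; the float64 → float32 mapping is an illustrative assumption, since the real table lives on the Executor instance (self.type_conversion):

```python
# Standalone sketch of the input dtype-conversion pattern visible in the
# Executor.__call__ frames of the tracebacks above. The float64 -> float32
# mapping used below is an assumption for illustration only.
import torch

def convert_inputs(inputs, type_conversion):
    new_inputs = []
    for inp in inputs:
        # Scalars (no .dtype attribute) pass through unchanged.
        if not hasattr(inp, "dtype"):
            assert not isinstance(inp, bool), \
                "Conversion for scalar boolean is not supported."
            new_inputs.append(inp)
        # Remap the tensor dtype when the table asks for it.
        elif inp.dtype in type_conversion:
            new_inputs.append(inp.to(dtype=type_conversion[inp.dtype]))
        # Otherwise keep the tensor as-is.
        else:
            new_inputs.append(inp)
    return tuple(new_inputs)

converted = convert_inputs(
    (torch.zeros(2, 2, dtype=torch.float64), 3),
    {torch.float64: torch.float32},
)
print(converted[0].dtype)  # torch.float32
```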


@vmilosevic vmilosevic changed the title Uplift third_party/tt-mlir to 2025-02-26 Uplift third_party/tt-mlir to 2025-02-27 Feb 27, 2025

TT-Torch Tests (pytest): 435 ran, 426 passed ☑️, 7 skipped ⚠️, 2 failed ❌

Failed:
test_conv2d.test_conv2d[dtype1-1-64-3-256-256-7-7-2-2-3] ❌ failure
test_conv2d.test_conv2d[dtype1-1-128-64-128-128-2-2-2-2-0] ❌ failure

@vmilosevic vmilosevic changed the title Uplift third_party/tt-mlir to 2025-02-27 Uplift third_party/tt-mlir to 2025-02-28 Feb 28, 2025

TT-Torch Tests: 0 ran, 0 passed, 0 skipped, 0 failed ❌
No test annotations available

@vmilosevic vmilosevic changed the title Uplift third_party/tt-mlir to 2025-02-28 Uplift third_party/tt-mlir to 2025-03-01 Mar 1, 2025

github-actions bot commented Mar 1, 2025

TT-Torch Tests: 0 ran, 0 passed, 0 skipped, 0 failed ❌
No test annotations available

Labels: none yet
4 participants