
Error when compiling Bloom.py #753

Open
amalbasaTT opened this issue Feb 12, 2025 · 1 comment
Labels: bug (Something isn't working)

@amalbasaTT commented:
Compiling Bloom causes a FATAL error:

File "/home/ubuntu/tt-metal/python_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.16", line 57, in forward
ttnn_expand = ttnn_decorators_ttnn_expand(ttnn_to_device_4, [1, 1, 32, 32]); ttnn_to_device_4 = None
File "/home/ubuntu/tt-metal/ttnn/ttnn/decorators.py", line 333, in call
return self.function(*function_args, **function_kwargs)
RuntimeError: TT_FATAL @ /home/ubuntu/tt-metal/ttnn/cpp/ttnn/operations/data_movement/tilize/device/tilize_op.cpp:26: input_tensor_a.get_dtype() == DataType::BFLOAT16 or input_tensor_a.get_dtype() == DataType::FLOAT32
info:
data type must be bfloat16 or float32
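The assertion fires because the graph feeds a uint32 tensor into an op that tilizes its input, and tilize only accepts bfloat16 or float32. A minimal Python sketch of that guard (the function name is illustrative; the real check lives in C++ in tilize_op.cpp):

```python
# Hypothetical sketch of the dtype guard that raises the TT_FATAL above.
# Only the two floating-point dtypes accepted by tilize pass the check.
def check_tilize_dtype(dtype: str) -> None:
    if dtype not in ("bfloat16", "float32"):
        raise RuntimeError("data type must be bfloat16 or float32")

check_tilize_dtype("bfloat16")  # accepted
try:
    # The Bloom graph reaches this point with a uint32 tensor.
    check_tilize_dtype("uint32")
except RuntimeError as e:
    print(e)
```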

To reproduce, run:

pytest /home/ubuntu/pytorch2.0_ttnn/tests/models/bloom/test_bloom.py --gen_op_accuracy_tests
@amalbasaTT amalbasaTT added the bug Something isn't working label Feb 12, 2025
@kevinwuTT kevinwuTT self-assigned this Feb 21, 2025
@kevinwuTT (Contributor) commented Feb 27, 2025:

Here are the minimized versions that reproduce the same error.

import torch
import ttnn

arg295_1 = torch.randint(0, 10, (32, 1), dtype=torch.int64)

with ttnn.manage_device(device_id=0) as device:
    ttnn_from_torch_4 = ttnn.from_torch(arg295_1, layout=ttnn.ROW_MAJOR_LAYOUT, dtype=ttnn.uint32)
    ttnn_reshape_6 = ttnn.reshape(ttnn_from_torch_4, (1, 1, 32))
    ttnn_from_device_9 = ttnn.from_device(ttnn_reshape_6)
    ttnn_to_layout_9 = ttnn.to_layout(ttnn_from_device_9, ttnn.ROW_MAJOR_LAYOUT)
    ttnn_reshape_7 = ttnn.reshape(ttnn_to_layout_9, (1, 1, 1, 32))
    ttnn_from_device_10 = ttnn.from_device(ttnn_reshape_7)
    ttnn_to_layout_10 = ttnn.to_layout(ttnn_from_device_10, ttnn.TILE_LAYOUT)
    ttnn_to_device_4 = ttnn.to_device(ttnn_to_layout_10, device=device)
    # Fails here: expand on the uint32 tensor hits the tilize bfloat16/float32 TT_FATAL
    ttnn_expand = ttnn.expand(ttnn_to_device_4, [1, 1, 32, 32])

Minimized even further:

import torch
import ttnn

arg = torch.randint(0, 10, (1, 1, 1, 32), dtype=torch.int64)

with ttnn.manage_device(device_id=0) as device:
    ttnn_from_torch_4 = ttnn.from_torch(arg, layout=ttnn.TILE_LAYOUT, dtype=ttnn.uint32, device=device)
    # Fails with the same tilize dtype TT_FATAL
    ttnn_expand = ttnn.expand(ttnn_from_torch_4, [1, 1, 32, 32])

I think it comes down to whether ttnn.expand still supports integer tensors. If it no longer does, we will fix this on the pytorch2.0 side.
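If the fix lands on the compiler side, one plausible workaround is to cast integer tensors to a supported float dtype around the expand and cast back afterwards. A torch-only sketch of that idea (the helper name and approach are illustrative assumptions, not the actual pytorch2.0_ttnn fix; small integer values round-trip exactly through bfloat16):

```python
import torch

def expand_int_via_bfloat16(t: torch.Tensor, shape) -> torch.Tensor:
    # Workaround sketch: tilize requires bfloat16/float32, so cast the
    # integer tensor to bfloat16, expand, then cast back to the original
    # dtype. Values below 2**8 are exactly representable in bfloat16, so
    # small token ids survive the round trip unchanged.
    return t.to(torch.bfloat16).expand(shape).to(t.dtype)

# Same input as the minimized repro above.
arg = torch.randint(0, 10, (1, 1, 1, 32), dtype=torch.int64)
out = expand_int_via_bfloat16(arg, (1, 1, 32, 32))
assert out.shape == (1, 1, 32, 32)
assert torch.equal(out, arg.expand(1, 1, 32, 32))
```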
