
#0: untilize tensor on device #12001

Closed
sraizada-tt wants to merge 1 commit from the ttnn-to-torch-untilize branch

Conversation

sraizada-tt (Contributor)

Ticket

Link to Github Issue

Problem description

ttnn.to_torch currently untilizes tiled tensors on host after moving them off device, which is slower than running untilize on device.

What's changed

ttnn.to_torch now untilizes the tensor on device (converting it to row-major layout before transferring to host).
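For context, a minimal sketch of the idea using the public ttnn Python API. This is illustrative only, not the actual diff; it assumes ttnn.to_layout can be applied to a device-resident tiled tensor so that the untilize op is dispatched on device:

```python
import torch
import ttnn

def untilized_to_torch(tensor: "ttnn.Tensor") -> torch.Tensor:
    # If the tensor is still in TILE layout, convert it to ROW_MAJOR first.
    # For a device-resident tensor this runs the untilize op on device, so the
    # host receives already row-major data instead of untilizing after the
    # transfer.
    if tensor.layout != ttnn.ROW_MAJOR_LAYOUT:
        tensor = ttnn.to_layout(tensor, ttnn.ROW_MAJOR_LAYOUT)
    return ttnn.to_torch(tensor)
```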

Checklist

  • Post commit CI passes
  • Blackhole Post commit (if applicable)
  • Model regression CI testing passes (if applicable)
  • New/Existing tests provide coverage for changes

@bbradelTT (Contributor)

It would be good to run at least 1 T3000 and 1 TG pipeline.

@sraizada-tt (Contributor, Author) commented Aug 28, 2024

> It would be good to run at least 1 T3000 and 1 TG pipeline.

TG Llama fails, issue open: #11509 @tarafdarTT

@bbradelTT (Contributor)

> It would be good to run at least 1 T3000 and 1 TG pipeline.
>
> TG Llama fails, issue open: #11509

Could you run either https://github.com/tenstorrent/tt-metal/actions/workflows/tg-nightly-tests.yaml or https://github.com/tenstorrent/tt-metal/actions/workflows/tg-unit-tests.yaml ?

@@ -309,12 +309,12 @@ def to_torch(
tensor([[-0.3008, -0.8438, 0.3242],
[ 0.9023, -0.5820, 0.5312]], dtype=torch.bfloat16)
"""
if tensor.layout != ttnn.ROW_MAJOR_LAYOUT:
Contributor
We convert to row-major on-device before moving the tensor off device because that activates untilize on device by default?

Contributor
Are there differences in the output between untilize on host vs. on device, or are they negligible?

Contributor
If there are differences, we might want to consider adding a toggleable option for switching between host and device untilize.

Contributor Author
There is a perf hit for untilizing on host vs. device. Adding this as a toggleable option is a good idea.
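A hedged sketch of what such a toggle could look like; the untilize_on_device flag is hypothetical and not an existing ttnn.to_torch parameter:

```python
import torch
import ttnn

def to_torch_with_toggle(tensor: "ttnn.Tensor", *, untilize_on_device: bool = True) -> torch.Tensor:
    # untilize_on_device is an illustrative name, not part of the current API.
    if untilize_on_device and tensor.layout != ttnn.ROW_MAJOR_LAYOUT:
        # Convert TILE -> ROW_MAJOR while the tensor is still on device.
        tensor = ttnn.to_layout(tensor, ttnn.ROW_MAJOR_LAYOUT)
    # When untilize_on_device is False, ttnn.to_torch untilizes on host as before.
    return ttnn.to_torch(tensor)
```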

@svuckovicTT

Hey folks, just wanted to check if this is still happening?

@github-actions github-actions bot closed this Mar 4, 2025
@github-actions github-actions bot deleted the ttnn-to-torch-untilize branch March 4, 2025 12:12