Adding memory config to a reshape op #2275

Open · sdjordjevicTT wants to merge 2 commits into main from sdjordjevic/adding_mem_config_to_reshape_op

Conversation

sdjordjevicTT (Contributor)
Ticket

The following GH issue contains all the ops that require memory_config:
#1637

Reshape op is one of the ops on the list.

Problem description

TTNN Reshape op requires a memory config attribute.

What's changed

I added the memory config trait to the reshape op and implemented memory config support end-to-end.

Checklist

  • New/Existing tests provide coverage for changes

  let summary = "Reshape op.";
  let description = [{
    Reshape tensor.
  }];

  let arguments = (ins AnyRankedTensor:$input,
-                      I32ArrayAttr:$shape);
+                      I32ArrayAttr:$shape,
+                      OptionalAttr<TTNN_MemoryConfigAttr>:$memory_config);
Contributor:

Can you update the TTNNToEmitC conversion as well?

sdjordjevicTT (Author):

Done. I did this for the concat op as well. :) Please take one more look at the EmitC conversion changes.

sdjordjevicTT force-pushed the sdjordjevic/adding_mem_config_to_reshape_op branch from d0e549a to f4a0e16 (February 28, 2025 13:10)
sdjordjevicTT (Author) commented Feb 28, 2025:

@odjuricicTT @azecevicTT @jnie-TT FYI
I changed the runtime implementations to use:

  // Build the memory config from the output tensor ref recorded in the
  // flatbuffer (std::nullopt if none is available).
  std::optional<::ttnn::MemoryConfig> memoryConfig =
      ::tt::runtime::ttnn::utils::createMemoryConfigIfNeeded(
          ::tt::runtime::ttnn::utils::getTensorRefMemoryConfig(op->out()));

I created the following issue to plan the transition to passing the memory config all the way down to the runtime:
#2330
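
For context, a rough sketch of what createMemoryConfigIfNeeded is assumed to do (illustrative only; the parameter type and the createMemoryConfig helper are hypothetical, not the actual tt-mlir code):

  // Hypothetical sketch: build a ::ttnn::MemoryConfig only when the
  // flatbuffer actually carries one; otherwise return std::nullopt so the
  // TTNN op falls back to its default behavior.
  std::optional<::ttnn::MemoryConfig>
  createMemoryConfigIfNeeded(const ::tt::target::ttnn::MemoryConfig *memcfg) {
    if (!memcfg) {
      return std::nullopt;
    }
    return createMemoryConfig(memcfg); // hypothetical conversion helper
  }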

azecevicTT (Contributor) left a comment:

Thanks for creating an issue!

sdjordjevicTT force-pushed the sdjordjevic/adding_mem_config_to_reshape_op branch 2 times, most recently from da3b8ed to 2737273 (February 28, 2025 16:06)
@@ -20,7 +20,7 @@ void run(const ::tt::target::ttnn::ConcatOp *op, ProgramContext &context) {
   int32_t dim = op->dim();
   std::optional<::ttnn::MemoryConfig> memoryConfig =
       ::tt::runtime::ttnn::utils::createMemoryConfigIfNeeded(
-          op->memory_config());
+          ::tt::runtime::ttnn::utils::getTensorRefMemoryConfig(op->out()));
jnie-TT (Contributor) commented Feb 28, 2025:

Wondering why we're getting the memory config from the output tensor here. If the op has a memory_config parameter, we should prioritize that.

sdjordjevicTT (Author):

It is mostly related to my earlier comment above, where I changed the runtime implementations to derive the memory config from the output tensor ref (via getTensorRefMemoryConfig(op->out())) and created issue #2330 to plan the transition to passing the memory config down to the runtime.

Currently, the optimizer is the only component in the compiler that modifies the output layout of ops. However, in its current form, it does not generate memory configurations. To accommodate this, I opted for a temporary solution. I've also opened an internal issue to discuss the best long-term approach. The ideal goal is for each op to pass its configuration directly to the runtime, so that the runtime doesn't have to assume anything.
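
For illustration, the ideal end state would look like the pre-change concat code, except that the compiler would always populate the attribute (a sketch under that assumption):

  // Ideal end state (sketch): every op serializes its memory config, and the
  // runtime consumes op->memory_config() directly, with no fallback to the
  // output tensor ref.
  std::optional<::ttnn::MemoryConfig> memoryConfig =
      ::tt::runtime::ttnn::utils::createMemoryConfigIfNeeded(
          op->memory_config());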

jnie-TT (Contributor) commented Feb 28, 2025:

@sdjordjevicTT what if the op didn't go through the optimizer and has a memory_config parameter? I don't think we should ignore that either.

Also, if the op going into the optimizer had a memory_config parameter, will the optimizer remove that parameter? If not, there will be a mismatch between the memory_config parameter of the op and the overridden memory config of the output tensor. If it does, maybe we should update the optimizer to update the memory_config parameter of the op instead of removing it.

I feel like this should be a bigger, more dedicated effort than simply overriding the prioritization for the concat op only, as it can cause confusion and break consistency in runtime assumptions.

sdjordjevicTT (Author):

Oh, I see your point, thanks for the clarification. I agree that this needs to be handled on a bigger scale, which is why I created the above-mentioned issue to tackle it systematically.

To unblock @odjuricicTT and the optimizer, for now I will change the reshape and concat ops in the following way: if the memory config attr is present, use it; if it isn't, create one from the output layout (sketched below).

Later we will handle this at full scale. Do we agree? :)
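
A minimal sketch of that fallback, reusing the helper calls from the snippets above (illustrative; the final code may differ):

  // Prefer the op's explicit memory_config attribute when it is present in
  // the flatbuffer; otherwise derive the config from the output tensor ref,
  // which the optimizer may have rewritten.
  std::optional<::ttnn::MemoryConfig> memoryConfig;
  if (op->memory_config()) {
    memoryConfig = ::tt::runtime::ttnn::utils::createMemoryConfigIfNeeded(
        op->memory_config());
  } else {
    memoryConfig = ::tt::runtime::ttnn::utils::createMemoryConfigIfNeeded(
        ::tt::runtime::ttnn::utils::getTensorRefMemoryConfig(op->out()));
  }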

sdjordjevicTT (Author):

I updated the runtime code per the above conclusions. @jnie-TT, please take a look.

sdjordjevicTT force-pushed the sdjordjevic/adding_mem_config_to_reshape_op branch from 2737273 to d7343e1 (March 1, 2025 01:30)