#0: Use tt::stl::Span
sminakov-tt committed Oct 27, 2024
1 parent 929c76a commit 46187de
Showing 23 changed files with 85 additions and 85 deletions.
14 changes: 7 additions & 7 deletions best_practices.md
@@ -18,13 +18,13 @@ void write_buffer(queue_id cq_id, Tensor& dst, std::vector<std::shared_ptr<void>
void write_buffer(queue_id cq_id, Tensor& dst, const std::vector<std::shared_ptr<void>>& src, const std::optional<std::size_t>& transfer_size = std::nullopt); // Right!
```

## 2. Use `std::span` for Input Parameters
## 2. Use `tt::stl::Span` for Input Parameters

### Practice
Consider using `std::span` as input instead of `std::vector`. This allows `std::array` to be used as an argument as well.
Consider using `tt::stl::Span` as input instead of `std::vector`. This allows `std::array` to be used as an argument as well.

### Explanation
`std::span` is a lightweight view over a contiguous sequence of objects, such as arrays and vectors. It provides a safe and flexible way to handle array-like data structures without copying them.
`tt::stl::Span` is a lightweight view over a contiguous sequence of objects, such as arrays and vectors. It provides a safe and flexible way to handle array-like data structures without copying them.

### Motivation
- **Flexibility**: Enables functions to accept both `std::vector` and `std::array`.
@@ -33,7 +33,7 @@ Consider using `std::span` as input instead of `std::vector`. This allows `std::
### Example
```
template <typename T>
void print_elements(std::span<T> data) {
void print_elements(tt::stl::Span<const T> data) {
for (const auto& element : data) {
std::cout << element << " ";
}
@@ -217,7 +217,7 @@ Use the Copy-and-Swap idiom to avoid duplicating code between different construc
### Explanation
The Copy-and-Swap idiom is a robust and elegant method to implement copy assignment operators. It leverages the copy constructor and the swap method to provide strong exception safety and reduce code duplication.

### Example
### Example
https://stackoverflow.com/questions/3279543/what-is-the-copy-and-swap-idiom
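A minimal sketch of the idiom, using a hypothetical resource-owning `Buffer` class (not part of the codebase): the assignment operator takes its argument by value, which reuses the copy constructor, then swaps, giving strong exception safety with no duplicated logic.

```cpp
#include <cstddef>
#include <utility>

class Buffer {
public:
    explicit Buffer(std::size_t size) : size_(size), data_(new int[size]()) {}

    Buffer(const Buffer& other) : size_(other.size_), data_(new int[other.size_]) {
        for (std::size_t i = 0; i < size_; ++i) data_[i] = other.data_[i];
    }

    // Pass by value: the copy happens in the caller via the copy
    // constructor. If allocation throws, *this is untouched.
    Buffer& operator=(Buffer other) noexcept {
        swap(*this, other);
        return *this;
    }

    ~Buffer() { delete[] data_; }

    friend void swap(Buffer& a, Buffer& b) noexcept {
        std::swap(a.size_, b.size_);
        std::swap(a.data_, b.data_);
    }

    std::size_t size() const { return size_; }
    int& operator[](std::size_t i) { return data_[i]; }

private:
    std::size_t size_;
    int* data_;
};
```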


@@ -279,7 +279,7 @@ Prefer:
enum class ThreadingOption { SingleCore, MultiCore };
tensor = tt::tt_metal::tilize_with_val_padding(tensor, output_shape, 0, output_memory_config, dtype, ThreadingOption::MultiCore);
```
Also consider giving enums power-of-2 values to pass them all as a single argument, e.g.
Also consider giving enums power-of-2 values to pass them all as a single argument, e.g.
```cpp
Options::FOO | Options::BAR
```
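Under that scheme, a hypothetical `Options` flag enum and its combining operator might look like the following sketch; the names are illustrative, not taken from the codebase:

```cpp
#include <cstdint>

// Power-of-2 values let callers combine several options in one argument.
enum class Options : uint32_t {
    FOO = 1 << 0,
    BAR = 1 << 1,
    BAZ = 1 << 2,
};

constexpr Options operator|(Options lhs, Options rhs) {
    return static_cast<Options>(
        static_cast<uint32_t>(lhs) | static_cast<uint32_t>(rhs));
}

// Checks whether a combined value contains a given flag.
constexpr bool has_option(Options value, Options flag) {
    return (static_cast<uint32_t>(value) & static_cast<uint32_t>(flag)) != 0;
}
```

A call site then passes `Options::FOO | Options::BAR` as a single parameter instead of two booleans.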
@@ -343,7 +343,7 @@ void doSomething(...) {
Prefer:
```cpp
void doSomething(...) {
if (!contractCheck)
if (!contractCheck)
return;
// Do a lot of things
2 changes: 1 addition & 1 deletion ttnn/cpp/pybind11/pytensor.cpp
@@ -64,7 +64,7 @@ void log_external_operation(
#endif

template <typename T>
Tensor create_owned_tensor(T* data_ptr, size_t num_elements, std::span<const uint32_t> shape, DataType data_type, Layout layout, const std::optional<Tile>& optional_tile = std::nullopt)
Tensor create_owned_tensor(T* data_ptr, size_t num_elements, tt::stl::Span<const uint32_t> shape, DataType data_type, Layout layout, const std::optional<Tile>& optional_tile = std::nullopt)
{
auto data = std::vector(data_ptr, data_ptr + num_elements);
auto buffer = owned_buffer::create(std::move(data));
2 changes: 1 addition & 1 deletion ttnn/cpp/ttnn/operations/data_movement/pad/pad.cpp
@@ -116,7 +116,7 @@ static ttnn::Tensor pad_impl(
ttnn::Tensor ExecutePad::invoke(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const std::pair<uint32_t, uint32_t>> padding,
tt::stl::Span<const std::pair<uint32_t, uint32_t>> padding,
const float value,
const bool use_multicore,
const std::optional<MemoryConfig>& memory_config_arg) {
2 changes: 1 addition & 1 deletion ttnn/cpp/ttnn/operations/data_movement/pad/pad.hpp
@@ -32,7 +32,7 @@ struct ExecutePad {
// Any rank tensor supported
static ttnn::Tensor invoke(uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const std::pair<uint32_t, uint32_t>> padding,
tt::stl::Span<const std::pair<uint32_t, uint32_t>> padding,
const float value,
const bool use_multicore,
const std::optional<MemoryConfig>& memory_config_arg);
12 changes: 6 additions & 6 deletions ttnn/cpp/ttnn/operations/data_movement/permute/permute.cpp
@@ -128,7 +128,7 @@ ttnn::Tensor permute_impl(const ttnn::Tensor &a, const SmallVector<uint32_t>& di
return output;
}

ttnn::Tensor permute_launch(const ttnn::Tensor &a, std::span<const int64_t> dims, const MemoryConfig& output_mem_config) {
ttnn::Tensor permute_launch(const ttnn::Tensor &a, tt::stl::Span<const int64_t> dims, const MemoryConfig& output_mem_config) {
std::vector<ttnn::Tensor> output_tensors = {ttnn::Tensor(operation::get_workers_for_op_output({a}))};
operation::launch_with_autoformat(
[dims, output_mem_config] (const std::vector<ttnn::Tensor>& input_tensors, const std::vector<std::optional<const ttnn::Tensor>>& optional_input_tensors, const std::vector<std::optional<ttnn::Tensor>>& optional_output_tensors) mutable -> std::vector<ttnn::Tensor> {
@@ -147,7 +147,7 @@ ttnn::Tensor permute_launch(const ttnn::Tensor &a, std::span<const int64_t> dims

Tensor composite_invoke(
const ttnn::Tensor& input_tensor,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
const std::optional<MemoryConfig>& memory_config) {

auto output_tensor = permute_launch(input_tensor, dims, memory_config.value_or(input_tensor.memory_config()));
@@ -159,7 +159,7 @@ Tensor composite_invoke(
ttnn::Tensor ExecutePermute::invoke(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
const std::optional<MemoryConfig>& memory_config,
bool composite) {

@@ -175,7 +175,7 @@ ttnn::Tensor ExecutePermute::invoke(
input_rank == dims.size(),
"The number of dimensions in the tensor input does not match the length of the desired ordering");

auto adjust_order = [](std::span<const int64_t> dims) {
auto adjust_order = [](tt::stl::Span<const int64_t> dims) {
ttnn::SmallVector<int64_t> new_order;
TT_FATAL(dims.size() <= 4, "Error");
int additional_ranks = 4 - dims.size();
@@ -218,12 +218,12 @@ ttnn::Tensor ExecutePermute::invoke(

ttnn::Tensor ExecutePermute::invoke(
const ttnn::Tensor& input_tensor,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
const std::optional<MemoryConfig>& memory_config) {
return invoke(DefaultQueueId, input_tensor, dims, memory_config);
}

ttnn::Tensor ExecutePermute::invoke(const ttnn::Tensor& input_tensor, std::span<const int64_t> dims) {
ttnn::Tensor ExecutePermute::invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int64_t> dims) {
return invoke(input_tensor, dims, std::nullopt);
}

6 changes: 3 additions & 3 deletions ttnn/cpp/ttnn/operations/data_movement/permute/permute.hpp
@@ -13,16 +13,16 @@ struct ExecutePermute {
static ttnn::Tensor invoke(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
const std::optional<MemoryConfig>& memory_config,
bool composite = true);

static ttnn::Tensor invoke(
const ttnn::Tensor& input_tensor,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
const std::optional<MemoryConfig>& memory_config);

static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, std::span<const int64_t> dims);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int64_t> dims);
};

} // namespace operations::data_movement
@@ -94,15 +94,15 @@ ttnn::Tensor ReshapeOperation::invoke(const ttnn::Tensor& input_tensor, const tt
return invoke(DefaultQueueId, input_tensor, shape, std::nullopt);
}

ttnn::Tensor ReshapeOperation::invoke(uint8_t queue_id, const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg) {
ttnn::Tensor ReshapeOperation::invoke(uint8_t queue_id, const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg) {
return invoke(queue_id, input_tensor, ttnn::Shape(infer_dims_for_reshape(input_tensor, shape_vector).view()), memory_config_arg);
}

ttnn::Tensor ReshapeOperation::invoke(const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg) {
ttnn::Tensor ReshapeOperation::invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg) {
return invoke(DefaultQueueId, input_tensor, shape_vector, memory_config_arg);
}

ttnn::Tensor ReshapeOperation::invoke(const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector) {
ttnn::Tensor ReshapeOperation::invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector) {
return invoke(input_tensor, shape_vector, std::nullopt);
}

@@ -24,9 +24,9 @@ struct ReshapeOperation {

static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, const ttnn::Shape& shape);

static ttnn::Tensor invoke(uint8_t queue_id, const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector);
static ttnn::Tensor invoke(uint8_t queue_id, const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector, const std::optional<MemoryConfig>& memory_config_arg);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector);
};


@@ -131,7 +131,7 @@ ttnn::Tensor ReshapeViewOperation::invoke(const ttnn::Tensor& tensor, const ttnn

ttnn::Tensor ReshapeViewOperation::invoke(
const ttnn::Tensor& tensor,
std::span<const int32_t> shape_vector
tt::stl::Span<const int32_t> shape_vector
) {
return invoke(tensor, tt::tt_metal::infer_dims_for_reshape(tensor, shape_vector));
}
@@ -13,7 +13,7 @@ namespace operations::data_movement {
struct ReshapeViewOperation {
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, const ttnn::Shape& shape);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, const ttnn::SimpleShape& logical_shape);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, std::span<const int32_t> shape_vector);
static ttnn::Tensor invoke(const ttnn::Tensor& input_tensor, tt::stl::Span<const int32_t> shape_vector);
};


42 changes: 21 additions & 21 deletions ttnn/cpp/ttnn/operations/data_movement/slice/slice.cpp
@@ -19,9 +19,9 @@ template<typename T>
ttnn::Tensor SliceOperation::invoke(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const T> begins,
std::span<const T> ends,
std::span<const T> step,
tt::stl::Span<const T> begins,
tt::stl::Span<const T> ends,
tt::stl::Span<const T> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor) {

@@ -181,9 +181,9 @@ ttnn::Tensor SliceOperation::invoke(
template<typename T>
ttnn::Tensor SliceOperation::invoke(
const ttnn::Tensor& input_tensor,
std::span<const T> begins,
std::span<const T> ends,
std::span<const T> step,
tt::stl::Span<const T> begins,
tt::stl::Span<const T> ends,
tt::stl::Span<const T> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor) {
return SliceOperation::invoke<T>(ttnn::DefaultQueueId, input_tensor, begins, ends, step, memory_config_arg);
@@ -306,9 +306,9 @@ ttnn::Tensor SliceOperation::invoke(
const std::array<T, N> &step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor) {
std::span<const T> start(output_tensor_start.begin(), output_tensor_start.end());
std::span<const T> end(output_tensor_end.begin(), output_tensor_end.end());
std::span<const T> step_vec(step.begin(), step.end());
tt::stl::Span<const T> start(output_tensor_start.begin(), output_tensor_start.end());
tt::stl::Span<const T> end(output_tensor_end.begin(), output_tensor_end.end());
tt::stl::Span<const T> step_vec(step.begin(), step.end());
return SliceOperation::invoke<T>(queue_id, input_tensor, start, end, step_vec, memory_config_arg);
}

@@ -326,35 +326,35 @@ ttnn::Tensor SliceOperation::invoke(
template ttnn::Tensor SliceOperation::invoke<int>(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const int> begins,
std::span<const int> ends,
std::span<const int> step,
tt::stl::Span<const int> begins,
tt::stl::Span<const int> ends,
tt::stl::Span<const int> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor);

template ttnn::Tensor SliceOperation::invoke<int>(
const ttnn::Tensor& input_tensor,
std::span<const int> begins,
std::span<const int> ends,
std::span<const int> step,
tt::stl::Span<const int> begins,
tt::stl::Span<const int> ends,
tt::stl::Span<const int> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor);


template ttnn::Tensor SliceOperation::invoke<uint32_t>(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const uint32_t> begins,
std::span<const uint32_t> ends,
std::span<const uint32_t> step,
tt::stl::Span<const uint32_t> begins,
tt::stl::Span<const uint32_t> ends,
tt::stl::Span<const uint32_t> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor);

template ttnn::Tensor SliceOperation::invoke<uint32_t>(
const ttnn::Tensor& input_tensor,
std::span<const uint32_t> begins,
std::span<const uint32_t> ends,
std::span<const uint32_t> step,
tt::stl::Span<const uint32_t> begins,
tt::stl::Span<const uint32_t> ends,
tt::stl::Span<const uint32_t> step,
const std::optional<MemoryConfig>& memory_config_arg,
const std::optional<Tensor>& optional_output_tensor);

16 changes: 8 additions & 8 deletions ttnn/cpp/ttnn/operations/data_movement/slice/slice.hpp
@@ -15,18 +15,18 @@ struct SliceOperation {
static ttnn::Tensor invoke(
uint8_t queue_id,
const ttnn::Tensor& input_tensor,
std::span<const T> begins,
std::span<const T> ends,
std::span<const T> step,
tt::stl::Span<const T> begins,
tt::stl::Span<const T> ends,
tt::stl::Span<const T> step,
const std::optional<MemoryConfig>& memory_config_arg = std::nullopt,
const std::optional<Tensor>& optional_output_tensor = std::nullopt);

template<typename T>
static ttnn::Tensor invoke(
const ttnn::Tensor& input_tensor,
std::span<const T> output_tensor_start,
std::span<const T> output_tensor_end,
std::span<const T> step,
tt::stl::Span<const T> output_tensor_start,
tt::stl::Span<const T> output_tensor_end,
tt::stl::Span<const T> step,
const std::optional<MemoryConfig>& memory_config_arg = std::nullopt,
const std::optional<Tensor>& optional_output_tensor = std::nullopt);

@@ -39,7 +39,7 @@ struct SliceOperation {
const ttnn::SmallVector<T>& step,
const std::optional<MemoryConfig>& memory_config_arg = std::nullopt,
const std::optional<Tensor>& optional_output_tensor = std::nullopt) {
return invoke(queue_id, input_tensor, std::span<const T>(begins.begin(), begins.end()), std::span<const T>(ends.begin(), ends.end()), std::span<const T>(step.begin(), step.end()), memory_config_arg, optional_output_tensor);
return invoke(queue_id, input_tensor, tt::stl::Span<const T>(begins), tt::stl::Span<const T>(ends), tt::stl::Span<const T>(step), memory_config_arg, optional_output_tensor);
}

template<typename T>
Expand All @@ -50,7 +50,7 @@ struct SliceOperation {
const ttnn::SmallVector<T>& step,
const std::optional<MemoryConfig>& memory_config_arg = std::nullopt,
const std::optional<Tensor>& optional_output_tensor = std::nullopt) {
return invoke(input_tensor, std::span<const T>(begins.begin(), begins.end()), std::span<const T>(ends.begin(), ends.end()), std::span<const T>(step.begin(), step.end()), memory_config_arg, optional_output_tensor);
return invoke(input_tensor, tt::stl::Span<const T>(begins), tt::stl::Span<const T>(ends), tt::stl::Span<const T>(step), memory_config_arg, optional_output_tensor);
}

template<typename T, std::size_t N>
@@ -97,7 +97,7 @@ operation::ProgramWithCallbacks FastReduceNCDeviceOperation::create_program(
Tensor fast_reduce_nc(
uint8_t queue_id,
const ttnn::Tensor& input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const ttnn::Tensor> output,
const MemoryConfig& output_mem_config,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config) {
@@ -29,7 +29,7 @@ struct FastReduceNCDeviceOperation {
Tensor fast_reduce_nc(
uint8_t queue_id,
const ttnn::Tensor &input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const ttnn::Tensor> output = std::nullopt,
const MemoryConfig &output_mem_config = operation::DEFAULT_OUTPUT_MEMORY_CONFIG,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config = std::nullopt);
@@ -14,7 +14,7 @@ namespace operations::experimental::reduction{
ttnn::Tensor FastReduceNCOperation::invoke(
uint8_t queue_id,
const ttnn::Tensor& input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const Tensor> output,
const ttnn::MemoryConfig memory_config,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config) {
@@ -23,7 +23,7 @@ ttnn::Tensor FastReduceNCOperation::invoke(

ttnn::Tensor FastReduceNCOperation::invoke(
const ttnn::Tensor& input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const Tensor> output,
const ttnn::MemoryConfig memory_config,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config) {
@@ -16,14 +16,14 @@ struct FastReduceNCOperation {
static ttnn::Tensor invoke(
uint8_t queue_id,
const ttnn::Tensor& input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const Tensor> output,
const ttnn::MemoryConfig memory_config,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config);

static ttnn::Tensor invoke(
const ttnn::Tensor& input,
std::span<const int32_t> dims,
tt::stl::Span<const int32_t> dims,
const std::optional<const Tensor> output,
const ttnn::MemoryConfig memory_config,
std::optional<const ttnn::DeviceComputeKernelConfig> compute_kernel_config);
@@ -120,7 +120,7 @@ std::tuple<MorehSumBackwardOperation::operation_attributes_t, MorehSumBackwardOp
MorehSumBackwardOperation::invoke(
const Tensor& output_grad,
const std::optional<Tensor>& input,
std::span<const int64_t> dims,
tt::stl::Span<const int64_t> dims,
bool keepdim,
const std::optional<Tensor>& input_grad,
const std::optional<MemoryConfig>& memory_config,