#0: Fix typos in ttnn docs #15517

Merged 1 commit on Dec 2, 2024

@@ -41,7 +41,7 @@ TT-NN library natively supports multi-device operations, enabling users to scale

- **MeshDevice**: This "virtual device" abstraction defines a logical 2-D mesh of connected physical devices. Operations that "run on device" are distributed through SPMD across all devices captured in the mesh.

- - **Input Data Distribution**: Defines how input data resident in host-memory is distributed to DeviceMesh on-device memory. When operations are distributed to MeshDevice, the operation within a single-device scope works on its local input data.
+ - **Input Data Distribution**: Defines how input data resident in host-memory is distributed to MeshDevice on-device memory. When operations are distributed to MeshDevice, the operation within a single-device scope works on its local input data.

- **Tensor**: Defines a N-dimensional matrix containing elements of a single data type. In a MeshDevice context, a Tensor, or colloquially referred to as MeshTensor, represents a collection of tensor shards distributed across devices in a 2D Mesh.
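
For context on the MeshDevice and input-data-distribution concepts covered by this hunk, here is a minimal sketch of opening a mesh and sharding a host tensor across it; `ttnn.open_mesh_device`, `ttnn.MeshShape`, and `ttnn.ShardTensorToMesh` are assumed from TT-NN's multi-device API and are not part of this diff.

```py
import torch
import ttnn

# Open a 1x2 logical mesh (assumes two connected devices are available).
mesh_device = ttnn.open_mesh_device(ttnn.MeshShape(1, 2))

# Host tensor to be distributed; sharding along dim 3 gives each device a
# (1, 1, 32, 32) shard of the (1, 1, 32, 64) input.
torch_tensor = torch.rand(1, 1, 32, 64, dtype=torch.bfloat16)
mesh_tensor = ttnn.from_torch(
    torch_tensor,
    mesh_mapper=ttnn.ShardTensorToMesh(mesh_device, dim=3),
    layout=ttnn.TILE_LAYOUT,
    device=mesh_device,
)

# When finished: release the devices backing the mesh.
ttnn.close_mesh_device(mesh_device)
```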

@@ -138,7 +138,7 @@ torch_tensor[..., 0:32] = 1.0
torch_tensor[..., 32:64] = 2.0

# Convert to ttnn.Tensor; MeshTensor holds buffers to two shards in host-memory
- mesh_tensor: ttnn.Tensor = ttnn.from_torch(
+ mesh_tensor = ttnn.from_torch(
torch_tensor,
mesh_mapper=ttnn.ShardTensorToMesh(mesh_device, dim=3),
layout=ttnn.TILE_LAYOUT,
@@ -165,7 +165,7 @@ ttnn.Tensor([[[[ 2.00000, 2.00000, ..., 2.00000, 2.00000],
Let's now transfer to device:

```py
- > mesh_tensor = ttnn.to_device(mesh_tensor, device_mesh)
+ > mesh_tensor = ttnn.to_device(mesh_tensor, mesh_device)
> mesh_tensor

device_id:0
@@ -194,7 +194,7 @@ We can also visualize this tensor distributed across our MeshDevice. The visuali
ttnn.visualize_mesh_device(mesh_device, tensor=mesh_tensor)

>
- DeviceMesh(rows=1, cols=2):
+ MeshDevice(rows=1, cols=2):
┌──────────────────────────────┬──────────────────────────────┐
│ Dev. ID: 0 │ Dev. ID: 1 │
│ (0, 0) │ (0, 1) │
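
To round out the sharding walkthrough in the hunks above, here is a minimal sketch of gathering the sharded MeshTensor back to host, plus the replication alternative for input-data distribution; `ttnn.to_torch` with `mesh_composer`, `ttnn.ConcatMeshToTensor`, and `ttnn.ReplicateTensorToMesh` are assumed from TT-NN's multi-device API and are not part of this diff.

```py
import torch
import ttnn

# Re-create the walkthrough's setup: a 1x2 mesh and a tensor sharded along dim 3.
mesh_device = ttnn.open_mesh_device(ttnn.MeshShape(1, 2))
torch_tensor = torch.zeros(1, 1, 32, 64, dtype=torch.bfloat16)
torch_tensor[..., 0:32] = 1.0
torch_tensor[..., 32:64] = 2.0
mesh_tensor = ttnn.from_torch(
    torch_tensor,
    mesh_mapper=ttnn.ShardTensorToMesh(mesh_device, dim=3),
    layout=ttnn.TILE_LAYOUT,
    device=mesh_device,
)

# Gather the per-device shards back into one host tensor, concatenating along
# the same dimension that was used for sharding.
torch_result = ttnn.to_torch(
    mesh_tensor,
    mesh_composer=ttnn.ConcatMeshToTensor(mesh_device, dim=3),
)
print(torch_result.shape)  # matches the original (1, 1, 32, 64) host tensor

# Replication is the other common input-data distribution: every device holds a
# full copy of the host tensor instead of a shard.
replicated = ttnn.from_torch(
    torch_tensor,
    mesh_mapper=ttnn.ReplicateTensorToMesh(mesh_device),
    layout=ttnn.TILE_LAYOUT,
    device=mesh_device,
)
ttnn.visualize_mesh_device(mesh_device, tensor=replicated)

ttnn.close_mesh_device(mesh_device)
```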
@@ -18,7 +18,7 @@ void bind_concat(py::module& module) {
const auto doc = R"doc(

Args:
- input_tensor (ttnn.Tensor): the input tensor.
+ input_tensor (List of ttnn.Tensor): the input tensors.
dim (number): the concatenating dimension.

Keyword Args:
@@ -32,13 +32,11 @@

Example:

- >>> tensor = ttnn.concat(ttnn.from_torch(torch.zeros((1, 1, 64, 32), ttnn.from_torch(torch.zeros((1, 1, 64, 32), dim=3)), device)

>>> tensor1 = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
>>> tensor2 = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
- >>> output = ttnn.concat([tensor1, tensor2], dim=4)
+ >>> output = ttnn.concat([tensor1, tensor2], dim=3)
>>> print(output.shape)
- [1, 1, 32, 64]
+ [1, 1, 64, 64]

)doc";

@@ -27,7 +27,7 @@ void bind_non_zero(py::module& module) {
queue_id (int, optional): command queue id. Defaults to `0`.

Returns:
- List of ttnn.Tensor: the output tensor.
+ List of ttnn.Tensor: the output tensors.

Example:

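
The non_zero docstring's own example is truncated in this hunk. As an illustration of the corrected return description only, a hypothetical usage sketch; the op name `ttnn.nonzero` and the input constraints shown here are assumptions, not taken from this diff.

```py
import torch
import ttnn

# Hypothetical usage sketch: the exact input requirements (layout, rank,
# padding) of the op are assumptions and may differ on real hardware.
device = ttnn.open_device(device_id=0)

input_tensor = ttnn.from_torch(
    torch.tensor([[[[0, 1, 0, 2, 0, 0, 3, 0]]]], dtype=torch.bfloat16),
    layout=ttnn.ROW_MAJOR_LAYOUT,
    device=device,
)

# Per the corrected docstring, the op returns a list of output tensors.
outputs = ttnn.nonzero(input_tensor)
print(len(outputs))

ttnn.close_device(device)
```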