* Use TILE_LAYOUT during data move-in
  * Insert a ttnn.to_layout(ttnn.TILE_LAYOUT) between ttnn.from_torch and ttnn.to_device when adding data move-in ops (see the first sketch after this list)
  * ttnn.reshape will skip inserting ttnn.to_layout
  * Update the tests to reflect the newly inserted function
* Fix reshape
* Add conversion from torch.relu and torch.addmm to ttnn
* Add conversion from torch.div, torch.bmm, and torch.gelu to ttnn
* Add workaround to handle input aliasing
* Add conversion from aten.rsub and aten.embedding
* Add conversion from aten.split
* Move the GraphCleanup method to a new file
* Move the Dummy string repr to a separate utils file
* Fix rsub elif
* Add torch.clone conversion to ttnn.clone
  ttnn.clone requires extra arguments compared to torch.clone: a MemoryConfig and an output dtype
  * Construct ttnn.MemoryConfig for DRAM
  * Retrieve metadata from the original torch op and translate it to the ttnn type
* Add support for kwargs
* Add conversion from torch.nn.functional.layer_norm
  torch.nn.LayerNorm does not have parameters for custom weights and bias, and it produces values that differ quite a bit from ttnn.layer_norm, so it is not supported yet. However, torch.nn.functional.layer_norm can produce values that are very close to ttnn.layer_norm, and this commit tests against the aten op lowered from that higher-level torch op.
  aten.native_layer_norm returns 3 outputs: the layer norm, the mean, and the rstd. torch.nn.functional.layer_norm only cares about the layer norm output, so this transformation currently replaces the mean and rstd with the layer norm output. This should be fixed later.
* Add conversion from torch.neg, torch.ones, and torch.tril to their ttnn counterparts
  * ttnn.ones requires passing the device object manually
  * A default device has to be set up for AutoFormat, since ttnn.tril uses it
* Use a custom class for the kwarg object instead of a generic tuple
* Add transformations for aten.{eq.Tensor, eq.Scalar, logical_not, zeros_like, mean.dim}
* Fix torch.compile options for other tests
* Move the ttnn.add and ttnn.mul transformations to ToTtPass
  This requires a patch to ttnn.decorators.Operation
* Fix test_fall_back
* Add transformations for several more ops:
  * Pow (Tensor base, scalar exponent)
  * Rsqrt
  * Silu
  * Adaptive Avg Pool
  * Clamp
  * Squeeze (dim argument)
* Fix the transformation for torch.eq and add a transformation for torch.full
  torch.eq (scalar) -> ttnn.full + ttnn.eq (tensor). Previously ttnn.eq accepted a scalar argument, but this now errors (see the second sketch after this list).
* Disable the torch-to-ttnn split test, since fallback is disabled and the op is not implemented yet
* Update the torch-to-ttnn reshape tests to match some limitations
* Implement transformations for torch.le.{Scalar,Tensor} and generalize relational ops for a cleaner implementation
* Implement the aten.baddbmm transformation to ttnn
* Add transformation from torch.cos
* Remove the conversion for split because ttnn.split is removed entirely
  See: #5389
* Add transformation for torch.sigmoid
* Cast all model input arguments to bfloat16
* Set aten.view to fallback; restrictions from ttnn.reshape still need to be handled
* Fix the layer_norm conversion to handle cases where ttnn ops follow
* Handle the case where aten.full has an empty shape
* Remove the split conversion from the to_tt pass
* Add a fallback to the squeeze conversion, since ttnn.squeeze only supports dim 0
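A minimal sketch of the data move-in pattern above, assuming the public ttnn host APIs (ttnn.from_torch, ttnn.to_layout, ttnn.to_device); the helper name is hypothetical:

```python
import torch
import ttnn

def move_in(t: torch.Tensor, device):
    # Hypothetical illustration of what the pass inserts per graph input:
    # host conversion, then tilize, then move to device. Inputs that feed
    # ttnn.reshape skip the to_layout step.
    x = ttnn.from_torch(t, dtype=ttnn.bfloat16)  # host tensor
    x = ttnn.to_layout(x, ttnn.TILE_LAYOUT)      # tilize for compute ops
    return ttnn.to_device(x, device)             # move onto the device
```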
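And a sketch of the torch.eq (scalar) decomposition, assuming ttnn.full takes a shape, a fill value, and a device; the helper name is hypothetical:

```python
import ttnn

def eq_scalar(tensor, scalar, device):
    # Hypothetical decomposition of aten.eq.Scalar: materialize the scalar
    # as a tensor with ttnn.full, then compare elementwise with ttnn.eq,
    # since ttnn.eq no longer accepts a scalar operand.
    rhs = ttnn.full(tensor.shape, fill_value=scalar, device=device)
    return ttnn.eq(tensor, rhs)
```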
* Add aten.rsub.Scalar conversion
* Match restrictions from ttnn.arange
* Add a workaround for the relational op conversion for certain input sizes
* Restrict the embedding conversion to only support TILE_LAYOUT for now
* Handle the case where the denominator is a scalar for the div op
* Add a workaround for when the model output takes the output from argmax
  aten.argmax outputs integer values, but ttnn.argmax outputs floating point (see the sketch after this list)
* Remove extraneous prints
* Add bert and falcon-7b models for testing with the torch_stat backend
  These models have "/" in their names; small fix in the torch_stat backend
* Update the AutoModelForCausalLM models and add the bigscience/bloom-1b1 model
* Add mamba, llama, gpt2, and yolos models
* Fix the e2e run with the torch_ttnn backend
* Add an option to select a model
* Remove the dependency on the tests module from tt-metal and copy the relevant test utility functions into this repo
* Update the readme to include instructions on running a transformer model with the ttnn backend
* Run the black formatter on files
* Fix formatting for run_transformers
* Remove the unneeded tt_lib module
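A sketch of the argmax workaround above, assuming the ttnn result has already been moved back into a torch tensor; the helper name is hypothetical:

```python
import torch

def fix_argmax_output(result: torch.Tensor) -> torch.Tensor:
    # ttnn.argmax produces floating-point values, while aten.argmax
    # returns int64 indices, so cast before returning the model output.
    return result.to(torch.int64)
```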