Actions: NVIDIA/TransformerEngine

Documentation

3,443 workflow runs
[PyTorch] Add caching for attention backend selection results
Documentation #5881: Pull request #1381 synchronize by pre-commit-ci bot
December 19, 2024 11:15 · 1m 4s · cyanguwa:attn_caching

[PyTorch] Add caching for attention backend selection results
Documentation #5880: Pull request #1381 opened by cyanguwa
December 19, 2024 11:15 · 1m 2s · cyanguwa:attn_caching

[JAX] Move parallel encoder tests to L0 distributed test set.
Documentation #5876: Pull request #1356 synchronize by pre-commit-ci bot
December 18, 2024 13:54 · 1m 19s · phu0ngng:jax_multi_test

[JAX] Move parallel encoder tests to L0 distributed test set.
Documentation #5875: Pull request #1356 synchronize by phu0ngng
December 18, 2024 13:54 · 1m 14s · phu0ngng:jax_multi_test

[JAX] Move parallel encoder tests to L0 distributed test set.
Documentation #5874: Pull request #1356 synchronize by phu0ngng
December 18, 2024 13:36 · 1m 23s · phu0ngng:jax_multi_test

Enabling FP8 all-gather for TE Float8Tensor when using Torch FSDP2
Documentation #5861: Pull request #1358 synchronize by youngeunkwon0405
December 16, 2024 23:33 · 1m 0s · youngeunkwon0405:fsdp2

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5860: Pull request #1369 synchronize by pre-commit-ci bot
December 16, 2024 13:13 · 1m 6s · cyanguwa:swa_padding_brcm

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5858: Pull request #1369 synchronize by pre-commit-ci bot
December 16, 2024 13:04 · Action required · cyanguwa:swa_padding_brcm