Finch v0.6.33
Merged pull requests:
- add atomic element level (#515) (@willow-ahrens)
- fix 609 (#610) (@willow-ahrens)
- SparseList Follow Protocol Speedup (#613) (@kylebd99)
- Fixed missing Finch. before @closure (#616) (@Paramuths)
- Rename links (#619) (@willow-ahrens)
- Update ⚙ in tensor_formats.md for consistency (#620) (@AbdAlazezAhmed)
- update level names (#625) (@willow-ahrens) (see the format sketch after this list)
- fix tensordot (#626) (@willow-ahrens)
- Wma/backwards fusion (#627) (@willow-ahrens)
- quick fix (#628) (@willow-ahrens)
- Wma/fix608 (#629) (@willow-ahrens)
- add benchmarks for special structures (#632) (@willow-ahrens)
- update version (#633) (@willow-ahrens)
- Introduce a de-parser to return Finch expressions to their readable macro input form (#634) (@willow-ahrens)
- Kbd small finch logic changes (#636) (@kylebd99)
- Add back tests for Galley to CI (#637) (@kylebd99)
- Version Bump to "0.6.33" (#638) (@kylebd99)
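The level-name updates in #625 (and the matching rename issue #618 below) concern the per-dimension storage levels used to declare tensor formats. As a point of reference, here is a minimal sketch, assuming the current `Dense`/`SparseList`/`Element` constructors and hypothetical random data, of declaring a matrix with explicit levels and filling it with an `@finch` kernel:

```julia
using Finch

# Minimal sketch (hypothetical data): a matrix stored as a Dense outer level
# over a SparseList inner level of Float64 Elements, using the level names
# touched by #618/#625.
A = Tensor(Dense(SparseList(Element(0.0))), fsprand(10, 10, 0.3))
B = Tensor(Dense(Dense(Element(0.0))), rand(10, 10))

# A small @finch kernel that overwrites A with the contents of B.
@finch begin
    A .= 0
    for j = _, i = _
        A[i, j] = B[i, j]
    end
end
```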
Closed issues:
- Documentation needed for separated memory level and Atomic Level (#531)
- When does Finch store fill values? (#605)
- Limit the scope of Looplet/Unfurl closures (#608)
- Copying swizzles (#609)
- Backwards Fusion Heuristic in Autoscheduler (#614)
- Matmul and reduce in lazy mode fails with `ArgumentError` (#615) (see the lazy-mode sketch below)
- Rename levels to match thesis (#618)
- Reorganize docs to center high-level interfaces (#621)
- FinchProgram => @finch invocation (#624)
- TTM @Einsum versus handwritten performance (#630)
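Issue #615 refers to the high-level lazy evaluation interface, where operations are staged and then fused at `compute` time. The following is a minimal sketch of that multiply-then-reduce pattern, assuming the `lazy`/`compute` interface and that matrix multiplication and `sum` are defined on lazy tensors; the data here is hypothetical:

```julia
using Finch

# Hypothetical operands: a sparse and a dense 100x100 matrix.
A = fsprand(100, 100, 0.1)
B = rand(100, 100)

# Stage the computation lazily: matmul followed by a full reduction.
s = sum(lazy(A) * lazy(B))

# Nothing has run yet; compute fuses and executes the whole pipeline,
# which is the path issue #615 reported as raising an ArgumentError.
result = compute(s)
```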