
Add support for Layer Normalization #1109

Open
rianbrooksflynn opened this issue Nov 4, 2024 · 3 comments

Comments

@rianbrooksflynn

I have a branch that adds support for Layer Normalization from either Keras or PyTorch models, targeting the Vivado backend in io_parallel mode, and I'd like to submit a pull request.
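For context, here is a minimal sketch of the intended conversion flow once the branch is merged; the toy model and output_dir are illustrative, and the hls4ml calls are the standard conversion API:

import tensorflow as tf
import hls4ml

# Toy model ending in a LayerNormalization layer (shapes chosen for illustration)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(16,)),
    tf.keras.layers.LayerNormalization(),
])

# Standard hls4ml conversion flow, using the configuration this issue targets:
# Vivado backend, io_parallel
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend='Vivado',
    io_type='io_parallel',
    output_dir='layernorm_test_prj',
)
hls_model.compile()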

The implementation uses a lookup table for the inverse square root; the lookup table inputs follow a logarithmic distribution for better accuracy. Tests have been added for both Keras and PyTorch parsing.
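As a rough illustration of the LUT approach (a sketch only; the table size, input range, and indexing here are placeholder assumptions, not the branch's actual parameters):

import numpy as np

TABLE_SIZE = 1024             # assumed table depth
X_MIN, X_MAX = 2**-10, 2**6   # assumed input (variance) range

# Log-spaced sample points and the precomputed table of 1/sqrt(x) values.
# Log spacing concentrates resolution near small x, where 1/sqrt(x) changes fastest.
x_points = np.logspace(np.log2(X_MIN), np.log2(X_MAX), TABLE_SIZE, base=2)
inv_sqrt_table = 1.0 / np.sqrt(x_points)

def inv_sqrt_lut(x: float) -> float:
    """Look up 1/sqrt(x) by mapping x to a log-scale table index."""
    x = min(max(x, X_MIN), X_MAX)  # clamp into the table's range
    # Index is linear in log2(x), matching the log-spaced table entries
    idx = int((np.log2(x) - np.log2(X_MIN))
              / (np.log2(X_MAX) - np.log2(X_MIN)) * (TABLE_SIZE - 1))
    return float(inv_sqrt_table[idx])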

Credit is due to @Ethan0Jiang and @LostEcho365 (Zhixing Jiang and Dennis Yin) for their Vivado implementation and Keras parsing support; my contributions were changing the inverse square root lookup table implementation, adding PyTorch parsing, and adding unit tests. (Here's a link to their pre-print.) The original code authors have given permission for their code to be merged into hls4ml.

While I haven't run this on an actual board, below are some latency / resource usage estimates from Vitis HLS 2023.2.

keras_layernorm_report.txt
pytorch_layernorm_report.txt

I believe support for transformer architectures is a widely requested feature for hls4ml, and Layer Normalization is a key step in that direction.

@rianbrooksflynn (Author)

PR up here: #1110

@The-Padi commented Jan 2, 2025

Small question, @rianbrooksflynn: when you say

input size is not currently supported by hls4ml, only dim3 is supported

does this take into account the "None" that is automatically added at the start of the list? My input shape is currently (5, 125, 96), but on line 129 of pytorch_to_hls.py there is this:

# first element needs to be 'None' as placeholder for the batch size, insert it if not present
input_shapes = [[None] + list(shape) if shape[0] is not None else list(shape) for shape in input_shapes]

So my list is now [None, 20, 125, 96] by the time it arrives at your len check.
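Here is a minimal repro of that shape handling; the len check at the end is my stand-in for your dim3 check, not the actual parser code:

input_shapes = [(5, 125, 96)]  # my model's input shape, without the batch dim

# Same comprehension as pytorch_to_hls.py line 129
input_shapes = [[None] + list(shape) if shape[0] is not None else list(shape)
                for shape in input_shapes]

print(input_shapes[0])       # [None, 5, 125, 96]
print(len(input_shapes[0]))  # 4 -- so a check for len == 3 fails here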

@rianbrooksflynn (Author)

Hi @The-Padi, thanks for your question.

Yes, the "dim3" limit on the input size counts the "None" at the start as the batch-size placeholder. Perhaps the wording is unclear; do you have any suggestions for a better error message? Maybe something like "input size is not currently supported by hls4ml, only dim3 (including 'None' first dimension) is supported".
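In code, the check might look something like this (illustrative only; the variable name is hypothetical and this is not the exact PR code):

if len(input_shape) != 3:  # length includes the leading 'None' batch placeholder
    raise Exception(
        "input size is not currently supported by hls4ml, "
        "only dim3 (including 'None' first dimension) is supported"
    )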
