rope-multi-head-attention

An implementation of a multi-head attention encoder that incorporates rotary positional encodings (RoPE) as described in Su et al. (2021, https://arxiv.org/abs/2104.09864).
In this repo I examine the usage of Rotary Positional Encodings (RoPE) as presented by Su et al. (2021). The actual "Encoder" class RoPEAttnLayer containing the RoPE is in ./src/rope_attention_layer.py. Pre-compiled attention kernels (like torch.nn.functional.scaled_dot_product_attention) cannot be used if one wants to stick to the exact configuration of the paper, because in the encoding procedure described there the rotations have to be "inserted" into the attention mechanism itself (p. 6, eq. 19). I do not want to show off any fancy performance, just the setup. Any feedback is welcome.
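The sketch below is not the repo's RoPEAttnLayer, just a minimal stand-in with assumed names (rope_rotate, rope_attention) to illustrate the idea: the position-dependent rotation is applied to the queries and keys, and the attention scores are then computed by hand rather than through a pre-compiled kernel. It uses the common split-half pairing of feature dimensions rather than the paper's interleaved pairing.

```python
import torch

def rope_rotate(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate the last dim of x (..., seq_len, d) pairwise by position-dependent angles."""
    *_, seq_len, d = x.shape
    assert d % 2 == 0
    half = d // 2
    # theta_i = base^(-2i/d), one rotation frequency per feature pair
    theta = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    angles = torch.arange(seq_len, dtype=x.dtype, device=x.device)[:, None] * theta[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # 2D rotation of each pair: (x1, x2) -> (x1*cos - x2*sin, x1*sin + x2*cos)
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def rope_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Plain (non-causal) attention with RoPE applied to queries and keys only."""
    q, k = rope_rotate(q), rope_rotate(k)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v
```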

RoPE is interesting not only because it is used, e.g., in the LLaMA model family, but also because recent work by Chen et al. (2023) shows that the context windows of RoPE-based models can be extended very efficiently, and with little loss in performance, by interpolating the embedding function.
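For illustration only (not part of this repo): position interpolation in the sense of Chen et al. (2023) essentially rescales the positions fed into the rotation, so that an extended context still maps into the position range seen during training.

```python
import torch

def interpolated_positions(seq_len_extended: int, seq_len_trained: int) -> torch.Tensor:
    """Positions m * L / L' that stay inside the trained range [0, L)."""
    scale = seq_len_trained / seq_len_extended
    return torch.arange(seq_len_extended, dtype=torch.float32) * scale
```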

I took some general guidance for data loading and pre-processing, as well as inspiration for the basic training loop, from this notebook from pytorch.org. Additionally, to parallelize the multi-head attention in a loop-free fashion, I used code from this implementation of the transformer architecture by Andrej Karpathy (see the sketch below). I implemented just a basic encoder attention layer without causal masking or similar building blocks.
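As a rough sketch of what "loop-free" means here (illustrative names, not necessarily the repo's code): the projected queries, keys, and values are reshaped so that all heads are handled by a single batched matrix multiplication instead of a Python loop over heads.

```python
import torch
import torch.nn as nn

class MultiHeadSplit(nn.Module):
    """Minimal Karpathy-style head splitting; d_model and n_heads are placeholder values."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (B, T, C) -> (B, n_heads, T, head_dim): all heads attend in one batched matmul
        split = lambda t: t.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        att = (q @ k.transpose(-2, -1)) * (q.shape[-1] ** -0.5)
        out = att.softmax(dim=-1) @ v
        return out.transpose(1, 2).reshape(B, T, C)  # merge the heads back together
```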
