add kwargs to _flash_attention_forward
Cyrilvallez committed Dec 18, 2024
1 parent ec3bef3 commit fc74e39
Showing 1 changed file with 1 addition and 0 deletions.
src/transformers/modeling_flash_attention_utils.py: 1 addition, 0 deletions
@@ -247,6 +247,7 @@ def _flash_attention_forward(
     max_length_q: Optional[int] = None,
     max_length_k: Optional[int] = None,
     target_dtype: Optional[torch.dtype] = None,
+    **kwargs,
 ):
     """
     Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token
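For context on the one-line change above: adding a trailing **kwargs makes the function tolerant of extra keyword arguments forwarded by callers, instead of raising a TypeError for names it does not use. Below is a minimal, self-contained sketch of that pattern, not the transformers implementation itself; the stand-in function name, the placeholder body, and the extra flag name are assumptions introduced purely for illustration.

# Minimal sketch of the **kwargs pattern this commit introduces. NOT the real
# transformers code: the body and the extra flag name are illustrative only.
from typing import Optional

import torch


def flash_attention_forward_sketch(
    query_states: torch.Tensor,
    key_states: torch.Tensor,
    value_states: torch.Tensor,
    max_length_q: Optional[int] = None,
    max_length_k: Optional[int] = None,
    target_dtype: Optional[torch.dtype] = None,
    **kwargs,  # absorbs any extra keyword arguments passed by callers
):
    # Placeholder computation standing in for the flash-attn kernel dispatch.
    if target_dtype is not None:
        query_states = query_states.to(target_dtype)
    return query_states


# Without **kwargs, the unexpected keyword below would raise a TypeError at the
# call site; with it, the extra argument is simply ignored.
q = k = v = torch.randn(1, 4, 2, 8)
out = flash_attention_forward_sketch(q, k, v, hypothetical_extra_flag=True)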
