Commit

Disable flash attention 2 by default
leng-yue committed Dec 29, 2023
1 parent e8e366e commit 2919eaf
Showing 1 changed file with 1 addition and 1 deletion.
fish_speech/models/text2semantic/llama.py (2 changes: 1 addition & 1 deletion)
@@ -38,7 +38,7 @@ class ModelArgs:
     codebook_padding_idx: int = 0

     # Use flash attention
-    use_flash_attention: bool = is_flash_attn_2_available()
+    use_flash_attention: bool = False

     # Gradient checkpointing
     use_gradient_checkpointing: bool = True
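
For context, a minimal sketch of how a caller could still opt in to flash attention after this change. This usage is not part of the commit; the import path for is_flash_attn_2_available (transformers.utils) and the assumption that ModelArgs' remaining fields all have defaults are assumptions made for illustration.

    # Sketch only: assumes is_flash_attn_2_available comes from transformers.utils
    # and that ModelArgs can be constructed with defaults for its other fields.
    from transformers.utils import is_flash_attn_2_available

    from fish_speech.models.text2semantic.llama import ModelArgs

    # With the new default, flash attention stays off unless explicitly enabled.
    args = ModelArgs()
    assert args.use_flash_attention is False

    # Opt back in only when the flash-attn 2 kernels are actually installed.
    args = ModelArgs(use_flash_attention=is_flash_attn_2_available())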
