
(fish-speech v1.5) bigger real time factor on short texts #744

Open
twocode opened this issue Dec 12, 2024 · 8 comments
Labels
bug Something isn't working

Comments

@twocode

twocode commented Dec 12, 2024

Self Checks

  • This template is only for bug reports. For questions, please visit Discussions.
  • I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English; otherwise they will be closed. Thank you! :)
  • Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Docker)

Environment Details

Tesla T4

Steps to Reproduce

  1. Server starts in Docker as
     "python", "-m", "tools.api_server", \
     "--listen", "0.0.0.0:8080", \
     "--llama-checkpoint-path", "checkpoints/fish-speech-1.5", \
     "--decoder-checkpoint-path", "checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth", \
     "--decoder-config-name", "firefly_gan_vq", \
     "--compile", \
     "--half"

  2. Upload reference audios.

  3. Client makes a request specifying reference_id.
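A minimal sketch of the client request in the last step, assuming a JSON POST payload with `text` and `reference_id` fields (the endpoint path and the exact payload schema are assumptions, not confirmed from the fish-speech API docs):

```python
import json
import urllib.request

# Hypothetical payload builder; field names are assumptions,
# not the confirmed fish-speech API schema.
def build_tts_request(text: str, reference_id: str) -> dict:
    return {"text": text, "reference_id": reference_id}

payload = build_tts_request("好的,", "my-reference")

# Sending it would look roughly like this (server address from step 1);
# the /v1/tts path is an assumption.
req = urllib.request.Request(
    "http://0.0.0.0:8080/v1/tts",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment against a running server
```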

✔️ Expected Behavior

I expect TTS latency similar to fish-speech v1.4, around 500 ms, for non-referenced audio generation from a short text of only a few characters.

❌ Actual Behavior

The real-time factor for short text chunks is higher than for longer texts.

{"level":"info","timestamp":"2024-12-12T17:59:15.231Z","caller":"mando/engine.go:366","msg":"TTS performance","pid":1,"audio_duration_ms":1646,"latency_ms":1913,"text":"好的,"}
{"level":"info","timestamp":"2024-12-12T17:59:20.715Z","caller":"mando/engine.go:366","msg":"TTS performance","pid":1,"audio_duration_ms":6009,"latency_ms":2822,"text":"让我们开始另一个故事!\n\n在一个神秘的王国里,住着一位勇敢的小骑士,"}
{"level":"info","timestamp":"2024-12-12T17:59:25.770Z","caller":"mando/engine.go:366","msg":"TTS performance","pid":1,"audio_duration_ms":13428,"latency_ms":4964,"text":"名叫亚瑟。亚瑟非常渴望成为一名伟大的骑士,保护他的村庄和朋友们。有一天,村庄里传来了一个坏消息:一条凶猛的龙来到了附近的山上,"}

In my application log above, audio_duration_ms is the length of the generated audio and latency_ms is the TTS duration.
The shortest text here was the only one generated slower than real time (latency_ms > audio_duration_ms).
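Taking RTF = latency_ms / audio_duration_ms (so RTF > 1 means slower than real time), the three log lines above work out as follows:

```python
# (label, audio_duration_ms, latency_ms) taken from the log lines above.
logs = [
    ("short text", 1646, 1913),
    ("medium text", 6009, 2822),
    ("long text", 13428, 4964),
]
for label, audio_ms, latency_ms in logs:
    rtf = latency_ms / audio_ms
    print(f"{label}: RTF = {rtf:.2f}")
# Only the shortest chunk is slower than real time (RTF ~1.16);
# the longer ones drop to ~0.47 and ~0.37.
```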

@twocode twocode added the bug Something isn't working label Dec 12, 2024
@twocode
Author

twocode commented Dec 12, 2024

I tested without inference audio and the issue remains.

@johnwick123f

johnwick123f commented Dec 13, 2024

Yep, exact same thing: 500 ms latency with very short text on a T4 GPU via the API. Slower than real time (4 s to generate 3 s of audio). Only on longer text is it much faster than real time.

@leng-yue
Member

T4 is a very old GPU; it may not perform well during prefilling.

@twocode
Author

twocode commented Dec 20, 2024 via email

@leng-yue
Member

This is quite interesting. As you can see, we didn't change the model architecture much in this update. One thing I am considering is that the embedding table is much larger now, which may make the workload memory-bound.

@twocode
Author

twocode commented Dec 21, 2024 via email

@twocode
Author

twocode commented Dec 23, 2024

Hi @leng-yue, can you let me know the sizes of the embedding table for both 1.4 and 1.5? Would a g5.xlarge (with 24 GiB of GPU memory) be sufficient?
Thank you.

@leng-yue
Member

Even 6 GB is enough; the vocab size went from 32k to 100k.
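A back-of-the-envelope estimate of the embedding-table growth implied by the 32k -> 100k vocab change, assuming a hypothetical hidden size of 1024 and fp16 weights (the hidden size is an assumption, not stated in the thread):

```python
HIDDEN_DIM = 1024     # assumption; not stated in the thread
BYTES_PER_PARAM = 2   # fp16, matching the --half flag

for name, vocab in [("v1.4", 32_000), ("v1.5", 100_000)]:
    mib = vocab * HIDDEN_DIM * BYTES_PER_PARAM / 2**20
    print(f"{name}: {vocab:>7} tokens -> {mib:6.1f} MiB")
# Either table fits easily in 6 GB of GPU memory; the concern raised
# earlier is memory bandwidth during prefill, not capacity.
```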
