
Add Whisper language detection #1097

Open · wants to merge 7 commits into main from add-whisper-language-detection
Conversation

@ae9is commented Dec 13, 2024

See #302

Adds support for automatically detecting language to Whisper tasks.

The existing Hugging Face and Whisper implementations in Python were used as reference:

  • Hugging Face Transformers
  • Original Whisper

Also updates the existing Whisper test suites, including adding a string-similarity check on the actual model output (as opposed to just the output length). Please note that the "new" development dependency for these tests, "fastest-levenshtein", is already used by "webpack-cli".
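A string-similarity check of this kind can be sketched as follows. This is a standalone illustration only; the PR itself uses the "fastest-levenshtein" package rather than this hand-rolled version.

```javascript
// Illustrative Levenshtein edit distance (single-row dynamic programming).
function levenshtein(a, b) {
    const prev = Array.from({ length: b.length + 1 }, (_, i) => i);
    for (let i = 1; i <= a.length; i++) {
        let diag = prev[0]; // dp[i-1][j-1]
        prev[0] = i;
        for (let j = 1; j <= b.length; j++) {
            const tmp = prev[j]; // dp[i-1][j]
            prev[j] = Math.min(
                prev[j] + 1,                             // deletion
                prev[j - 1] + 1,                         // insertion
                diag + (a[i - 1] === b[j - 1] ? 0 : 1),  // substitution
            );
            diag = tmp;
        }
    }
    return prev[b.length];
}

// Normalized similarity in [0, 1]; 1 means the strings are identical.
function similarity(a, b) {
    if (a.length === 0 && b.length === 0) return 1;
    return 1 - levenshtein(a, b) / Math.max(a.length, b.length);
}

// similarity("kitten", "sitting") ≈ 0.571 (distance 3 over length 7)
```

A test can then assert that the transcription's similarity to a reference string exceeds some threshold, instead of only checking its length.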

@xenova (Collaborator) commented Dec 13, 2024

Thanks for the PR! This will certainly be a useful feature. Regarding the implementation, I think it can be greatly simplified as follows:

  • Instead of using .generate, perform a single forward pass of the inputs
  • Then, consider all logits which correspond to the language token ids
  • Choose the language with the highest score

Currently, the implementation seems to perform a full generation step (could be hundreds of forward passes).
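The suggested approach could look roughly like this. It is a minimal sketch over plain arrays; `detectLanguage`, the flat logits layout, and the language-token id set are illustrative stand-ins, not the library's actual API.

```javascript
// Sketch: pick the language after a single forward pass, given the logits
// for the first decoder position and the token ids that correspond to
// language tokens (e.g. <|en|>, <|fr|>, ...). Names are illustrative only.
function detectLanguage(logits, languageTokenIds) {
    let bestId = null;
    let bestScore = -Infinity;
    // Consider only the logits at language-token positions and take the argmax.
    for (const id of languageTokenIds) {
        if (logits[id] > bestScore) {
            bestScore = logits[id];
            bestId = id;
        }
    }
    return bestId;
}

// e.g. with fake logits where token id 2 scores highest among {1, 2, 5}:
// detectLanguage([0.1, 0.3, 2.4, 0.0, 0.9, 1.1], [1, 2, 5]) === 2
```

Because only one forward pass is needed, this avoids the cost of a full generation loop entirely.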

@ae9is (Author) commented Dec 15, 2024

Sorry about that; it was simpler to code, and the performance impact for my app was minimal. I've reworked things now to run only one forward pass for language detection.

Thanks for all the work on this library.

@ae9is force-pushed the add-whisper-language-detection branch from 7bbc92f to db84540 on December 16, 2024
@ZhangPeng4242 commented

Hey there, please approve this feature; it's quite useful :)

Comment on lines +3148 to +3158
const output = await this.generate({
    ...options,
    generation_config: {
        ...generation_config,
        good_words_ids,
        num_beams: 1,
        do_sample: false,
    },
    stopping_criteria,
    decoder_input_ids,
});
@xenova (Collaborator) commented
We should be able to replace this with a single forward pass (by calling this.forward(...) instead of using a generation step).

@ae9is (Author) commented

There are a lot of user options for (and logic in) generate, and I wanted to respect them while running language detection. It was simpler to extend generate to just stop after one pass than to duplicate that logic and use forward directly.

For example, suppose a user adds a logits processor that suppresses the first 10 seconds' worth of tokens, and a 15-second audio clip contains two languages with the context switching at 10 seconds. Language detection should then detect the second language, not the first.
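The "extend generate to stop after one pass" idea can be sketched as a stopping criterion that halts once any new token has been produced. The names and callback shape below are illustrative, not the library's actual StoppingCriteria interface.

```javascript
// Sketch: a stopping criterion that reports "stop" for a sequence as soon
// as it has grown past the initial decoder prompt length, limiting
// generation to a single decoding step. Purely illustrative names/shapes.
function makeStopAfterOneStep(initialLength) {
    // inputIds: array of token-id sequences (one per batch item);
    // returns one boolean per sequence, true = stop generating.
    return (inputIds) => inputIds.map((seq) => seq.length > initialLength);
}
```

Attaching such a criterion lets generate keep all user-configured logits processors while still performing only a single decoding step.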

 * @returns {Promise<number[]>} A list of language token IDs detected.
 */
async _detect_language(options) {
    const inputs = options.inputs
A reviewer commented

When testing this PR, "inputs" was not present in my case; instead I had "input_features".

I noticed the type returns: "(Tensor of varying shape depending on the modality, optional): The sequence used as a prompt for the generation or as model inputs to the encoder. If null the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values."

By the way thanks for adding language detection, hope it will be merged soon :)

@ae9is (Author) commented Jan 13, 2025

Sorry, I can't reproduce. Reading the typing, it sounds like input_ids/input_values/input_features should always be stored as inputs.

And even if the typing is sometimes wrong, patching _detect_language() to use e.g. options?.inputs ?? options?.input_features still won't fix the generate() function that's currently in main. So it sounds like it may be worth filing a separate issue and/or PR.

But if you're interested in just trying an alternative build, my "develop" branch is a fork of v3.0.2 with the language detection patch applied that works for me in a real app. Hope it helps!

The reviewer replied

I think it depends on the model used, you probably can reproduce it with https://huggingface.co/onnx-community/whisper-large-v3-turbo - I was able to fix it with const inputs = options.inputs ?? options.input_features; in _detect_language on my side.
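For reference, the suggested fallback relies on JavaScript's nullish coalescing operator, which only falls back to the right-hand side when the left-hand side is null or undefined. The helper name here is hypothetical:

```javascript
// Illustration of the suggested fallback: prefer options.inputs, but fall
// back to options.input_features when inputs is absent (null/undefined).
function resolveInputs(options) {
    return options.inputs ?? options.input_features;
}
```

This makes _detect_language tolerant of both shapes of the options object, whichever one a given model path produces.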

@ae9is (Author) commented

I've already used turbo and it works fine for me, sorry! (I do get an unrelated error when using turbo instead of small in the test suite.)

I guess it's up to the maintainer to decide what to do with this PR, and edits are enabled.

But I don't understand why you're not also hitting issues with the generate() code that's currently in main. If you are, that's worth a separate issue and PR.

4 participants