Warning 'The attention mask is not set' #35524

Open
AlessandroSpallina opened this issue Jan 6, 2025 · 0 comments

AlessandroSpallina commented Jan 6, 2025

I am seeing the same warning that was discussed in the closed pull request #33509.

System Info

  • transformers version: 4.47.1
  • Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.39
  • Python version: 3.12.3
  • Huggingface_hub version: 0.27.0
  • Safetensors version: 0.5.0
  • Accelerate version: 1.2.1
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.5.1+cu124 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?: no
  • Using GPU in script?: yes
  • GPU type: NVIDIA RTX 4000 Ada Generation Laptop GPU

Who can help?

@ylacombe

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

code:

        import torch
        from transformers import pipeline

        # self.model, self.device, and processor are defined elsewhere in my class.
        pipe = pipeline(
            "automatic-speech-recognition",
            model=self.model,
            torch_dtype=torch.float16,
            chunk_length_s=30,
            batch_size=24,
            return_timestamps=True,
            device=self.device,
            tokenizer=processor.tokenizer,
            feature_extractor=processor.feature_extractor,
            model_kwargs={"use_flash_attention_2": True},
            generate_kwargs={
                "max_new_tokens": 128,
            },
        )
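
A minimal sketch of a plausible setup for the names used above (the checkpoint id and device are placeholders for illustration, not my actual values):

        from transformers import WhisperProcessor

        # Hypothetical stand-ins for self.model and self.device used above.
        model_id = "openai/whisper-large-v3"   # checkpoint id (assumption)
        device = "cuda:0"
        processor = WhisperProcessor.from_pretrained(model_id)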

warning:

The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Whisper did not predict an ending timestamp, which can happen if audio is cut off in the middle of a word. Also make sure WhisperTimeStampLogitsProcessor was used during generation.

Expected behavior

No warning
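
For reference, the first warning asks for an explicit attention_mask. Outside the pipeline, when calling generate directly, the mask can be requested from the feature extractor and forwarded as in the sketch below (the checkpoint id and the silent placeholder audio are assumptions for illustration; I have not confirmed this silences the warning in the pipeline path):

        import numpy as np
        import torch
        from transformers import WhisperForConditionalGeneration, WhisperProcessor

        model_id = "openai/whisper-large-v3"  # hypothetical checkpoint
        device = "cuda:0" if torch.cuda.is_available() else "cpu"
        processor = WhisperProcessor.from_pretrained(model_id)
        model = WhisperForConditionalGeneration.from_pretrained(model_id).to(device)

        # Placeholder input: 5 seconds of silence at 16 kHz, standing in for real audio.
        audio = np.zeros(16000 * 5, dtype=np.float32)

        # Ask the feature extractor for an attention mask and pass it to generate().
        inputs = processor(
            audio,
            sampling_rate=16000,
            return_tensors="pt",
            return_attention_mask=True,
        )
        predicted_ids = model.generate(
            inputs.input_features.to(device),
            attention_mask=inputs.attention_mask.to(device),
            return_timestamps=True,
            max_new_tokens=128,
        )
        print(processor.batch_decode(predicted_ids, skip_special_tokens=True))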
