Wav2Vec2BertForSequenceClassification: return_attention_mask works incorrectly #35495
Comments
cc @eustlb
hey @HERIUN, could you verify whether you strictly expected a left padded output, or if even a right padded output like [1, 1, 1, ..., 0, 0, 0] would be equally accurate for you? I was giving this issue a look and used the following script for replication:
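A minimal sketch of such a replication, assuming the facebook/w2v-bert-2.0 checkpoint and two clips of unequal length so batch padding kicks in (the checkpoint name and clip lengths here are illustrative, not necessarily what the commenter actually used):

```python
import numpy as np
from transformers import AutoFeatureExtractor

# checkpoint name is an assumption; any Wav2Vec2-BERT checkpoint should behave the same
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")

# two clips of unequal length (1 s and 0.5 s at 16 kHz) so that padding is triggered
audio = [
    np.random.randn(16000).astype(np.float32),
    np.random.randn(8000).astype(np.float32),
]

inputs = feature_extractor(
    audio,
    sampling_rate=16000,
    padding=True,
    return_attention_mask=True,
    return_tensors="pt",
)

# with the default right padding, the row for the shorter clip should end in
# zeros, e.g. [1, 1, ..., 1, 0, 0, 0], rather than being all ones
print(inputs["attention_mask"])
```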
and the output I got was a right padded mask (ones followed by trailing zeros for the shorter input), which is different from your finding of all 1's. I was just checking to verify whether this is in line with your expected output or not. If not, I will take a look into what the cause of this issue is.
you're right. I was confused.
System Info
transformers version: 4.47.1

Who can help?
@ylacombe
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
I am using https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py
Expected behavior
When padding in a batch, attention_mask is always [1, 1, 1, ..., 1], but I expect [0, 0, 0, ..., 1, 1].
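For reference, right padding is the default for transformers feature extractors (SequenceFeatureExtractor defaults to padding_side="right"), so a mask of the form [1, 1, ..., 0, 0] is the expected shape rather than the left padded one above. If left padding is genuinely needed, a sketch along these lines may work; that flipping padding_side on the extractor is honored during padding is an assumption based on the generic SequenceFeatureExtractor logic:

```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")

# padding_side lives on SequenceFeatureExtractor and defaults to "right";
# flipping it should move the padded frames (and the mask's zeros) to the front
feature_extractor.padding_side = "left"

audio = [
    np.random.randn(16000).astype(np.float32),
    np.random.randn(8000).astype(np.float32),
]
inputs = feature_extractor(
    audio,
    sampling_rate=16000,
    padding=True,
    return_attention_mask=True,
    return_tensors="pt",
)

# expected: the shorter clip's row now starts with zeros, e.g. [0, 0, ..., 1, 1]
print(inputs["attention_mask"])
```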