How to visualize attention map #1
Looks good to me, but one thing you should pay attention to: vit-model-1 is fine-tuned on the cassava-leaf-disease-classification task, so you should expect to visualize an image from that dataset. That task is quite different from generic object classification and focuses on the low-level texture of the input leaf. To visualize the attention map of a dog, you can use the pre-trained models here. Anyway, it is a good first try. I'm still hesitant about how to extract the "attention map", since I don't want it to affect the inference process, that is, to modify the forward function. Maybe later I will check some best practices for hooks. If you are willing to, you can open a PR with your implementation.
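The hook-based extraction mentioned above can be sketched with PyTorch forward hooks, which record a module's output without touching its `forward` function. This is a minimal illustration using `nn.MultiheadAttention` as a stand-in; this repository's own attention module and weight names may differ:

```python
import torch
import torch.nn as nn

captured = []

def save_attn(module, inputs, output):
    # nn.MultiheadAttention returns (attn_output, attn_weights)
    # when called with need_weights=True
    captured.append(output[1].detach())

mha = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
handle = mha.register_forward_hook(save_attn)

x = torch.randn(1, 4, 8)                  # (batch, seq_len, embed_dim)
out, _ = mha(x, x, x, need_weights=True)  # hook fires during this call

handle.remove()                           # detach the hook when done
print(captured[0].shape)                  # (batch, seq_len, seq_len), heads averaged
```

The same pattern applies to any submodule: register the hook once, run inference normally, and the weights accumulate in `captured` with no change to the model code.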
Thanks for the answer. Here is the result for a dog. The original attention map from https://github.com/jeonsworld/ViT-pytorch/blob/main/visualize_attention_map.ipynb is below. It seems to show something, but I'm not sure. If you think this part is okay, …
For people still looking for a solution: my package NoPdb allows capturing attention weights from pretty much any Transformer implementation without any modifications to the code. See a Colab notebook showing how to do this for ViT (a different implementation). In this case, it would be something like:

    import nopdb

    with nopdb.capture_calls(SelfAttention.forward) as calls:
        logits = model(x)

    calls[0].locals["attn_weights"]  # attention weights of the first layer
Hi, when I try to implement the changes by @piantic, this is the error I am getting: Traceback (most recent call last): Is there anything else I need to do? I feel there might be some change that needs to be made in the EncoderBlock part of the model.py file.
Hi @Suryanshg. This is my example notebook for visualizing the attention map using this repository, and you can see a visualized version of ViT at the link below. I hope this helps.
Hi,
I want to visualize the attention map.
I found https://github.com/jeonsworld/ViT-pytorch/blob/main/visualize_attention_map.ipynb
In this repo, I did not find a `vis` option for the attention map. (If there is one, please let me know; I'd appreciate it.)
So I decided to add it to `model.py`, like this:
And I got a result, but I don't know whether it is right or not, because the attention map in the link above looks quite different from mine (I used the pretrained weights from here).
I am not sure if my results are correct, and I would be happy to hear an answer.
Thanks.
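For reference, the linked visualize_attention_map.ipynb computes an "attention rollout": the per-layer attention matrices are averaged over heads, combined with the residual connection, renormalized, and multiplied layer by layer. A minimal NumPy sketch of that computation (shapes here are toy values; a real ViT-B/16 has 12 layers, 12 heads, and 197 tokens):

```python
import numpy as np

def attention_rollout(att_mats):
    """att_mats: list of (num_heads, seq_len, seq_len) attention arrays."""
    seq_len = att_mats[0].shape[-1]
    joint = np.eye(seq_len)
    for att in att_mats:
        a = att.mean(axis=0)                   # average over heads
        a = a + np.eye(seq_len)                # account for the residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # renormalize rows
        joint = a @ joint                      # propagate attention through layers
    return joint

# toy example: 3 layers, 4 heads, 5 tokens of random row-stochastic attention
rng = np.random.default_rng(0)
atts = [rng.random((4, 5, 5)) for _ in range(3)]
atts = [a / a.sum(axis=-1, keepdims=True) for a in atts]
rollout = attention_rollout(atts)
cls_map = rollout[0, 1:]  # CLS-token attention over the patch tokens
```

The `cls_map` row is what gets reshaped into a 2-D grid and upsampled over the input image in the notebook; comparing your map against this computation is a good way to check whether your `vis` output is right.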