[feature request] specify layer / timesteps #6
Comments
Thanks for the feature requests, and I hope my code has been helpful. For layer specification, the module path and its attention map are saved as the key and value in the `attn_maps` dictionary:

```python
# head_dim, seq_len, height, width
for k, v in attn_maps.items():
    print(k, v.shape)
> down_blocks.0.attentions.0.transformer_blocks.0.attn2 torch.Size([10, 77, 64, 96])
> ...
```

For timesteps, I'm thinking of adding an argument in the forward hook function to save attention maps at each timestep. I'll notify you when it's done.
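In the meantime, selecting a single layer can be done by filtering the module-path keys of `attn_maps`. Below is a minimal sketch assuming the dictionary structure shown above; the `select_layer_maps` helper, the substring filter, and the head-averaging are illustrative choices, not part of the library:

```python
import torch

def select_layer_maps(attn_maps, layer_substring="down_blocks.0"):
    """Keep only maps whose module path contains `layer_substring`,
    averaged over the head dimension -> (seq_len, height, width)."""
    selected = {}
    for path, attn in attn_maps.items():
        if layer_substring in path:
            selected[path] = attn.mean(dim=0)  # average over heads
    return selected

# Hypothetical extension for timesteps: if the forward hook also recorded the
# current timestep, maps could be stored as {timestep: {module_path: tensor}}
# and queried per step, e.g. select_layer_maps(attn_maps_by_step[t]).
```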
@wooyeolBaek Thanks for sharing this. Looking forward to the attention map visualization by timesteps!
@enkeejunior1 @XLechter
Thank you for your awesome work! It is easy to use without much modification.
BTW, since attention maps are often used as a tool for analyzing a layer's behavior, being able to specify which layers / timesteps to visualize would greatly improve the tool's capability.