One-layer MLP Possibly Missing #3

Open
ni9elf opened this issue May 13, 2017 · 0 comments

Comments


ni9elf commented May 13, 2017

The attention layer works directly on the GRU outputs (denoted h_it in the HAN paper) in the call function of the AttentionLayer. In the paper, h_it should first be fed through a one-layer MLP with a tanh activation to obtain u_it, i.e. u_it = tanh(W.h_it + b), and the attention weights are then computed on u_it. Is this happening in the code and have I missed it, or has it been (intentionally) left out? Please clarify.
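For reference, a minimal sketch of what the paper's word-level attention looks like with the one-layer MLP included. This is not the repository's code: it assumes tf.keras, and the layer name `HanAttention` and all variable names are illustrative.

```python
# Sketch of HAN-style attention with the u_it = tanh(W.h_it + b) projection
# applied before the attention weights are computed (assumes tf.keras).
import tensorflow as tf
from tensorflow.keras import layers

class HanAttention(layers.Layer):
    """Attention over time steps with a one-layer MLP (tanh) projection."""

    def build(self, input_shape):
        dim = int(input_shape[-1])
        # W and b implement the one-layer MLP: u_it = tanh(W.h_it + b)
        self.W = self.add_weight(name="W", shape=(dim, dim),
                                 initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(dim,),
                                 initializer="zeros")
        # u_w is the context vector the paper scores u_it against
        self.u_w = self.add_weight(name="u_w", shape=(dim,),
                                   initializer="glorot_uniform")
        super().build(input_shape)

    def call(self, h):
        # h: (batch, timesteps, dim) GRU outputs, i.e. h_it
        u = tf.tanh(tf.tensordot(h, self.W, axes=1) + self.b)   # u_it
        scores = tf.tensordot(u, self.u_w, axes=1)              # (batch, timesteps)
        alpha = tf.nn.softmax(scores, axis=-1)                  # attention weights
        # Weighted sum of the original h_it, as in the paper
        return tf.reduce_sum(h * tf.expand_dims(alpha, -1), axis=1)
```

If the projection is skipped, the softmax is taken over h_it . u_w directly, which is the difference being asked about here.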
