
SwinViT pretrained weights do not completely match the SSLHead model #150

Open
lbuess opened this issue Nov 25, 2022 · 1 comment

lbuess commented Nov 25, 2022

Thank you for providing the code for pretraining SwinViT.
I want to start pretraining on my own data and initialize the full pretraining SSLHead model (not only the SwinViT encoder) with the provided pretrained weights. However, there is a small mismatch between the provided weights and the SSLHead model.

Describe the bug
The provided weights have one layer (two keys) more than the SSLHead model.

To Reproduce
Steps to reproduce the behavior:

  1. Download provided pretrained weights from here
  2. Load the pretrained weights into the SSLHead model: `model.load_state_dict(pretrained_dict, strict=False)`
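
The mismatch can be inspected directly from the return value of `load_state_dict` when `strict=False`. The sketch below uses a tiny stand-in module (hypothetical, not the real SSLHead) purely to show how the extra `norm.weight` / `norm.bias` entries in the checkpoint surface as unexpected keys:

```python
import torch
from torch import nn

# Hypothetical stand-in for the SSLHead; the real model comes from
# the research-contributions repo and is not reproduced here.
class TinyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

model = TinyHead()

# Simulate the reported checkpoint: same keys as the model, plus an
# extra top-level norm layer (norm.weight / norm.bias).
pretrained_dict = dict(model.state_dict())
pretrained_dict["norm.weight"] = torch.ones(4)
pretrained_dict["norm.bias"] = torch.zeros(4)

# With strict=False, PyTorch does not raise; it returns the mismatch.
result = model.load_state_dict(pretrained_dict, strict=False)
print(result.missing_keys)     # no model keys are absent from the checkpoint
print(result.unexpected_keys)  # checkpoint keys the model does not have
```

Checking `result.unexpected_keys` this way reproduces the two extra keys described above without wading through a full key-by-key diff.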

Expected behavior
All keys of the pretrained dict should also be present in the SSLHead model, but two keys (`norm.weight` and `norm.bias`) are missing from the SSLHead model.

Screenshots
Difference of keys between the pretrained weights and SSLHead:
[screenshot]

Pretrained dict has two unexpected keys:
[screenshot]

Missing keys in red box:
[screenshot]

Why do the pretrained weights have an extra norm layer?
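
As a workaround sketch (assuming the extra layer is safe to discard for SSLHead initialization, which the maintainers would need to confirm), the two checkpoint-only entries can be filtered out before loading, so that a strict load succeeds and any other mismatch still raises. The `TinyHead` module here is a hypothetical stand-in, not the real SSLHead:

```python
import torch
from torch import nn

# Hypothetical stand-in module for illustration only.
class TinyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

model = TinyHead()

# Checkpoint with the extra top-level norm entries, as in the report.
pretrained_dict = dict(model.state_dict())
pretrained_dict["norm.weight"] = torch.ones(4)
pretrained_dict["norm.bias"] = torch.zeros(4)

# Drop the keys the model does not have, then load strictly so any
# remaining mismatch raises instead of being silently ignored.
filtered = {k: v for k, v in pretrained_dict.items()
            if k not in ("norm.weight", "norm.bias")}
model.load_state_dict(filtered, strict=True)
```

Loading with `strict=True` after filtering is safer than a blanket `strict=False`, since it will still catch any genuinely missing or misnamed weights.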

@OeslleLucena

Still no answer to this issue?
