
Error with some pretrained models: EOFError: Ran out of input when loading model #33

Open
PabloSczn opened this issue Oct 3, 2024 · 0 comments


PabloSczn commented Oct 3, 2024

Hello,

I'm encountering an issue when loading pretrained models from the OpenXAI repository. Specifically, when I attempt to generate explanations, I receive the following error with the logistic regression model files for several datasets:

Data: german, Model: lr
Traceback (most recent call last):
  File "...\generate_explanations.py", line 42, in <module>
    model = LoadModel(data_name, model_name, pretrained=pretrained)
  File "...\model.py", line 42, in LoadModel
    state_dict = torch.load(model_path+model_filename, map_location=torch.device('cpu'))
  File "...\torch\serialization.py", line 1114, in load
    return _legacy_load(
  File "...\torch\serialization.py", line 1338, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
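The traceback shows that `torch.load` delegates to `pickle`, which raises `EOFError: Ran out of input` when the stream ends before a complete pickle has been read — typically because the `.pt` file is empty or truncated. As a minimal standard-library sketch (no torch required; `check_model_file` is a hypothetical helper, not part of OpenXAI), this reproduces the error and shows a simple pre-load sanity check:

```python
import io
import os
import pickle

def check_model_file(path):
    """Sanity-check a serialized model file before loading it.

    An empty or truncated download is the usual cause of
    'EOFError: Ran out of input': torch.load ultimately calls
    pickle, which raises EOFError when the stream ends before
    a complete object has been read.
    """
    size = os.path.getsize(path)
    if size == 0:
        raise ValueError(f"{path} is empty (0 bytes) -- re-download it")
    return size

# Reproducing the error with an empty byte stream,
# just as an empty .pt file would:
try:
    pickle.load(io.BytesIO(b""))
except EOFError as exc:
    print(f"EOFError: {exc}")  # EOFError: Ran out of input
```

Checking the file size on disk (or comparing it against the size reported by the download page) is usually enough to confirm whether the download completed.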

It appears that the .pt model file being loaded is either incomplete or corrupted.
I also tried downloading the model file directly from the Dataverse link provided in the repository. After downloading the .pt file, I attempted to generate explanations again, but it failed with the same error.
