I'm encountering an issue when loading pretrained models from the OpenXAI repository. Specifically, when I attempt to generate explanations, I receive the following error with model files for various datasets with logistic regression:
Data: german, Model: lr
Traceback (most recent call last):
File "...\generate_explanations.py", line 42, in <module>
model = LoadModel(data_name, model_name, pretrained=pretrained)
File "...\model.py", line 42, in LoadModel
state_dict = torch.load(model_path+model_filename, map_location=torch.device('cpu'))
File "...\torch\serialization.py", line 1114, in load
return _legacy_load(
File "...\torch\serialization.py", line 1338, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
It appears that the .pt model file being loaded is either incomplete or corrupted.
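A quick way to confirm that hypothesis is to check the file size before loading: `torch.load` uses pickle under the hood, and pickle raises exactly this `EOFError: Ran out of input` on a zero-byte or truncated file. A minimal sketch (the filename `german_lr.pt` here is hypothetical, substitute the actual model path; the sketch deliberately creates an empty file to reproduce the symptom with the standard library alone):

```python
import os
import pickle

# Hypothetical filename for illustration; substitute the real .pt path.
path = "german_lr.pt"

# Create an empty file to reproduce the symptom: pickle (which
# torch.load uses internally) raises EOFError on zero-byte input.
open(path, "wb").close()

print(os.path.getsize(path))  # 0 bytes -> the download was incomplete

try:
    with open(path, "rb") as f:
        pickle.load(f)
except EOFError as exc:
    print("EOFError:", exc)  # "Ran out of input", matching the traceback
```

If `os.path.getsize` reports 0 (or far fewer bytes than the file size listed on Dataverse), the download itself is the problem rather than the loading code.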
I tried downloading the model file directly from the Dataverse link provided:
After downloading the .pt file from that link, I attempted to generate explanations again, but it failed with the same error.