
Trying to cache the loaded model in Django #139

Open
had6r3 opened this issue Jul 17, 2023 · 0 comments


had6r3 commented Jul 17, 2023

In Django, if I set the loaded model in cache:

from django.core.cache import cache  # Django's cache framework

model = predict.load_model('path\model.h5')
cache.set('model', model)

When I try to retrieve it:

cache.get('model')

I get this error:

Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://cf101767-7409-47ba-8f98-0921cc47a20a/variables/variables
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.

Is it possible to load the model once and keep using it in the Django webapp?
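The error most likely comes from how Django's cache backends work: they pickle the stored value, and a loaded Keras model is serialized through a temporary in-memory `ram://` SavedModel that is not available to the process that later calls cache.get(). A common workaround is to skip the cache entirely and keep the model in a module-level variable, so each worker process loads it once and reuses it. Below is a minimal sketch under that assumption; get_model is a hypothetical helper, and the loader call and path are copied from the snippet above.

# services.py (hypothetical module): lazy, per-process model singleton
_model = None

def get_model():
    global _model
    if _model is None:
        # Same loader call as in the snippet above; this runs only on the
        # first request handled by each worker process.
        _model = predict.load_model('path\model.h5')
    return _model

A view would then call get_model() instead of cache.get('model'); each WSGI worker keeps its own copy of the model in memory for the lifetime of the process.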
