This repository has been archived by the owner on Jan 27, 2024. It is now read-only.
Serve pretrained model as API #11
I am trying to start a server using a pretrained model. Am I correct in assuming that server.py is used to start an inference server?

As far as I understand, the available models are configured in the /available_models/conf.json file. Is there any documentation on the options available for this config file?

Comments

Hi, this is an old fork of OpenNMT-py. This seems like an issue specific to that repository. I never used the server.py script, so I don't know whether the changes in this fork affect it in any way.

Ok, thank you.

Not that I know of.

You can check the documentation for server.py in the original OpenNMT-py repository. If you find the issue, I'll gladly accept a PR :)

Alright, will check, thank you 👍
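For reference, server configurations in the original OpenNMT-py repository follow the layout of its available_models/example.conf.json. A minimal sketch of that layout (field names taken from that example config; exact options vary across versions, so treat this as illustrative rather than authoritative):

```json
{
    "models_root": "./available_models",
    "models": [
        {
            "id": 100,
            "model": "model_0.pt",
            "timeout": 600,
            "on_timeout": "to_cpu",
            "load": true,
            "opt": {
                "gpu": 0,
                "beam_size": 5
            }
        }
    ]
}
```

The server itself is typically launched with something like `python server.py --ip 0.0.0.0 --port 5000 --config available_models/conf.json` (flag names assumed from upstream OpenNMT-py; verify against its README and docs).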