Describe the bug
In `app/api/docker_control/views.py` we attempt to deploy the agent container for every model. This does not make sense for non-LLM models: the API call "fails" even though the actual model deployment succeeds.
To Reproduce
Steps to reproduce the behavior:
1. Go to 'deploy YOLOv4 inference server'
2. The Deploy button fails and turns red, even though the inference server launched successfully
Expected behavior
We should launch the agent container depending on the model type. This is an exact scenario where implementing #167 will fix all of our problems. I think the solution from #167 should be used when deciding whether to deploy the agent container.
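A minimal sketch of the proposed fix, gating agent deployment on model type. The names here (`ModelType`, `deploy_agent_container`, `deploy_inference_container`) are hypothetical placeholders, not the actual identifiers in `views.py`, and the real check would presumably come from the model metadata introduced by #167:

```python
from enum import Enum


class ModelType(Enum):
    LLM = "llm"
    OBJECT_DETECTION = "object_detection"


def deploy_inference_container(model_id: str) -> str:
    # Placeholder for the existing deployment logic.
    return f"inference container started for {model_id}"


def deploy_agent_container(model_id: str) -> str:
    # Placeholder for the existing agent deployment logic.
    return f"agent container started for {model_id}"


def deploy(model_id: str, model_type: ModelType) -> dict:
    """Deploy the inference server; deploy the agent container only for LLMs."""
    status = {"inference": deploy_inference_container(model_id), "agent": "skipped"}
    if model_type is ModelType.LLM:
        # Non-LLM models (e.g. YOLOv4) never reach this branch, so their
        # deployment no longer "fails" on the missing agent container.
        status["agent"] = deploy_agent_container(model_id)
    return status
```

With this shape, `deploy("yolov4", ModelType.OBJECT_DETECTION)` skips the agent step entirely instead of attempting it and reporting a spurious failure.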
I don't mean "work" in terms of accuracy; I mean do they raise any errors? I expect small LLMs like 3.1-1B to still deploy properly but be somewhat buggy.