How to support different models with different tensor_para_size? #67
Comments
Besides, I tried 'mpirun -n 1 /opt/tritonserver/bin/tritonserver' three times with different CUDA_VISIBLE_DEVICES, server ports and model-repository. However, that doesn't work: the processes were blocked while loading models.
You should launch three tritonservers: the first one uses CUDA_VISIBLE_DEVICES=0, the second one uses CUDA_VISIBLE_DEVICES=1, and the third one uses CUDA_VISIBLE_DEVICES=2,3. They may need to use different configurations and be set with different names.
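For reference, a minimal sketch of launching three separate servers this way; the model-repository paths and port numbers are illustrative assumptions, not taken from this thread:

```sh
# Each server sees only its own GPUs and uses its own ports and model repository.
CUDA_VISIBLE_DEVICES=0 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/small  --http-port=8000 --grpc-port=8001 --metrics-port=8002 &
CUDA_VISIBLE_DEVICES=1 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/medium --http-port=8010 --grpc-port=8011 --metrics-port=8012 &
CUDA_VISIBLE_DEVICES=2,3 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/large  --http-port=8020 --grpc-port=8021 --metrics-port=8022 &
```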
@byshiue I did so, but it still doesn't work. I use supervisord to run tritonserver; CUDA_VISIBLE_DEVICES is set in the program environment section. Here is the medium model output. After the process broke down, supervisord started it again. The second time:
I cannot see the results of the first time. Can you post them again?
@byshiue I am sorry, I placed "the second time" in the wrong area. Now it is OK.
@byshiue From the log, it seems like only one process can load the model, and the others are blocked. But the one which can load all models cannot work either.
The error is "PTX compiled with an unsupported toolchain". You didn't load any model successfully.
@byshiue Docker version 20.10.21
@byshiue But when there is only one tritonserver, it works fine.
Can you post your results one by one? What happens when you launch the first one, and what happens with the second one? From the graph you posted, the first launch fails. And what docker image do you use?
Sorry, can you refine your format? It is too chaotic to read now.
@byshiue Sorry, I reformatted it.
What's the meaning of "second time" for the medium log? Do you re-launch again after the first time crashes, and the second time works? Did you check that you have cleaned up all old processes? What happens when you only launch one server at a time for these three models?
@byshiue Yes. After the medium model breaks down the first time, supervisord restarts it automatically; the second time seems good at the beginning, but then gets blocked.
Can you try to start only one model at a time for these three cases?
@byshiue Do you mean that I should start the three models one by one?
Yes.
@byshiue The first model is working fine:
I mean only launch one process at a time. When you launch the second server, you should kill the first one.
@byshiue Under that condition, all models work fine.
@byshiue It seems that if /opt/tritonserver/backends/python/triton_python_backend_stub is still running, the new tritonserver gets blocked. If I kill it, the new tritonserver works fine.
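A minimal sketch of checking for and cleaning up a leftover stub process before relaunching, assuming standard shell tools:

```sh
# List any Python backend stubs left over from a previous server.
ps aux | grep [t]riton_python_backend_stub
# Kill them before starting the next tritonserver.
pkill -f triton_python_backend_stub
```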
Can you try adding the verbose flag when launching the server?
The second model is blocked at this:
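A minimal sketch, assuming Triton's --log-verbose option; the model-repository path is illustrative:

```sh
CUDA_VISIBLE_DEVICES=1 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/medium --log-verbose=1
```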
Can you try to launch only the fastertransformer model, but exclude the pre/post processing?
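One way to do that without editing the model repository is Triton's explicit model-control mode, which loads only the named model (a sketch; the path is an illustrative assumption):

```sh
CUDA_VISIBLE_DEVICES=1 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/medium \
    --model-control-mode=explicit --load-model=fastertransformer
```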
@byshiue Now all the processes have started, but I don't know why. The pre/post processing code is based on https://github.com/triton-inference-server/fastertransformer_backend/tree/main/all_models/gptneox; the only thing I changed is the tokenizer, to my own.
Can you launch the server with the original pre/post processing?
@byshiue Yes, it works... but I don't know why. My only change is to use Hugging Face transformers.T5Tokenizer to replace the original tokenizer.
@TopIdiot @byshiue Hi there. I have the same problem when I use multiple Triton servers to load different models on different GPUs. Any update on this issue? The tokenizer is Hugging Face's tokenizer (AutoTokenizer), and the model is BLOOM. In my situation, all models are loaded to GPU, but when I send a gRPC request, Triton and the log just get stuck and show nothing.
I have 4 GPUs and 3 models, called small, medium and large. I want to deploy the small model on GPU 0, the medium model on GPU 1, and the large model on GPU 2 and GPU 3 with tensor_para_size=2, because the large model is too big to fit on a single GPU.
However, the instance_group can only be KIND_CPU, so I cannot use it to control GPU placement.
Is there any way to handle this?
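For context, a sketch of the relevant settings for the large model, following the fastertransformer_backend example configs; paths are illustrative and most required config.pbtxt fields are omitted:

```sh
# The relevant fragment of the large model's config.pbtxt (illustration only;
# the real file also needs the input/output and model parameters from the
# example configs under all_models/):
cat <<'EOF'
instance_group [
  {
    count: 1
    kind: KIND_CPU
  }
]
parameters {
  key: "tensor_para_size"
  value: {
    string_value: "2"
  }
}
EOF

# GPU placement is controlled by the environment of the dedicated server
# process rather than by instance_group; pin the large model to GPUs 2 and 3:
CUDA_VISIBLE_DEVICES=2,3 mpirun -n 1 /opt/tritonserver/bin/tritonserver \
    --model-repository=/models/large
```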