Your current environment
The output of `python collect_env.py`:
vLLM Version: 0.6.1.post2@9ba0817ff1eb514f51cc6de9cb8e16c98d6ee44f
model: Qwen2-VL-7B-Instruct

How would you like to use vllm
My starting command:
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /data/modelscope_cache/Qwen/Qwen2-VL-7B-Instruct

The log shows:
These configs are not the same as ones in the generation_config.json.

Two questions:
1. Does vLLM use generation_config.json as the default generation config?
2. If not, how do I set it?
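Editor's note: regardless of which defaults the server loads, you can sidestep the mismatch by passing sampling parameters explicitly on each request. A minimal sketch against the OpenAI-compatible server started above (assumes the default port 8000 and no auth; the temperature/top_p/max_tokens values are placeholders, not Qwen's recommended settings):

```python
from openai import OpenAI

# Points at the vLLM OpenAI-compatible server started above;
# the API key is a dummy value since the server runs without auth by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2-VL-7B-Instruct",  # matches --served-model-name
    messages=[{"role": "user", "content": "Describe a sunset in one sentence."}],
    # Explicit sampling parameters take precedence over whatever defaults the
    # server uses, so any mismatch with generation_config.json stops mattering
    # for this request.
    temperature=0.7,
    top_p=0.8,
    max_tokens=256,
)
print(response.choices[0].message.content)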
Sorry for the late reply. vLLM does use the generation config, as shown in https://github.com/vllm-project/vllm/blob/main/vllm/transformers_utils/config.py#L386
However, I'm not sure whether this works for ModelScope.
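Editor's note: one way to verify is to read the generation config directly from the same local directory the server was pointed at. A minimal sketch using transformers (the path is the one from the issue; that the ModelScope snapshot actually contains a generation_config.json is an assumption to check):

```python
from transformers import GenerationConfig

# Load generation_config.json from the local ModelScope snapshot used above.
# If the file is missing, from_pretrained raises an error, which itself
# answers whether vLLM could have picked these defaults up.
gen_config = GenerationConfig.from_pretrained(
    "/data/modelscope_cache/Qwen/Qwen2-VL-7B-Instruct"
)
print(gen_config)  # e.g. temperature, top_p, top_k, repetition_penalty
```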
I was facing the same issue, and this is how I solved it: #11861 (comment)