| About Ascend | Developer Slack (#sig-ascend) |
English | 中文
Latest News 🔥
- [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.
vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on the Ascend NPU.
This plugin is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU. A minimal usage sketch follows.
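As a first look at what this means in practice, here is a minimal offline-inference sketch using vLLM's standard Python API. It assumes vllm and vllm-ascend are already installed on a machine with a supported Ascend NPU; the model name and sampling settings are only illustrative.
# Minimal offline-inference sketch (assumes vllm and vllm-ascend are installed
# and a supported Ascend NPU environment is available).
from vllm import LLM, SamplingParams
prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # the plugin supplies the NPU backend
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)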
- Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
- Atlas 800I A2 Inference series (Atlas 800I A2)
Requirement | Supported version | Recommended version | Note |
---|---|---|---|
vLLM | main | main | Required for vllm-ascend |
Python | >= 3.9 | 3.10 | Required for vllm |
CANN | >= 8.0.RC2 | 8.0.RC3 | Required for vllm-ascend and torch-npu |
torch-npu | >= 2.4.0 | 2.5.1rc1 | Required for vllm-ascend |
torch | >= 2.4.0 | 2.5.1 | Required for torch-npu and vllm |
Find more about how to set up your environment here.
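As a quick sanity check of the CANN / torch / torch-npu part of the environment, the following Python sketch (assuming the torch_npu package is importable) reports whether the NPU device is visible:
# Environment sanity-check sketch; assumes torch and torch-npu are installed.
import torch
import torch_npu  # registers the "npu" device with torch
print("torch version:", torch.__version__)
print("NPU available:", torch.npu.is_available())
print("NPU device count:", torch.npu.device_count())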
Note
Currently, we are actively collaborating with the vLLM community to support the Ascend backend plugin. Once it is supported, you will be able to complete the installation with a single command: pip install vllm vllm-ascend.
Installation from source code:
# Install vllm main branch according to:
# https://docs.vllm.ai/en/latest/getting_started/installation/cpu/index.html#build-wheel-from-source
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-build.txt
VLLM_TARGET_DEVICE=empty pip install .
# Install vllm-ascend main branch
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
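After installation, a quick import check can confirm that both packages are visible to Python; note that the vllm_ascend module name below is an assumption based on the package name:
# Post-install sanity check; the vllm_ascend import name is assumed from the package name.
import vllm
import vllm_ascend
print("vllm version:", vllm.__version__)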
Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:
# export VLLM_USE_MODELSCOPE=true to speed up download
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
Please refer to vLLM Quickstart for more details.
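Besides curl, the server can also be queried from Python through any OpenAI-compatible client. Below is a minimal sketch using the openai package (an extra dependency installed separately; the api_key value is a placeholder since a locally started server does not require one by default):
# Query the OpenAI-compatible server started above; assumes the openai package is installed.
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    prompt="Hello, my name is",
    max_tokens=32,
)
print(completion.choices[0].text)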
To set up a development environment, install vllm-ascend from source:
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
Alternatively, build a development Docker image:
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
docker build -t vllm-ascend-dev-image -f ./Dockerfile .
See Building and Testing for more details; it is a step-by-step guide to help you set up the development environment, build, and test.
Feature | Supported | Note |
---|---|---|
Chunked Prefill | ✗ | Planned for 2025 Q1 |
Automatic Prefix Caching | ✅ | Improve performance in 2025 Q1 |
LoRA | ✗ | Planned for 2025 Q1 |
Prompt adapter | ✅ | |
Speculative decoding | ✅ | Improve accuracy in 2025 Q1 |
Pooling | ✗ | Planned for 2025 Q1 |
Enc-dec | ✗ | Planned for 2025 Q1 |
Multi Modality | ✅ (LLaVA/Qwen2-VL/Qwen2-Audio/InternVL) | Add more model support in 2025 Q1 |
LogProbs | ✅ | |
Prompt logProbs | ✅ | |
Async output | ✅ | |
Multi step scheduler | ✅ | |
Best of | ✅ | |
Beam search | ✅ | |
Guided Decoding | ✗ | Planned for 2025 Q1 |
The list here is a subset of the supported models. See supported_models for more details:
Model | Supported | Note |
---|---|---|
Qwen 2.5 | ✅ | |
Mistral | Need test | |
DeepSeek v2.5 | Need test | |
Llama 3.1/3.2 | ✅ | |
Gemma-2 | Need test | |
Baichuan | Need test | |
MiniCPM | Need test | |
InternLM | ✅ | |
ChatGLM | ✅ | |
InternVL 2.5 | ✅ | |
Qwen2-VL | ✅ | |
GLM-4v | Need test | |
Molmo | ✅ | |
LLaVA 1.5 | ✅ | |
Mllama | Need test | |
LLaVA-Next | Need test | |
LLaVA-Next-Video | Need test | |
Phi-3-Vision/Phi-3.5-Vision | Need test | |
Ultravox | Need test | |
Qwen2-Audio | ✅ | |
We welcome and value any contributions and collaborations:
- Please feel free to leave comments here about your usage of the vLLM Ascend plugin.
- Please let us know if you encounter a bug by filing an issue.
- Please see the guidance on how to contribute in CONTRIBUTING.md.
Apache License 2.0, as found in the LICENSE file.