
[Core]: (1/N) Support prefill only models by Workflow Defined Engine - Prefill only scheduler #9181

Draft
wants to merge 2 commits into base: main

Conversation

noooop
Contributor

@noooop noooop commented Oct 9, 2024

Previously

What is this PR going to do?

This PR focuses on the following new features:

scheduler

  • Prefill-only models require a simpler scheduler: there is no need to consider KV cache or preemption.
  • Prefill-only models have no dependencies between requests, so async scheduling is easy to implement.

Proposed Change.

  • InputProcessor: Input(request_id, inputs, params, arrival_time) -> InputProcessor -> Request
    • Like other text-input models, BERT accepts str, Dict, TextPrompt, TokensPrompt, and TextOnlyInputs.
    • InputProcessor performs only lightweight validation of the input; it deliberately does not run tokenizer.encode, so a large burst of simultaneous submissions cannot get stuck on tokenization.
  • RequestProcessor: Request -> RequestProcessor -> SchedulableRequest
    • RequestProcessor is a callback invoked by the scheduler. It runs only when a request is actually scheduled, and that is where tokenizer.encode is executed.
  • Scheduler
    • A simple first-come-first-served scheduler that uses batched_tokens and num_requests budgets to control concurrency (see the sketch after this list).
    • Prefill-only models have no dependencies between requests, so both sync and async scheduling are supported.
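
Below is a minimal sketch of such a first-come-first-served, budget-bounded scheduler. All names here (PrefillOnlyScheduler, SchedulableRequest, max_batched_tokens, max_num_requests) are illustrative assumptions, not the actual classes in this PR:

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class SchedulableRequest:
    request_id: str
    token_ids: list[int]  # produced by RequestProcessor via tokenizer.encode


@dataclass
class PrefillOnlyScheduler:
    max_batched_tokens: int
    max_num_requests: int
    waiting: deque = field(default_factory=deque)

    def add_request(self, request: SchedulableRequest) -> None:
        self.waiting.append(request)

    def schedule(self) -> list[SchedulableRequest]:
        # First-come-first-served: pop requests from the head of the queue
        # until either the token budget or the request budget is exhausted.
        # No KV cache bookkeeping and no preemption are needed, because a
        # prefill-only request finishes in a single forward pass.
        batch: list[SchedulableRequest] = []
        batched_tokens = 0
        while self.waiting and len(batch) < self.max_num_requests:
            head = self.waiting[0]
            if batch and batched_tokens + len(head.token_ids) > self.max_batched_tokens:
                break
            batch.append(self.waiting.popleft())
            batched_tokens += len(head.token_ids)
        return batch
```

The `if batch and ...` guard lets a single request larger than the token budget still run alone rather than stall the queue forever; whether the real scheduler does this is an open design choice.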

Brief introduction

What new models need to be supported

These models all come from open issues and are well known:

  • xlm_roberta
  • bge-m3
  • bge-reranker-v2-m3
  • bert
  • bge v1.5 family
  • Snowflake Arctic Embed (Family)
  • gte-Qwen2
  • This list is still growing

These models fall roughly into three categories:

  • Encode only models (bidirectional Transformers, causal=False), often fine-tuned as retrievers, rerankers, etc. (the mask sketch after this list makes the causal flag concrete).
  • Decode only models (masked multi-head attention, causal=True). There are two interesting uses:
    • Outputting the last hidden states as a feature extractor.
    • Decode-only retrievers (for lack of a better name), e.g. e5-mistral-7b (the only embedding model currently supported by vLLM).
    • Whether or not the model has been fine-tuned makes almost no difference to the code.
  • Bidirectional-enabled models. LLM2Vec proposes a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder.
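
To make the causal=True / causal=False distinction concrete, here is a small PyTorch illustration (not code from this PR): at the attention level, the only difference between the decode-only and encode-only categories is whether a causal mask is applied to the attention scores.

```python
import torch

seq_len = 4
scores = torch.randn(seq_len, seq_len)  # raw attention scores, Q @ K^T

# causal=True (decode only): each token may attend only to itself and
# to earlier tokens, enforced by masking out the upper triangle.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
causal_probs = torch.softmax(scores.masked_fill(causal_mask, float("-inf")), dim=-1)

# causal=False (encode only / "enable bidirectional"): every token attends
# to every other token, so no mask is applied at all.
bidirectional_probs = torch.softmax(scores, dim=-1)

print(causal_probs)         # upper triangle is zero
print(bidirectional_probs)  # dense attention
```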

What new features these new models have

What the above three categories have in common is that they only have a prefill stage. To make the terminology more precise, "prefill only" is used below.

You can think of "prefill only" as a fancier way of saying "encode only".

New features:

  1. Attention
    • Prefill-only models require simpler attention implementations: there is no KV cache and no decoding phase.
    • We need to support enabling bidirectional attention, either manually via an enable_bidirectional flag or automatically by reading the HF config.
  2. Scheduler
    • Prefill-only models require a simpler scheduler: there is no need to consider KV cache or preemption.
    • There are no dependencies between requests, so async scheduling is easy to implement.
  3. Executor
    • To support async scheduling, model_input_builder needs to be separated from the runner (see the sketch after this list).
    • The main thread executes scheduling and all CPU processing; the GPU thread only executes h2d copies, model execution, and d2h copies.
    • Once async scheduling and async execution are implemented, data parallelism is also easy to implement, and data parallelism is more efficient for small models.
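
The following sketch shows one way the main-thread / GPU-thread split could look. The helper names (build_model_input, execute_model, run_engine) and the single worker thread are assumptions for illustration, not the actual code in this PR:

```python
import queue
import threading


def build_model_input(batch):
    # Placeholder for ModelInputBuilder: all CPU-side work (tokenization,
    # batching, attention metadata) happens on the main thread.
    return batch


def execute_model(model_input):
    # Placeholder for the runner: h2d copy -> forward pass -> d2h copy.
    pass


def run_engine(scheduler):
    work_queue: queue.Queue = queue.Queue()

    def gpu_worker():
        # The GPU thread only moves data and executes the model;
        # it never blocks on scheduling or CPU-side processing.
        while True:
            model_input = work_queue.get()
            if model_input is None:  # sentinel: no more batches
                break
            execute_model(model_input)

    gpu_thread = threading.Thread(target=gpu_worker)
    gpu_thread.start()

    # The main thread schedules and builds inputs, keeping the GPU thread
    # fed with ready-to-run batches.
    while True:
        batch = scheduler.schedule()
        if not batch:
            break
        work_queue.put(build_model_input(batch))

    work_queue.put(None)
    gpu_thread.join()
```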

How the engine architecture needs to support these features flexibly and efficiently

If we directly add new functions to existing modules, those modules become increasingly complex, and sometimes new features must be compromised for compatibility, ultimately leading to suboptimal results.

The most flexible and efficient way to support prefill-only models is to implement different modules for different model architectures and load the required modules on demand.

I call this architecture Workflow Defined Engine, or WDE for short.

I divided the engine into the following modules:

  • InputProcessor: LLMs take strings as input, rerankers take pairs, and multimodal model inputs are more complex...
  • OutputProcessor: retriever (embedding) models output embeddings; reranker and classification models output scores...
  • ModelInputBuilder: builds model inputs and attention metadata
  • AttnBackend: supports different attention backends and enabling bidirectional attention
  • Tokenizer: there may be different tokenizers
  • Executor: sync/async/TP/PP/DP/maybe more
  • Worker & runner: support different devices/maybe more
  • EngineArgs: different models and different configs may accept different parameters
  • maybe more

With WDE, no single module needs to be compatible with every feature. You can use Python's dynamic loading to load different modules at the top level for different models and different needs.

  • Modules can be configured through a Workflow: plug and play.
  • Plug-ins are supported flexibly, and developers can load their own modules.
  • A Workflow is really the best place to hide dirty code.
    Some models cannot use the common Workflow. When you don't know where to put the dirty code, you can always create a new workflow and link the model architecture to it, instead of scattering dirty code everywhere for the sake of compatibility (a sketch of this idea follows below).
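
As an illustration of the dynamic-loading idea, here is a hedged sketch of what a Workflow could look like. Every module path and class name below is hypothetical, not the actual layout of this PR:

```python
import importlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Workflow:
    # A workflow is just a named set of module paths, resolved lazily.
    input_processor: str
    output_processor: str
    attn_backend: str
    executor: str

    @staticmethod
    def load(path: str):
        # Resolve "package.module.ClassName" to the class object.
        module_name, _, attr = path.rpartition(".")
        return getattr(importlib.import_module(module_name), attr)


# Link model architectures to workflows. A model that does not fit the
# common workflow gets its own entry here, instead of special cases
# scattered across shared modules.
WORKFLOWS = {
    "BertModel": Workflow(
        input_processor="wde.prefill_only.processor.InputProcessor",
        output_processor="wde.prefill_only.processor.EmbeddingOutputProcessor",
        attn_backend="wde.prefill_only.attention.BidirectionalAttnBackend",
        executor="wde.prefill_only.executor.SyncExecutor",
    ),
}
```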


github-actions bot commented Oct 9, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which executes a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 DarkLight1337 marked this pull request as draft October 9, 2024 08:44
@noooop noooop changed the title [Core]: (2/N) Support prefill only models by Workflow Defined Engine - Prefill only scheduler [Core]: (1/N) Support prefill only models by Workflow Defined Engine - Prefill only scheduler Oct 9, 2024