# Triton Distributed

A Datacenter Scale Distributed Inference Serving Framework


Triton Distributed is a flexible, component-based, datacenter-scale inference serving framework designed to leverage the strengths of the standalone Triton Inference Server while expanding its capabilities to meet the demands of complex use cases, including those of Generative AI. It is designed to enable developers to implement and customize routing, load balancing, scaling, and workflow definitions at datacenter scale without sacrificing performance or ease of use.

> [!NOTE]
> This project is currently in the alpha / experimental / rapid-prototyping stage and we are actively looking for feedback and collaborators.

## Building Triton Distributed

Triton Distributed development and examples are container-based.

You can build the Triton Distributed container using the build scripts in `container/` (or directly with `docker build`).

We provide three types of builds:

1. STANDARD, which includes our default set of backends (onnx, openvino, ...)
2. TENSORRTLLM, which includes our TensorRT-LLM backend
3. VLLM, which includes our vLLM backend

For example, if you want to build a container for the STANDARD backends, you can run:

```bash
./container/build.sh
```
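To build one of the other container types, the build script takes a framework selector. As a rough sketch (the exact option name is an assumption here; check `./container/build.sh --help` or the script itself), the invocation looks like:

```bash
# Assumed option name for selecting the TRT-LLM build; verify against build.sh
./container/build.sh --framework TENSORRTLLM
```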

Please see the corresponding example for specific build instructions.

## Running Triton Distributed for Local Testing and Development

You can run the Triton Distributed container using the run scripts in `container/` (or directly with `docker run`).

The run script offers a few common workflows:

1. Running a command in a container and exiting:

   ```bash
   ./container/run.sh -- python3 -c "import triton_distributed.icp.protos.icp_pb2 as icp_proto; print(icp_proto); print(dir(icp_proto));"
   ```

2. Starting an interactive shell:

   ```bash
   ./container/run.sh -it
   ```

3. Mounting the local workspace and starting an interactive shell:

   ```bash
   ./container/run.sh -it --mount-workspace
   ```

The last command also passes through common environment variables (e.g., `-e HF_TOKEN`) and mounts common directories such as `/tmp:/tmp` and `/mnt:/mnt`.
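For example, here is a minimal sketch of forwarding a Hugging Face token into an interactive session (assuming `HF_TOKEN` is set in your host environment):

```bash
# HF_TOKEN exported on the host is forwarded into the container by run.sh
export HF_TOKEN=<your_huggingface_token>
./container/run.sh -it --mount-workspace
```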

Please see the corresponding example for specific deployment instructions.

## Hello World

A basic example demonstrating the new interfaces and concepts of Triton Distributed. In the hello world example, you deploy a set of simple workers that load balance requests from a local work queue.
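As a rough, hypothetical sketch of the workflow (the actual entry points are documented in the example itself and may differ), the example is run from inside the development container:

```bash
# Hypothetical walkthrough; see the hello world example for the real commands
./container/run.sh -it --mount-workspace
# inside the container, launch the example's deployment script (placeholder path):
# python3 examples/hello_world/deploy.py
```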

## Disclaimers

> [!NOTE]
> This project is currently in the alpha / experimental / rapid-prototyping stage and we will be adding new features incrementally.

1. The TENSORRTLLM and VLLM containers are a work in progress and are not expected to work out of the box.

2. Testing has primarily been done on single-node systems with processes launched within a single container.