Triton Distributed is a flexible, component-based, data-center-scale inference serving framework designed to leverage the strengths of the standalone Triton Inference Server while expanding its capabilities to meet the demands of complex use cases, including those of Generative AI. It is designed to enable developers to implement and customize routing, load balancing, scaling, and workflow definitions at data center scale without sacrificing performance or ease of use.
Note
This project is currently in the alpha / experimental / rapid-prototyping stage and we are actively looking for feedback and collaborators.
Triton Distributed development and examples are container-based. You can build the Triton Distributed container using the build scripts in `container/` (or directly with `docker build`).
We provide three types of builds:

- `STANDARD`, which includes our default set of backends (onnx, openvino, ...)
- `TENSORRTLLM`, which includes our TRT-LLM backend
- `VLLM`, which includes our vLLM backend
For example, if you want to build a container for the `STANDARD` backends, you can run

```bash
./container/build.sh
```
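To build one of the other types, pass the framework to the build script. A minimal sketch, assuming `build.sh` accepts a `--framework` option (check the script's usage output for the exact flag name):

```bash
# Sketch only: the --framework flag is an assumption; verify it against
# the usage output of ./container/build.sh before relying on it.
./container/build.sh --framework TENSORRTLLM   # TRT-LLM backend
./container/build.sh --framework VLLM          # vLLM backend
```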
Please see the corresponding example for specific build instructions.
You can run the Triton Distributed container using the run scripts in `container/` (or directly with `docker run`).
The run script offers a few common workflows:
- Running a command in a container and exiting:

  ```bash
  ./container/run.sh -- python3 -c "import triton_distributed.icp.protos.icp_pb2 as icp_proto; print(icp_proto); print(dir(icp_proto));"
  ```

- Starting an interactive shell:

  ```bash
  ./container/run.sh -it
  ```

- Mounting the local workspace and starting an interactive shell:

  ```bash
  ./container/run.sh -it --mount-workspace
  ```
The last command also passes common environment variables (`-e HF_TOKEN`) and mounts common directories such as `/tmp:/tmp` and `/mnt:/mnt`.
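These flags can also be combined. For example, a one-off command can be run against the mounted workspace; this sketch only reuses the flags and the Python one-liner already shown above:

```bash
# Run a single command with the local workspace mounted, then exit.
./container/run.sh --mount-workspace -- python3 -c "import triton_distributed.icp.protos.icp_pb2 as icp_proto; print(icp_proto)"
```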
Please see the corresponding example for specific deployment instructions.
The hello world example is a basic example demonstrating the new interfaces and concepts of Triton Distributed: you can deploy a set of simple workers that load balance requests from a local work queue.
Note
This project is currently in the alpha / experimental / rapid-prototyping stage and we will be adding new features incrementally.
- The `TENSORRTLLM` and `VLLM` containers are a work in progress and are not expected to work out of the box.
- Testing has primarily been done on single-node systems with processes launched within a single container.