Releases: InseeFrLab/helm-charts-miscellaneous
vllm-0.0.9
vLLM is a high-performance, low-latency, and memory-efficient library designed for serving large language models (LLMs) at scale.
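Installing any of the charts in this listing follows the standard Helm workflow. A minimal sketch, assuming the repository is published at the conventional GitHub Pages URL for InseeFrLab/helm-charts-miscellaneous and that the release name `my-vllm` is our own choice (the repo URL and chart values are assumptions, not confirmed by this listing):

```shell
# Add the chart repository (URL assumed from the GitHub Pages
# convention for InseeFrLab/helm-charts-miscellaneous; verify before use).
helm repo add inseefrlab https://inseefrlab.github.io/helm-charts-miscellaneous
helm repo update

# Install a chart version pinned to one of the releases listed here.
helm install my-vllm inseefrlab/vllm --version 0.0.9

# Moving between the releases below is an in-place upgrade.
helm upgrade my-vllm inseefrlab/vllm --version 0.0.8
```

Pinning `--version` to a tag from this listing keeps deployments reproducible; omitting it installs whatever the repository index currently reports as latest.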
vllm-0.0.8
vllm-0.0.7
lomas-server-0.3.8
Lomas is a remote-access platform developed by the Swiss Federal Statistical Office that allows National Statistical Offices to offer eyes-off data science on private datasets while controlling disclosure risk. The platform relies on a service deployed on-premises that lets accredited users apply differentially private algorithms to private datasets through a dedicated Python client library.
delta-sharing-server-0.0.4
The Delta Sharing Reference Server is a reference implementation of the Delta Sharing Protocol. It can be used to stand up a small service for testing your own connector against the protocol.
bastionlab-1.0.3
BastionLab is a simple privacy framework for data science collaboration.
vllm-0.0.6
vllm-0.0.5
vllm-0.0.3
vllm-0.0.2