Releases: InseeFrLab/helm-charts-miscellaneous

vllm-0.0.9 (17 Jan 16:24, commit 96e1ee2)

vLLM is a high-performance, low-latency, and memory-efficient library designed for serving large language models (LLMs) at scale.
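As a sketch of how one of these releases would be consumed, the chart can be installed with Helm roughly as follows. The repository URL (the usual GitHub Pages layout for this organisation) and the release name are assumptions, not taken from the release notes:

```sh
# Add the chart repository (URL assumed from the standard
# GitHub Pages layout for InseeFrLab chart repositories).
helm repo add inseefrlab https://inseefrlab.github.io/helm-charts-miscellaneous
helm repo update

# Install the vllm chart at the version of this release;
# "my-vllm" is an arbitrary release name. Configuration such as
# the model to serve is set through the chart's values.yaml.
helm install my-vllm inseefrlab/vllm --version 0.0.9
```

The same pattern applies to the other charts in this listing (lomas-server, delta-sharing-server, bastionlab), substituting the chart name and version.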

vllm-0.0.8 (17 Jan 15:13, commit da9ec2e)

vllm-0.0.7 (16 Jan 13:13, commit 7c36dfb)

lomas-server-0.3.8 (16 Jan 13:13, commit 7c36dfb)

Lomas is a remote-access platform developed by the Swiss Federal Statistical Office that lets National Statistical Offices offer eyes-off data science on private datasets while controlling disclosure risk. The platform relies on a service deployed on-premises; accredited users apply differentially private algorithms to private datasets through a dedicated Python client library.

delta-sharing-server-0.0.4 (16 Jan 13:13, commit 7c36dfb)

The Delta Sharing Reference Server is a reference implementation of the Delta Sharing Protocol. It can be used to stand up a small service for testing your own connector against the protocol.
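Once deployed, a connector can be smoke-tested against the server's list-shares endpoint, which the Delta Sharing Protocol defines as GET .../shares. A minimal sketch, assuming a port-forwardable service; the service name, port, URL prefix, and token are placeholders to adapt to the chart's actual values:

```sh
# Forward the server's service port locally
# (service name and port are assumptions; check `kubectl get svc`).
kubectl port-forward svc/delta-sharing-server 8080:8080 &

# List the shares the server exposes. The /shares route is defined by
# the Delta Sharing Protocol; the "delta-sharing" prefix and the bearer
# token are placeholders matching the server's profile configuration.
curl -H "Authorization: Bearer <token>" \
  http://localhost:8080/delta-sharing/shares
```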

bastionlab-1.0.3 (16 Jan 13:13, commit 7c36dfb)

BastionLab is a simple privacy framework for data science collaboration.

vllm-0.0.6 (08 Jan 08:29)

vllm-0.0.5 (04 Dec 14:59)

vllm-0.0.3 (04 Dec 14:28)

vllm-0.0.2 (04 Dec 14:19)