Add limited maintenance notice (#3395)
mreso authored Feb 28, 2025
1 parent 2a0ce75 commit b5871e2
Showing 167 changed files with 751 additions and 86 deletions.
4 changes: 4 additions & 0 deletions CODE_OF_CONDUCT.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Code of Conduct

## Our Pledge
4 changes: 4 additions & 0 deletions CONTRIBUTING.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

## Contributing to TorchServe
### Merging your code

4 changes: 4 additions & 0 deletions README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# ❗ANNOUNCEMENT: Security Changes❗
TorchServe now enables token authorization and disables model API control by default. These security features are intended to address the concern of unauthorized API calls and to prevent potentially malicious code from being introduced to the model server. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)
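The defaults above can be reverted when needed (e.g. for local development). A minimal `config.properties` sketch, assuming the key names described in the linked Token Authorization and Model API control documents (verify them against your TorchServe version):

```properties
# Hypothetical sketch -- key names come from the linked docs; confirm against your version.
# Keep token authorization on in production; disable only for local experimentation.
disable_token_authorization=true
# Re-enable the model control API so models can be registered/unregistered at runtime.
enable_model_api=true
```

With token authorization left enabled, clients would instead pass the generated key, e.g. `curl -H "Authorization: Bearer <token>" http://localhost:8081/models`.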

4 changes: 4 additions & 0 deletions SECURITY.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Security Policy

## Supported Versions
4 changes: 4 additions & 0 deletions benchmarks/README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Torchserve Model Server Benchmarking

The benchmarks measure the performance of TorchServe on various models. They support a number of built-in models or a custom model passed in as a path or URL to a .mar file, and run various benchmarks using these models (see the benchmarks section below). The benchmarks are executed on the user's machine via a python3 script in the case of JMeter and a shell script in the case of Apache Bench. TorchServe runs on the same machine in a Docker instance to avoid network latency. The benchmarks must be run from within `serve/benchmarks`
14 changes: 9 additions & 5 deletions benchmarks/add_jmeter_test.md
@@ -1,16 +1,20 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

## Adding a new test plan for torchserve

A new JMeter test plan for the TorchServe benchmark can be added as follows:

* This assumes you know how to create a JMeter test plan. If not, please use this JMeter [guide](https://jmeter.apache.org/usermanual/build-test-plan.html)
* Here, we will show how the 'MMS Benchmarking Image Input Model Test Plan' can be added.
This test plan does the following:

* Register a model - `default is resnet-18`
* Scale up to add workers for inference
* Send Inference request in a loop
* Unregister a model

(NOTE - This is an existing plan in `serve/benchmarks`)
* Open jmeter GUI
e.g. on macOS, type `jmeter` on the command line
@@ -63,7 +67,7 @@ You can create variables or use them directly in your test plan.
* input_filepath - input image file for prediction
* min_workers - minimum number of workers to launch for serving inference requests

NOTE -

* In the above screenshot, some variables/input boxes are partially displayed. You can view the details by opening an existing test plan from serve/benchmarks/jmx.
* Apart from the above arguments, you can define custom arguments specific to your test plan if needed. Refer to `benchmark.py` for details.
4 changes: 4 additions & 0 deletions benchmarks/jmeter.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Benchmarking with JMeter

## Installation
4 changes: 4 additions & 0 deletions benchmarks/sample_report.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.


TorchServe Benchmark on gpu
===========================
36 changes: 20 additions & 16 deletions binaries/README.md
@@ -1,4 +1,8 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Building TorchServe and Torch-Model-Archiver release binaries
1. Make sure all the dependencies are installed
##### Linux and macOS:
```bash
@@ -10,8 +14,8 @@
python .\ts_scripts\install_dependencies.py --environment=dev
```
> For GPU with Cuda 10.2, make sure to add the `--cuda cu102` arg to the above command

2. To build a `torchserve` and `torch-model-archiver` wheel execute:
##### Linux and macOS:
```bash
@@ -22,23 +26,23 @@
python .\binaries\build.py
```

> If the scripts detect a conda environment, it also builds torchserve conda packages
> For additional info on conda builds refer to [this readme](conda/README.md)
3. Build outputs are located at
##### Linux and macOS:
- Wheel files
`dist/torchserve-*.whl`
`model-archiver/dist/torch_model_archiver-*.whl`
`workflow-archiver/dist/torch_workflow_archiver-*.whl`
- Conda packages
`binaries/conda/output/*`

##### Windows:
- Wheel files
`dist\torchserve-*.whl`
`model-archiver\dist\torch_model_archiver-*.whl`
`workflow-archiver\dist\torch_workflow_archiver-*.whl`
- Conda packages
`binaries\conda\output\*`

@@ -74,7 +78,7 @@
```bash
conda install --channel ./binaries/conda/output -y torchserve torch-model-archiver torch-workflow-archiver
```

##### Windows:
Conda install is currently not supported. Please use the pip install command instead.

@@ -147,17 +151,17 @@
exec bash
python3 binaries/build.py
cd binaries/
python3 upload.py --upload-pypi-packages --upload-conda-packages
```
4. To upload *.whl files to S3 bucket, run the following command:
Note: the `--nightly` option puts the *.whl files in a subfolder named 'nightly' in the specified bucket
```
python s3_binary_upload.py --s3-bucket <s3_bucket> --s3-backup-bucket <s3_backup_bucket> --nightly
```
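The layout implied by the `--nightly` flag can be sketched with a small helper (hypothetical; not an actual function in `s3_binary_upload.py`):

```python
# Hypothetical helper illustrating where wheels land with and without --nightly.
def s3_key_for(bucket: str, filename: str, nightly: bool = False) -> str:
    # --nightly puts the *.whl files under a 'nightly' subfolder of the bucket
    prefix = "nightly/" if nightly else ""
    return f"s3://{bucket}/{prefix}{filename}"

print(s3_key_for("my-bucket", "torchserve-0.12.0-py3-none-any.whl"))
# -> s3://my-bucket/torchserve-0.12.0-py3-none-any.whl
print(s3_key_for("my-bucket", "torchserve-0.12.0-py3-none-any.whl", nightly=True))
# -> s3://my-bucket/nightly/torchserve-0.12.0-py3-none-any.whl
```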
## Uploading packages to production torchserve account
As a first step binaries and docker containers need to be available in some staging environment. In that scenario the binaries can just be `wget`'d and then uploaded using the instructions below and the docker staging environment just needs a 1 line code change in https://github.com/pytorch/serve/blob/master/docker/promote-docker.sh#L8
As a first step binaries and docker containers need to be available in some staging environment. In that scenario the binaries can just be `wget`'d and then uploaded using the instructions below and the docker staging environment just needs a 1 line code change in https://github.com/pytorch/serve/blob/2a0ce756b179677f905c3216b9c8427cd530a129/docker/promote-docker.sh#L8
### pypi
Binaries should show up here: https://pypi.org/project/torchserve/
@@ -182,7 +186,7 @@ anaconda upload -u pytorch <path/to/.bz2>
## docker
Binaries should show up here: https://hub.docker.com/r/pytorch/torchserve
Change the staging org to your personal docker or test docker account https://github.com/pytorch/serve/blob/master/docker/promote-docker.sh#L8
Change the staging org to your personal docker or test docker account https://github.com/pytorch/serve/blob/2a0ce756b179677f905c3216b9c8427cd530a129/docker/promote-docker.sh#L8
### Direct upload
@@ -197,7 +201,7 @@ For an official release our tags include `pytorch/torchserve/<version_number>-cp
## Direct upload Kserve
To build the Kserve docker image follow instructions from [kubernetes/kserve](../kubernetes/kserve/README.md)
When tagging images for an official release make sure to tag with the following format `pytorch/torchserve-kfs/<version_number>-cpu` and `pytorch/torchserve-kfs/<version_number>-gpu`.
### Uploading from staging account
Expand Down
7 changes: 5 additions & 2 deletions binaries/conda/README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Building conda packages

1. To build conda packages you must first produce wheels for the project; see [this readme](../README.md) for more details on building `torchserve` and `torch-model-archiver` wheel files.
@@ -9,7 +13,7 @@
```
# Build all packages
python build_packages.py
# Selectively build packages
python build_packages.py --ts-wheel=/path/to/torchserve.whl --ma-wheel=/path/to/torch_model_archiver_wheel --wa-wheel=/path/to/torch_workflow_archiver_wheel
```
@@ -21,4 +25,3 @@ The built conda packages are available in the `output` directory
Anaconda packages are both OS specific and python version specific so copying them one by one from a test/staging environment like https://anaconda.org/pytorch/torchserve/ to an official environment like https://anaconda.org/torchserve-staging can be fiddly

Instead you can run `anaconda copy torchserve-staging/<package>/<version_number> --to-owner pytorch`

4 changes: 4 additions & 0 deletions cpp/README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# TorchServe CPP (Experimental Release)
## Requirements
* C++17
4 changes: 4 additions & 0 deletions docker/README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

## Security Changes
TorchServe now enables token authorization and disables model API control by default. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

4 changes: 4 additions & 0 deletions docs/FAQs.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# FAQ'S
Contents of this document.
* [General](#general)
4 changes: 4 additions & 0 deletions docs/README.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# ❗ANNOUNCEMENT: Security Changes❗
TorchServe now enables token authorization and disables model API control by default. These security features are intended to address the concern of unauthorized API calls and to prevent potentially malicious code from being introduced to the model server. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

4 changes: 4 additions & 0 deletions docs/Troubleshooting.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

## Troubleshooting Guide
Refer to this section for common issues faced while deploying your PyTorch models using TorchServe and their corresponding troubleshooting steps.

4 changes: 4 additions & 0 deletions docs/batch_inference_with_ts.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Batch Inference with TorchServe

## Contents of this Document
4 changes: 4 additions & 0 deletions docs/code_coverage.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Code Coverage

## To check branch stability run the sanity suite as follows
4 changes: 4 additions & 0 deletions docs/configuration.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Advanced configuration

The default settings from TorchServe should be sufficient for most use cases. However, if you want to customize TorchServe, the configuration options described in this topic are available.
14 changes: 9 additions & 5 deletions docs/custom_service.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Custom Service

## Contents of this Document
@@ -257,12 +261,12 @@ Refer [waveglow_handler](https://github.com/pytorch/serve/blob/master/examples/t
TorchServe returns Captum explanations for Image Classification, Text Classification, and BERT models. This is achieved by placing the following request:
`POST /explanations/{model_name}`

The explanations are written as part of the explain_handle method of the base handler. The base handler invokes this explain_handle method. The arguments passed to the explain_handle method are the pre-processed data and the raw data. It invokes the get_insights function of the custom handler, which returns the Captum attributions. The user should write their own get_insights functionality to get the explanations.

For serving a custom handler, the Captum algorithm should be initialized in the initialize function of the handler.

The user can override the explain_handle function in the custom handler.
The user should define their get_insights method for the custom handler to get Captum attributions.

The above ModelHandler class should have the following methods with captum functionality.

@@ -292,7 +296,7 @@ The above ModelHandler class should have the following methods with captum funct
else :
model_output = self.explain_handle(model_input, data)
return model_output

    # Present in the base_handler, so override only when necessary
def explain_handle(self, data_preprocess, raw_data):
"""Captum explanations handler
@@ -323,7 +327,7 @@
def get_insights(self,**kwargs):
"""
Functionality to get the explanations.
Called from the explain_handle method
"""
pass
```
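The dispatch between prediction and explanation described above can be sketched without the TorchServe runtime. `BaseHandler` below is a local stand-in stub (not the real `ts.torch_handler.base_handler.BaseHandler`), and the attribution values are fabricated placeholders rather than real Captum output:

```python
# Sketch of the predict-vs-explain dispatch; BaseHandler is a stand-in stub
# and get_insights returns placeholder attributions instead of Captum output.
class BaseHandler:
    def explain_handle(self, data_preprocess, raw_data):
        # The real base handler invokes the custom handler's get_insights
        # and post-processes the Captum attributions it returns.
        return self.get_insights(input_tensor=data_preprocess, raw=raw_data)

class ModelHandler(BaseHandler):
    def __init__(self):
        self.is_explain = False

    def preprocess(self, data):
        return [d.get("body") for d in data]

    def inference(self, model_input):
        return [sum(x) for x in model_input]  # stand-in for a model forward pass

    def get_insights(self, **kwargs):
        # User-defined: would normally run a Captum algorithm initialized
        # in initialize(), e.g. IntegratedGradients.
        return [[0.1] * len(x) for x in kwargs["input_tensor"]]

    def handle(self, data, context=None):
        model_input = self.preprocess(data)
        if not self.is_explain:
            return self.inference(model_input)
        return self.explain_handle(model_input, data)

handler = ModelHandler()
print(handler.handle([{"body": [1, 2, 3]}]))   # prediction path -> [6]
handler.is_explain = True
print(handler.handle([{"body": [1, 2, 3]}]))   # explanation path -> [[0.1, 0.1, 0.1]]
```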
4 changes: 4 additions & 0 deletions docs/default_handlers.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# TorchServe default inference handlers

TorchServe provides the following inference handlers out of the box. The models consumed by each are expected to support batched inference.
6 changes: 5 additions & 1 deletion docs/genai_use_cases.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# TorchServe GenAI use cases and showcase

This document shows interesting use cases with TorchServe for Gen AI deployments.
@@ -8,4 +12,4 @@ In this blog, we show how to deploy a RAG Endpoint using TorchServe, increase th

## [Multi-Image Generation Streamlit App: Chaining Llama & Stable Diffusion using TorchServe, torch.compile & OpenVINO](https://pytorch.org/serve/llm_diffusion_serving_app.html)

This Multi-Image Generation Streamlit app is designed to generate multiple images based on a provided text prompt. Instead of using Stable Diffusion directly, this app chains Llama and Stable Diffusion to enhance the image generation process. This multi-image generation use case exemplifies the powerful synergy of cutting-edge AI technologies: TorchServe, OpenVINO, Torch.compile, Meta-Llama, and Stable Diffusion.
4 changes: 4 additions & 0 deletions docs/getting_started.md
@@ -1,3 +1,7 @@
# ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Getting started

## Install TorchServe and torch-model-archiver