[Doc]Update English version of some documents (#1083)
* First commit

* Add a missed translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Checkout the link in README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* Modify article to document

* Update some English versions of documents in examples

* Add english version of documents in examples/visions

* Sync to current branch

* Add english version of documents in examples

* Add english version of documents in examples

* Add english version of documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples
charl-u authored Jan 9, 2023
1 parent 61c2f87 commit cbf88a4
Showing 164 changed files with 1,556 additions and 776 deletions.
4 changes: 2 additions & 2 deletions examples/text/ernie-3.0/cpp/README.md
Original file line number Diff line number Diff line change
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)

Before deployment, two steps require confirmation.

- 1. Environment of software and hardware should meet the requirements. Please refer to[FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. Based on the development environment, download the precompiled deployment library and sample code. Please refer to [FastDeploy Precompiled Library](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides deployment examples in which `seq_cls_inferve.py` quickly completes text classification tasks on CPU/GPU.

4 changes: 2 additions & 2 deletions examples/text/ernie-3.0/python/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)

Before deployment, two steps require confirmation.

- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. The FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides deployment examples in which `seq_cls_inferve.py` quickly completes text classification tasks on CPU/GPU.

4 changes: 2 additions & 2 deletions examples/text/ernie-3.0/serving/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)

Before serving deployment, you need to confirm:

- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands of serving images.
- 1. Refer to [FastDeploy Serving Deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands of serving images.

## Prepare Models

@@ -174,4 +174,4 @@ entity: 华夏 label: LOC pos: [14, 15]
```

## Configuration Modification
The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. please refer to the [configuration document.](../../../../serving/docs/zh_CN/model_configuration.md)
The current classification task (ernie_seqcls_model/config.pbtxt) is configured by default to run the OpenVINO engine on CPU, while the sequence labelling task is configured by default to run the Paddle engine on GPU. If you want to run on other hardware or inference engines, you should modify the configuration. Please refer to the [configuration document](../../../../serving/docs/EN/model_configuration-en.md).
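For orientation only: FastDeploy's serving images are based on Triton Inference Server, whose config.pbtxt files use a protobuf text format. The fragment below is a hedged sketch of what switching the CPU engine might look like; the accelerator name shown is an assumption, so consult the configuration document above for the values FastDeploy actually supports.

```protobuf
# Hypothetical sketch of an engine switch in a Triton-style config.pbtxt.
# The accelerator name below is an assumption; see the FastDeploy
# configuration document for the authoritative schema.
optimization {
  execution_accelerators {
    cpu_execution_accelerator : [
      {
        name : "openvino"
      }
    ]
  }
}
```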
2 changes: 1 addition & 1 deletion examples/text/ernie-3.0/serving/README_CN.md
@@ -4,7 +4,7 @@

在服务化部署前,需确认

- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)

## 准备模型

2 changes: 1 addition & 1 deletion examples/text/uie/README.md
@@ -19,7 +19,7 @@ English | [简体中文](README_CN.md)

## Export Deployment Models

Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).

## Download Pre-trained Models

6 changes: 3 additions & 3 deletions examples/text/uie/python/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)

Before deployment, two steps need to be confirmed.

- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl pacakage needs installation. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. The FastDeploy Python whl package needs to be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides an example in which `infer.py` quickly completes UIE model deployment on CPU/GPU, with OpenVINO acceleration available on CPU.

@@ -348,7 +348,7 @@ fd.text.uie.UIEModel(model_file,
schema_language=SchemaLanguage.ZH)
```

UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
UIEModel loading and initialization. Among them, `model_file` and `params_file` are the Paddle inference files exported from the trained model. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2). `vocab_file` refers to the vocabulary file; the vocabulary of the UIE model can be downloaded from the [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py).

**Parameter**

4 changes: 2 additions & 2 deletions examples/text/uie/serving/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)

Before serving deployment, you need to confirm:

- 1. You can refer to [FastDeploy serving deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
- 1. You can refer to [FastDeploy serving deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands for serving images.

## Prepare models

@@ -143,4 +143,4 @@ results:

## Configuration Modification

The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [Configuration Document](../../../../serving/docs/zh_CN/model_configuration.md).
The current configuration is set by default to run the Paddle engine on CPU. If you want to run on GPU or other inference engines, you need to modify the configuration. Please refer to the [Configuration Document](../../../../serving/docs/EN/model_configuration-en.md).
2 changes: 1 addition & 1 deletion examples/text/uie/serving/README_CN.md
@@ -4,7 +4,7 @@

在服务化部署前,需确认

- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)

## 准备模型

2 changes: 1 addition & 1 deletion examples/vision/README.md
@@ -32,5 +32,5 @@ Targeted at the vision suite of PaddlePaddle and external popular models, FastDe
- Model Loading
- Calling the `predict` interface

When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/en/faq/how_to_change_backend.md).

2 changes: 1 addition & 1 deletion examples/vision/README_CN.md
@@ -1,4 +1,4 @@
[English](README_EN.md) | 简体中文
[English](README.md) | 简体中文
# 视觉模型部署

本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
2 changes: 1 addition & 1 deletion examples/vision/classification/paddleclas/README.md
@@ -21,7 +21,7 @@ Now FastDeploy supports the deployment of the following models

## Prepare PaddleClas Deployment Model

For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA).

Attention: The model exported by PaddleClas contains two files, `inference.pdmodel` and `inference.pdiparams`. However, you also need to prepare the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas to meet the deployment requirements. FastDeploy obtains the preprocessing information required during inference from this yaml file. Developers can download the file directly, but they need to modify its configuration parameters according to their needs, referring to the infer section of the PaddleClas model training [config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet).

2 changes: 1 addition & 1 deletion examples/vision/classification/paddleclas/a311d/README.md
@@ -2,7 +2,7 @@ English | [简体中文](README_CN.md)
# Deploy PaddleClas Quantization Models on A311D
Now FastDeploy supports deploying PaddleClas quantization models on A311D based on Paddle Lite.

For model quantification and download, refer to [model quantification](../quantize/README.md)
For model quantization and download, refer to [model quantization](../quantize/README.md).


## Detailed Deployment Tutorials
43 changes: 22 additions & 21 deletions examples/vision/classification/paddleclas/a311d/cpp/README.md
@@ -1,26 +1,27 @@
# PaddleClas A311D 开发板 C++ 部署示例
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。
English | [简体中文](README_CN.md)
# PaddleClas A311D Development Board C++ Deployment Example
`infer.cc` in this directory can help you quickly complete the inference acceleration of PaddleClas quantization model deployment on A311D.

## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
## Deployment Preparations
### FastDeploy Cross-compile Environment Preparations
1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).

### 量化模型准备
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
### Quantization Model Preparations
1. You can directly use the quantized model provided by FastDeploy for deployment.
2. You can use the [one-click automated compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the resulting quantized model. (Note: The quantized classification model still requires the inference_cls.yaml file from the FP32 model folder. The self-quantized model folder does not contain this yaml file; you can copy it from the FP32 model folder into the quantized model folder.)

更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
For more information, please refer to [Model Quantization](../../quantize/README.md).

## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型
请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
## Deploying the Quantized ResNet50_Vd Classification Model on A311D
Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on A311D.
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).

2. 将编译后的库拷贝到当前目录,可使用如下命令:
2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```

3. 在当前路径下载部署所需的模型和示例图片:
3. Download the model and example images required for deployment in the current path.
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
cp -r ILSVRC2012_val_00000010.jpeg images
```

4. 编译部署示例,可使入如下命令:
4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
# After success, an install folder will be created with a running demo and libraries required for deployment.
```

5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
5. Deploy the ResNet50 classification model to the Amlogic A311D based on the adb tool. You can run the following lines:
```bash
# 进入 install 目录
# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
# Usage of the command below: bash run_with_adb.sh <demo_to_run> <model_path> <image_path> <device_id>
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```
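The adb invocation above follows a fixed positional-argument convention. The sketch below is a hypothetical illustration of that convention; the helper name and the adb commands it prints are assumptions for illustration, not the actual contents of run_with_adb.sh.

```shell
# Hypothetical sketch of run_with_adb.sh's positional arguments
# (helper name and adb calls are illustrative assumptions).
describe_invocation() {
  demo=$1; model=$2; image=$3; device=$4
  # A script like this would typically push the artifacts and run the demo:
  echo "adb -s ${device} push ${model} /data/local/tmp"
  echo "adb -s ${device} shell ./${demo} ${model} ${image}"
}

# Mirrors step 5: bash run_with_adb.sh <demo> <model> <image> <device_id>
describe_invocation infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 0123456789ABCDEF
```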

部署成功后运行结果如下:
After successful deployment, the output is as follows:

<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">

需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md).
57 changes: 57 additions & 0 deletions examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
@@ -0,0 +1,57 @@
[English](README.md) | 简体中文
# PaddleClas A311D 开发板 C++ 部署示例
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。

## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)

### 量化模型准备
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)

更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)

## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型
请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)

2. 将编译后的库拷贝到当前目录,可使用如下命令:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```

3. 在当前路径下载部署所需的模型和示例图片:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
cp -r resnet50_vd_ptq models
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
cp -r ILSVRC2012_val_00000010.jpeg images
```

4. 编译部署示例,可使入如下命令:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
```

5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
```bash
# 进入 install 目录
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```

部署成功后运行结果如下:

<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">

需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
@@ -148,4 +148,4 @@ set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android
## More Reference Documents
For more FastDeploy Java API documents and information on how to access the FastDeploy C++ API via JNI, refer to:
- [Use FastDeploy Java SDK in Android](../../../../../java/android/)
- [Use FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
- [Use FastDeploy C++ SDK in Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
