UIE serving deployment fails inside the Docker environment with the error "The FastDeploy didn't compile with Paddle Inference." #1657

Open
z5z56 opened this issue Mar 20, 2023 · 10 comments


@z5z56

z5z56 commented Mar 20, 2023

  • FastDeploy version: fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10
  • System platform: Linux x64 (Ubuntu 18.04)
  • Hardware: i9-13900K, NVIDIA RTX 3090 Ti GPU

Following
https://github.com/PaddlePaddle/FastDeploy/blob/develop/examples/text/uie/serving/README_CN.md
I tried to deploy the UIE model as a service inside the fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10 Docker container, but the server fails to start.

Output:
root@xmdx:/# CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/hdd/ljz/FastDeploy/examples/text/uie/serving/models/ --backend-config=python,shm-default-byte-size=10485760
I0320 08:03:03.270487 274 metrics.cc:298] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090 Ti
I0320 08:03:03.379255 274 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f4ace000000' with size 268435456
I0320 08:03:03.379357 274 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0320 08:03:03.379930 274 model_repository_manager.cc:1022] loading: uie:1
I0320 08:03:03.510236 274 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: uie_0 (GPU device 0)
model_config: {'name': 'uie', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 1, 'input': [{'name': 'INPUT_0', 'data_type': 'TYPE_STRING', 'format': 'FORMAT_NONE', 'dims': [1], 'is_shape_tensor': False, 'allow_ragged_batch': False}, {'name': 'INPUT_1', 'data_type': 'TYPE_STRING', 'format': 'FORMAT_NONE', 'dims': [1], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'OUTPUT_0', 'data_type': 'TYPE_STRING', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'execution_accelerators': {'gpu_execution_accelerator': [{'name': 'paddle', 'parameters': {'cpu_threads': '12'}}], 'cpu_execution_accelerator': []}, 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'uie_0', 'kind': 'KIND_GPU', 'count': 1, 'gpus': [0], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
input: ['INPUT_0', 'INPUT_1']
output: ['OUTPUT_0']
[ERROR] fastdeploy/runtime/runtime_option.cc(133)::UsePaddleBackend The FastDeploy didn't compile with Paddle Inference.

The log reports "The FastDeploy didn't compile with Paddle Inference.", but I did not modify anything. I had previously run the OCR serving example successfully in the same way, also using the Paddle Inference backend with a Paddle model, yet the UIE serving deployment will not start. I have no idea what is wrong; any help would be appreciated.
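Not part of the original report, but a quick diagnostic sketch in the same spirit as the replies further down: check which FastDeploy Python wheel is actually installed inside the container and whether its version matches the image tag (1.0.4 here). A mismatched or differently built wheel can report that Paddle Inference was not compiled in.

python3 -m pip list | grep -i fastdeploy
python3 -m pip show fastdeploy-gpu-python

These are standard pip commands; the expectation that the wheel version should equal the image tag is an assumption based on the version-pinning fix suggested later in this thread.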

@LouisHeck

I ran into the same problem.

@z5z56
Author

z5z56 commented Mar 30, 2023

Hi, have you solved it yet?

@LouisHeck

Hi, have you solved it yet?

Hello, I have only partially solved it: inference works with the ORT backend, but the Paddle Inference and TensorRT backends still cannot be used.

Image version: paddlepaddle/fastdeploy:1.0.5-gpu-cuda11.4-trt8.5-21.10
Fix command: python3 -m pip install --upgrade --force-reinstall fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
Approach: force-reinstall fastdeploy-gpu-python

Remaining issue: only the ORT backend can be used for inference; the Paddle Inference and TensorRT backends still do not work.

Screenshot 1: UIE deployment, the server starts successfully.
Screenshot 2: deployment with TensorRT configured; only the ORT backend is started, the TensorRT backend is not.
Screenshot 3: switching to Paddle inference, startup fails and the service keeps restarting automatically.

@blakeliu

With the image paddlepaddle/fastdeploy:1.0.0-cpu-only-21.10 I hit the same problem on both Windows 10 and Ubuntu 18.04: the Paddle, ONNX Runtime, and OpenVINO backends are all unavailable!

[ERROR] fastdeploy/runtime.cc(262)::UsePaddleBackend    The FastDeploy didn't compile with Paddle Inference.
[ERROR] fastdeploy/runtime.cc(271)::UseOrtBackend       The FastDeploy didn't compile with OrtBackend.
[ERROR] fastdeploy/runtime.cc(296)::UseOpenVINOBackend  The FastDeploy didn't compile with OpenVINO.

Ubuntu 18.04 environment:
OS:

Ubuntu 18.04.6 LTS
5.4.0-150-generic #167~18.04.1-Ubuntu SMP Wed May 24 00:51:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Docker:

Docker version 20.10.17, build 100c701

Windows 10 environment:

11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz   2.42 GHz
64-bit operating system, x64-based processor
Windows 10 Home (Chinese edition)
Version: 22H2
OS build: 19045.3324

Docker:

PS D:\uie\serving> docker --version
Docker version 24.0.5, build ced0996
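A small check, not from the original comment, that may help here: confirm that the Python package inside the CPU-only container imports at all and report its version (this assumes the wheel exposes __version__; adjust if that attribute is not present).

python3 -c "import fastdeploy; print(fastdeploy.__version__)"

If the import succeeds but the version differs from the image tag (1.0.0 here), the wheel and the image are out of sync, which matches the version-pinning fix suggested further down.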

@blakeliu

As in issue #1077, I switched to fastdeploy_python-1.0.7-cp38-cp38-manylinux1_x86_64.whl, but it still fails with the same errors!

@huangjun11

Has this been resolved? I get the errors with ORT, Paddle, and OpenVINO as well.

@neo502721

Pin the fastdeploy-python version so that it matches the FastDeploy version of the image:

python3 -m pip uninstall fastdeploy-python
python3 -m pip install fastdeploy-python==1.0.4 -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html

@neo502721

neo502721 commented Feb 28, 2024

Pin the fastdeploy-python version so that it matches the FastDeploy version of the image:

python3 -m pip uninstall fastdeploy-python
python3 -m pip install fastdeploy-python==1.0.4 -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html

For the GPU image, use:

python3 -m pip uninstall fastdeploy-python
python3 -m pip install fastdeploy-gpu-python==1.0.4 -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
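One extra step, not in the original comment: to know which version to pin, you can read it off the image tag itself, since the tag encodes the FastDeploy release (e.g. 1.0.4 in fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10). A sketch using standard Docker tooling:

docker images --format '{{.Repository}}:{{.Tag}}' | grep fastdeploy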

@DDUFlyme

DDUFlyme commented May 7, 2024

docker run -dit --name uie_base --gpus all -p 8201:8201 --shm-size="2g" -v `pwd`:/uie_serving paddlepaddle/fastdeploy:1.0.7-gpu-cuda11.6-trt8.5-22.12

python3 -m pip install fastdeploy-gpu-python==1.0.7 -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html

CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/uie_serving/models --backend-config=python,shm-default-byte-size=10485760

After running the third command, startup fails with: fastdeployserver: error while loading shared libraries: libdcgm.so.2: cannot open shared object file: No such file or directory. The system does have libdcgm.so.3. Any advice would be appreciated 👀 👀
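A diagnostic sketch, not from the thread: list which libdcgm versions the dynamic loader can actually see, to confirm that only libdcgm.so.3 is registered while this fastdeployserver build was linked against libdcgm.so.2.

ldconfig -p | grep libdcgm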


@neoragex2002
Copy link

neoragex2002 commented Sep 6, 2024

use this:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/datacenter-gpu-manager_2.2.9_amd64.deb
dpkg -i datacenter-gpu-manager_2.2.9_amd64.deb
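A hypothetical follow-up check, not part of the original reply: after installing the DCGM 2.x package, confirm that fastdeployserver can now resolve libdcgm.so.2 before starting the server again.

ldd $(which fastdeployserver) | grep -i dcgm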
