UIE serving deployment fails inside the Docker container with error: The FastDeploy didn't compile with Paddle Inference. #1657
Comments
I ran into the same problem.

Hi, have you managed to solve it?
Hi, I have only partially solved it: inference works with the ORT backend, but the Paddle Inference and TensorRT backends still fail. Image: paddlepaddle/fastdeploy:1.0.5-gpu-cuda11.4-trt8.5-21.10. Screenshot 1: UIE deployed, server started successfully.
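For reference, in the FastDeploy serving examples the inference backend is selected in the model's config.pbtxt. A minimal sketch of forcing the ORT fallback described above; the accelerator names "paddle" and "onnxruntime" are assumptions based on the accelerator name visible in the server's model_config log in this thread:

```
optimization {
  execution_accelerators {
    # The UIE example ships with name: "paddle"; switching it to
    # "onnxruntime" routes inference through the ORT backend
    # (assumption -- verify against your image's serving docs).
    gpu_execution_accelerator : [
      {
        name : "onnxruntime"
      }
    ]
  }
}
```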
With image paddlepaddle/fastdeploy:1.0.0-cpu-only-21.10 I hit the same problem on both Windows 10 and Ubuntu 18.04; the Paddle, ONNX, and OpenVINO backends all fail!
[ERROR] fastdeploy/runtime.cc(262)::UsePaddleBackend The FastDeploy didn't compile with Paddle Inference.
[ERROR] fastdeploy/runtime.cc(271)::UseOrtBackend The FastDeploy didn't compile with OrtBackend.
[ERROR] fastdeploy/runtime.cc(296)::UseOpenVINOBackend The FastDeploy didn't compile with OpenVINO.
Ubuntu 18.04 environment (running in Docker):
Ubuntu 18.04.6 LTS
5.4.0-150-generic #167~18.04.1-Ubuntu SMP Wed May 24 00:51:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Windows 10 environment:
11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz 2.42 GHz
64-bit operating system, x64-based processor
Windows 10 Home Chinese Edition
Version: 22H2
OS build: 19045.3324
PS D:\uie\serving> docker --version
Docker version 24.0.5, build ced0996
As in issue 1077, I switched to fastdeploy_python-1.0.7-cp38-cp38-manylinux1_x86_64.whl, but the error persists!

Has this been resolved? I get errors with all of ORT, Paddle, and OpenVINO.
Pin the fastdeploy-python version so it matches the fastdeploy version inside the image.

For the GPU image, use:

use this:
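The version-matching advice can be sketched as a quick check inside the container. The "1.0.4" pin below is an assumption matching the image tag used later in this thread, and the wheel index URL is the one FastDeploy's install docs point to; substitute whatever version the first command actually prints:

```
# Inside the serving container: print the fastdeploy version baked into the image
python -c "import fastdeploy; print(fastdeploy.__version__)"

# Install the matching Python wheel on the client side.
# "1.0.4" is an assumption for a fastdeploy:1.0.4-* image; replace it
# with the version printed above.
pip install fastdeploy-gpu-python==1.0.4 \
    -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```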
Following
https://github.com/PaddlePaddle/FastDeploy/blob/develop/examples/text/uie/serving/README_CN.md
I tried to deploy the UIE model as a service inside the fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10 Docker container, but the server fails to start.
Output:
root@xmdx:/# CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/hdd/ljz/FastDeploy/examples/text/uie/serving/models/ --backend-config=python,shm-default-byte-size=10485760
I0320 08:03:03.270487 274 metrics.cc:298] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090 Ti
I0320 08:03:03.379255 274 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f4ace000000' with size 268435456
I0320 08:03:03.379357 274 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0320 08:03:03.379930 274 model_repository_manager.cc:1022] loading: uie:1
I0320 08:03:03.510236 274 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: uie_0 (GPU device 0)
model_config: {'name': 'uie', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 1, 'input': [{'name': 'INPUT_0', 'data_type': 'TYPE_STRING', 'format': 'FORMAT_NONE', 'dims': [1], 'is_shape_tensor': False, 'allow_ragged_batch': False}, {'name': 'INPUT_1', 'data_type': 'TYPE_STRING', 'format': 'FORMAT_NONE', 'dims': [1], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'OUTPUT_0', 'data_type': 'TYPE_STRING', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'execution_accelerators': {'gpu_execution_accelerator': [{'name': 'paddle', 'parameters': {'cpu_threads': '12'}}], 'cpu_execution_accelerator': []}, 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'uie_0', 'kind': 'KIND_GPU', 'count': 1, 'gpus': [0], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
input: ['INPUT_0', 'INPUT_1']
output: ['OUTPUT_0']
[ERROR] fastdeploy/runtime/runtime_option.cc(133)::UsePaddleBackend The FastDeploy didn't compile with Paddle Inference.
It reports "The FastDeploy didn't compile with Paddle Inference.", but I haven't modified anything. I previously ran the OCR serving deployment successfully, which also uses the Paddle Inference backend with a Paddle model, yet the UIE serving deployment won't start. I have no idea what the problem is; any advice would be appreciated.
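As a side note for anyone debugging a similar startup: the log above shows fastdeployserver is built on Triton (model_repository_manager.cc, python.cc), so once the server does come up it can be probed via Triton's standard health endpoint. The port 8000 is an assumption (Triton's default HTTP port):

```
# Probe server readiness via Triton's HTTP health API
curl -v localhost:8000/v2/health/ready
```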