
Linux FastDeploy docker OCR推理报错 #1687

Open
xiaomi0922 opened this issue Mar 23, 2023 · 9 comments
xiaomi0922 commented Mar 23, 2023


Friendly reminder: according to informal community statistics, following the issue template speeds up replies and problem resolution.


Environment

  • FastDeploy version: 1.0.4-cpu-only-21.10
  • System platform: Linux x64 (Ubuntu 18.04)
  • Hardware: 16x Intel(R) Xeon(R) CPU E5-2682 v4 @ 2.50GHz
  • Language: Python 3.8

I followed the documentation at https://github.com/PaddlePaddle/PaddleOCR/tree/dygraph/deploy/fastdeploy/serving/fastdeploy_serving.

On the server side, running fastdeployserver --model-repository=/ocr_serving/models starts the service successfully; the log is:
I0323 02:13:45.461891 1362 model_repository_manager.cc:1022] loading: cls_runtime:1
I0323 02:13:45.562215 1362 model_repository_manager.cc:1022] loading: det_preprocess:1
I0323 02:13:45.662454 1362 model_repository_manager.cc:1022] loading: det_runtime:1
I0323 02:13:45.699268 1362 fastdeploy_runtime.cc:1182] TRITONBACKEND_Initialize: fastdeploy
I0323 02:13:45.699309 1362 fastdeploy_runtime.cc:1191] Triton TRITONBACKEND API version: 1.6
I0323 02:13:45.699326 1362 fastdeploy_runtime.cc:1196] 'fastdeploy' TRITONBACKEND API version: 1.6
I0323 02:13:45.699348 1362 fastdeploy_runtime.cc:1225] backend configuration:
{}
I0323 02:13:45.699429 1362 fastdeploy_runtime.cc:1255] TRITONBACKEND_ModelInitialize: cls_runtime (version 1)
[WARNING] fastdeploy/runtime/runtime_option.cc(189)::SetPaddleMKLDNN RuntimeOption::SetPaddleMKLDNN will be removed in v1.2.0, please modify its member variable directly, e.g option.paddle_infer_option.enable_mkldnn = true
I0323 02:13:45.702743 1362 fastdeploy_runtime.cc:1294] TRITONBACKEND_ModelInstanceInitialize: cls_runtime_0 (CPU device 0)
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0323 10:13:45.716488 1363 analysis_config.cc:972] It is detected that mkldnn and memory_optimize_pass are enabled at the same time, but they are not supported yet. Currently, memory_optimize_pass is explicitly disabled
I0323 02:13:45.762764 1362 model_repository_manager.cc:1022] loading: rec_postprocess:1
I0323 02:13:45.863142 1362 model_repository_manager.cc:1022] loading: det_postprocess:1
I0323 02:13:45.963409 1362 model_repository_manager.cc:1022] loading: cls_postprocess:1
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::CPU.
I0323 02:13:46.030319 1362 model_repository_manager.cc:1183] successfully loaded 'cls_runtime' version 1
I0323 02:13:46.030720 1362 fastdeploy_runtime.cc:1255] TRITONBACKEND_ModelInitialize: det_runtime (version 1)
[WARNING] fastdeploy/runtime/runtime_option.cc(189)::SetPaddleMKLDNN RuntimeOption::SetPaddleMKLDNN will be removed in v1.2.0, please modify its member variable directly, e.g option.paddle_infer_option.enable_mkldnn = true
I0323 02:13:46.032259 1362 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: det_preprocess_0 (CPU device 0)
I0323 02:13:46.063790 1362 model_repository_manager.cc:1022] loading: rec_runtime:1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
model_config: {'name': 'det_preprocess', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 1, 'input': [{'name': 'INPUT_0', 'data_type': 'TYPE_UINT8', 'format': 'FORMAT_NONE', 'dims': [-1, -1, 3], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'OUTPUT_0', 'data_type': 'TYPE_FP32', 'dims': [3, -1, -1], 'label_filename': '', 'is_shape_tensor': False}, {'name': 'OUTPUT_1', 'data_type': 'TYPE_INT32', 'dims': [4], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'det_preprocess_0', 'kind': 'KIND_CPU', 'count': 1, 'gpus': [], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
preprocess input names: ['INPUT_0']
preprocess output names: ['OUTPUT_0', 'OUTPUT_1']
I0323 02:13:46.460128 1362 fastdeploy_runtime.cc:1294] TRITONBACKEND_ModelInstanceInitialize: det_runtime_0 (CPU device 0)
I0323 02:13:46.460377 1362 model_repository_manager.cc:1183] successfully loaded 'det_preprocess' version 1
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::CPU.
I0323 02:13:46.933564 1362 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: rec_postprocess_0 (CPU device 0)
I0323 02:13:46.933794 1362 model_repository_manager.cc:1183] successfully loaded 'det_runtime' version 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
model_config: {'name': 'rec_postprocess', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 128, 'input': [{'name': 'POST_INPUT_0', 'data_type': 'TYPE_FP32', 'format': 'FORMAT_NONE', 'dims': [-1, 6625], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'POST_OUTPUT_0', 'data_type': 'TYPE_STRING', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}, {'name': 'POST_OUTPUT_1', 'data_type': 'TYPE_FP32', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'rec_postprocess_0', 'kind': 'KIND_CPU', 'count': 1, 'gpus': [], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
postprocess input names: ['POST_INPUT_0']
postprocess output names: ['POST_OUTPUT_0', 'POST_OUTPUT_1']
I0323 02:13:47.301404 1362 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: det_postprocess_0 (CPU device 0)
I0323 02:13:47.301628 1362 model_repository_manager.cc:1183] successfully loaded 'rec_postprocess' version 1
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
model_config: {'name': 'det_postprocess', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 128, 'input': [{'name': 'POST_INPUT_0', 'data_type': 'TYPE_FP32', 'format': 'FORMAT_NONE', 'dims': [1, -1, -1], 'is_shape_tensor': False, 'allow_ragged_batch': False}, {'name': 'POST_INPUT_1', 'data_type': 'TYPE_INT32', 'format': 'FORMAT_NONE', 'dims': [4], 'is_shape_tensor': False, 'allow_ragged_batch': False}, {'name': 'ORI_IMG', 'data_type': 'TYPE_UINT8', 'format': 'FORMAT_NONE', 'dims': [-1, -1, 3], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'POST_OUTPUT_0', 'data_type': 'TYPE_STRING', 'dims': [-1, 1], 'label_filename': '', 'is_shape_tensor': False}, {'name': 'POST_OUTPUT_1', 'data_type': 'TYPE_FP32', 'dims': [-1, 1], 'label_filename': '', 'is_shape_tensor': False}, {'name': 'POST_OUTPUT_2', 'data_type': 'TYPE_FP32', 'dims': [-1, -1, 1], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'det_postprocess_0', 'kind': 'KIND_CPU', 'count': 1, 'gpus': [], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
postprocess input names: ['POST_INPUT_0', 'POST_INPUT_1', 'ORI_IMG']
postprocess output names: ['POST_OUTPUT_0', 'POST_OUTPUT_1', 'POST_OUTPUT_2']
I0323 02:13:47.691956 1362 model_repository_manager.cc:1183] successfully loaded 'det_postprocess' version 1
I0323 02:13:47.692282 1362 fastdeploy_runtime.cc:1255] TRITONBACKEND_ModelInitialize: rec_runtime (version 1)
[WARNING] fastdeploy/runtime/runtime_option.cc(189)::SetPaddleMKLDNN RuntimeOption::SetPaddleMKLDNN will be removed in v1.2.0, please modify its member variable directly, e.g option.paddle_infer_option.enable_mkldnn = true
I0323 02:13:47.693223 1362 python.cc:1875] TRITONBACKEND_ModelInstanceInitialize: cls_postprocess_0 (CPU device 0)
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
model_config: {'name': 'cls_postprocess', 'platform': '', 'backend': 'python', 'version_policy': {'latest': {'num_versions': 1}}, 'max_batch_size': 128, 'input': [{'name': 'POST_INPUT_0', 'data_type': 'TYPE_FP32', 'format': 'FORMAT_NONE', 'dims': [2], 'is_shape_tensor': False, 'allow_ragged_batch': False}], 'output': [{'name': 'POST_OUTPUT_0', 'data_type': 'TYPE_INT32', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}, {'name': 'POST_OUTPUT_1', 'data_type': 'TYPE_FP32', 'dims': [1], 'label_filename': '', 'is_shape_tensor': False}], 'batch_input': [], 'batch_output': [], 'optimization': {'priority': 'PRIORITY_DEFAULT', 'input_pinned_memory': {'enable': True}, 'output_pinned_memory': {'enable': True}, 'gather_kernel_buffer_threshold': 0, 'eager_batching': False}, 'instance_group': [{'name': 'cls_postprocess_0', 'kind': 'KIND_CPU', 'count': 1, 'gpus': [], 'secondary_devices': [], 'profile': [], 'passive': False, 'host_policy': ''}], 'default_model_filename': '', 'cc_model_filenames': {}, 'metric_tags': {}, 'parameters': {}, 'model_warmup': []}
postprocess input names: ['POST_INPUT_0']
postprocess output names: ['POST_OUTPUT_0', 'POST_OUTPUT_1']
I0323 02:13:48.052532 1362 fastdeploy_runtime.cc:1294] TRITONBACKEND_ModelInstanceInitialize: rec_runtime_0 (CPU device 0)
I0323 02:13:48.052826 1362 model_repository_manager.cc:1183] successfully loaded 'cls_postprocess' version 1
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::CPU.
I0323 02:13:48.440503 1362 model_repository_manager.cc:1183] successfully loaded 'rec_runtime' version 1
I0323 02:13:48.440929 1362 model_repository_manager.cc:1022] loading: cls_pp:1
I0323 02:13:48.541282 1362 model_repository_manager.cc:1022] loading: pp_ocr:1
I0323 02:13:48.641614 1362 model_repository_manager.cc:1022] loading: rec_pp:1
I0323 02:13:48.741941 1362 model_repository_manager.cc:1183] successfully loaded 'cls_pp' version 1
I0323 02:13:48.741959 1362 model_repository_manager.cc:1183] successfully loaded 'pp_ocr' version 1
I0323 02:13:48.742065 1362 model_repository_manager.cc:1183] successfully loaded 'rec_pp' version 1
I0323 02:13:48.742231 1362 server.cc:522]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0323 02:13:48.742308 1362 server.cc:549]
+------------+---------------------------------------------------------------+--------+
| Backend | Path | Config |
+------------+---------------------------------------------------------------+--------+
| fastdeploy | /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so | {} |
| python | /opt/tritonserver/backends/python/libtriton_python.so | {} |
+------------+---------------------------------------------------------------+--------+

I0323 02:13:48.742378 1362 server.cc:592]
+-----------------+---------+--------+
| Model | Version | Status |
+-----------------+---------+--------+
| cls_postprocess | 1 | READY |
| cls_pp | 1 | READY |
| cls_runtime | 1 | READY |
| det_postprocess | 1 | READY |
| det_preprocess | 1 | READY |
| det_runtime | 1 | READY |
| pp_ocr | 1 | READY |
| rec_postprocess | 1 | READY |
| rec_pp | 1 | READY |
| rec_runtime | 1 | READY |
+-----------------+---------+--------+

I0323 02:13:48.742484 1362 tritonserver.cc:1920]
+----------------------------------+------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.15.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_po |
| | licy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data stat |
| | istics |
| model_repository_path[0] | /ocr_serving/models |
| model_control_mode | MODE_NONE |
| strict_model_config | 1 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 0.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+------------------------------------------------------------------------------------------+

I0323 02:13:48.743927 1362 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
I0323 02:13:48.744215 1362 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000
I0323 02:13:48.785501 1362 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
W0323 02:14:32.827865 1362 pinned_memory_manager.cc:133] failed to allocate pinned system memory: no pinned memory pool, falling back to non-pinned system memory


On the client side, running python3 client.py fails with:
Traceback (most recent call last):
File "client.py", line 112, in
scores[i_box], ' bbox=', bboxes[i_box])
IndexError: index 0 is out of bounds for axis 0 with size 0

Printing rec_texts, rec_scores, and det_bboxes shows they are all empty.
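The IndexError above is a consequence of indexing into empty result arrays. As a client-side defensive measure, a sketch like the following reports the empty case instead of crashing (print_ocr_results is a hypothetical helper mirroring the printing loop around client.py line 112, not the actual script):

```python
def print_ocr_results(rec_texts, rec_scores, det_bboxes):
    """Hypothetical helper mirroring the printing loop near client.py line 112:
    prints one line per detected box, but tolerates empty results."""
    if len(det_bboxes) == 0:
        # The failure mode in this issue: the server returned no boxes at all.
        print("No text detected - check the server-side runtime backend.")
        return 0
    for i_box in range(len(det_bboxes)):
        print(rec_texts[i_box], ' score=', rec_scores[i_box], ' bbox=', det_bboxes[i_box])
    return len(det_bboxes)

# Empty results no longer raise IndexError:
print_ocr_results([], [], [])
```

This only avoids the crash; an empty det_bboxes still indicates the server-side inference problem discussed below.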

@yunyaoXYY (Collaborator)

OK, I'll try to reproduce it later and take a look.

yunyaoXYY (Collaborator) commented Mar 27, 2023

Hi, the problem has now been reproduced in the registry.baidubce.com/paddlepaddle/fastdeploy:1.0.5-cpu-only-21.10 image; the Paddle Inference backend triggers it.
onnxruntime and OpenVINO work normally, so for now you can follow https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/model_configuration.md and switch the runtime to ORT or OpenVINO.
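For reference, the runtime backend of a *_runtime model is selected in that model's config.pbtxt. Based on the model_configuration.md document linked above, the CPU case might look roughly like this (a sketch only; field names and supported parameters should be checked against your FastDeploy version):

```
# config.pbtxt for det_runtime (sketch)
backend: "fastdeploy"
optimization {
  execution_accelerators {
    cpu_execution_accelerator: [
      {
        name: "onnxruntime"   # or "openvino"
        parameters { key: "cpu_threads" value: "4" }
      }
    ]
  }
}
```

The same change would need to be applied to cls_runtime and rec_runtime, then the server restarted.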

@kerry-weic

@yunyaoXYY Has this issue been fixed yet?

@yunyaoXYY (Collaborator)

> @yunyaoXYY Has this issue been fixed yet?

Not yet. Do you have to use Paddle Inference, or can one of the other backends meet your needs?

@kerry-weic

@yunyaoXYY Paddle Inference seems to give slightly higher OCR accuracy. Also, do the different backends all support the parameter settings listed at https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/inference_args.md ?

polarisunny commented May 29, 2023

@yunyaoXYY With the registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-cpu-only-21.10 image the problem is still not fixed; inference with OpenVINO likewise recognizes no content.

bltcn commented Aug 21, 2023

Same here; it doesn't work with either OpenVINO or Paddle.

@hanzhy-code

> Hi, the problem has now been reproduced in the registry.baidubce.com/paddlepaddle/fastdeploy:1.0.5-cpu-only-21.10 image; the Paddle Inference backend triggers it. onnxruntime and OpenVINO work normally, so for now you can follow https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/docs/zh_CN/model_configuration.md and switch the runtime to ORT or OpenVINO.

@yunyaoXYY Switching didn't help either.

tianv commented Jun 21, 2024

Same here; after switching to OpenVINO there are still no results.
