Hi, I'm trying to run the models with onnxruntime-gpu using the TensorRT/CUDA execution providers, and it looks like they do not support the FusedConv operator. Could you provide models with a smaller operator set? An INT32 model would also be nice to have. Thanks.
2021-12-31 23:40:05.374626878 [W:onnxruntime:Default, tensorrt_execution_provider.h:53 log] [2021-12-31 20:40:05 WARNING] /onnxruntime_src/cmake/external/onnx-tensorrt/onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2021-12-31 23:40:05.374878637 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 log] [2021-12-31 20:40:05 ERROR] 3: getPluginCreator could not find plugin: FusedConv version: 1
Hi! I don't have time right now to reconvert them, but you can find the PyTorch weights for most models here, with some more in another comment on that issue. If you load them via model.py, you should be able to export them to ONNX with the desired options yourself.
Edit: There shouldn't actually be any integer weights in the model; I'm not sure where those come from.