
how to export onnx? #37

Open · garspace opened this issue Aug 8, 2023 · 20 comments

@garspace commented Aug 8, 2023

Thanks for your work!

garspace changed the title from "how" to "how to export onnx?" on Aug 8, 2023
@Zalways commented Aug 21, 2023

Have you solved this problem? I ran into some problems when trying to export to ONNX.

@agoryuno

I had to take the axe to the code to get ONNX export to work. You can use the results here: https://github.com/agoryuno/deepsolo-onnx

@gigasurgeon

@agoryuno Thanks for providing the ONNX export notebook. During ONNX inference I got the output nodes and shapes shown below. Can you point me to which output corresponds to what? I am interested in obtaining the bounding boxes for the text detected in the image.

[screenshot: ONNX output node names and shapes]
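(Not from the authors, but one way to attach meaning to those nodes: export with explicit output_names, as in the snippets later in this thread ('ctrl_point_cls', 'ctrl_point_coord', 'ctrl_point_text', 'bd_points'), then derive each instance's bbox from its boundary points. A minimal sketch with onnxruntime; the (num_instances, num_points, 2) layout of bd_points is an assumption to verify against the repo:)

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("deepsolo.onnx", providers=["CPUExecutionProvider"])

# Node names are only meaningful if the model was exported with
# output_names (see the export snippets further down this thread).
for out in sess.get_outputs():
    print(out.name, out.shape)

def boxes_from_boundary_points(bd_points: np.ndarray) -> np.ndarray:
    # Axis-aligned boxes (x_min, y_min, x_max, y_max) per text instance.
    # ASSUMPTION: bd_points is (num_instances, num_points, 2) with (x, y)
    # coordinates; check the actual layout against the DeepSolo repo.
    x_min = bd_points[..., 0].min(axis=1)
    y_min = bd_points[..., 1].min(axis=1)
    x_max = bd_points[..., 0].max(axis=1)
    y_max = bd_points[..., 1].max(axis=1)
    return np.stack([x_min, y_min, x_max, y_max], axis=1)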

@YuMJie commented Oct 31, 2023

Thanks for your work! Could you provide the versions of your torch and related packages?

@Gavinic commented Nov 1, 2023

> Have you solved this problem? I ran into some problems when trying to export to ONNX.

I also tried to use the notebook to convert to an ONNX model, but I got "unsupported value type 'Instance'". Any suggestions? Thank you.

@YuMJie commented Nov 2, 2023

> I also tried to use the notebook to convert to an ONNX model, but I got "unsupported value type 'Instance'". Any suggestions?

import numpy as np
import torch
import torch.onnx

from DeepSolo.onnx_model import SimpleONNXReadyModel

CHECKPOINT = "vitaev2-s_pretrain_synth-tt-mlt-13-15-textocr.pth"  # if you use another .pth, change CONFIG to match
OUTPATH = "deepsolo.onnx"

DIMS = (960, 960)
CONFIG = "configs/Base_det_export.yaml"
CHANNELS = 3

model = SimpleONNXReadyModel(CONFIG, CHECKPOINT)

# Dummy input for tracing. Use uint8, not int8: pixel values 0-255
# overflow a signed 8-bit type.
img = np.random.randint(0, 255, (CHANNELS, *DIMS)).astype(np.uint8)
img_t = torch.from_numpy(img)

torch.onnx.export(model.model,
                  [img_t],
                  OUTPATH,
                  export_params=True)

My code for exporting the ONNX model is above. Pay attention to your torch and torchvision versions, and use Python >= 3.9.

torch                    2.0.0+cu118
torchaudio               2.0.1+cu118
torchvision              0.15.1+cu118
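(A quick smoke test that the exported graph loads and runs, before comparing any numbers against PyTorch; a minimal sketch with onnxruntime, assuming the uint8 dummy-input shape from the export code above:)

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("deepsolo.onnx", providers=["CPUExecutionProvider"])

# Same dtype and CHW shape as the tracing input used for export.
img = np.random.randint(0, 255, (3, 960, 960)).astype(np.uint8)
outputs = sess.run(None, {sess.get_inputs()[0].name: img})

for info, out in zip(sess.get_outputs(), outputs):
    print(info.name, out.shape, out.dtype)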

@Gavinic commented Nov 3, 2023

> My code for exporting the ONNX model is above. Pay attention to your torch and torchvision versions, and use Python >= 3.9.

Thank you! I have exported the ONNX model successfully with your guidance, but when I check the output, the ONNX model's output differs from the .pth model's, even on the same image.
For example, the .pth model's output for 'ctrl_point_cls':

[screenshot: .pth model 'ctrl_point_cls' output]

and the corresponding output of the ONNX model:

[screenshot: ONNX model 'ctrl_point_cls' output]

Thanks!
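(A way to localize this kind of mismatch is to feed the identical tensor to both runtimes and compare one head numerically, so preprocessing differences are ruled out. A minimal sketch, assuming the SimpleONNXReadyModel wrapper from the snippet above; the output ordering and tolerances are assumptions:)

import numpy as np
import torch
import onnxruntime as ort

from DeepSolo.onnx_model import SimpleONNXReadyModel

CONFIG = "configs/Base_det_export.yaml"
CHECKPOINT = "vitaev2-s_pretrain_synth-tt-mlt-13-15-textocr.pth"

# One fixed input, fed unchanged to both runtimes.
img = np.random.randint(0, 255, (3, 960, 960)).astype(np.uint8)
img_t = torch.from_numpy(img)

model = SimpleONNXReadyModel(CONFIG, CHECKPOINT)
with torch.no_grad():
    # Called with the same single tensor the export above traced.
    torch_out = model.model(img_t)

sess = ort.InferenceSession("deepsolo.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: img})

# ASSUMPTION: the traced forward returns the heads in the same order as
# the graph outputs, so torch_out[0] and onnx_out[0] are the same head
# (e.g. 'ctrl_point_cls'). Loose tolerances, since operator
# implementations differ slightly between the two runtimes.
np.testing.assert_allclose(np.asarray(torch_out[0]), onnx_out[0],
                           rtol=1e-3, atol=1e-4)
print("outputs match within tolerance")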

@YuMJie commented Nov 3, 2023

> I have exported the ONNX model successfully with your guidance, but when I check the output, the ONNX model's output differs from the .pth model's, even on the same image.

Hah, I hit this issue too. It may be related to the size of the input image. I am debugging it now and will reply as soon as I have a fix.

@Gavinic commented Nov 3, 2023

> Hah, I hit this issue too. It may be related to the size of the input image. I am debugging it now and will reply as soon as I have a fix.

Thank you very much! 👍🏻

@YuMJie commented Nov 23, 2023

@Gavinic Sorry, I cannot figure out why the results are different. I tried the same input, yet the outputs of the backbone already differ, which is strange. Have you found the bug?

@Zalways commented Dec 1, 2023

@Gavinic @agoryuno @YuMJie Can this ONNX-exported model support multi-scale image input, or does it only support fixed-size images?

@Zalways commented Dec 1, 2023

I exported the model by tracing, but it only supports fixed-size images.

@YuMJie commented Dec 1, 2023

> I exported the model by tracing, but it only supports fixed-size images.

Have you used dynamic_axes in torch.onnx.export?
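(For reference, dynamic_axes marks chosen dimensions as symbolic so the exported graph's declared shapes accept varying sizes. A minimal sketch of the call, reusing the wrapper and checkpoint from this thread; the axis labels "height" and "width" are illustrative names, not fixed API values:)

import numpy as np
import torch

from DeepSolo.onnx_model import SimpleONNXReadyModel

model = SimpleONNXReadyModel("configs/Base_det_export.yaml",
                             "vitaev2-s_pretrain_synth-tt-mlt-13-15-textocr.pth")
img_t = torch.from_numpy(
    np.random.randint(0, 255, (3, 960, 960)).astype(np.uint8))

# Mark height (dim 1) and width (dim 2) of the CHW image as symbolic.
torch.onnx.export(model.model,
                  [img_t],
                  "deepsolo_dynamic.onnx",
                  input_names=["image"],
                  output_names=["ctrl_point_cls", "ctrl_point_coord",
                                "ctrl_point_text", "bd_points"],
                  dynamic_axes={"image": {1: "height", 2: "width"}},
                  export_params=True)

Note that dynamic_axes only relaxes the declared shapes; any size-dependent logic baked in by tracing can still fail at other resolutions.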

@jasper-cell

> My code for exporting the ONNX model is above. Pay attention to your torch and torchvision versions, and use Python >= 3.9.

When I export the ONNX model I run into some errors; can I get your contact information to ask about the details?

@Zalways commented Dec 27, 2023

> Have you used dynamic_axes in torch.onnx.export?

import numpy as np
import torch
import torch.onnx

from DeepSolo.onnx_model import SimpleONNXReadyModel

CHECKPOINT = "rects_res50_finetune.pth"  # if you use another .pth, change CONFIG to match
OUTPATH = "deepsolo2.onnx"

DIMS = (480, 480)
CONFIG = "configs/Base_Rects_export.yaml"
CHANNELS = 3

model = SimpleONNXReadyModel(CONFIG, CHECKPOINT)
img = np.random.randint(0, 255, (CHANNELS, *DIMS)).astype(np.uint8)
img_t = torch.from_numpy(img)

input_names = ["image"]
output_names = ["ctrl_point_cls", "ctrl_point_coord", "ctrl_point_text", "bd_points"]

# Mark height (dim 1) and width (dim 2) of the input as dynamic.
torch.onnx.export(model.model,
                  [img_t],
                  OUTPATH,
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes={"image": [1, 2]},
                  export_params=True)

I have exported the ONNX model, but when I run inference it fails.

[screenshot: ONNX Runtime inference error]
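(Worth checking first whether the exported graph actually declares those axes as dynamic; if it does and inference still fails, the fixed sizes were likely baked in by tracing rather than by the shape declaration. A minimal sketch with the onnx package, using the deepsolo2.onnx path from the snippet above:)

import onnx

m = onnx.load("deepsolo2.onnx")

# Dynamic axes appear as named dimensions (dim_param) instead of
# fixed integers (dim_value) in the declared input shape.
for inp in m.graph.input:
    dims = [d.dim_param or d.dim_value
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)

Even with dynamic declared shapes, trace-based export hard-codes size-dependent Python logic (resizing, positional-embedding interpolation) at the export resolution, which is a common cause of failures like the one above.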

@Zalways commented Dec 28, 2023

> @Gavinic Sorry, I cannot figure out why the results are different. I tried the same input, yet the outputs of the backbone already differ. Have you found the bug?

@agoryuno @Gavinic The results inferred with the exported ONNX model seem different from the .pth file's, and I can't use the exported ONNX model to get the final recognized text. Have you solved this problem? How are your results? Does the inference actually work?

@jasper-cell commented Dec 28, 2023 via email

@jasper-cell

> I can't use the exported ONNX model to get the final recognized text.

Do you know how to use the ONNX model's inference result? I also found that the result is different.

@shining-love

Can you tell me your environment for the ONNX export?

@stevenLuzhengti

> I have exported the ONNX model successfully with your guidance, but the ONNX model's output differs from the .pth model's, even on the same image.

@Gavinic Hey, I'm experiencing the same issue. Do you have any insights on this? I followed your instructions exactly.
