Creating lite, small or medium versions of inference models #9281
danielvfung started this conversation in General
Replies: 1 comment
Justification: we need word-level bounding boxes for NER- and NLP-type labelling and inference tasks.
I managed to create the inference versions of the SAST, DB++, and even DB detection algorithms (det_algorithm) following the instructions in the Paddle docs.
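For reference, here is roughly the export step I ran, as a minimal sketch: it assumes the standard PaddleOCR repo layout (`tools/export_model.py` and the SAST config under `configs/det/`), and the weight and output paths are placeholders for my own.

```bash
# Export a trained SAST detector to an inference model.
# Config name assumes the stock PaddleOCR repo layout; the
# pretrained_model and save_inference_dir paths are placeholders.
python3 tools/export_model.py \
    -c configs/det/det_r50_vd_sast_icdar15.yml \
    -o Global.pretrained_model=./output/sast_r50_vd_icdar15/best_accuracy \
       Global.save_inference_dir=./inference/det_sast/
```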
All of these exported models seem to segment words better than the default en_PP-OCRv3 lightweight model. However, the inference models produced by the export script are all 100 MB or larger. My question: how can we generate smaller models (say medium or small) for inference while still getting the word-level detection these algorithms provide over the default line/block-level detection?
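Two directions I have looked at so far, sketched below. Both assume the standard PaddleOCR repo layout; the output paths are placeholders, and whether a MobileNetV3 backbone drops into the SAST/DB++ configs as cleanly as it does for plain DB is an assumption I have not verified.

```bash
# Option 1: retrain the detector on a lighter backbone.
# det_mv3_db.yml pairs DB with MobileNetV3 (scale 0.5), which should
# shrink the exported model considerably compared with ResNet50-vd.
python3 tools/train.py -c configs/det/det_mv3_db.yml \
    -o Global.save_model_dir=./output/det_db_mv3/

# Option 2: quantization-aware training with PaddleSlim, then export.
# Script paths follow PaddleOCR's deploy/slim/quantization/ layout.
python3 deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml \
    -o Global.pretrained_model=./output/det_db_mv3/best_accuracy \
       Global.save_model_dir=./output/det_db_mv3_quant/
python3 deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml \
    -o Global.checkpoints=./output/det_db_mv3_quant/best_accuracy \
       Global.save_inference_dir=./inference/det_db_mv3_quant/
```

If the lighter backbone gives up too much word-level accuracy, PaddleOCR's own lite detectors also lean on knowledge distillation (as far as I can tell, the PP-OCRv3 det configs use a CML distillation setup), so that may be the more promising route to a small model that still detects at word level.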