Replies: 2 comments 1 reply
Hi @hcao10 👋, could you provide a bit more information, please? 😅 How much slower is it? What are the input image sizes, and how much text needs to be processed? Have you tried running it on TensorRT? In general, yes, it should work with 0.5.1, but it's not recommended, because several bug fixes and improvements were added in later versions. Best regards,
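One quick thing worth checking is which execution providers ONNX Runtime actually ended up using, since it silently falls back to CPU when a GPU provider fails to load. A small illustrative helper (the function itself is hypothetical and not part of OnnxTR; the provider names are the real onnxruntime identifiers):

```python
# Illustrative helper (not part of OnnxTR): pick ONNX Runtime execution
# providers in order of preference. The provider names are the real
# onnxruntime identifiers; the selection logic is just a sketch.
PREFERRED = [
    "TensorrtExecutionProvider",  # usually fastest on NVIDIA GPUs
    "CUDAExecutionProvider",
    "CPUExecutionProvider",       # always available as a fallback
]

def select_providers(available):
    """Return the preferred providers present in `available`, in order."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]

print(select_providers(["CPUExecutionProvider", "CUDAExecutionProvider"]))
# -> ['CUDAExecutionProvider', 'CPUExecutionProvider']
```

With onnxruntime installed, `onnxruntime.get_available_providers()` gives you the available list, and after creating a session you can print `session.get_providers()` to confirm the GPU provider was really loaded rather than the CPU fallback.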
Thanks for your reply! 😊 The input images are smaller than 640x640. I did try TensorRT, but the speed didn't improve much. I've already tested many things, so I wanted to ask whether there are specific points I should pay attention to.
Hi, thanks for this great project!
I’m running OCR with OnnxTR on a Tesla T4 and a Jetson AGX Orin 32GB, using the fast_tiny detection and crnn_mobilenet_v3_small recognition models.
I used the provided script to convert the models to fp16, but I still cannot reach the benchmark speed of <100 ms per page (I see slower results).
Is there any special setting (e.g. batch size, providers, preprocessing) that I should configure for these GPUs to get closer to the benchmark numbers? Also, with version 0.5.1, is it possible to achieve speeds similar to the benchmark results?
Any help is very much appreciated!
Thanks a lot! 🙏