Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to get YOLOv4 speed comparable to OpenCV?

Brown_Kramer__Joshua

If I run a YOLOv4 model with leaky ReLU activations on my CPU with 256x256 RGB images in OpenCV with an OpenVINO backend, inference time plus non-max suppression is about 80 ms. If, on the other hand, I convert my model to an IR following https://github.com/TNTWEN/OpenVINO-YOLOV4 (which is linked from https://github.com/AlexeyAB/darknet), inference time using the OpenVINO Inference Engine directly is roughly 130 ms. That does not even include non-max suppression, which is quite slow when implemented naively in Python.
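For reference, this is roughly how I run the OpenCV path. It is only a sketch: the cfg/weights/image file names are placeholders and the thresholds are example values. The main point is that post-processing is handed off to cv2.dnn.NMSBoxes rather than a Python loop.

```python
import cv2

# Placeholder file names; substitute your own YOLOv4 cfg/weights and test image.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # OpenVINO backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

image = cv2.imread("test.jpg")
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (256, 256), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Gather candidate boxes, then let OpenCV's built-in NMS do the suppression
# instead of a naive Python implementation.
boxes, confidences = [], []
for out in outputs:
    for det in out:
        scores = det[5:]
        conf = float(scores.max())
        if conf > 0.25:  # example score threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45)
```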

Unfortunately, OpenCV does not offer all of the control I would like for the models and inference schemes I want to try (e.g. I want to change the batch size, import models from YOLO repositories other than darknet, etc.). A sketch of the kind of control I mean follows below.
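For example, with the direct Inference Engine path I can reshape the network to a different batch size. This is just a sketch using the IECore Python API from that era; the IR file names are placeholders and it assumes the model supports reshaping on the batch dimension.

```python
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder IR file names; use the .xml/.bin produced by the converter.
net = ie.read_network(model="yolov4.xml", weights="yolov4.bin")

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# Reshape to batch size 4 (assumes the IR supports a dynamic batch dimension).
net.reshape({input_name: [4, c, h, w]})

exec_net = ie.load_network(network=net, device_name="CPU")
```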

What is the magic that allows OpenCV with OpenVINO backend to be so much faster?

Vladimir_Dudnik
Employee

@Brown_Kramer__Joshua, from your message it is not clear whether you are comparing the same model or different models.

Zulkifli_Intel
Moderator

Hello Josh Brown Kramer,


Do you have any feedback on the question from Vladimir?


Sincerely,

Zulkifli


Munesh_Intel
Moderator

Hi Josh,

Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.



Regards,

Munesh

