If I run a YOLOv4 model with leaky ReLU activations on my CPU with 256x256 RGB images in OpenCV with an OpenVINO backend, inference time plus non-max suppression is about 80ms. If, on the other hand, I convert my model to an IR following https://github.com/TNTWEN/OpenVINO-YOLOV4, which is linked to from https://github.com/AlexeyAB/darknet, inference time using the OpenVINO Inference Engine directly is roughly 130ms, and that figure does not even include non-max suppression, which is quite slow when implemented naively in Python.
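For concreteness, the OpenCV path I am timing looks roughly like the sketch below (the file names, test image, and thresholds are placeholders, not my actual setup). Note that cv2.dnn.NMSBoxes performs the suppression inside OpenCV's C++ code, which avoids the slow naive Python NMS I mentioned for the raw Inference Engine path:

```python
import cv2

# Placeholder model files -- a darknet YOLOv4 cfg/weights pair.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")

# This is the switch in question: dispatch OpenCV's DNN module to the
# OpenVINO Inference Engine backend, targeting the CPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

image = cv2.imread("test.jpg")  # placeholder test image
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(256, 256),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Decode the YOLO outputs; each row is
# [cx, cy, w, h, objectness, class scores...] in normalized coordinates.
h, w = image.shape[:2]
boxes, scores = [], []
for out in outputs:
    for det in out:
        conf = float(det[4] * det[5:].max())
        if conf > 0.25:  # placeholder score threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)

# NMS runs inside OpenCV (C++), not in a Python loop.
keep = cv2.dnn.NMSBoxes(boxes, scores, 0.25, 0.45)
```

In the raw Inference Engine version I decode and suppress boxes in plain Python, so the 130ms figure is not directly comparable on the NMS side; the gap I am asking about is in the forward pass itself.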
Unfortunately, OpenCV does not offer all of the control I would like over the models and inference schemes I want to try (e.g. changing the batch size, importing models from YOLO repositories other than darknet, etc.).
What is the magic that allows OpenCV with an OpenVINO backend to be so much faster?
@Brown_Kramer__Joshua, from your message it is not clear whether you are comparing the same model or different models.
Hello Josh Brown Kramer,
Do you have any feedback on the question from Vladimir?
Sincerely,
Zulkifli
Hi Josh,
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.
Regards,
Munesh
