Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Questions about yolov3-tiny

bc__y
Beginner

My CPU is an i7-8700.

Using the same test video and the same 416×416 input size:

when I run the yolov3-tiny weights from Darknet through OpenCV, the FPS is about 25,

but when I convert the yolov3-tiny weights to an IR model, the FPS is about 20.

Why does OpenVINO take more time?
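
For context, an OpenCV DNN measurement of this kind can be set up roughly like this (a sketch only; the cfg/weights/video file names are placeholders, not my exact script):

```python
import time

import cv2

# Placeholder cfg/weights/video names; substitute the actual files.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

cap = cv2.VideoCapture("test_video.mp4")
frames = 0
start = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Same 416x416 input size as in the comparison above.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())
    frames += 1
print("FPS:", frames / (time.time() - start))
```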

Iffa_Intel
Moderator

Greetings,

After conversion, the Inference Engine consumes the IR to perform inference. While the Inference Engine API itself is target-agnostic, internally it has a notion of plugins: device-specific libraries that facilitate hardware-assisted acceleration.

Performance flow: once the model is converted to IR, start with the existing Inference Engine samples to measure and tune the performance of the network on different devices.


 While consuming the same IR, each plugin performs additional device-specific optimizations at load time, so the resulting accuracy might differ. 
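
For example, here is a minimal sketch of timing a converted IR on the CPU plugin with the openvino.inference_engine Python API (assuming a 2020-era release; the IR file names are placeholders for whatever the Model Optimizer produced):

```python
import time

import numpy as np
from openvino.inference_engine import IECore

# Placeholder IR names; use the .xml/.bin pair produced by the Model Optimizer.
MODEL_XML = "frozen_darknet_yolov3_tiny.xml"
MODEL_BIN = "frozen_darknet_yolov3_tiny.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape

# load_network is where the CPU plugin applies its device-specific
# optimizations; this cost is paid once at load time, not per frame.
exec_net = ie.load_network(network=net, device_name="CPU")

dummy = np.random.rand(n, c, h, w).astype(np.float32)
start = time.time()
for _ in range(100):
    exec_net.infer({input_name: dummy})
print("CPU FPS:", 100 / (time.time() - start))
```

The benchmark_app sample that ships with the toolkit measures the same thing with proper warm-up and asynchronous requests, for example: benchmark_app -m frozen_darknet_yolov3_tiny.xml -d CPU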

You can find more insight here: 

https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html

 

Hope my answer helps!

Sincerely,

Iffa

bc__y
Beginner

Thanks!

I think it may be that the OpenVINO CPU plugin itself also adds some overhead,

because when I use the GPU plugin the FPS is the same as with OpenCV, about 25 FPS.

So OpenVINO accelerates larger, slower models significantly, but can increase the time consumption for some smaller models.
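
For reference, switching between the CPU and GPU plugins only changes the device_name argument passed to load_network, so the same IR can be timed on both devices in one script (a sketch only, with the same assumed 2020-era openvino.inference_engine API and placeholder file names as above):

```python
import time

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Placeholder IR names for the converted yolov3-tiny model.
net = ie.read_network(model="frozen_darknet_yolov3_tiny.xml",
                      weights="frozen_darknet_yolov3_tiny.bin")
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
dummy = np.random.rand(n, c, h, w).astype(np.float32)

# Same IR, different plugins: only device_name changes.
for device in ("CPU", "GPU"):
    exec_net = ie.load_network(network=net, device_name=device)
    start = time.time()
    for _ in range(100):
        exec_net.infer({input_name: dummy})
    print(device, "FPS:", 100 / (time.time() - start))
```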
