bc__y
Beginner

Questions about yolov3-tiny

My CPU is an i7-8700.

I am using the same test video, and the input frame size in both cases is 416×416.

When I run the yolov3-tiny Darknet weights through OpenCV, the FPS is about 25,

but when I convert the yolov3-tiny weights to an IR model and run it with OpenVINO, the FPS is about 20.

Why does OpenVINO consume more time?
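For context, this is roughly how the OpenCV-only measurement can be reproduced. It is only a minimal sketch of the Darknet-weights-via-OpenCV path described above; the file names and the timing loop are assumptions for illustration, not the poster's actual script.

# FPS sketch for running the Darknet weights through OpenCV's DNN module.
# "yolov3-tiny.cfg", "yolov3-tiny.weights" and "test.mp4" are placeholder names.
import time
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # OpenCV's own CPU backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

cap = cv2.VideoCapture("test.mp4")
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 416x416 input, scaled to [0, 1] as Darknet models expect
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())
    frames += 1

print("FPS: %.1f" % (frames / (time.time() - start)))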

Iffa_Intel
Moderator

Greetings,

After conversion, the Inference Engine consumes the IR to perform inference. While the Inference Engine API itself is target-agnostic, internally it has a notion of plugins: device-specific libraries that facilitate hardware-assisted acceleration.

Performance flow: once the model is converted to IR, start with the existing Inference Engine samples to measure and tweak the performance of the network on different devices.

While consuming the same IR, each plugin performs additional device-specific optimizations at load time, so the resulting performance (and possibly accuracy) might differ between devices.
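To make the device comparison concrete, here is a minimal sketch, assuming the 2020-era Inference Engine Python API and placeholder IR file names; it loads the same IR on two plugins and times synchronous inference:

# Compare plugins on the same IR; "yolov3-tiny.xml"/".bin" are placeholders,
# and "GPU" requires a supported Intel iGPU with the GPU (clDNN) plugin installed.
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="yolov3-tiny.xml", weights="yolov3-tiny.bin")
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 416, 416]
dummy = np.zeros(shape, dtype=np.float32)

for device in ("CPU", "GPU"):
    exec_net = ie.load_network(network=net, device_name=device)
    start = time.time()
    for _ in range(100):
        exec_net.infer({input_name: dummy})
    avg_ms = (time.time() - start) / 100 * 1000
    print("%s average latency: %.1f ms" % (device, avg_ms))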

You can find more insight here: 

https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html


Hope my answer helps!

Sincerely,

Iffa

bc__y
Beginner

Thanks!

I think the OpenVINO CPU plugin itself may also consume some time, because when I use the GPU plugin, the FPS is the same as with OpenCV, about 25 FPS.

So OpenVINO accelerates larger, slower models significantly, but increases the time consumption for some smaller models.
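In case it helps others making the same comparison, one way the plugin can be switched when the IR is driven through OpenCV's DNN module is via the backend and target, sketched below with placeholder paths (this may differ from the exact setup used in this thread):

# Run the IR through OpenCV's DNN module with the Inference Engine backend;
# the target selects the OpenVINO plugin that actually executes the network.
import cv2

net = cv2.dnn.readNet("yolov3-tiny.xml", "yolov3-tiny.bin")  # placeholder IR paths
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)       # OpenVINO CPU plugin
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)  # OpenVINO GPU (iGPU) plugin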
