My CPU is an i7-8700.
Using the same test video, with the same input size of 416×416:
when I run the yolov3-tiny weights from Darknet through OpenCV, the FPS is about 25,
but when I convert the yolov3-tiny weights to an IR model, the FPS is about 20.
Why does OpenVINO consume more time?
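For reference, a minimal sketch of how the OpenCV DNN side of this comparison could be timed. The file names (`yolov3-tiny.cfg`, `yolov3-tiny.weights`, `test.mp4`) are placeholders for the actual model and video paths, not from the original post:

```python
import time

def fps(n_frames, elapsed_s):
    # Throughput metric: frames divided by wall-clock seconds.
    return n_frames / elapsed_s

def benchmark_darknet(cfg="yolov3-tiny.cfg",
                      weights="yolov3-tiny.weights",
                      video="test.mp4"):
    """Run yolov3-tiny over a video with OpenCV's DNN module and report FPS.
    All three paths are hypothetical placeholders."""
    import cv2  # imported here so the fps() helper works without OpenCV installed
    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    cap = cv2.VideoCapture(video)
    n_frames, t0 = 0, time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 416x416 input, scaled to [0, 1], BGR->RGB as Darknet models expect
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        net.forward(net.getUnconnectedOutLayersNames())
        n_frames += 1
    return fps(n_frames, time.time() - t0)
```

Timing the whole read/preprocess/forward loop like this measures end-to-end FPS, which is what the numbers in the question describe.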
After conversion, the Inference Engine consumes the IR to perform inference. While the Inference Engine API itself is target-agnostic, internally it has a notion of plugins: device-specific libraries that facilitate hardware-assisted acceleration.
Performance flow: after conversion to IR, start with the existing Inference Engine samples to measure and tune the network's performance on different devices.
While consuming the same IR, each plugin performs additional device-specific optimizations at load time, so the resulting performance can differ from device to device.
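As a sketch of what "each plugin" means here, this is how the same IR can be loaded onto different device plugins with the (2019-era) Inference Engine Python API. The `.xml`/`.bin` paths are placeholders, and the backend names in the dictionary are my annotation, not part of the thread:

```python
# Backend behind each common device plugin (OpenVINO 2019-era, my annotation).
PLUGIN_BY_DEVICE = {
    "CPU": "MKL-DNN",        # optimized math kernels for Intel CPUs
    "GPU": "clDNN",          # OpenCL kernels for Intel integrated GPUs
    "MYRIAD": "Myriad VPU",  # Neural Compute Stick
}

def load_ir(xml_path, bin_path, device="CPU"):
    """Load the same IR onto a specific device plugin.
    xml_path/bin_path are placeholder paths to the converted model."""
    from openvino.inference_engine import IECore
    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    # The device-specific optimizations mentioned above happen here, at load time.
    return ie.load_network(net, device_name=device)
```

Switching the `device` string is all it takes to move the same IR between plugins, which is why the comparison below (CPU plugin vs. GPU plugin) is a one-line change.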
You can find more insight here:
Hope my answer helps!
I think the OpenVINO CPU plugin itself may also add overhead.
When I use the GPU plugin, the FPS is the same as with OpenCV, about 25 fps.
So OpenVINO accelerates larger, slower models significantly, but can increase the time consumption for some smaller models.