My CPU is an i7-8700.
With the same test video and the same input size (416×416):
when I run the yolov3-tiny weights from Darknet through OpenCV, the FPS is about 25,
but when I convert the yolov3-tiny weights to an IR model, the FPS is about 20.
Why does OpenVINO consume more time?
Greetings,
After conversion, the Inference Engine consumes the IR to perform inference. While the Inference Engine API itself is target-agnostic, internally it has a notion of plugins: device-specific libraries that facilitate hardware-assisted acceleration.
Performance flow: once the model is converted to IR, you can start with the existing Inference Engine samples to measure and tune the network's performance on different devices.
While consuming the same IR, each plugin performs additional device-specific optimizations at load time, so the resulting performance might differ.
You can find more insight here:
https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html
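Also, since plugins optimize at load time, make sure the comparison measures only steady-state throughput and excludes one-time model-load/compile cost and warm-up iterations. A minimal, framework-agnostic sketch of such a measurement (the `infer` callable here is a placeholder for your actual OpenCV-DNN or Inference Engine inference call, not a real API):

```python
import time

def measure_fps(infer, frames, warmup=5):
    """Measure steady-state FPS, excluding warm-up iterations.

    infer  -- callable taking one frame (placeholder for the real
              OpenCV-DNN or Inference Engine inference step)
    frames -- sequence of input frames
    warmup -- iterations discarded so one-time load/optimization
              cost does not skew the result
    """
    frames = list(frames)
    for f in frames[:warmup]:        # discard warm-up runs
        infer(f)
    start = time.perf_counter()
    for f in frames[warmup:]:        # timed steady-state runs
        infer(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Usage with a dummy inference step standing in for the real network:
fps = measure_fps(lambda f: sum(f), [list(range(1000))] * 105)
print(f"{fps:.1f} FPS")
```

Running the same harness against both back ends on identical frames makes the 25-vs-20 FPS comparison apples-to-apples.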
Hope my answer helps!
Sincerely,
Iffa
Thanks!
I think the OpenVINO CPU plugin itself may also add some overhead,
because when I use the GPU plugin, the FPS is the same as OpenCV, about 25 FPS.
So OpenVINO gives a significant speed-up for larger, slower models, but can increase the time consumption for some smaller models.