Hello,
I am a junior OpenVINO C++ user.
I would like to know whether I can measure the resources used by the OpenVINO inference process, such as:
- Device RAM consumed by the model
- CPU RAM consumed by the model
- Power consumed by the inference process
- Time from ONNX loading until inference is ready to run (rough timing sketch below)
- etc.
Does any of the OpenVINO types, such as ov::Core, ov::CompiledModel, etc., provide query APIs for these kinds of metrics?
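For the load-time item specifically, I assume I could simply wrap the load/compile calls with std::chrono myself; a minimal sketch (model path and device name are placeholders):

```cpp
#include <chrono>
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    auto start = std::chrono::steady_clock::now();

    // Read the ONNX model, compile it for the target device,
    // and create an infer request so inference is ready to run.
    auto model = core.read_model("model.onnx");
    auto compiled_model = core.compile_model(model, "CPU");
    auto infer_request = compiled_model.create_infer_request();

    auto ready = std::chrono::steady_clock::now();
    std::cout << "ONNX load to inference-ready: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(ready - start).count()
              << " ms" << std::endl;
    return 0;
}
```

But I am mainly asking whether OpenVINO itself exposes such metrics.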
Thanks,
Hi OronG13,
Thanks for reaching out to us.
The OpenVINO API allows you to handle models, load and configure Inference Engine plugins based on device names, and perform inference in synchronous and asynchronous modes with an arbitrary number of infer requests.
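As a rough illustration of that flow with the C++ API (model path and device name are placeholders, and filling input tensors is omitted):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Compile the model directly from file for a chosen device.
    auto compiled_model = core.compile_model("model.onnx", "CPU");

    // Synchronous mode: a single blocking call per request.
    auto sync_request = compiled_model.create_infer_request();
    sync_request.infer();

    // Asynchronous mode: several independent requests in flight at once.
    auto req_a = compiled_model.create_infer_request();
    auto req_b = compiled_model.create_infer_request();
    req_a.start_async();
    req_b.start_async();
    req_a.wait();
    req_b.wait();
    return 0;
}
```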
We regret to tell you that the OpenVINO API does not provide functions to measure resource efficiency. If you are using Windows 10, you may observe device and CPU RAM usage in Windows Performance Monitor.
On another note, to estimate deep learning inference performance on supported devices, you may use the OpenVINO Benchmark C++ Tool. Hope it helps.
Regards,
Wan
Thank you Wan,
Based on your knowledge:
For CPU RAM usage it is clear how to measure it.
But is there a way to measure iGPU, GPU, VPU, and CPU power consumption in order to compute a watts-per-frame metric?
And the model's memory footprint (after model optimization) for all devices except CPU?
I will look into the Benchmark C++ Tool, thanks.
Regards,
Hi OronG13,
As for now, we are sorry to tell you that there are no functions in the OpenVINO API that measure power consumption or model memory consumption for integrated GPU, GPU, VPU, and CPU.
However, there are third-party open source libraries that are able to do it.
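For example, on Linux the kernel's powercap interface (intel_rapl) exposes a cumulative energy counter that you could read before and after a batch of inferences to estimate joules per frame for the CPU package. A rough sketch, assuming Linux, an Intel CPU with the intel_rapl driver loaded, and read permission on the counter (it covers the CPU package only, not discrete GPU or VPU power, and the sysfs path may differ on your system):

```cpp
#include <fstream>
#include <iostream>

// Cumulative CPU package energy in microjoules; the counter wraps
// around at max_energy_range_uj on long runs.
long long read_energy_uj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    long long uj = 0;
    f >> uj;
    return uj;
}

int main() {
    const int frames = 100;

    long long before = read_energy_uj();
    for (int i = 0; i < frames; ++i) {
        // Run one inference per iteration here, e.g. infer_request.infer();
    }
    long long after = read_energy_uj();

    std::cout << "CPU package energy per frame: "
              << (after - before) / 1e6 / frames << " J" << std::endl;
    return 0;
}
```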
Regards,
Wan
Hi OronG13,
Thanks for your question.
This thread will no longer be monitored since we have provided the information.
If you need any additional information from Intel, please submit a new question.
Best regards,
Wan