Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Measure Actual Inference Time without USB Overhead

Dominik
Beginner

Hi everyone,

I am trying to measure the time required for inference on my NCS2. At the moment I use an approach similar to the `benchmark_app` sample:

auto start = std::chrono::steady_clock::now();
infer_request.Infer();   // synchronous inference call
auto elapsed = std::chrono::steady_clock::now() - start;

I reckon that this method also measures the time required to transmit the input/output blobs to/from the NCS2 via USB. In my case I want to know the chip's inference time excluding the communication overhead.

Do you have any idea how to achieve this?

Thanks in advance.

Best Regards
Dominik

3 Replies
Rizal_Intel
Moderator

Hi Dominik,


Try running openvino_20xx.x.xxx\deployment_tools\tools\benchmark_tool\benchmark_app.py with the following arguments:


benchmark_app.py -m <your model> -report_type detailed_counters -d MYRIAD


This should generate a benchmark_detailed_counters_report.csv containing the execution time of each layer.


Regards,

Rizal


Dominik
Beginner

I forgot to mention that I use C++, but the same benchmark_app also exists in C++.

For future reference, the method I was looking for is:

inferRequest.GetPerformanceCounts()


It returns the execution time of each layer in the network, and from that I can compute the total on-device runtime.

For more details on its usage, search for the method's name in the C++ benchmark_app source.
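
In case it helps others, here is a minimal sketch of how I sum those counters, assuming the 2020.x Inference Engine C++ API; the model path, device name, and input handling are placeholders, and performance counting has to be enabled via KEY_PERF_COUNT when loading the network:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");   // placeholder model path
    auto exec_network = core.LoadNetwork(
        network, "MYRIAD",
        {{InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
          InferenceEngine::PluginConfigParams::YES}});
    auto infer_request = exec_network.CreateInferRequest();

    // ... fill the input blobs here ...
    infer_request.Infer();

    // Sum the per-layer times reported by the device itself,
    // which excludes the USB transfer of the input/output blobs.
    long long total_us = 0;
    for (const auto& entry : infer_request.GetPerformanceCounts()) {
        const auto& info = entry.second;
        if (info.status == InferenceEngine::InferenceEngineProfileInfo::EXECUTED)
            total_us += info.realTime_uSec;
    }
    std::cout << "On-device inference time: " << total_us << " us" << std::endl;
    return 0;
}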

 

Rizal_Intel
Moderator

Hi Dominik,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Regards,

Rizal

