Hi everyone,
I am trying to measure the time required for inference on my NCS2. At the moment I use an approach similar to the `benchmark_app` sample:
auto start = Time::now();          // Time: e.g. std::chrono::high_resolution_clock
net.Infer();                       // synchronous inference request
auto time = Time::now() - start;   // wall-clock time for the whole request
I reckon that this method also measures the time needed to transfer the input/output blobs to/from the NCS2 over USB. In my case I want to know the chip's inference time excluding this communication overhead.
Do you have any idea how to achieve this?
Thanks in advance.
Best Regards
Dominik
Hi Dominik,
Try using the openvino_20xx.x.xxx\deployment_tools\tools\benchmark_tool\benchmark_app.py with the following arguments:
benchmark_app.py -m <your model> -report_type detailed_counters -d MYRIAD
This should generate a benchmark_detailed_counters_report.csv with the execution time of every layer.
Regards,
Rizal
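For C++ applications, the per-layer counters behind that report can also be queried directly through the Inference Engine API, but performance counting has to be enabled when the network is loaded onto the device. A minimal sketch, assuming a 2020.x release of the InferenceEngine C++ API (the model path is a placeholder):

#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // Load the model onto the NCS2 (MYRIAD) with per-layer performance counting enabled.
    Core core;
    CNNNetwork network = core.ReadNetwork("model.xml");  // placeholder path
    ExecutableNetwork exec_net = core.LoadNetwork(
        network, "MYRIAD",
        {{PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES}});
    InferRequest infer_request = exec_net.CreateInferRequest();
    // ... fill the input blobs, then call infer_request.Infer() as usual ...
    return 0;
}

Without the KEY_PERF_COUNT option the plugin typically does not collect per-layer timings, so the per-layer query shown further below would return nothing useful.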
I forgot to mention that I use C++, but the same benchmark_app also exists in C++.
For future reference, the method I was searching for returns the computation time of each layer in the network, and from that information I can calculate the total runtime (see the sketch below).
For more details on its usage, search for the method's name in the C++ benchmark_app.
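A minimal sketch of that per-layer query, assuming the method meant here is InferRequest::GetPerformanceCounts() (the one the C++ benchmark_app uses for its detailed counters); the helper name DeviceTimeMs is made up for illustration:

#include <inference_engine.hpp>
#include <map>
#include <string>

using namespace InferenceEngine;

// Sum the on-device execution time of every layer that actually ran.
// Call this after Infer() has completed; performance counting must have
// been enabled (KEY_PERF_COUNT = YES) when the network was loaded.
double DeviceTimeMs(InferRequest& request) {
    std::map<std::string, InferenceEngineProfileInfo> counts =
        request.GetPerformanceCounts();
    long long total_us = 0;
    for (const auto& layer : counts) {
        if (layer.second.status == InferenceEngineProfileInfo::EXECUTED)
            total_us += layer.second.realTime_uSec;
    }
    return total_us / 1000.0;  // microseconds -> milliseconds
}

Because these counters come from the device execution itself, their sum should exclude the USB transfer of the input/output blobs, which is exactly the overhead the wall-clock measurement above includes.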
Hi Dominik,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Rizal
