Hi,
I installed the latest OpenVINO release (2020.2) and have tested some of the Inference Engine demos. However, I have a few queries, as follows:
(1) Can I run object detection inference on the Intel GPU and simultaneously run a tracker (e.g. a KLT tracker) on the returned detections on the VPU?
(2) When we run the Model Optimizer on a model, can we find out which unnecessary layers are removed during the optimization process?
Regards
Kelvin
Hi Kelvin,
You can try modifying the demo with the tracker of your choice and validate it.
It is also important to note that individual components of a demo may not support running on different plugins (devices). However, you can explore and validate this on your end.
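For illustration, a demo selects its single target device with the -d flag, so splitting detection (GPU) and tracking (VPU) across devices would require modifying the demo source. A minimal sketch of device selection (the binary name, input file, and model path below are placeholders; adjust them to your build and models):

```shell
# Run the whole demo on the GPU plugin (placeholder paths):
./object_detection_demo_ssd_async -i input.mp4 -m person-detection-retail-0013.xml -d GPU

# The same demo targeting the VPU (MYRIAD) plugin instead:
./object_detection_demo_ssd_async -i input.mp4 -m person-detection-retail-0013.xml -d MYRIAD
```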
For your second question, you can use the -pc option to enable a per-layer performance report.
More information is available at the following page:
https://docs.openvinotoolkit.org/2020.3/_demos_object_detection_demo_ssd_async_README.html#running
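For example, adding -pc to a demo run prints per-layer execution statistics after inference (input and model paths below are placeholders):

```shell
# Placeholder paths; -pc enables the per-layer performance counters report
./object_detection_demo_ssd_async -i input.mp4 -m model.xml -d GPU -pc
```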
Additionally, you can run the Benchmark C++ Tool, which reports the number of executed iterations, total execution duration, latency, and throughput, and can also produce a statistics report.
More information is available at the following page:
https://docs.openvinotoolkit.org/2020.3/_inference_engine_samples_benchmark_app_README.html
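A sketch of a benchmark_app invocation (the model path is a placeholder; see the README above for the full list of options):

```shell
# Placeholder model path; -report_type average_counters writes per-layer
# average execution statistics to a CSV report
./benchmark_app -m model.xml -d GPU -niter 100 -report_type average_counters
```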
Regards,
Munesh