YAP__KELVIN
Beginner

Using both Intel GPU and VPU simultaneously

Hi,

I installed your latest OpenVINO 2020.2 release and have tested some of the Inference Engine demos. However, I have a few queries, as follows:

(1) Am I able to run the object detection inference on the Intel GPU and simultaneously run a tracker (e.g., a KLT tracker) on the returned detected objects on the VPU?

(2) When we run the Model Optimizer on a model, can we know which unnecessary layers are being removed during the optimization process?

 

Regards

Kelvin

Munesh_Intel
Moderator

Hi Kelvin,

You can try modifying the demo with the tracker of your choice and validate it.

Additionally, please note that the individual components of a demo do not necessarily support being run on different plugins. However, you can explore and validate this on your end as well.
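As a rough illustration, here is a minimal sketch using the 2020.x Inference Engine Python API. The model file names are placeholders and the second network is hypothetical (a KLT tracker is not itself a deep learning model), but it shows how two networks can target the GPU and the VPU (MYRIAD) plugins independently:

from openvino.inference_engine import IECore

ie = IECore()

# Detection network on the integrated GPU plugin (model paths are placeholders).
det_net = ie.read_network(model="detector.xml", weights="detector.bin")
det_exec = ie.load_network(network=det_net, device_name="GPU")

# A second, hypothetical network (for example, an embedding model used by a tracker)
# on the VPU through the MYRIAD plugin.
trk_net = ie.read_network(model="tracker.xml", weights="tracker.bin")
trk_exec = ie.load_network(network=trk_net, device_name="MYRIAD")

# Each executable network can then be driven independently, for example with
# asynchronous infer requests, so the GPU and the VPU work in parallel.

Note that a classical tracking step such as KLT would typically run in your own application code on the CPU; only network inference is dispatched to the GPU or MYRIAD plugins.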

For your second question, you can use the -pc option to enable a per-layer performance report.

More information is available at the following page:

https://docs.openvinotoolkit.org/2020.3/_demos_object_detection_demo_ssd_async_README.html#running
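For example (the input file and model path below are placeholders), an illustrative invocation of the demo with the per-layer report enabled might look like:

./object_detection_demo_ssd_async -i input.mp4 -m detector.xml -d GPU -pc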

Additionally, you can run the Benchmark C++ Tool, which outputs the number of executed iterations, total execution duration, latency, throughput, and a statistics report.

More information is available at the following page:

https://docs.openvinotoolkit.org/2020.3/_inference_engine_samples_benchmark_app_README.html
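For example (the model path is a placeholder), a basic run on the GPU with performance counters enabled might look like:

./benchmark_app -m model.xml -d GPU -pc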

 

Regards,

Munesh

 
