Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to select a specific VPU (Myriad X) device ID in C++?

ABoch5
Beginner

If I have several Myriad X VPUs connected, how can I select a specific device ID in C++?

 

  1. In general, if I want to use multi-stick computation from one application, should I select different VPU device IDs and create a network for each?
  2. Or should I just create two networks, and will they be allocated to two different VPUs?

 

 

3 Replies
Aroop_B_Intel
Employee

Hi ABoch5,

 

1. In general, if I want to use multi-stick computation from one application, should I select different VPU device IDs and create a network for each?

 

No, the OpenVINO toolkit will automatically manage the workload across multiple NCS devices, so there is no need to select different VPU device IDs. You can create a single Plugin instance that manages all the MYRIAD devices on your system.
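
For illustration, here is a minimal sketch of that single-plugin setup, assuming the same pre-2020 Inference Engine API that the demos use (the IR file names below are placeholders):

    // Minimal sketch: one MYRIAD plugin instance handles all connected sticks.
    // Assumes the classic Inference Engine API; "model.xml"/"model.bin" are placeholders.
    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    int main() {
        // Load the MYRIAD plugin once; it dispatches work across all attached devices.
        InferencePlugin plugin = PluginDispatcher().getPluginByDevice("MYRIAD");

        // Read the IR model once.
        CNNNetReader netReader;
        netReader.ReadNetwork("model.xml");
        netReader.ReadWeights("model.bin");

        // No device ID is passed here; the plugin picks a free device.
        ExecutableNetwork network = plugin.LoadNetwork(netReader.getNetwork(), {});
        return 0;
    }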

 

2. Or should I just create two networks, and will they be allocated to two different VPUs?

 

Yes, for maximum performance, you should create an ExecutableNetwork instance for each device. Note that it is not possible to tie a particular network directly to a particular device.
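
Continuing the sketch above (plugin and netReader as before), loading the network twice gives the plugin one ExecutableNetwork per stick, and you keep both busy by running one infer request per network:

    // Continuation of the sketch above; assumes plugin and netReader already exist
    // and that input blobs are filled before StartAsync() is called.
    ExecutableNetwork network1 = plugin.LoadNetwork(netReader.getNetwork(), {});
    ExecutableNetwork network2 = plugin.LoadNetwork(netReader.getNetwork(), {});

    InferRequest::Ptr request1 = network1.CreateInferRequestPtr();
    InferRequest::Ptr request2 = network2.CreateInferRequestPtr();

    // Launch both requests asynchronously so the two sticks work in parallel,
    // then wait for each result.
    request1->StartAsync();
    request2->StartAsync();
    request1->Wait(IInferRequest::WaitMode::RESULT_READY);
    request2->Wait(IInferRequest::WaitMode::RESULT_READY);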

 

Take a look at the "Multiple NCS Devices" section of this webpage for more information: https://software.intel.com/en-us/articles/transitioning-from-intel-movidius-neural-compute-sdk-to-openvino-toolkit

 

Regards,

Aroop

ABoch5
Beginner

 

Thanks!

 

I changed this code: https://github.com/opencv/open_model_zoo/blob/e458c1f0407d0303e36be4828bb963a67d6d050a/demos/object_detection_demo_yolov3_async/main.cpp#L287-L293

 

    ExecutableNetwork network = plugin.LoadNetwork(netReader.getNetwork(), {});
    // -----------------------------------------------------------------------------------------------------
    // --------------------------- 5. Creating infer request -----------------------------------------------
    InferRequest::Ptr async_infer_request_next = network.CreateInferRequestPtr();
    InferRequest::Ptr async_infer_request_curr = network.CreateInferRequestPtr();

 

to this code:

 

    ExecutableNetwork network = plugin.LoadNetwork(netReader.getNetwork(), {});
    ExecutableNetwork network2 = plugin.LoadNetwork(netReader.getNetwork(), {});
    // -----------------------------------------------------------------------------------------------------
    // --------------------------- 5. Creating infer request -----------------------------------------------
    InferRequest::Ptr async_infer_request_next = network2.CreateInferRequestPtr();
    InferRequest::Ptr async_infer_request_curr = network.CreateInferRequestPtr();

Then I ran the object_detection_demo_yolov3_async application and pressed TAB to switch to async execution.

But it did not speed up the execution.

 

How can I see usage (in %) for each VPU?

Aroop_B_Intel
Employee

Hi,

 

I just responded to your question in the new thread you opened.

 

Regards,

Aroop
