Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision-related on Intel® platforms.

Multiple NCS2 working together on an inference

BuuVo
New Contributor I

Hi everybody,

I am doing a project to classify images with the NCS2. I successfully classified images when I used only one NCS2, and the inference time is ~0.37 s.

The inference time is calculated with the following lines of code:

time_pre = time.time()
outputs = exec_net.infer({'data': inputs})
time_post = time.time()
inference_time = time_post - time_pre

However, when I used 2 NCS2 and configured the OpenVINO IECore as below:

plugin = IECore()
net = plugin.read_network(model=model_xml, weights=model_bin)
exec_net = plugin.load_network(network=net, device_name='HETERO:MYRIAD.1.1.2-ma2480,MYRIAD.1.2-ma2480')

The inference time doesn't decrease. It is still ~0.37 s. I also checked the query-layers map, and all layers are always assigned to one NCS2 device (please refer to the image).

Screenshot from 2020-10-13 09-54-02.png

How can I change the setting to speed up my system by using multiple NCS2?

I referred to this tutorial, but I didn't succeed:

https://www.coursera.org/lecture/int-openvino/using-multiple-devices-s976F

Thank you.

1 Solution
IntelSupport
Community Manager

 

Hi BuuVo,

 

The heterogeneous plugin enables inference of one network across several devices. The purposes of executing a network in heterogeneous mode are as follows:

·      To utilize the accelerator's power by calculating the heaviest parts of the network on the accelerator while executing unsupported layers on fallback devices such as the CPU

·      To utilize all available hardware more efficiently during a single inference

 

However, since you are using the HETERO plugin with identical devices, it is only natural that execution is done entirely on the first device.

A single inference can be split into multiple inferences handled by separate devices. Please refer to the sections "Annotation of Layers per Device and Default Fallback Policy", "Details of Splitting Network and Execution", and "Execution Precision" at the following link:

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_HETERO.html
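To illustrate the splitting idea: HETERO only distributes work when layers carry different device affinities, and with two identical MYRIAD sticks every layer is supported by the first device, so everything lands there. Below is a minimal sketch of assigning affinities manually before loading a HETERO network. The device names are taken from the original post; the layer names and the round-robin policy are illustrative assumptions, not the plugin's default, and the OpenVINO-specific calls are shown only as comments since they reflect the 2020-era Python API.

```python
# Sketch: manual layer-affinity assignment for a HETERO network.
# The pure helper below is testable without OpenVINO installed;
# layer names and device IDs are placeholders from the original post.

def round_robin_affinity(layer_names, devices):
    """Map each layer name to a device in round-robin order (illustrative policy)."""
    return {name: devices[i % len(devices)]
            for i, name in enumerate(layer_names)}

if __name__ == "__main__":
    devices = ["MYRIAD.1.1.2-ma2480", "MYRIAD.1.2-ma2480"]
    affinity = round_robin_affinity(["conv1", "relu1", "conv2", "fc"], devices)
    print(affinity)
    # With the Inference Engine Python API of that era, the affinities
    # would then be applied roughly as:
    #   from openvino.inference_engine import IECore
    #   ie = IECore()
    #   net = ie.read_network(model=model_xml, weights=model_bin)
    #   for name, dev in affinity.items():
    #       net.layers[name].affinity = dev
    #   exec_net = ie.load_network(net, "HETERO:" + ",".join(devices))
```

Note that splitting a single inference across two sticks adds data-transfer overhead at every split point, so latency per frame may not improve much.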

 

Please also refer to the following link for an example implementation.

https://github.com/yuanyuanli85/open_model_zoo/tree/ncs2/demos/python_demos

 

You can also go through this community discussion, which might be helpful:

https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/How-to-use-Multi-Stick-NCS2-Parallelization-of-reasoning/td-p/1181316

 

Please refer to the Multi-Device Plugin and Hello Query Device C++ Sample pages for additional relevant information.

https://docs.openvinotoolkit.org/latest/openvino_inference_engine_samples_hello_query_device_README.html

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html
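For identical sticks, the MULTI plugin with asynchronous requests is usually the simpler route: throughput improves because several inference requests are in flight at once, even though each individual request still takes about the same time. A minimal sketch follows; the device names come from the original post, while `num_requests=2` and the request loop are assumptions for illustration, not values from this thread.

```python
# Sketch: MULTI-device asynchronous inference. The pure helper builds
# the device string and is testable without OpenVINO; the commented part
# shows roughly how the 2020-era Inference Engine API would be driven.

def multi_device_name(devices):
    """Build a MULTI plugin device string from a list of device IDs."""
    return "MULTI:" + ",".join(devices)

if __name__ == "__main__":
    devices = ["MYRIAD.1.1.2-ma2480", "MYRIAD.1.2-ma2480"]
    print(multi_device_name(devices))
    # Rough usage with the Inference Engine Python API of that era:
    #   from openvino.inference_engine import IECore
    #   ie = IECore()
    #   net = ie.read_network(model=model_xml, weights=model_bin)
    #   exec_net = ie.load_network(net, multi_device_name(devices),
    #                              num_requests=2)
    #   for i, batch in enumerate(batches):
    #       req = exec_net.requests[i % 2]
    #       req.wait()                        # finish any previous job on this slot
    #       req.async_infer({'data': batch})  # results collected after a later wait()
```

The key point is that per-request latency stays around 0.37 s, but with two requests in flight the sticks work in parallel, so frames per second roughly doubles.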

 

Regards,

Aznie

 

 

5 Replies
BuuVo
New Contributor I

Any help?

IntelSupport
Community Manager

Hi BuuVo,


We are currently working on this. We will get back to you shortly.


Regards,

Aznie


IntelSupport
Community Manager

Hi Buu Vo,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

