Samuel
Beginner

Running a set of models on multiple NCS 2

Hi everyone,

I am trying to run multiple models (~10) on the NCS 2 and would like to speed things up by using two sticks. Following some examples, I found that to use two sticks I need to create two instances of IEPlugin (in different threads). Since I need to run ~10 models, I loaded the 10 executable networks onto each instance of IEPlugin.

The problem is that I can only load a limited number of executable networks, and an error occurs as soon as I add one more. For example, I have two sticks, S1 and S2, and three models, M1, M2, and M3.

Case 1 (no error): load M1, M2, and M3 onto S1 or S2 only, but not both.

Case 2 (no error): load M1, M2, and M3 onto S1 and also M1, M2, and M3 onto S2.

Case 3 (error): start from case 2 and load one more model onto either stick.

Is there any way for me to utilize both sticks to perform inference? Thanks.

Shubha_R_Intel
Employee

Dear Samuel, I think the IDZ thread below will help you. That poster initialized only one IEPlugin for multiple NCS2 devices.

https://software.intel.com/en-us/forums/computer-vision/topic/820712
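
For readers who land here, a minimal sketch of that single-plugin approach, assuming the 2021.x openvino.inference_engine Python API (replaced by openvino.runtime in 2022) and hypothetical IR files model1.xml/.bin etc.; the exact device names your sticks report may differ:

from openvino.inference_engine import IECore  # 2021.x-era Python API

ie = IECore()  # a single Core/plugin object can drive several sticks

# With two NCS2 sticks plugged in, available_devices usually lists one
# entry per stick (e.g. "MYRIAD.1.2-ma2480") in addition to plain "MYRIAD".
myriads = [d for d in ie.available_devices if d.startswith("MYRIAD")]
print("MYRIAD devices found:", myriads)

# Hypothetical IR file names; replace with your own models.
model_names = ["model1", "model2", "model3"]

exec_nets = []
for i, name in enumerate(model_names):
    net = ie.read_network(model=name + ".xml", weights=name + ".bin")
    device = myriads[i % len(myriads)]  # spread the models over the sticks round-robin
    exec_nets.append(ie.load_network(network=net, device_name=device))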

Hope it helps,

Thanks,

Shubha

SSola8
New Contributor I

This is actually a very important question to talk about. For folks who come here looking for a solution, I want to point out a few things:

1. One device -> one IExecutableNetwork (loaded model). (ref)

2. So in order to run your models, create that number of IExecutableNetwork objects via InferenceEngine::Core::LoadNetwork, depending on the devices you have (like your NCS2 sticks).

3. One more cool thing about OpenVINO is that you can use either sync or async inference. For async operation, you need to set the num_requests parameter, which is simply the number of inference requests the IExecutableNetwork can serve in parallel.

4. Sample code to understand: here (see also the sketch after this list).
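
In case the link above goes stale, here is a short sketch of points 2 and 3, again assuming the 2021.x openvino.inference_engine Python API; the model path, input name "data", and input shape are placeholders, not values from this thread:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model1.xml", weights="model1.bin")  # placeholder IR files

# Point 2: one ExecutableNetwork per loaded model/device combination.
# Point 3: num_requests allocates that many InferRequest slots for async use.
exec_net = ie.load_network(network=net, device_name="MYRIAD", num_requests=2)

input_name = "data"                                      # placeholder: your network's input name
frames = [np.zeros((1, 3, 224, 224), dtype=np.float32)   # placeholder shape and dummy data
          for _ in range(2)]

# Start both requests without blocking ...
for req_id, frame in enumerate(frames):
    exec_net.start_async(request_id=req_id, inputs={input_name: frame})

# ... then wait for each one and read its outputs.
for req_id in range(len(frames)):
    exec_net.requests[req_id].wait(-1)
    outputs = exec_net.requests[req_id].output_blobs  # dict of output name -> Blob
    print(req_id, {name: blob.buffer.shape for name, blob in outputs.items()})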

Hope it helps many.

 
