Multiple models on NCS2 in Async Fashion

I have gone through many sample codes and read about the whole OpenVINO Python API. My concern is how to load multiple models asynchronously on one NCS2. I know from here that each IExecutableNetwork instance tries to allocate a new device on InferenceEngine::Core::LoadNetwork, but how can we run multiple models on one NCS2 asynchronously (not pipelined), the same way we run one model on one NCS2 by changing the num_requests parameter? Is there anything for that here? Is it possible on NCS1 or NCS2?

In short, is running multiple models on one NCS2 asynchronously (parallel processing) possible?

 

3 Replies

Hi Solanki, Saurav,

Yes, you can deploy multiple models in async fashion on an NCS2.

Each model is loaded the same way as a single model.

You can set the number of infer requests for each model separately using the num_requests parameter.

Kindly refer to the security_barrier_camera_demo, which runs three models in async mode and can also be deployed on a single NCS2.
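A minimal sketch of that pattern with the (pre-2022) openvino.inference_engine Python API. The model file names here are placeholders for your own IRs, and this needs an actual NCS2 ("MYRIAD" device) attached to run:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read two IRs (placeholder paths -- substitute your own models).
net_a = ie.read_network(model="model_a.xml", weights="model_a.bin")
net_b = ie.read_network(model="model_b.xml", weights="model_b.bin")

# Load both networks onto the SAME Myriad device. Each ExecutableNetwork
# gets its own pool of infer requests via num_requests.
exec_a = ie.load_network(network=net_a, device_name="MYRIAD", num_requests=2)
exec_b = ie.load_network(network=net_b, device_name="MYRIAD", num_requests=2)

input_a = next(iter(net_a.input_info))
input_b = next(iter(net_b.input_info))

# Dummy inputs shaped to each network (replace with real frames).
frame_a = np.zeros(net_a.input_info[input_a].input_data.shape, dtype=np.float32)
frame_b = np.zeros(net_b.input_info[input_b].input_data.shape, dtype=np.float32)

# Kick off requests for both models without blocking; the Myriad plugin
# interleaves them on the one stick.
exec_a.start_async(request_id=0, inputs={input_a: frame_a})
exec_b.start_async(request_id=0, inputs={input_b: frame_b})

# Collect results once each request has finished (wait(-1) blocks).
if exec_a.requests[0].wait(-1) == 0:
    result_a = exec_a.requests[0].output_blobs
if exec_b.requests[0].wait(-1) == 0:
    result_b = exec_b.requests[0].output_blobs
```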

 

Best Regards,

Surya


Hi @Chauhan, Surya Pratap Singh,

Can you give a related Python example as well?


Hi Solanki, Saurav,

You can refer to the action recognition demo (Python), which runs two models in async mode and can also be deployed on a single NCS2.
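If it helps to see the scheduling idea without any hardware, here is a device-free, pure-Python analogy of the async pattern (start work, keep going, wait later). This is NOT the OpenVINO API; a single ThreadPoolExecutor worker stands in for the one NCS2, and `infer` is a placeholder for real inference:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# One worker thread stands in for the single NCS2; submitted jobs from
# both "models" are interleaved on it instead of each blocking the caller.
device = ThreadPoolExecutor(max_workers=1)

def infer(model_name, frame):
    # Placeholder for real inference work on the stick.
    time.sleep(0.01)
    return f"{model_name}:{frame}"

# start_async analogue: submit() returns a future immediately.
req_a = device.submit(infer, "model_a", "frame0")
req_b = device.submit(infer, "model_b", "frame0")

# wait(-1) analogue: block until each request completes.
print(req_a.result())  # model_a:frame0
print(req_b.result())  # model_b:frame0
```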

 

Best Regards,

Surya
