I have gone through many code samples and read through the OpenVINO Python API. My concern is how to load multiple models in async fashion on one NCS2. I know from here that each IExecutableNetwork instance tries to allocate a new device on InferenceEngine::Core::LoadNetwork, but how can we allocate multiple models on one NCS2 in async fashion (not pipelined), the same way we do for a single model on one NCS2 by changing the num_requests parameter? Is there anything like that here? Is it possible on NCS1 or NCS2?
In short: is running multiple models on one NCS2 in async fashion (parallel processing) possible?
Hi Solanki, Saurav,
Yes, you can deploy multiple models in async fashion on a single NCS2.
It is done the same way as loading a single model: you can set the number of infer requests for each model separately using the num_requests parameter.
Kindly refer to the security_barrier_camera_demo, which runs three models in async mode and can be deployed on a single NCS2.
Best Regards,
Surya
Hi @Chauhan, Surya Pratap Singh,
Can you give a related example in Python as well?
Hi Solanki, Saurav,
You can refer to the action recognition demo (Python), which runs two models in async mode and can also be deployed on a single NCS2.
Best Regards,
Surya
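To make the idea concrete, here is a minimal sketch of loading two models asynchronously on a single NCS2, assuming the legacy Inference Engine Python API (IECore) that the demos above use. The model file names are placeholders, and the dummy zero inputs stand in for real preprocessed frames:

```python
# Sketch: two models running concurrently on one NCS2 (MYRIAD device),
# assuming the legacy OpenVINO Inference Engine Python API (IECore).
# Model paths are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read both models (placeholder file names).
net_a = ie.read_network(model="model_a.xml", weights="model_a.bin")
net_b = ie.read_network(model="model_b.xml", weights="model_b.bin")

# Each load_network call creates its own executable network on the same
# stick; num_requests sets the number of infer requests per model.
exec_a = ie.load_network(network=net_a, device_name="MYRIAD", num_requests=2)
exec_b = ie.load_network(network=net_b, device_name="MYRIAD", num_requests=2)

# Dummy inputs shaped to each model's first input; real code would feed
# preprocessed camera frames here.
in_a = next(iter(net_a.input_info))
in_b = next(iter(net_b.input_info))
frame_a = np.zeros(net_a.input_info[in_a].input_data.shape, dtype=np.float32)
frame_b = np.zeros(net_b.input_info[in_b].input_data.shape, dtype=np.float32)

# Kick off both models asynchronously; they run in parallel on the device.
exec_a.requests[0].async_infer({in_a: frame_a})
exec_b.requests[0].async_infer({in_b: frame_b})

# wait(-1) blocks until the request completes; a return of 0 means OK.
if exec_a.requests[0].wait(-1) == 0 and exec_b.requests[0].wait(-1) == 0:
    out_a = exec_a.requests[0].output_blobs
    out_b = exec_b.requests[0].output_blobs
```

With more than one request per model (num_requests=2 above), you can also overlap frames of the same model by round-robining over exec_a.requests, just as the single-model async samples do.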
