I have gone through many code samples and read through the OpenVINO Python API. My concern is how to load multiple models in async fashion on one NCS2. I know from here that each IExecutableNetwork instance tries to allocate a new device on InferenceEngine::Core::LoadNetwork, but how can we allocate multiple models on one NCS2 in async fashion (not pipelined), the same way we run one model on one NCS2 by changing the num_requests parameter? Is there anything for that here? Is it possible on NCS1 or NCS2?
In short: is running multiple models on an NCS2 in async fashion, i.e. parallel processing, possible?
Hi Solanki, Saurav,
Yes, you can deploy multiple models in async fashion on an NCS2.
It can be done in the same way as loading a single model.
You can set the number of infer requests for each model separately using the num_requests parameter.
Kindly refer to the security_barrier_camera_demo, which runs three models in async mode and can also be deployed on a single NCS2.
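To make the pattern concrete, here is a minimal sketch of loading two models onto the same MYRIAD device and running their infer requests asynchronously. It uses the classic `openvino.inference_engine.IECore` Python API; the model file names (`model_a.xml`, `model_b.xml`) are placeholders, and running it requires an NCS2 plugged in:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read both IR models (file names are placeholders for your own models).
net_a = ie.read_network(model="model_a.xml", weights="model_a.bin")
net_b = ie.read_network(model="model_b.xml", weights="model_b.bin")

# Load both networks onto the same MYRIAD (NCS2) device. Each resulting
# ExecutableNetwork gets its own pool of infer requests via num_requests.
exec_a = ie.load_network(network=net_a, device_name="MYRIAD", num_requests=2)
exec_b = ie.load_network(network=net_b, device_name="MYRIAD", num_requests=2)

# Look up each network's input name and shape to build dummy input blobs.
in_a = next(iter(net_a.input_info))
in_b = next(iter(net_b.input_info))
blob_a = np.zeros(net_a.input_info[in_a].input_data.shape, dtype=np.float32)
blob_b = np.zeros(net_b.input_info[in_b].input_data.shape, dtype=np.float32)

# Kick off one request per model without blocking; both are queued on the
# same NCS2 and the plugin schedules them.
exec_a.start_async(request_id=0, inputs={in_a: blob_a})
exec_b.start_async(request_id=0, inputs={in_b: blob_b})

# Wait for completion and collect the outputs.
if exec_a.requests[0].wait(-1) == 0:
    result_a = exec_a.requests[0].output_blobs
if exec_b.requests[0].wait(-1) == 0:
    result_b = exec_b.requests[0].output_blobs
```

With `num_requests=2` per model you can keep a second request of each model in flight while processing the first, just as in the single-model async pattern.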