From here, I know that model loading and inference are performed internally, but is there any way I can target a specific Movidius device for my model? I need to distribute the model inference across targets. I have three models to work with. On the CPU host I loaded all three models across 2 NCS2 sticks and took inference successfully, but the same code throws NC_ERROR when running on a Raspberry Pi 3B. I am really confused about why that is happening. I need to debug which model is loaded on which device so that I can find out what went wrong. Please help.
I solved it. First I plugged in one NCS2 and let OpenVINO load two models on it and start inferencing; then I plugged in the other NCS2 and did the same for the third model. It is running successfully.
I still want to know how that happened internally, and whether I can configure which NCS2 each model gets loaded on.
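For anyone who lands here: a sketch of how you could pin models to specific sticks, assuming the OpenVINO Inference Engine Python API (`IECore`). When more than one NCS2 is plugged in, `ie.available_devices` lists each stick under its own name (e.g. `MYRIAD.1.2-ma2480`), and you can pass that full name to `load_network` instead of the generic `"MYRIAD"`. The model filenames and the round-robin helper below are my own illustration, not part of OpenVINO:

```python
def assign_round_robin(models, devices):
    """Map each model to one of the available devices, round-robin.

    Pure-Python helper (my own, not an OpenVINO API): with 3 models and
    2 sticks, the first and third model share the first stick.
    """
    if not devices:
        raise ValueError("no inference devices found")
    return {m: devices[i % len(devices)] for i, m in enumerate(models)}


# With both sticks plugged in, something like this should let you target
# each one explicitly instead of letting the MYRIAD plugin pick
# (commented out because it needs OpenVINO and the hardware attached):
#
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   myriads = [d for d in ie.available_devices if d.startswith("MYRIAD")]
#   plan = assign_round_robin(["det.xml", "cls.xml", "seg.xml"], myriads)
#   for model_xml, dev in plan.items():
#       net = ie.read_network(model=model_xml)
#       exec_net = ie.load_network(network=net, device_name=dev)
#       print(model_xml, "->", dev)  # logs which stick got which model

if __name__ == "__main__":
    # Dry run with made-up device names in the MYRIAD naming style:
    print(assign_round_robin(["det", "cls", "seg"],
                             ["MYRIAD.1.2-ma2480", "MYRIAD.1.4-ma2480"]))
```

Printing the model-to-device plan this way also gives you the debugging visibility asked about above: you know exactly which model went to which stick, instead of relying on the plugin's internal placement.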