I want to use an SSD model on a Raspberry Pi to detect specific objects and an MLPN to locate them: the SSD draws a bounding box around the target, and the MLPN converts the target's image-plane coordinates into real-world coordinates.
Right now I load the SSD model onto the NCS2 through OpenCV (cv.dnn.readNetFromTensorflow), while the MLPN model runs on the Raspberry Pi's CPU. With this setup the program only reaches 1.5 FPS.
What should I do if I want to run both the SSD model and the MLPN model on the Neural Compute Stick 2?
Thank you for reading my question.
You just need to load both models, and if the memory on the NCS2 can hold them, it should run.
Each IExecutableNetwork instance tries to allocate new device on InferenceEngine::Core::LoadNetwork, but if all available devices are already allocated it will use the one with the minimal number of uploaded networks. The maximum number of networks single device can handle depends on device memory capacity and the size of the networks.
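As a rough sketch of what that looks like in practice (file names are placeholders, and this assumes the OpenVINO Python API, openvino.inference_engine, is installed), you can read both IR models and load each of them to the same "MYRIAD" device:

```python
def load_both_on_ncs2(ssd_xml="ssd.xml", ssd_bin="ssd.bin",
                      mlp_xml="mlpn.xml", mlp_bin="mlpn.bin"):
    """Load the SSD and MLPN IR models onto the same NCS2 ("MYRIAD") device."""
    # Imported inside the function so the sketch can be read without
    # OpenVINO installed; on the Pi you need the OpenVINO runtime.
    from openvino.inference_engine import IECore

    ie = IECore()
    ssd_net = ie.read_network(model=ssd_xml, weights=ssd_bin)
    mlp_net = ie.read_network(model=mlp_xml, weights=mlp_bin)

    # Both networks go to the MYRIAD plugin; as the quote above says, a
    # single device can hold several networks as long as its memory allows.
    ssd_exec = ie.load_network(network=ssd_net, device_name="MYRIAD")
    mlp_exec = ie.load_network(network=mlp_net, device_name="MYRIAD")
    return ssd_exec, mlp_exec
```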
This is from this article.
I would also recommend you to use the OpenVINO toolkit, here are instructions on how to install OpenVINO on the Pi.
Here is documentation that covers an overview of OpenVINO.
You will need to convert the models to IR (Intermediate Representation), the format that the OpenVINO Inference Engine reads and runs.
You cannot do this on the Raspberry Pi; you would need to do it on another computer with OpenVINO installed.
Here is a guide to saving the H5 model as a standard SavedModel.
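If the model is a Keras .h5 file, the export is short; this sketch assumes TensorFlow 2.x on the desktop conversion machine and uses placeholder file names:

```python
def export_saved_model(h5_path="model.h5", out_dir="saved_model"):
    """Convert a Keras .h5 model to TensorFlow's SavedModel format."""
    # Imported inside the function: TensorFlow is only needed on the
    # conversion machine, not on the Raspberry Pi.
    import tensorflow as tf

    model = tf.keras.models.load_model(h5_path)
    model.save(out_dir)  # writes a SavedModel directory
```

The resulting SavedModel directory is what the SavedModel-to-IR conversion step takes as input.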
Then follow the TensorFlow SavedModel-to-IR conversion steps.
You can then take the IR files and use them on the Pi with the OpenVINO toolkit.
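On the Pi itself, one way to run the IR files while keeping the cv.dnn route you already use is to point OpenCV's DNN module at the Inference Engine backend and the MYRIAD target. A minimal sketch, assuming an OpenCV build with Inference Engine support and placeholder file names:

```python
def load_ir_on_ncs2(xml_path="model.xml", bin_path="model.bin"):
    """Load an OpenVINO IR model in OpenCV's DNN module and target the NCS2."""
    # Imported inside the function so the sketch parses without OpenCV.
    import cv2 as cv

    # readNet detects the IR format from the .xml/.bin file extensions.
    net = cv.dnn.readNet(xml_path, bin_path)
    net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
    return net
```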