Hi Katsuya Hyodo,
I took a look at your multi-stick sample, openvino_yolov3_MultiStick_test.py. The app does not utilize all of the NCS2 sticks. It uses the async API with multiple infer requests, which is good, but those infer requests are not scheduled across multiple sticks. The performance gain from multiple infer requests comes from hiding the data-transfer cost. Since only one ExecutableNetwork instance is created, only one NCS2 device is actually used. You can verify this by monitoring the FPS when only one NCS2 is plugged in. To make use of multiple NCS2 devices, you need to create multiple ExecutableNetwork instances, one per device.
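One way to spread work across sticks is to load one ExecutableNetwork per MYRIAD device and rotate infer requests among them. The sketch below is plain Python so it is self-contained; `StickPool` and the device ids are hypothetical names of mine, and the real OpenVINO calls (e.g. one `ie.load_network(net, "MYRIAD")` per stick) are only indicated in comments:

```python
from itertools import cycle

class StickPool:
    """Round-robin scheduler distributing infer jobs across executables,
    one per NCS2 device. In real OpenVINO code each entry would be an
    ExecutableNetwork bound to one stick; here we use plain callables
    to keep the sketch self-contained."""
    def __init__(self, executables):
        self._rr = cycle(executables)

    def infer(self, frame):
        # Each call goes to the next device in turn, so N sticks each
        # see roughly 1/N of the frames.
        return next(self._rr)(frame)

# Hypothetical per-device "networks": each just tags the frame with its id.
pool = StickPool([lambda f, d=d: (d, f) for d in ("stick0", "stick1")])
results = [pool.infer(frame) for frame in range(4)]
# frames alternate between the two sticks
```

Combined with the async API, each per-stick ExecutableNetwork can still hold several infer requests to hide transfer latency.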
Seems too slow indeed. I am getting around 40 FPS (mobilenet-ssd) on a mini-PCIe Myriad X card plugged into an Up board:
and around 20 FPS (mobilenet-ssd) on an RPi with NCS 2. I run two models, and one is slower, so the total FPS running both models in parallel is currently around 12 FPS, I think; that's two models on two sticks. If I update the slower model, the speed should be near 20 FPS for both models, but right now the inference results are delayed by the slower model.
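If the two result streams must be paired frame-by-frame, the combined rate is bounded by the slower model, which would match an overall ~12 FPS even though the faster model alone does ~20 FPS. A minimal sketch of that reasoning (the helper name is mine, the figures are the approximate measurements above):

```python
def synchronized_fps(fps_per_model):
    """When results from several models must be paired per frame,
    the pipeline advances at the rate of the slowest model."""
    return min(fps_per_model)

# Slower model at ~12 FPS, faster at ~20 FPS: combined rate is ~12 FPS.
print(synchronized_fps([12, 20]))  # 12

# After speeding the slower model up to ~20 FPS, both run near 20 FPS.
print(synchronized_fps([20, 20]))  # 20
```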
Hope this helps