I am running the handwritten_japanese_recognition_demo example with an NCS2 device on a Raspberry Pi 4 B kit. However, loading the model takes about 1-2 minutes (maybe longer). I got this model using downloader.py with FP16 precision. I also tried loading the same model with the NCS2 on my laptop (i7-8750H, 8 GB RAM), and there the program loads the model in about 10 seconds. My explanation for this issue is that IECore.load_network() uses the CPU to load the model. Because the Raspberry Pi 4 has much lower performance than the Intel i7-8750H, loading the model onto the NCS2 takes longer.
If you know about my issue, can you please confirm my explanation?
Furthermore, can you let me know the workflow of the program when it loads the model on NCS2?
Thank you for your concern.
Is there any way to use more CPU cores when loading the model? My Raspberry Pi board only uses 1 core while the model is loading, even though the Raspberry Pi 4 has 4 cores. Could I use the configuration option in IECore.load_network(..., config="?") to use more CPU cores?
I believe you are trying to optimize performance. You may refer here for possible ways to achieve that:
I read the article that you posted, but I did not find the information that I need. I use Python 3 for my program; when the NCS2 loads the model via IECore().load_network(), Python 3 uses the CPU's resources for this task. Normally, Python 3 only uses 1 core for the task, but the Raspberry Pi 4 with the BCM2711 has 4 cores. So, can I make a configuration to use more cores when calling load_network() with OpenVINO?
OpenVINO supports Multi-Device where it automatically assigns inference requests to available computational devices to execute the requests in parallel.
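As a rough illustration of the Multi-Device plugin, here is a minimal sketch using the (now deprecated) Inference Engine Python API that this thread is based on. The model file names are placeholders, and running it requires OpenVINO installed with both a CPU plugin and an attached NCS2 (MYRIAD):

```python
# Sketch: load a network on the Multi-Device plugin so inference requests
# are distributed across the NCS2 (MYRIAD) and the CPU.
# Assumes the legacy openvino.inference_engine API; "model.xml"/"model.bin"
# are placeholder paths for an IR downloaded with downloader.py.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# "MULTI:MYRIAD,CPU" lists the devices the plugin may schedule requests on,
# in priority order; num_requests creates several requests to run in parallel.
exec_net = ie.load_network(network=net,
                           device_name="MULTI:MYRIAD,CPU",
                           num_requests=4)
```

Note this parallelizes *inference requests* across devices; it does not by itself multi-thread the model-loading step.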
And also there is something called Throughput Mode for CPU which allows the Inference Engine to efficiently run multiple inference requests on the CPU simultaneously:
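For the CPU throughput mode, a hedged sketch with the same legacy API (again, placeholder model paths, and it assumes a machine with the OpenVINO CPU plugin) would look like:

```python
# Sketch: enable CPU throughput streams so multiple inference requests
# run on the CPU simultaneously. "CPU_THROUGHPUT_AUTO" lets the plugin
# pick a stream count suited to the host's cores.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

exec_net = ie.load_network(
    network=net,
    device_name="CPU",
    config={"CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO"},
    num_requests=4,
)
```

This config key applies to the CPU plugin only, so it helps CPU inference throughput rather than the MYRIAD model-load time asked about above.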
These might help.
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.