Beginner

Loading a model takes too long on NCS2

Dear all,

I am running the hand_writting_japanese_recognition_demo example with an NCS2 device on a Raspberry Pi 4 Model B. However, loading the model takes about 1-2 minutes (maybe longer). I obtained the model with downloader.py at FP16 precision. I also tried loading the same model with the NCS2 on my laptop (i7-8750H, 8 GB RAM), where loading takes only about 10 seconds. My explanation is that IECore.load_network() uses the host CPU to prepare the model, and since the Raspberry Pi 4 is much slower than the i7-8750H, loading the model onto the NCS2 takes correspondingly longer.

Can you please confirm whether my explanation is correct?

Furthermore, could you describe what the program does internally when it loads a model onto the NCS2?

Thank you for your help.
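For reference, this is roughly how I measure the load time. A minimal sketch; the `load_on_ncs2` helper and file paths are only illustrative, and the OpenVINO part needs OpenVINO plus an attached NCS2 to actually run:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def load_on_ncs2(model_xml, model_bin):
    # Requires OpenVINO and an attached NCS2; shown for illustration only.
    from openvino.inference_engine import IECore
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    # This is the call that takes 1-2 minutes on the Raspberry Pi 4.
    exec_net, seconds = timed(ie.load_network, network=net, device_name="MYRIAD")
    print("load_network took %.1f s" % seconds)
    return exec_net
```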

5 Replies
Beginner

Hi,

Is there any way to use more CPU cores when loading the model? My Raspberry Pi board uses only 1 core while the model is loading, even though the Raspberry Pi 4 has 4 cores. Could I use the config option of IECore.load_network(..., config={...}) to enable more cores?

Thank you.
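To illustrate what I have in mind, a sketch of passing a plugin config (the `plugin_config` helper is hypothetical; CPU_THREADS_NUM is a documented CPU-plugin key, but I do not know of an equivalent key for MYRIAD, which is exactly my question):

```python
def plugin_config(device, threads):
    # Config keys are plugin-specific: CPU_THREADS_NUM applies to the
    # CPU plugin's inference threads, not to MYRIAD model compilation.
    return {"CPU_THREADS_NUM": str(threads)} if device == "CPU" else {}

def load_with_config(model_xml, model_bin, device="MYRIAD", threads=4):
    # Requires OpenVINO installed; illustrative only.
    from openvino.inference_engine import IECore
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net, device_name=device,
                           config=plugin_config(device, threads))
```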

Moderator

Greetings,


I believe you are looking for ways to optimize performance. You may refer here for the possible approaches:

https://docs.openvinotoolkit.org/latest/openvino_docs_optimization_guide_dldt_optimization_guide.htm...



Sincerely,

Iffa


Beginner

Hi Iffa,

I read the article you posted, but I did not find the information I need. My program is written in Python 3. When the NCS2 loads the model via IECore().load_network(), Python uses the CPU for this task. Normally Python uses only 1 core, but the Raspberry Pi 4 (BCM2711) has 4 cores. So, is there a configuration that makes load_network() use more cores in OpenVINO?

Moderator

OpenVINO supports Multi-Device execution, which automatically assigns inference requests to the available computational devices and executes them in parallel.

You can refer here: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html


There is also a Throughput Mode for the CPU, which allows the Inference Engine to efficiently run multiple inference requests on the CPU simultaneously:

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Intro_to_Performance.html
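As a rough sketch (assuming the openvino.inference_engine Python API; it requires OpenVINO with the listed devices to actually run, and the `multi_device`/`load_multi` helpers are illustrative), loading onto MULTI looks like:

```python
def multi_device(*devices):
    # Build the MULTI device string, e.g. "MULTI:MYRIAD,CPU".
    return "MULTI:" + ",".join(devices)

def load_multi(model_xml, model_bin):
    # Requires OpenVINO with the listed devices; illustrative only.
    # Note: MULTI parallelizes *inference requests* across devices; it
    # does not speed up the one-time load/compile of the network itself.
    from openvino.inference_engine import IECore
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net,
                           device_name=multi_device("MYRIAD", "CPU"),
                           num_requests=4)
```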


These might help.

Sincerely,

Iffa



Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa

