Dear all,
I am running the hand_writting_japanese_recognition_demo example with an NCS2 device on a Raspberry Pi 4 Model B. However, loading the model takes about 1-2 minutes (maybe longer). I obtained the model using downloader.py with FP16 precision. I also tried loading the same model with the NCS2 on my laptop (i7-8750H, 8 GB RAM), where loading takes only about 10 seconds. My explanation for this is that IECore.load_network() uses the host CPU to load the model; since the Raspberry Pi 4 has much lower performance than the i7-8750H, loading the model onto the NCS2 takes correspondingly longer.
If you know about this issue, can you please confirm my explanation?
Furthermore, can you let me know the workflow of the program when it loads the model onto the NCS2?
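For reference, the loading step in my script looks roughly like this (a minimal sketch; the IR paths are placeholders for the FP16 files fetched with downloader.py):

```python
from openvino.inference_engine import IECore

ie = IECore()

# Placeholder paths for the FP16 IR downloaded with downloader.py.
net = ie.read_network(model="model.xml", weights="model.bin")

# This is the call that takes 1-2 minutes on the Raspberry Pi 4 but
# only about 10 seconds on the i7-8750H laptop.
exec_net = ie.load_network(network=net, device_name="MYRIAD")
```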
Thank you for your concern.
Hi,
Is there any way to use more CPU cores when loading the model? My Raspberry Pi board uses only one core while the model is being loaded, even though the Raspberry Pi 4 has four cores. Can I use the configuration option in IECore.load_network(..., config="?") to add more CPU cores?
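Something like this is what I have in mind (just a sketch; I do not know which key, if any, applies to my case. CPU_THREADS_NUM below is a documented key for the CPU plugin, shown only to illustrate how config is passed):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# config takes a dict of plugin-specific key/value strings.
# CPU_THREADS_NUM is a CPU-plugin key; I have not found a MYRIAD key
# that parallelizes the host-side model compilation.
exec_net = ie.load_network(network=net,
                           device_name="CPU",
                           config={"CPU_THREADS_NUM": "4"})
```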
Thank you.
Greetings,
I believe you are trying to optimize performance. You may refer here for the possible ways to achieve that:
Sincerely,
Iffa
Hi Iffa,
I read the article that you posted, but I did not find the information that I need. My program uses Python 3. When the NCS2 loads the model via IECore().load_network(), Python 3 uses the CPU's resources for this task. Normally Python 3 uses only one core for the task, but the Raspberry Pi 4 with its BCM2711 has four cores. So, can I set a configuration to use more cores when calling load_network() with OpenVINO?
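For reference, this is how I observed it (a minimal sketch; the paths are placeholders):

```python
import time
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

start = time.perf_counter()
exec_net = ie.load_network(network=net, device_name="MYRIAD")
print("load_network took %.1f s" % (time.perf_counter() - start))
# While this call runs, htop shows only one of the four BCM2711
# cores busy.
```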
OpenVINO supports Multi-Device mode, where it automatically assigns inference requests to the available computational devices to execute the requests in parallel.
You can refer here: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html
There is also a Throughput Mode for the CPU, which allows the Inference Engine to efficiently run multiple inference requests on the CPU simultaneously:
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Intro_to_Performance.html
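As a rough sketch of both options (the paths are placeholders; adjust the device list to what your OpenVINO build actually supports):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Multi-Device: the MULTI plugin distributes inference requests
# across the listed devices in parallel.
multi_exec = ie.load_network(network=net, device_name="MULTI:MYRIAD,CPU")

# CPU throughput mode: lets the CPU plugin serve several infer
# requests simultaneously; this affects inference, not model loading.
cpu_exec = ie.load_network(network=net,
                           device_name="CPU",
                           config={"CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO"},
                           num_requests=4)
```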
These might help.
Sincerely,
Iffa
Greetings,
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Sincerely,
Iffa