Hi,
I'm using the latest version of OpenVINO (2021.3) to do inference with the Python API.
I found that loading the IR model onto the CPU is very slow. Here is the code:
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml, weights=path_to_bin)
exec_net = ie.load_network(network=net, device_name="CPU")
res = exec_net.infer(inputs=data)
Are there any methods to speed this up?
Thanks.
Hi Zihao,
Thanks for reaching out.
Your code seems fine. Could you share more information about your model? Did you read and load the network more than once for inference?
Regards,
Aznie
Thanks for the reply.
Yes, I did load the network more than once. My input shape is dynamic, so I have to call net.reshape() first and then reload the network onto the CPU. The code is something like this:
for img in all_imgs:
    h_new, w_new = img.shape[:-1]
    net.reshape({input_name: [n, c, h_new, w_new]})
    exec_net = ie.load_network(network=net, device_name="CPU")
    res = exec_net.infer(inputs={input_name: img})
I printed the time taken by ie.load_network() in every iteration: it was consistently around 220 ms, while exec_net.infer() only took around 30 ms.
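The only workaround I can think of is to cache one ExecutableNetwork per unique shape, so that ie.load_network() runs only once per shape rather than once per image. A rough sketch (assuming many of my images share the same resolution; input layout handling is omitted, same as in the code above):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml, weights=path_to_bin)
input_name = next(iter(net.input_info))
# batch and channel dimensions from the original network input shape
n, c = net.input_info[input_name].input_data.shape[:2]

exec_nets = {}  # (h, w) -> ExecutableNetwork, built lazily
for img in all_imgs:
    h_new, w_new = img.shape[:-1]
    if (h_new, w_new) not in exec_nets:
        net.reshape({input_name: [n, c, h_new, w_new]})
        exec_nets[(h_new, w_new)] = ie.load_network(network=net, device_name="CPU")
    res = exec_nets[(h_new, w_new)].infer(inputs={input_name: img})

If every image has a different resolution this would not help, and I would probably have to resize or pad the images to a small set of fixed shapes instead.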
Any ideas on how to optimize this?
Hi Zihao,
Sorry for the delay in replying to you. You could try referring to the Model Optimization techniques to accelerate the inference. The slowdown might also be due to the layers in your network topology. You can check which layers are supported for execution on the CPU plugin here:
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_CPU.html
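If it helps, you can also check from Python which layers the CPU plugin supports for your particular network (a rough sketch, assuming the 2021.3 inference_engine API; layers missing from the result are not supported by the CPU plugin):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=path_to_xml, weights=path_to_bin)
# query_network() maps each supported layer name to the device that will execute it
supported = ie.query_network(network=net, device_name="CPU")
for layer_name, device in supported.items():
    print(layer_name, "->", device)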
Regards,
Aznie
Hi Zihao,
This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie