Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

OpenVINO Python API: ie.load_network() is too slow

zihao
Beginner

Hi,

I'm using the latest version of OpenVINO (2021.3) to run inference with the Python API.

I found that loading the IR model onto the CPU is very slow. Here is the code:

net = ie.read_network(model=path_to_xml, weights=path_to_bin)
exec_net = ie.load_network(network=net, device_name="CPU")
res = exec_net.infer(inputs=data)

Are there any methods to speed this up?

Thanks.

IntelSupport
Community Manager

Hi Zihao,

Thanks for reaching out.

Your code seems fine. Could you share more information about your model? Did you load and read the network more than once for the inference?

Regards,

Aznie


zihao
Beginner

Thanks for the reply.

Yes, I did load the network more than once. My input shape is dynamic, so I have to call net.reshape() first and then reload the network onto the CPU. The code is something like this:

for img in all_imgs:
    h_new, w_new = img.shape[:-1]
    net.reshape({input_name: [n, c, h_new, w_new]})
    exec_net = ie.load_network(network=net, device_name="CPU")
    res = exec_net.infer(inputs={input_name: img})

I printed out the time taken by ie.load_network() on every cycle: it was always around 220 ms, while exec_net.infer() took only around 30 ms.

Any ideas on how to optimize this?
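
[Editor's note: a sketch the thread itself doesn't include. Since ie.load_network() dominates the per-image time, one common workaround is to compile the network once per distinct input shape and cache the result, so the ~220 ms load cost is paid only when a new shape appears. The idea can be shown without OpenVINO: here a stub slow_load() stands in for reshape-plus-load_network(); with OpenVINO you would cache the ExecutableNetwork the same way, keyed on (h, w).]

```python
# Memoize the expensive load step per input shape.
# `slow_load` is a hypothetical stand-in for:
#     net.reshape({input_name: [n, c, h, w]})
#     ie.load_network(network=net, device_name="CPU")

load_calls = 0  # counts how many times the expensive step actually runs

def slow_load(shape):
    """Stand-in for the ~220 ms reshape + load_network step."""
    global load_calls
    load_calls += 1
    return f"exec_net for {shape}"

cache = {}  # maps (n, c, h, w) -> compiled network

def get_exec_net(shape):
    """Return a cached compiled network, loading only on a cache miss."""
    if shape not in cache:
        cache[shape] = slow_load(shape)
    return cache[shape]

# Three images, but only two distinct shapes -> slow_load runs twice.
shapes = [(1, 3, 224, 224), (1, 3, 320, 240), (1, 3, 224, 224)]
for s in shapes:
    exec_net = get_exec_net(s)
```

If the image stream has only a handful of distinct resolutions, this reduces load_network() calls from one per image to one per resolution.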

IntelSupport
Community Manager

Hi Zihao,

Sorry for the delay in replying to you. You could try referring to the Model Optimization techniques to accelerate inference. Meanwhile, the slowdown might also be due to the layers in your network topology. Check out the layers that are supported for execution on the CPU below:

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_CPU.html
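
[Editor's note: another common workaround for dynamic input shapes, not part of this reply: pad every image up to one fixed maximum resolution so that net.reshape() and ie.load_network() run only once, before the loop. A minimal sketch, assuming HWC images; H_MAX and W_MAX are hypothetical upper bounds for the workload, and the padded border must be accounted for when interpreting the results (e.g. cropped or masked out).]

```python
import numpy as np

H_MAX, W_MAX = 480, 640  # hypothetical upper bounds on image size

def pad_to_fixed(img, h_max=H_MAX, w_max=W_MAX):
    """Zero-pad an HWC image to a fixed (h_max, w_max) canvas."""
    h, w = img.shape[:2]
    out = np.zeros((h_max, w_max, img.shape[2]), dtype=img.dtype)
    out[:h, :w] = img  # original content sits in the top-left corner
    return out

# Example: a 240x320 image padded to the fixed 480x640 input shape.
img = np.ones((240, 320, 3), dtype=np.uint8)
padded = pad_to_fixed(img)
```

With this approach the network is compiled for [n, c, H_MAX, W_MAX] a single time, trading some wasted computation on the padded region for the elimination of the per-image load_network() cost.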


Regards,

Aznie


IntelSupport
Community Manager

Hi Zihao,

This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie

