NCS2 + Raspberry 4B + Openvino toolkit: ie.load_network(network, device_name) doesn't work


Hi all,

Recently I have been facing a problem with loading a network.

My setup: NCS2 + Raspberry Pi 4B 4GB + Python 3.7 + OpenCV 4.1.0 + l_openvino_toolkit_runtime_raspbian_p_2020.3.22..>

I tried to load the converted .xml and .bin model files (IR version 10), generated from an ONNX model that was exported from PyTorch with opset_version=10.

I ran this Python script:

from openvino.inference_engine import IECore
import cv2  # imported for image preprocessing (unused in this snippet)

ie = IECore()
# Enable hardware stage optimization on the Myriad VPU
myriad_config = {"VPU_HW_STAGES_OPTIMIZATION": "YES"}
ie.set_config(myriad_config, "MYRIAD")

model_xml = "models/test.xml"
model_bin = "models/test.bin"
net = ie.read_network(model=model_xml, weights=model_bin)

input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = 1
n, c, h, w = net.inputs[input_blob].shape

exec_net = ie.load_network(network=net, device_name="MYRIAD")

The last line, exec_net = ie.load_network(network=net, device_name="MYRIAD"), fails with this error:
File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Check 'input_order_shape.compatible(PartialShape{arg_shape.rank()})' failed at /home/jenkins/agent/workspace/private-ci/ie/build-linux-debian_9_arm/b/repos/closed-dldt/ngraph/src/ngraph/op/experimental/transpose.cpp:51:
While validating node 'v0::Transpose Transpose_4764(Reshape_4762[0]:f32{1,1,1,1}, Constant_4763[0]:i64{7}) -> (dynamic?)':
Input order must have shape [n], where n is the rank of arg.

 

Could you help me figure out what might be going wrong?

 

Best,

Huanbo


Accepted Solutions

Hi HuanboSun,

Thanks for reaching out.

As you can check in the Myriad Plugin documentation, ONNX networks are not directly supported by the Myriad Plugin. However, you can try converting your ONNX model to TensorFlow and then converting the TensorFlow model to IR format using the Model Optimizer.

There is an incompatibility between the OpenVINO™ Toolkit 2020.3 version (for Raspbian OS) and IR version 10 files, so you should add the --generate_deprecated_IR_V7 flag when converting the model to IR format.
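A rough sketch of that conversion step (assuming a frozen TensorFlow model and a full OpenVINO 2020.x installation that provides mo.py; the file names and paths here are placeholders):

```shell
# Convert a frozen TensorFlow graph to the deprecated IR v7 format
# expected by the 2020.3 Raspbian runtime.
# FP16 is the native precision of the Myriad VPU inside the NCS2.
python3 mo.py \
    --input_model frozen_model.pb \
    --generate_deprecated_IR_V7 \
    --data_type FP16 \
    --output_dir models/
```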


Additionally, check the supported layers list to verify that your model can run on the Intel® Neural Compute Stick 2.


Regards,


Javier A.



Dear Javier,

Thanks for your reply; the suggestion did solve the problem.

However, another problem has appeared: I trained my PyTorch model in FP32, and the ONNX and IR files are saved as FP32, but the RPi 4B + NCS2 does not give consistent inference results.

Some blogs say the NCS2 runs in FP16. Is that true? In the OpenVINO model zoo, however, there are .xml model files in FP32, which at least confuses me.

 

Best Regards!

Huanbo Sun
