Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

IECore fails to load network

CSche22
Beginner

Hello

 

When I try to load my network with the IECore.load_network() function I get the following error:

 

exec_net = ie.load_network(model, device_name="MYRIAD")

Traceback (most recent call last):

 

 File "<ipython-input-16-ffe86d643791>", line 1, in <module>

  exec_net = ie.load_network(model, device_name="MYRIAD")

 

 File "ie_api.pyx", line 178, in openvino.inference_engine.ie_api.IECore.load_network

 

 File "ie_api.pyx", line 187, in openvino.inference_engine.ie_api.IECore.load_network

 

RuntimeError: Failed to find reference implementation for `Select83_clip` Layer with `Clamp` Type on constant propagation

 

The network was converted from .onnx to .xml and .bin with the Model Optimizer, and that step worked fine. As far as I know, the Clip layer should be supported. When I load a model downloaded from the Model Zoo (and converted with the optimizer), everything works fine. Does anyone have an idea how I could troubleshoot this?
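In case it helps with debugging: here is roughly how the load is structured, with a small helper that pulls the failing layer name out of the error text (the paths and the try_load wrapper are placeholders, not my exact script; the IECore calls follow the 2020.x Python API):

```python
import re

def failing_layer(message: str) -> str:
    """Extract the layer name from the 'Failed to find reference
    implementation' RuntimeError raised by load_network."""
    m = re.search(r"implementation for `([^`]+)` Layer", message)
    return m.group(1) if m else ""

def try_load(xml_path: str, bin_path: str, device: str = "MYRIAD"):
    # Import kept inside the function so failing_layer() works
    # even without OpenVINO installed.
    from openvino.inference_engine import IECore  # 2020.x Python API

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)  # this step succeeds
    try:
        return ie.load_network(network=net, device_name=device)  # this one fails
    except RuntimeError as err:
        print("load_network failed on layer:", failing_layer(str(err)))
        raise
```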

 

Thanks for your help

David_C_Intel
Employee

Hi CSche22,

Thanks for reaching out. Could you please share the following for us to test on our end:

 

  • The frozen model and the model optimizer command used to convert it to IR files.
  • A snippet of code to be able to run your custom model.
  • A sample input and output.

 

Regards,

David C.

CSche22
Beginner

Sadly, I can't share the model publicly. My model is trained in CNTK. I have now tried freezing the model before saving it to .onnx; the error now occurs on a different layer (Failed to find reference implementation for `Select5346_clip` Layer with `Clamp` Type on constant propagation). Are there steps I could try before converting the model? My conversion command is simply "python mo.py --input_model C:/Users/Schel/Documents/Bachelorarbeit/Blood21-Epoch10.onnx --output_dir C:/Users/Schel/Documents/Bachelorarbeit"
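Since MYRIAD runs inference in FP16, it may also be worth converting with explicit FP16 weights; a variant of the command above with the `--data_type` flag from the 2020.x Model Optimizer options (paths unchanged, flag value is a suggestion, not something I have confirmed fixes this error):

```shell
python mo.py --input_model C:/Users/Schel/Documents/Bachelorarbeit/Blood21-Epoch10.onnx --output_dir C:/Users/Schel/Documents/Bachelorarbeit --data_type FP16
```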

David_C_Intel
Employee

Hi CSche22,

 

Thanks for your reply.

We will look into your issue and get back to you as soon as possible. We understand that you cannot share your model publicly, but you can share it with us via private message.

 

Best Regards,

David C.

DLuon2
Beginner

Hi David_C_Intel,

 

I got the same error when trying to load my network (optimized from a .pb file) with IECore.load_network():

 

File "read_opnevino.py", line 19, in <module>

  exec_net = ie.load_network(network=net, device_name="CPU")

 File "ie_api.pyx", line 85, in openvino.inference_engine.ie_api.IECore.load_network

 File "ie_api.pyx", line 92, in openvino.inference_engine.ie_api.IECore.load_network

RuntimeError: Failed to find reference implementation for `AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/MatMul` Layer with `FullyConnected` Type on constant propagation

 

Have you found any solution to this?

 

Regards,

Tuan Luong

CSche22
Beginner

I haven't found a solution yet. I find it strange that you can convert the model and call ie.read_network and ie.query_network (every layer of the network gets listed) but not ie.load_network. What version of OpenVINO are you using? Maybe I should try an older one; I used 2020.1 and now 2020.2. If you find a solution, please let me know.
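For what it's worth, this is roughly how I compare what query_network reports against the full layer list (the helper is just a set difference; iterating layers via net.layers matches the 2020.x API, though the exact attribute may differ in other versions):

```python
def unsupported_layers(all_layers, supported_map):
    """Layers present in the network but missing from query_network's result."""
    return sorted(set(all_layers) - set(supported_map))

def report_support(xml_path, bin_path, device="MYRIAD"):
    # Import kept local so unsupported_layers() is usable on its own.
    from openvino.inference_engine import IECore  # 2020.x Python API

    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    supported = ie.query_network(network=net, device_name=device)
    missing = unsupported_layers(net.layers.keys(), supported)
    print("unsupported on", device, ":", missing or "none")
    return missing
```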

 

 

DLuon2
Beginner

Hi CSche22,

 

Following your advice, I updated my OpenVINO to the latest version (my old version was 2019.3) and the problem was resolved. You should try the latest one.

 

Regards,

Tuan Luong

CSche22
Beginner

An update on this: I got it working somehow, though I don't know why it works this way, and it's not a very smooth solution:

Step 1:

Convert the .onnx model to a TensorFlow model with onnx-tf.

Step 2:

Downgrade OpenVINO to 2019.3.

Step 3:

Add a line to \deployment_tools\model_optimizer\extensions\middle\EltwiseInputReshape.py

line 96: shape = [int(x) for x in shape]

because otherwise the shape values are of type numpy.float.

Step 4:

Convert the TensorFlow .pb to .xml and .bin with Model Optimizer 2019.3.

 

The model now loads in OpenVINO 2020.3 with exec_net = ie.load_network(model, device_name="MYRIAD") and runs as expected.
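The first and last steps above look roughly like this in code (onnx-tf's prepare/export_graph calls are from the onnx-tf 1.x API; mo.py is invoked as in my earlier command, here just assembled as a list for illustration):

```python
def mo_command(pb_path, output_dir):
    """Model Optimizer invocation for the converted .pb (step 4, 2019.3)."""
    return ["python", "mo.py", "--input_model", pb_path, "--output_dir", output_dir]

def onnx_to_pb(onnx_path, pb_path):
    # onnx-tf exporter for step 1 (API as of onnx-tf 1.x).
    import onnx
    from onnx_tf.backend import prepare

    model = onnx.load(onnx_path)
    prepare(model).export_graph(pb_path)
```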
