Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Keras/TensorFlow-based UNet segmentation model conversion to IR fails

Dahiya__Navdeep
Beginner

I am trying to convert a UNet segmentation model, trained using Keras with the TensorFlow backend, to IR format using mo_tf.py from the latest OpenVINO release.

It's a standard UNet model with the following key details:

1) Uses dilated convolutions in the encoder stages.

2) Uses channels-first format [NCHW].

I am using the following command to create the IR files:

python3 mo_tf.py --input_model unet_model.pb --data_type FP32 --disable_nhwc_to_nchw --batch 1

leading to the following error:

Model Optimizer version:     2019.1.0-341-gc9b66a2
[ ERROR ]  Cannot infer shapes or values for node "conv2d_2/convolution".
[ ERROR ]  index 4 is out of bounds for axis 0 with size 4
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7ff0d3edd488>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "conv2d_2/convolution" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
I'm attaching the full debug log.


I have seen a few questions like this on the forum but haven't seen any with a solution. The model consists of standard operations: 2D convolutions (standard and dilated), dropout, max pooling, upsampling, and concatenation. I am not sure why OpenVINO is unable to infer the shapes. This is the first time I am using OpenVINO, so it's hard to debug the problem.
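For context, the usual way to freeze a Keras model into a .pb for the Model Optimizer looks roughly like this (a minimal sketch assuming TensorFlow 1.x with the Keras backend; the .h5 path is a hypothetical placeholder):

from tensorflow.python.framework import graph_io, graph_util
from keras import backend as K
from keras.models import load_model

K.set_learning_phase(0)              # inference mode: disables dropout, etc.
model = load_model('unet_model.h5')  # hypothetical path to the trained model

sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])  # fold weights into constants
graph_io.write_graph(frozen, '.', 'unet_model.pb', as_text=False)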

Any help in debugging this error is much appreciated.

Shubha_R_Intel
Employee

Dear Dahiya, Navdeep

The debug log you attached tells you this:

[ ERROR ]  Cannot infer shapes or values for node "conv2d_2/convolution".
[ ERROR ]  index 4 is out of bounds for axis 0 with size 4
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7ff326bcc400>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

The problem layer is "conv2d_2/convolution". If you look at the debug log, each "Op" has an Input: followed by an Output:. As the Model Optimizer parses each layer of your UNet model, it tries to infer the shapes of the layer's input and output. In this case, it failed to infer the input shape of the aforementioned layer. Most likely the dimensions of the input to this layer (probably embedded in your model, since you did not pass --input_shape into your MO command) are incorrect.
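For example, you could pass the shape explicitly when converting (a hypothetical shape; substitute your model's real input dimensions and layout):

python3 mo_tf.py --input_model unet_model.pb --input_shape [1,1,512,512] --data_type FP32 --disable_nhwc_to_nchw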

Please see my detailed answer to the below forum post:

https://software.intel.com/en-us/forums/computer-vision/topic/806712

Thanks for using OpenVINO!

Shubha

 

 

Dahiya__Navdeep
Beginner

Hi Shubha,

Thanks for replying. I managed to solve this error by downgrading TensorFlow from 1.13 to 1.12, then creating the graph and running the conversion with that same TensorFlow version (1.12). TensorFlow 1.13 doesn't seem to be supported.
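For anyone hitting the same issue, the equivalent downgrade command would be (assuming a pip-installed TensorFlow):

pip3 install tensorflow==1.12.0

followed by re-exporting the frozen graph under that version.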

However, now I am getting different error in inference part of the program as follows:

ie_network = IENetwork(xml_filename, bin_filename)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: Unsupported Activation layer type: exp

I have a final softmax layer; that's the only place where an exponential should arise. Is that not supported somehow? Any other ideas as to why it's not working and how I can debug it?
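In case it helps with debugging, here is a quick way to list the op types the frozen graph actually contains, to check whether Keras exported the softmax as a single Softmax op or decomposed it into Exp/Sum/RealDiv (a minimal sketch assuming TensorFlow 1.x; the .pb path is a placeholder):

import tensorflow as tf

graph_def = tf.GraphDef()
with open('unet_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())  # load the frozen graph

print(sorted({node.op for node in graph_def.node}))  # unique op types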

Thanks,

Navdeep

Shubha_R_Intel
Employee

Dear Dahiya, Navdeep,

You're correct. TensorFlow 1.13 is not supported in the 2019 R1 Model Optimizer.

If you successfully generated IR for your model, then you definitely should not be getting that error when you run the Inference Engine. Can you run one of the OpenVINO samples (even slightly modified) on your model? Please avoid writing code from scratch for the purposes of debugging this issue. Start with one of the OpenVINO samples and modify incrementally.
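For reference, the minimal network-loading sequence from the 2019 R1 Python samples looks roughly like this (a sketch; the IR file names and the CPU extension path are placeholders):

from openvino.inference_engine import IENetwork, IEPlugin

# The "Unsupported Activation layer type" error is raised here, while the IR is parsed
net = IENetwork(model='unet_model.xml', weights='unet_model.bin')
plugin = IEPlugin(device='CPU')
plugin.add_cpu_extension('libcpu_extension.so')  # adjust to your system's path
exec_net = plugin.load(network=net)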

Please post your results here.

Thanks for using OpenVINO!

Shubha

 

Dahiya__Navdeep
Beginner

Hi Shubha,

I tried running a segmentation model demo using the "road-segmentation-adas-0001.xml/bin" model, downloaded using downloader.py, by running /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/samples/python_samples/segmentation_demo/segmentation_demo.py.

I was able to run this model successfully after specifying the libcpu_extension.so library path. I compiled the C++ samples too, and they also work with the road-segmentation model.

However, if I use my trained model with either the Python or the C++ demo, I get the same error. With Python:

net = IENetwork(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: Unsupported Activation layer type: exp
 

and with C++:

[ INFO ] Loading network files
[ ERROR ] Error reading network: Unsupported Activation layer type: exp
 

Kindly let me know how to debug.

Best,

Navdeep

Shubha_R_Intel
Employee

Dear Navdeep,

I have sent you a PM. Please send me your IR in a *.zip attachment over PM. The fact that the OpenVINO segmentation demo fails on your model even though you were able to generate IR for it points to an Inference Engine bug.

Thanks,

Shubha

Cheng_Hsien__Huang

I have the same problem: "RuntimeError: Error reading network: Unsupported Activation layer type: exp"

If there is a softmax layer in a segmentation model, can the Inference Engine not work?

 

Thanks

Hsien

Shubha_R_Intel
Employee

Dear everyone, 

First, please download the latest OpenVINO 2019 R1.1 release. Next, add this switch to your mo command:

--input 0:conv2d_1/convolution

So maybe something like this:

mo.py --input_model ~/Downloads/unet_model.pb --input 0:conv2d_1/convolution --input_shape "[1,512,512,1]"

Re-generate the IR like the above and try running inference again.

There was an "Unsupported Activation layer type: exp" bug, but it has been fixed in 2019 R1.1.

Basically, we are using the model cutting technique here.

Thanks,

Shubha

 

Dahiya__Navdeep
Beginner

Hi,

I tried with version 2019 R1.1 and I am getting the same error:

File "inference_using_openvino.py", line 31, in <module>
    ie_network = IENetwork(xml_filename, bin_filename)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: Unsupported Activation layer type: exp

I uninstalled previous versions of the toolkit (deleted everything) before installing the latest R1.1.

Thanks,

Navdeep

Shubha_R_Intel
Employee

Dear Dahiya, Navdeep

So you tried my tip above, adding --input 0:conv2d_1/convolution to your mo command?

Thanks,

Shubha

 

Gupta__Shubham
New Contributor I

Hi All,

Is this issue resolved? I am also facing the same issue. I am using the latest OpenVINO version, 2019 R1.1.

I am able to convert the .pb file to IR, but during inference, while loading the model, it gives this error:

terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
  what():  Error reading network: Unsupported Activation layer type: exp
/opt/intel/openvino/inference_engine/include/details/ie_exception_conversion.hpp:71

Regards

Shubham

Dahiya__Navdeep
Beginner

Hi,

Unfortunately, I haven't been able to resolve this issue, even with the latest toolkit and the "--input 0:conv2d_1/convolution" flag in the mo command. I still get the same unsupported activation layer type error. My Python version is 3.7.3, TensorFlow version is 1.13.1, and Keras version is 2.2.4.

Thanks,

Navdeep

Gupta__Shubham
New Contributor I

Dahiya, Navdeep wrote:

Hi,

Unfortunately, I haven't been able to resolve this issue, even with the latest toolkit and the "--input 0:conv2d_1/convolution" flag in the mo command. I still get the same unsupported activation layer type error. My Python version is 3.7.3, TensorFlow version is 1.13.1, and Keras version is 2.2.4.

Thanks,

Navdeep

 

I don't think there is any relation between the --input flag and the unsupported layer error. This is probably a bug in their latest release.

Shubham

Gupta__Shubham
New Contributor I

Shubha R. (Intel) wrote:

Dear Dahiya, Navdeep

So you tried my tip above, adding --input 0:conv2d_1/convolution to your mo command?

Thanks,

Shubha

 

Hi Shubha,

Can you explain how this --input flag is related to the unsupported exponential layer error?

Thanks

Shubham

Shubha_R_Intel
Employee

Dear Shubham,

Absolutely. What --input does is "model cutting", described in the model cutting doc. You are telling the Model Optimizer to use whatever you pass into --input as the new entry point to the model. Likewise, you can do the same with the --output switch, whereby you tell the Model Optimizer where to chop the model off; in other words, what you pass to --output will be the last layer of the model. This technique is used to remove problematic layers from a model so that the Model Optimizer can handle it.
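For example (with hypothetical layer names), cutting the model at both ends would look something like this:

mo_tf.py --input_model unet_model.pb --input 0:conv2d_1/convolution --input_shape [1,512,512,1] --output conv2d_23/convolution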

Hope it helps,

Thanks,

Shubha

Gupta__Shubham
New Contributor I

Shubha R. (Intel) wrote:

Dear Shubham,

Absolutely. What --input does is "model cutting", described in the model cutting doc. You are telling the Model Optimizer to use whatever you pass into --input as the new entry point to the model. Likewise, you can do the same with the --output switch, whereby you tell the Model Optimizer where to chop the model off; in other words, what you pass to --output will be the last layer of the model. This technique is used to remove problematic layers from a model so that the Model Optimizer can handle it.

Hope it helps,

Thanks,

Shubha

Shubha,

That's okay, I already know that, but how is it related to the "unsupported activation layer" error? The error is coming from a softmax layer, and obviously we cannot remove it.

Also, there is no problem during conversion of the model to IR. The error comes when loading the network.

I am attaching the .xml file. Please let me know whether this issue is fixable or not; I have already wasted a lot of time on this.

 

Thanks

Shubham

Shubha_R_Intel
Employee

Dear Shubham,

Actually, Exp has to do with an Activation layer, not the Softmax layer. Please consult the IR Notation Document. Softmax does use Exp (e^x, e^-x) in its formula, though; is that why you think the error comes from Softmax? Softmax depends only upon the model's output, not its input, right? But wouldn't an activation layer depend entirely on the model's input? If the model's input is nonsense, then an Activation layer using that input during inference would break, wouldn't it? You need to perform model cutting in this case, because the assumed entry point of the model without model cutting does not make sense.
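One quick way to see where the "exp" actually comes from is to list the Activation layers in the generated IR XML (a minimal sketch using only the Python standard library; the file name is a placeholder):

import xml.etree.ElementTree as ET

root = ET.parse('unet_model.xml').getroot()
for layer in root.iter('layer'):
    if layer.get('type') == 'Activation':
        data = layer.find('data')
        # the data element's "type" attribute holds the activation kind, e.g. "exp"
        print(layer.get('name'), data.get('type') if data is not None else '?')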

So I'm a bit confused. Did you make the Model Optimizer model cutting changes to the command line I mentioned above? And after making those changes, are you still getting the inference error? What device are you using?

Thanks,

Shubha 

 
