Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision related to Intel® platforms.

Error while converting Tensorflow model to IR

s_3
Beginner

I wrote a TensorFlow program to classify images into dogs and cats. I am getting the following error while converting the model to an Intermediate Representation (IR) using mo_tf.py as per the documentation; the IR will then be used to run on a Movidius NCS2 attached to a Raspberry Pi 3B. Please help me solve this issue if you have faced a similar error.

[ 2019-05-22 09:53:33,888 ] [ DEBUG ] [ infer:128 ]  Partial infer for gradients/mul_grad/tuple/group_deps
[ 2019-05-22 09:53:33,890 ] [ DEBUG ] [ infer:129 ]  Op: NoOp
[ INFO ]  Called "tf_native_tf_node_infer" for node "gradients/mul_grad/tuple/group_deps"
[ ERROR ]  Cannot infer shapes or values for node "gradients/mul_grad/tuple/group_deps".
[ ERROR ]  "The name 'gradients/mul_grad/tuple/group_deps:0' refers to a Tensor which does not exist. The operation, 'gradients/mul_grad/tuple/group_deps', exists but only has 0 outputs."
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fc330677e18>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-05-22 09:53:33,909 ] [ DEBUG ] [ infer:194 ]  Node "gradients/mul_grad/tuple/group_deps" attributes: {'pb': name: "gradients/mul_grad/tuple/group_deps"
op: "NoOp"
, '_in_ports': {0, 1}, '_out_ports': {0}, 'kind': 'op', 'name': 'gradients/mul_grad/tuple/group_deps', 'op': 'NoOp', 'precision': 'FP32', 'infer': <function tf_native_tf_node_infer at 0x7fc330677e18>, 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "gradients/mul_grad/tuple/group_deps" node. 

6 Replies
s_3
Beginner

I ran the Model Optimizer with --log_level=DEBUG and followed the sequence of output shapes and operations it reported. Most of the TensorFlow operations are converted to IR operations, but I have doubts about whether the operations listed below are supported in the latest OpenVINO release, 2019 R1.1.

I have gone through the supported TensorFlow operations list, and the operations below are not in it. However, the DEBUG output shows that some operations that are not in the list are still handled by being converted to alternate operations. Can anyone confirm which of these operations are supported, so that I can fix the error in the post above? (A minimal sketch of the kind of graph these operations come from follows the list below.)

 1) tf.Variable(tf.truncated_normal(shape, stddev=0.05))

2) tf.Variable(tf.constant(0.05, shape=[size]))

3) tf.reduce_sum

4) tf.reduce_mean

5) tf.log

6) tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)

7) tf.global_variables_initializer()
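
For context, here is a minimal TF 1.x sketch of the kind of graph these operations come from. The shapes, sizes and variable names here are illustrative assumptions only, not my actual network, but it shows where the gradients/* NoOp nodes in the error above originate (the minimize() call):

import tensorflow as tf  # TF 1.x style graph, as in this thread

# Illustrative input placeholders (shapes assumed)
x = tf.placeholder(tf.float32, shape=[None, 128, 128, 3], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, 2], name='y_true')

# 1) and 2): weight/bias initialisation
weights = tf.Variable(tf.truncated_normal([128 * 128 * 3, 2], stddev=0.05))
biases = tf.Variable(tf.constant(0.05, shape=[2]))

logits = tf.matmul(tf.reshape(x, [-1, 128 * 128 * 3]), weights) + biases
y_pred = tf.nn.softmax(logits, name='y_pred')

# 3), 4) and 5): used only in the loss computation
cross_entropy = -tf.reduce_sum(y_true * tf.log(y_pred + 1e-10), axis=1)
cost = tf.reduce_mean(cross_entropy)

# 6) and 7): training-only nodes; minimize() adds the gradients/* subgraph
# that the Model Optimizer error above points at
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
init = tf.global_variables_initializer()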

Shubha_R_Intel
Employee

Dear saineni, sreekar,

Please see the Supported Layers document to understand how TensorFlow layers map to Model Optimizer. Also, things like tf.log, tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost) and tf.global_variables_initializer() are irrelevant to Model Optimizer: it does not care about ancillary operations related to training and logging, nor does it care about tf.Variable. When using TensorFlow, Model Optimizer expects a frozen TensorFlow model as input.

How to Freeze a TensorFlow Model explains how to freeze your TensorFlow model so that it can be consumed by Model Optimizer.
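
As a rough illustration, freezing in TF 1.x can look something like the snippet below. The checkpoint prefix and the output node name 'y_pred' are assumptions here, so substitute the real names and paths from your own model:

import tensorflow as tf  # TF 1.x

# Assumed checkpoint prefix; adjust to your own files
saver = tf.train.import_meta_graph('dogs-cats-model.meta')
with tf.Session() as sess:
    saver.restore(sess, 'dogs-cats-model')
    # 'y_pred' is an assumed output node name -- replace with your real one(s)
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['y_pred'])
    with tf.gfile.GFile('dogs-cats-model-frozen.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())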

Are you getting specific Model Optimizer errors after feeding in a frozen TensorFlow model? If so, what model are you trying to convert and what error are you experiencing? Please attach your DEBUG log here. You are right that tf.reduce_sum and tf.reduce_mean are not in the supported layer list for Model Optimizer's TensorFlow front end, but without seeing the specific error you are experiencing I cannot comment further on that.

Hope it helps,

Shubha

s_3
Beginner

Thank you. I have figured out the reason for the above error; it seems there is a bug in the Model Optimizer. I am using two placeholders as inputs:

1) x = tf.placeholder(tf.float32, shape=[None, img_size,img_size,num_channels], name='x')

2) y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')

When I ran the command

$>  python mo_tf.py --input_meta_graph ~/Downloads/dogs-cats-model.meta --reverse_input_channels --data_type FP16 --batch 32 --output_dir ~/Desktop/MOoutput/

I got the error below, where 'y_cross_value' is the name of my TensorFlow operation node that refers to 'y_true'. Even after passing --batch 32 on the command line, Model Optimizer does not take the second (y_true) placeholder shape as [32, num_classes]. From the DEBUG output I understand that it parses the entire model and gets stuck at the 'y_true' placeholder shape.

[ ERROR ]  Cannot infer shapes or values for node "gradients/y_cross_value_grad/tuple/group_deps".
[ ERROR ]  "The name 'gradients/y_cross_value_grad/tuple/group_deps:0' refers to a Tensor which does not exist. The operation, 'gradients/y_cross_value_grad/tuple/group_deps', exists but only has 0 outputs."
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fa323d32e18>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "gradients/y_cross_value_grad/tuple/group_deps" node.

When I tried to pass the shape values through the --input_shape argument as

$>python mo_tf.py --input_meta_graph ~/Downloads/dogs-cats-model.meta --reverse_input_channels --data_type FP16 --input_shape [[32,128,128,3],[32,2]] --output_dir ~/Desktop/MOoutput/

I am getting an error with this form of --input_shape as well.

When I changed the bracket format in --input_shape to

$>python mo_tf.py --input_meta_graph ~/Downloads/dogs-cats-model.meta --reverse_input_channels --data_type FP16 --input_shape ((32,128,128,3],(32,2)) --output_dir ~/Desktop/MOoutput/

I am getting the following error

bash: syntax error near unexpected token `('

The only way I could stop the error from occurring is to let Model Optimizer parse the two input placeholders itself, with only the batch value passed on the command line and all other parameters taken from the model. If there is any way to fix this bug, or any other solution, please help me.
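
(A side note in case it helps: per the Model Optimizer command-line reference, multiple inputs are normally given to --input as a comma-separated list of node names, with --input_shape taking one bracketed shape per input in the same order and no outer brackets. So presumably something like the command below, using the placeholder names x and y_true from above, is the expected form; I have not verified it against this exact model.)

$> python mo_tf.py --input_meta_graph ~/Downloads/dogs-cats-model.meta --input x,y_true --input_shape [32,128,128,3],[32,2] --reverse_input_channels --data_type FP16 --output_dir ~/Desktop/MOoutput/

Since y_true only feeds the loss/training branch, freezing the graph for inference should presumably remove that placeholder from the conversion entirely.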

Yes, I have gone through the supported layers document. I am converting a custom TensorFlow model using the "metagraph" format described under "Loading Non-Frozen Models to the Model Optimizer" in the documentation: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html

All seven TensorFlow operations, including tf.reduce_sum and tf.reduce_mean, appear to be handled when Model Optimizer parses the model, as I have seen their shape references in the DEBUG output, but they are not mentioned in the supported layers documentation: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html

Please update them in the documentation.

Thanks,

Sreekar S.

Shubha_R_Intel
Employee

Dear saineni, sreekar,

Have you looked at a technique called model cutting?

You can pass a specific layer to --input along with all the other Model Optimizer switches. To use this, get a TensorBoard picture of your model to see what the actual inputs are, and create a different entry point for Model Optimizer. I just gave a similar suggestion to another forum poster about model cutting; there I was able to determine what to pass to --input by looking at his generated IR. In your case you cannot even generate IR, but you can visualize your inputs through TensorBoard.
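
For example, assuming your inference output node is named 'y_pred' (that name is an assumption here; check the real one in TensorBoard), cutting the meta graph at the inference input and output could look roughly like:

$> python mo_tf.py --input_meta_graph ~/Downloads/dogs-cats-model.meta --input x --input_shape [32,128,128,3] --output y_pred --reverse_input_channels --data_type FP16 --output_dir ~/Desktop/MOoutput/

Everything that only feeds the training branch (y_true, the loss, and the gradients/* nodes from your log) then falls outside the converted subgraph.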

Let me know if this helps you. Also, please make sure you are using the latest and greatest OpenVINO 2019 R1.1, which was just released this week.

Thanks,

Shubha

s_3
Beginner

Thank you, it was useful.

Finally, it worked when I used the .pb file instead of the .meta file.

Shubha_R_Intel
Employee

Dear saineni, sreekar,

Thanks for reporting back. Glad to know you were successful.

Shubha