Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

IR Model from TensorFlow

Mspiz
New Contributor I
Hi, I'm trying to use a TensorFlow model defined in .meta files. To get the .pb, I define the metagraph, the checkpoint, and the output node in the freeze script. I use the implementation reported in the guide for freezing, which is:

import tensorflow as tf
from tensorflow.python.framework import graph_io

frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(sess.graph, './', 'inference_graph.pb', as_text=False)

With the resulting inference graph .pb, my command line is:

sudo python3 mo_tf.py --input_model inference_openvino_graph.pb --model_name res_model_IR --output_dir ../my_ir_model/ --mean_values [x0x0x0,x1x1x1,x2x2x2] --log_level=DEBUG

The output error is:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Variable_34/Adam_1
  [[Node: _retval_Variable_34/Adam_1_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Variable_34/Adam_1)]]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 180, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 189, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "Variable_34/Adam_1" node.

On each run the propagation stops at a different Variable_x node (Variable_39/Adam, Variable_36/Adam, Variable_1/Adam).

Thanks in advance.

Cheers,
Carmine
Mark_L_Intel1
Moderator

Hi Carmine,

You might be getting the error because of the output node name. How did you get it?

I am trying to reproduce your issue, but I don't have any clue which model you are using or what your process is.

It looks like an issue at the input of the model: the node name was wrong in the function partial_infer(graph). You can try putting a print("abc") statement at line 121 of the file ~/deployment_tools/model_optimizer/mo/middle/passes/infer.py; if it doesn't print "abc", it means line 120 raised an exception because the node name was wrong.

There are several ways to freeze the model, so putting the freezing code in your Python module might not be a solid method. If you are using the TF models from GitHub, you might try the following after "git clone":

  1. Under the directory ~/models/research/slim, use export_inference_graph.py to export the model
  2. Get the output node names (output_node_names) with ~/deployment_tools/model_optimizer/mo/utils/summarize_graph.py
  3. Freeze the output of #1 with /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py
  4. Convert the frozen model with mo_tf.py
git clone https://github.com/tensorflow/models.git
cd models/research/slim

python3 export_inference_graph.py --alsologtostderr --model_name=inception_v3 --output_file=<output directory>/inception_v3_inf_graph.pb
cd <output directory>
python3 ~/deployment_tools/model_optimizer/mo/utils/summarize_graph.py --input_model=inception_v3_inf_graph.pb
python3 /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph inception_v3_inf_graph.pb --input_binary --input_checkpoint inception_v3.ckpt --output_node_names InceptionV3/Predictions/Reshape_1 --output_graph inception_v3_frozen.pb
python3 /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/model_optimizer/mo_tf.py --input_model=inception_v3_frozen.pb --input_shape [1,299,299,3]

Note: freeze_graph.py can only be found in OpenVINO R2.
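
As a quick sanity check that a .pb is really frozen, you can scan it for variable ops; a properly frozen graph should not contain any. This is just a sketch, assuming TensorFlow 1.x and that the file sits in the current directory (adjust the filename to yours):

import tensorflow as tf

# Load the serialized GraphDef and list any remaining variable-related ops.
graph_def = tf.GraphDef()
with tf.gfile.GFile('inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

var_ops = [n.name for n in graph_def.node
           if n.op in ('Variable', 'VariableV2', 'Assign')]
print('variable ops still in the graph:', var_ops if var_ops else 'none, the graph looks frozen')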

Mark

Monique_J_Intel
Employee

Hi Carmine,

Unfortunately, this documentation is incorrect, as this is an error you get when your model isn't frozen, and we will update it accordingly. Now, the easiest way for you to proceed is to freeze your graph using the ckpt.meta (your meta graph file) and your checkpoint file. You can do this by downloading the freeze_graph.py file from TensorFlow's GitHub and running the following command with it:

#INSTALL DIR = /opt/intel/computer_vision_sdk/

python3 freeze_graph.py \
--input_meta_graph <model.ckpt.meta> \
--input_binary \
--input_checkpoint model.ckpt \
--output_node_names outputnode \
--output_graph model_frozen.pb

Then you will have model_frozen.pb, the frozen model that you can convert to IR using the Model Optimizer.
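
For reference, if you would rather freeze from inside your own Python script instead of using freeze_graph.py, the important detail is to write out the converted (frozen) graph_def rather than sess.graph (the snippet quoted in the first post writes sess.graph, which is still the unfrozen graph). A minimal sketch, assuming sess is your active session and the output node name matches your graph:

import tensorflow as tf
from tensorflow.python.framework import graph_io

# Convert all variables to constants, then write the *frozen* graph_def to disk.
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["name_of_the_output_node"])
graph_io.write_graph(frozen, './', 'model_frozen.pb', as_text=False)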

Kind Regards,

Monique Jones

Mspiz
New Contributor I
Hi guys, thanks for the reply. I did more investigation on it and yes, the graph is not frozen. Actually I don't use an official model, but a network trained with the vgg-deepdream topology to do fast style transfer. In Python, using the ".meta", I can do inference and get meaningful result values.

I did meta -> inferenceGraph.pb -> FrozenGraph.pb, but in the Model Optimizer I came back to an operation issue about pow (as I described in this post: https://software.intel.com/en-us/comment/1926702#comment-1926702). I tried different ways to freeze but always got the same operation issues. Overwriting the operation does not resolve my problem; in that case my network gives me an empty result image.

Any suggestion is kindly appreciated. Thanks for your time.

All the best,
Carmine Spizuoco
Monique_J_Intel
Employee

Hi Carmine,

I see that Severine helped you identify your issue with the power layer in Model Optimizer extensibility. Did you by chance also write the extensibility code for the power layer for the Inference Engine? Also, which hardware device are you planning to deploy this layer on: CPU, GPU, FPGA, or Movidius? The reason I ask is that the issue may reside in the Inference Engine implementation of the power layer.

Kind Regards,

Monique Jones
Mspiz
New Contributor I
Hi Monique, yes I did it, but with no good results. I'm running Ubuntu 16.04 in VMware under OS X. I'm planning to deploy on CPU first and then on the Movidius stick. When you said "the issue may reside in the implementation of the power layer for inference engine", I'm not sure I follow you; can you be more precise?

Cheers,
Carmine Spizuoco
Monique_J_Intel
Employee

I am thinking that the way the power layer is implemented in your model may be different from the power layer implementation in OpenVINO; this could simply be different defaults, etc. If this is the case, then you could create a custom layer with the correct implementation, use it with the Inference Engine at run time, and get the correct results.
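
To illustrate the kind of mismatch I mean: as far as I know, the Inference Engine Power layer follows the Caffe-style definition (shift + scale * x) ^ power, while a plain tf.pow(x, y) node in your graph has no scale or shift. A small sketch (plain NumPy, parameter names are just illustrative) to compare the two on sample values:

import numpy as np

def ie_power(x, power=1.0, scale=1.0, shift=0.0):
    # Caffe-style Power layer: (shift + scale * x) ^ power
    return np.power(shift + scale * x, power)

x = np.array([0.5, 1.0, 2.0], dtype=np.float32)
# If the TF graph computes tf.pow(x, 2.0), the matching parameters here are
# power=2.0, scale=1.0, shift=0.0; a custom layer with different defaults
# would produce different outputs and could explain an empty result image.
print(ie_power(x, power=2.0))   # [0.25 1.   4.  ]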

Kind Regards,

Monique Jones

Mspiz
New Contributor I
Hi, I guess when you speak about the power layer you mean the power operation, right? Well, it would be cool to write a custom layer, though for real work rather than just as an exercise. :) I mean, this is for testing style transfer, so, since as you said this example works for you: which model did you use to test it? Did you replace some custom operation or layer? Can you link a reference to your model and topology?

Kind Regards,
Carmine Spizuoco