Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Frustrated - No or multiple placeholders in the model, but only one shape is provided

Lam__Carson
Beginner

I have to say I am pretty upset with all the failed attempts to put my model onto the Neural Compute Stick 2. This time I even used the ResNeXt model that was recommended: the article https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow pointed me to https://github.com/taki0112/ResNeXt-Tensorflow/blob/master/ResNeXt.py. It is extremely hard to get help, and the tutorials are really not smooth or clear for the common programmer. Many of the file paths don't seem to be in the places the tutorials say they are, so I don't know how to debug my errors.

Take this error for example. I have 4 placeholders, just like in the GitHub script: one for the input, one for feeding the target labels during training, another for the learning rate, and a 4th to indicate whether the model is being trained. I am trying to convert the model that was given to me; this is not even my own custom model. Thanks to the good samaritans trying to suggest things, but really, Intel should be doing better.

[ 2019-01-21 23:22:57,693 ] [ DEBUG ] [ main:331 ]  Traceback (most recent call last):
  File "/home/carson/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/home/carson/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/home/carson/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 184, in tf2nx
    argv.freeze_placeholder_with_value)
  File "/home/carson/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/extractor.py", line 807, in user_data_repack
    _input_shapes, _freeze_placeholder = input_user_data_repack(graph, input_user_shapes, freeze_placeholder)
  File "/home/carson/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/extractor.py", line 760, in input_user_data_repack
    refer_to_faq_msg(32))
mo.utils.error.Error: No or multiple placeholders in the model, but only one shape is provided, cannot set it. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32. 
 

6 Replies
Evgeny_L_Intel
Employee

The tricky part of this particular topology is that TensorFlow generates a model which cannot be frozen correctly. Even if you freeze the model, it will be impossible to infer it with TensorFlow because it will contain broken node dependencies related to the "training" flag. The only option is to create a pure inference graph (set the "training" variable to the constant False) and freeze it. You can achieve this the following way:

Open the main script and change the line: 

logits = ResNeXt(x, training=training_flag).model

to: 

logits = ResNeXt(x, training=tf.constant(False)).model

and add several lines so the code looks like this:

     if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
         saver.restore(sess, ckpt.model_checkpoint_path)
     else:
         sess.run(tf.global_variables_initializer())

     # Freeze the graph: fold all variables into constants and save the pure
     # inference graph. logits.name[:-2] strips the ":0" suffix to get the
     # output node name.
     from tensorflow.python.framework import graph_io
     frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, [logits.name[:-2]])
     graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
     exit(0)

     
Then re-run the script. The file 'inference_graph.pb' will be generated.

Then you can convert the model with Model Optimizer using the following command line:

./mo.py --input_model inference_graph.pb --input_shape [1,32,32,3]
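
If Model Optimizer still reports multiple placeholders after freezing, one quick check is to list the Placeholder nodes left in the frozen graph. This is just a minimal sketch (assuming TensorFlow 1.x), not part of the original script:

# List the Placeholder ops remaining in inference_graph.pb.
# Only the image input should be left; the label, learning-rate and
# "training" placeholders are pruned when the graph is frozen to the logits node.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

print([n.name for n in graph_def.node if n.op == 'Placeholder'])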
 

Lam__Carson
Beginner

Let me give this advice a try, thanks for helping!

Evgeny_L_Intel
Employee

Also, note that if you plan to use "classification_sample" from the Inference Engine samples directory, then you need to specify two more command-line parameters to the Model Optimizer:

  • --reverse_input_channels. The IE samples read images in BGR channel order, while TF models are fed images in RGB channel order.
  • --scale 255. The model was trained with input values scaled from the [0,255] range to [0,1]. (A combined command is sketched below.)
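
Putting it together, the full conversion would look roughly like this (a sketch only, combining the flags above with the earlier command):

./mo.py --input_model inference_graph.pb --input_shape [1,32,32,3] --reverse_input_channels --scale 255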
Lam__Carson
Beginner

Hi @Evgeny Lazarev

Again, I really appreciate the help. I am a PyTorch user and am just getting used to TensorFlow now. It looks like training_flag: True is needed for batch_normalization to work during training. So when training on a new dataset, should I set training_flag: True in order to generate a .ckpt file where the parameters will be stored, and then after training rebuild the graph using

logits = ResNeXt(x, training=tf.constant(False)).model

for the purpose of saving the architecture, so that the .pb file will be compatible with the .ckpt file that was trained with training_flag: True?

Right now it saves:

ResNeXt.ckpt.data-00000-of-00001
ResNeXt.ckpt.index
ResNeXt.ckpt.meta

Does the .pb file contain both the architecture and the trained parameters, or will I need to save a .ckpt in addition to the .pb, since the tutorial for mo_tf.py says I need both:

Load Non-Frozen Models to the Model Optimizer
There are three ways to store non-frozen TensorFlow models and load them to the Model Optimizer:

  • Checkpoint: In this case, a model consists of two files:
      inference_graph.pb or inference_graph.pbtxt
      checkpoint_file.ckpt
    If you do not have an inference graph file, refer to Freeze Custom Models in Python.

To convert such a TensorFlow model:

  • Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
  • Run the mo_tf.py script with the path to the checkpoint file to convert the model:
      If the input model is in .pb format:
      mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
      If the input model is in .pbtxt format:
      mo_tf.py --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text

Thank you! All the best,

Carson

Evgeny_L_Intel
Employee

Carson,

Yes, you are right. The graph (meta file and checkpoint file) created with "training_flag: True" is compatible with a graph created with "training_flag: False".

The generated frozen .pb file contains both the architecture and the trained parameters. There is a common confusion about .pb files because you can also create a .pb file with the graph topology only, but that will be a non-frozen graph.

The non-frozen .pb file (with just the topology structure) and the .ckpt file (with the weights) should be used if you have a checkpoint file and want to use it as the "source" of the model. In this case you should use the "--input_checkpoint" command-line parameter.
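
For illustration only, the checkpoint-based invocation has this general form. This is a sketch: the 'ResNeXt.ckpt' prefix and 'inference_graph.pbtxt' file name are assumed from the files mentioned above, and for this particular model the frozen inference_graph.pb route is still the one to use:

mo_tf.py --input_model inference_graph.pbtxt --input_checkpoint ResNeXt.ckpt --input_model_is_text --input_shape [1,32,32,3]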

Lam__Carson
Beginner

Thank you, this worked for me. I trained as usual, having built the computational graph as

logits = model(_x, training=training_flag).model

Then, after training, I saved the model and parameters into the .pb with:

import tensorflow as tf
from tensorflow.python.framework import graph_io

# image_size, img_channels, class_num and model come from the training script
tf.reset_default_graph()
x = tf.placeholder(tf.float32, shape=[None, image_size, image_size, img_channels], name="input_img")
y = tf.placeholder(tf.float32, shape=[None, class_num])

training_flag = tf.placeholder(tf.bool)
learning_rate = tf.placeholder(tf.float32, name='learning_rate')

# rebuild the graph in pure inference mode
logits = model(x, training=tf.constant(False)).model
saver = tf.train.Saver(tf.global_variables())

# assumed checkpoint location; adjust to wherever the ResNeXt.ckpt.* files were saved
ckpt = tf.train.get_checkpoint_state('./model')

with tf.Session() as sess:
    saver.restore(sess, ckpt.model_checkpoint_path)
    # freeze: only the nodes needed to compute logits end up in inference_graph.pb
    frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, [logits.name[:-2]])
    graph_io.write_graph(frozen, './model', 'inference_graph.pb', as_text=False)

 
