Dan_P_Intel
Employee
115 Views

model optimizer conversion doesn't work

Hi,

I've been trying to get my network running on Gen for about a week now. I've been told that OpenVINO can handle the porting very easily, but this has not been my experience so far. Below I'm providing all the materials needed to reproduce the issue. Could you please help?

How to reproduce the issue

$ git clone https://github.com/takanokage/Learning-to-See-in-the-Dark.git l2std

$ cd l2std

$ python test_Sony.py

$ python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py --input_model test_Sony_graph.pb

 

Model Optimizer arguments

    Batch:     1

    Precision of IR:     FP32

    Enable fusing:     True

    Enable gfusing:     True

    Names of input layers:     inherited from the model

    Path to the Input Model:     test_Sony_graph.pb

    Input shapes:     inherited from the model

    Log level:     ERROR

    Mean values:     ()

    IR output name:     inherited from the model

    Names of output layers:     inherited from the model

    Path for generated IR:     /home/dpetre/l2std/master

    Reverse input channels:     False

    Scale factor:     None

    Scale values:     ()

    Version:     0.3.75.d6bae621

    Input model in text protobuf format:     False

    Offload unsupported operations:     False

    Path to model dump for TensorBoard:     None

    Update the configuration file with input/output node names:     None

    Operations to offload:     None

    Patterns to offload:     None

    Use the config file:     None

2018-06-28 10:52:25.737848: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

[ ERROR ]  Cannot infer shapes or values for node "g_conv6_1/weights".

[ ERROR ]  Attempting to use uninitialized value g_conv6_1/weights

     [[Node: _retval_g_conv6_1/weights_0_0 = _Retval[T=DT_FLOAT, index=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](g_conv6_1/weights)]]

[ ERROR ]  

[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f2ba5117400>.

[ ERROR ]  Or because the node inputs have incorrect values/shapes.

[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.

[ ERROR ]  Stopped shape/value propagation at "g_conv6_1/weights" node. For more information please refer to Model Optimizer FAQ, question #38.

 
6 Replies
Severine_H_Intel
Employee

Hi Dan Petre,

The .pb model output by test_Sony.py is not a frozen model, and OpenVINO can only convert frozen models from TF. In order to do that, I invite you to read our documentation, which you can find in the package: computer_vision_sdk_2018.2.299/deployment_tools/documentation/ConvertFromTF.html

As I did the steps, this is the line you need to run:

python3 -m tensorflow.python.tools.freeze_graph --input_graph test_Sony_graph.pb --input_checkpoint checkpoint/Sony/model.ckpt --input_binary=true --output_node_names=g_conv10/BiasAdd --output_graph frozen_Sony.pb

You can see that the frozen graph is much larger than the original .pb: the frozen one contains the weights, which test_Sony_graph.pb did not, and that explains the error output when running the MO on the unfrozen one.

I set --output_node_names=g_conv10/BiasAdd, as the last layer, DepthToSpace, is not supported by the MO.

When running the Model Optimizer, you will need to provide the shape of the input (replace the question marks here with your actual dimensions):

python3 mo_tf.py --input_model frozen_Sony.pb --input_shape=[?,?,?,4] --input=Placeholder
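Since the graph is cut at g_conv10/BiasAdd, the final DepthToSpace step has to be applied to the network output yourself after inference. A minimal NumPy sketch of that post-processing, assuming NHWC layout and TensorFlow's default DCR channel ordering (the helper name is illustrative, not part of OpenVINO):

```python
import numpy as np

def depth_to_space(x, block_size):
    """Rearrange channel data into spatial blocks, NHWC layout.

    Mirrors tf.depth_to_space in its default DCR mode: an input of shape
    [N, H, W, C] becomes [N, H*block_size, W*block_size, C/block_size^2].
    """
    n, h, w, c = x.shape
    b = block_size
    assert c % (b * b) == 0, "channels must be divisible by block_size^2"
    c_out = c // (b * b)
    # Split the channel dim into the two spatial block factors ...
    x = x.reshape(n, h, w, b, b, c_out)
    # ... interleave them with the spatial dims ...
    x = x.transpose(0, 1, 3, 2, 4, 5)
    # ... and merge into the enlarged spatial dims.
    return x.reshape(n, h * b, w * b, c_out)
```

For the Sony model you would run this on the FP32 output of g_conv10/BiasAdd after inference, with block_size=2 as in the original network.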

Best, 

Severine

 

Dan_P_Intel
Employee

Hi Severine,

thank you for your reply and for your help, much appreciated.

I've already tried, unsuccessfully, to freeze the graph.
My main issue was the --output_node_names argument: how can I determine it?
I can see now that TensorBoard helps with this.
It would be nice if OpenVINO could figure out the output node name without user help. The fact that OpenVINO can manipulate the TensorFlow graph and model would seem to suggest that it has enough information to do so.
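For what it's worth, the heuristic that tools like summarize_graph use is simple: in a frozen graph, the output nodes are the ones no other node consumes. A minimal sketch of that idea over a toy node-to-inputs mapping (the graph below is illustrative, not the actual Sony model):

```python
def find_output_nodes(graph):
    """Return nodes that are not consumed as an input by any other node.

    `graph` maps each node name to the list of node names it takes as
    input, mimicking the structure of a TensorFlow GraphDef.
    """
    consumed = {inp for inputs in graph.values() for inp in inputs}
    return sorted(name for name in graph if name not in consumed)

# Toy graph loosely shaped like the tail of this thread's network.
graph = {
    "Placeholder": [],
    "g_conv10/Conv2D": ["Placeholder"],
    "g_conv10/BiasAdd": ["g_conv10/Conv2D"],
    "DepthToSpace": ["g_conv10/BiasAdd"],
}
print(find_output_nodes(graph))  # ['DepthToSpace']
```

The same reasoning applied to a real GraphDef is ambiguous when a graph has multiple dead-end nodes (summaries, savers), which is why these tools report candidates rather than a single answer.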

Thanks again!

Best,
Dan
 

Dan_P_Intel
Employee

By the way, you mentioned tf.depth_to_space(…) is not supported.
What can I do about it? What's your recommendation?

Thanks!

Monique_J_Intel
Employee

I've sent Dan the steps for converting a TF model. They showcase using Bazel to build the Graph Transform Tool's summarize_graph, which gives you the output node of the model; that node name is then used as a parameter to freeze the model, which is the only format the MO accepts for TF models. I will work with Dan internally, as we are focusing on moving internal posts to another platform.
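For reference, the summarize_graph steps look roughly like this, run from a TensorFlow source checkout (the frozen graph path below is this thread's model; substitute your own):

```shell
# Build the Graph Transform Tool's summarize_graph (one-time, from the TF source tree).
bazel build tensorflow/tools/graph_transforms:summarize_graph

# Inspect the frozen model; the report lists likely input and output nodes.
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=frozen_Sony.pb
```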

yu__jia
Beginner

Hi, my issue is also the --output_node_names argument: how can I determine it? Can you send me the steps for converting a TF model?

Thank you

Jia

Shubha_R_Intel
Employee

Dear yu jia,

This is a good question. How does one find out --output_node_names? There are many ways, actually, but this is not really an OpenVINO question. Here are some internet links to help you:

https://stackoverflow.com/questions/47267636/tensorflow-how-do-i-find-my-output-node-in-my-tensorflo...

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms (look at summarize_graph)

Hope it helps,

Thanks,

Shubha