Employee
107 Views

Unable to convert tf model to IR

Hello,

 

I retrained an inception_v3 model and was trying to convert it to IR format using the Model Optimizer, but I get the following error. Is there any way to solve this issue?

[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  local variable 'new_attrs' referenced before assignment
[ ERROR ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\extractor.py", line 749, in extract_node_attrs
    supported, new_attrs = extractor(Node(graph, node))
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 104, in <lambda>
    extract_node_attrs(graph, lambda node: tf_op_extractor(node, check_for_duplicates(tf_op_extractors)))
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\tf\extractor.py", line 92, in tf_op_extractor
    attrs = tf_op_extractors[op](node)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\common\register_custom_ops.py", line 96, in <lambda>
    node, cls, disable_omitting_optional, enable_flattening_optional_params),
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\common\register_custom_ops.py", line 29, in extension_extractor
    supported = ex.extract(node)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\const_ext.py", line 32, in extract
    'value': tf_tensor_content(pb_tensor.dtype, shape, pb_tensor),
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\tf\extractors\utils.py", line 76, in tf_tensor_content
    dtype=type_helper[0]),
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 314, in main
    return driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 281, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 226, in prepare_ir
    graph = mo_tf.driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 104, in driver
    extract_node_attrs(graph, lambda node: tf_op_extractor(node, check_for_duplicates(tf_op_extractors)))
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\front\extractor.py", line 755, in extract_node_attrs
    new_attrs['name'] if 'name' in new_attrs else '<UNKNOWN>',
UnboundLocalError: local variable 'new_attrs' referenced before assignment

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
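For reference, the low-level UnicodeDecodeError at the bottom of the report can be reproduced in plain Python. The failing node name (case/cond/is_jpeg/Substr) suggests the frozen graph still contains its training-time JPEG preprocessing, and a Const tensor holding raw JPEG bytes (JPEG files begin with 0xFF 0xD8) cannot be decoded as ASCII. A minimal sketch, assuming that is what the Const node holds:

```python
# Raw JPEG magic bytes, as a Const tensor from an embedded preprocessing
# pipeline might contain them.
jpeg_magic = b"\xff\xd8\xff\xe0"

try:
    # The Model Optimizer effectively attempts an ASCII decode of the
    # tensor content, which fails on the very first byte.
    jpeg_magic.decode("ascii")
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xff in position 0: ...
```

This does not fix anything by itself, but it indicates the graph being converted still contains input/preprocessing nodes that should have been stripped before freezing.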

0 Kudos
12 Replies

Hi snhase,

Can you share the below details:

  • What is the version of OpenVINO?
  • What is the version of TensorFlow?
  • What command did you use to optimize the model?

Can you share the model and necessary files so that we can replicate the optimization at our end? If required, I can send a PM to share the model privately.

Best Regards,

Surya


Hi, thanks for the reply. I got this error when I tried running the demo program that came with OpenVINO. I had the same error with openvino_2019.3.379, which made me upgrade to 2020.1.033; unfortunately, I had the same error there too.

I cd into the bin directory and run setupvars.bat.

Then I cd into the model_optimizer directory to configure the Model Optimizer by running install_prerequisites.bat.

After running the file, I got a message on the console that all conditions are satisfied.

But whenever I cd into the demo directory to run the demo files, I get this error...
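For reference, the sequence of commands looks like this (a sketch assuming the default install path shown in the error log above; adjust if OpenVINO is installed elsewhere):

```shell
:: Set up the OpenVINO environment variables
cd "C:\Program Files (x86)\IntelSWTools\openvino\bin"
setupvars.bat

:: Install the Model Optimizer prerequisites
cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites"
install_prerequisites.bat
```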


Hello Surya,

Here is the info:

  • OpenVINO version: 2020.1.033
  • TensorFlow version: 1.14.0
  • Python version: 3.6.9
  • Command used to optimize the model:

python3 mo_tf.py --input_model retrained_model.pb --output_dir .\IR_model\ --mean_values [128,128,128] --scale_values [299,299,299]

So I went through this page on the OpenVINO website to see all the supported models and saw that inception_v3_2016_08_28 is listed. The model version I retrained is inception-2015-12-05. Does the platform work only with the supported inception_v3 version and not previous versions?

Also, I can send you the model via PM if needed to duplicate.

Thanks!

snhase


Hi Snhase,

It is recommended to use only the supported models.

Try the inception_v3_2016_08_28 model and let us know if the issue still persists.

Best Regards,

Surya


Hi Manpasu,

  • Which demo and model are you using?
  • What command are you using to execute the demo?

Best Regards,

Surya


Hi Chauhan,

Thanks for your reply. I figured out the cause of the problem. 


Hello Surya,

I fine-tuned the inception_v3_2016_08_28 model following the tf-slim instructions, then froze the resulting pbtxt and meta graph files to get a frozen .pb model, and tried to push it through the Model Optimizer. I am still unable to convert it to IR format. Here is the error I see:

[ ERROR ]  Unexpected exception happened during extracting attributes for node case/cond/is_jpeg/Substr.
Original exception message: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

That's the entire error I see. Any ideas?

Thanks,

snhase


Hi Snhase,

 

Can you mention the command you used to freeze the model?

Did you use the --input_type image_tensor parameter while freezing the model?

You may also refer to this article on freezing a TensorFlow model.

Feel free to ask any other questions.

Best Regards,

Surya


Hello Surya,

Thanks for your reply. My bad, I was using all the nodes instead of just one. I changed the output node to be only the inception_v3 output node (InceptionV3/Predictions/Reshape_1) and then froze the graph. This is the code snippet I used to freeze the graph:


import tensorflow as tf  # TF 1.x

with tf.Session() as sess:
    # Restore the fine-tuned checkpoint (cwdpath is defined earlier)
    saver = tf.train.import_meta_graph(cwdpath + '\\inception_v3_finetuned\\model.ckpt-2.meta')
    saver.restore(sess, tf.train.latest_checkpoint(cwdpath + '\\inception_v3_finetuned\\'))
    # Fold variables into constants, keeping only the prediction output node
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['InceptionV3/Predictions/Reshape_1'])
    with open('frozen_graph_v2.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())

Then I tried to put it through the optimizer and now get a new error, even though the frozen graph is not empty:

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.output_cut.OutputCut'>): Graph contains 0 node after executing <class 'extensions.front.output_cut.OutputCut'>. It considered as error because resulting IR will be empty which is not usual.


I also tried using the export_inference_graph function you pointed me to; even with that, I get the following error when putting the model through the optimizer:

[ ERROR ]  Cannot infer shapes or values for node "InceptionV3/Logits/Conv2d_1c_1x1/biases".
[ ERROR ]  Attempting to use uninitialized value InceptionV3/Logits/Conv2d_1c_1x1/biases
         [[{{node _retval_InceptionV3/Logits/Conv2d_1c_1x1/biases_0_0}}]]
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x000001D393DC6BF8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "InceptionV3/Logits/Conv2d_1c_1x1/biases" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Here is the freeze command I used : 

python export_inference_graph.py --input_type image_tensor --trained_checkpoint_prefix ".\inception_v3_finetuned\model.ckpt-2.meta" --output_file frozen_inception_v3.pb

Is something wrong with the commands, such that they are exporting incorrect frozen graphs? I am new to TensorFlow and OpenVINO, so I am at a loss and not sure how to proceed. Any suggestions?

Thanks,

Snehal


Hi Snhase,

What command did you use to optimize the model?

Can you share the model and necessary files so that we can replicate the optimization at our end? If required, I can send a PM to share the model privately.

Best Regards,

Surya


Hi Surya,

Here is the model optimizer command I used : 

python mo_tf.py --input_model "C:\Users\inception_v3_finetuned\frozen_v4.pb" --output_dir "C:\Users\inception_v3_finetuned" --mean_values [128,128,128] --scale_values [299,299,299]

The model files altogether are larger than 250 MB, so I can't attach them. Here is a Google Drive link; let me know if you can access it.

https://drive.google.com/open?id=1A8NBSBbKa6HUv3cATtvns_7UbtQ5bzX8


Also, just for debugging purposes, I tried converting the inception_v3 model as is, without any fine-tuning. I used the export_inference_graph.py tool provided by tf-slim to export a .pb inference graph from the latest checkpoint file, and I still get the same error:

[ ERROR ]  Cannot infer shapes or values for node "InceptionV3/Logits/Conv2d_1c_1x1/biases".
[ ERROR ]  Attempting to use uninitialized value InceptionV3/Logits/Conv2d_1c_1x1/biases
         [[{{node _retval_InceptionV3/Logits/Conv2d_1c_1x1/biases_0_0}}]]
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x00000278D9018BF8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "InceptionV3/Logits/Conv2d_1c_1x1/biases" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

The commands used for creating the inference graph and running the Model Optimizer are as follows:

python export_inference_graph.py --alsologtostderr --model_name=inception_v3 --output_file=C:\tmp\inception_v3_inf_graph.pb

python mo_tf.py --input_model C:\tmp\inception_v3_2016\inception_v3_inf_graph.pb --output_dir "C:\tmp\inception_v3_2016" --mean_values [128,128,128] --scale_values [299,299,299]

 

This is confusing to me. Why am I not able to do this without an error?

Thanks,

Snehal

 


Hello Surya,

After some more debugging, I figured out the problem was with how I was freezing the exported graphs for the supported models. Using the freeze_graph tool provided by TensorFlow, I can now convert the inception_v3 model as is to the IR format. Then I tried the same with my fine-tuned model and realized it was failing because my input node was a FIFO queue instead of the default inception_v3 input [?,299,299,3]. I modified the input to match the inception_v3_2016_08_28 input, and now I get a new error:

[ WARNING ]  Please set `version` attribute for node InceptionV3/Logits/Dropout_1b/dropout/random_uniform/RandomUniform with type=<UNKNOWN>
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      RandomUniform (1)
[ ERROR ]          InceptionV3/Logits/Dropout_1b/dropout/random_uniform/RandomUniform
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #2

I believe RandomUniform is not a supported op. A cursory Google search suggests this is a TF2 op, but I am not sure how or why this operation was used during fine-tuning; I trained in TF 1.14. Any ideas or suggestions for how to resolve this?
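For anyone else hitting this, the node name itself points at the culprit: RandomUniform comes from the Dropout layer, which is only needed during training. Below is an illustrative sketch (no TensorFlow required) of the kind of scan I mean, run over mocked (name, op) pairs as they would appear in a GraphDef; the op set here is hypothetical, not the optimizer's real list:

```python
# Illustrative, hypothetical set of ops left behind by training-time layers;
# RandomUniform is the one the Model Optimizer rejected in this thread.
UNSUPPORTED_OPS = {"RandomUniform"}

def find_unsupported(nodes):
    # nodes is a list of (name, op) pairs mimicking GraphDef entries
    return [name for name, op in nodes if op in UNSUPPORTED_OPS]

nodes = [
    ("input", "Placeholder"),
    ("InceptionV3/Logits/Dropout_1b/dropout/random_uniform/RandomUniform",
     "RandomUniform"),
    ("InceptionV3/Predictions/Reshape_1", "Reshape"),
]
print(find_unsupported(nodes))  # only the dropout node is flagged
```

If the scan flags only training-time nodes like this one, re-exporting the inference graph with dropout disabled (is_training=False in tf-slim) should remove them before freezing.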

 

Regards,

snhase
