SDwiv1
Beginner
448 Views

Converting a model from TF to IR using multiple inputs and a non-image input shape

Hi,

I would like help converting my model from TF to IR format.

My model has two input nodes, input1 and input2 (matrices, not images), and a single output. The inputs are defined as:

self.geo_inputs = tf.placeholder(dtype=tf.float32, shape=(None, n_frame, 182), name='input1')
self.traj_inputs = tf.placeholder(dtype=tf.float32, shape=(None, n_frame, 28), name='input2')

The first dimension of each shape is left as None. If I use the mo.py script to convert my model without specifying the input shape parameter, I get the following error:

Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/sweta/Desktop/12-3-19/fall_v1.1/frozen_model.pb
  - Path for generated IR:   /home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/irmodels
  - IR output name:   frozen_model
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   input1,input2
  - Output layers:   output
  - Input shapes:   Not specified, inherited from the model
  - Mean values:   Not specified
  - Scale values:   Not specified
  - Scale factor:   Not specified
  - Precision of IR:   FP32
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   False
  - Reverse input channels:   False
TensorFlow specific parameters:
  - Input model in text protobuf format:   False
  - Offload unsupported operations:   False
  - Path to model dump for TensorBoard:   None
  - List of shared libraries with TensorFlow custom layers implementation:   None
  - Update the configuration file with input/output node names:   None
  - Use configuration file used to generate the model with Object Detection API:   None
  - Operations to offload:   None
  - Patterns to offload:   None
  - Use the config file:   None
Model Optimizer version:   1.5.12.49d067a0
[ ERROR ]  Shape [-1 30 28] is not fully defined for output 0 of "input2". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "input2".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "input2". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7fe946875bf8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "input2" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

If I change the mo.py command to supply the --input_shape parameter:

sweta@sweta-VirtualBox:~/intel/computer_vision_sdk/deployment_tools/model_optimizer$ sudo python3 mo.py --input_model ~/Desktop/12-3-19/fall_v1.1/frozen_model.pb --input input1,input2 --output output --output_dir irmodels --input_shape [1,30,182],[1,30,28]

I get the following results:

Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/sweta/Desktop/12-3-19/fall_v1.1/frozen_model.pb
  - Path for generated IR:   /home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/irmodels
  - IR output name:   frozen_model
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   input1,input2
  - Output layers:   output
  - Input shapes:   [1,30,182],[1,30,28]
  - Mean values:   Not specified
  - Scale values:   Not specified
  - Scale factor:   Not specified
  - Precision of IR:   FP32
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   False
  - Reverse input channels:   False
TensorFlow specific parameters:
  - Input model in text protobuf format:   False
  - Offload unsupported operations:   False
  - Path to model dump for TensorBoard:   None
  - List of shared libraries with TensorFlow custom layers implementation:   None
  - Update the configuration file with input/output node names:   None
  - Use configuration file used to generate the model with Object Detection API:   None
  - Operations to offload:   None
  - Patterns to offload:   None
  - Use the config file:   None
Model Optimizer version:   1.5.12.49d067a0
[ ERROR ]  -----------------------------------------------
[ ERROR ]  ---------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.middle.TF_lstm_cell_to_generic.TensorFlowLSTMtoGeneric'>)": 
[ ERROR ]  Traceback (most recent call last):
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 114, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/utils/replacement_pattern.py", line 28, in find_and_replace_pattern
    apply_pattern(graph, **self.pattern(), action=self.replace_pattern)  # pylint: disable=no-member
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 98, in apply_pattern
    for_each_sub_graph(graph, lambda graph: apply_pattern(graph, nodes, edges, action, node_attrs, edge_attrs))
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 39, in for_each_sub_graph
    func(node[sub_graph_name])
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 98, in <lambda>
    for_each_sub_graph(graph, lambda graph: apply_pattern(graph, nodes, edges, action, node_attrs, edge_attrs))
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/pattern_match.py", line 95, in apply_pattern
    action(graph, match)
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/extensions/middle/TF_lstm_cell_to_generic.py", line 55, in replace_pattern
    assert len(weights_node.out_nodes()) == 1
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 328, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/home/sweta/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 127, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.middle.TF_lstm_cell_to_generic.TensorFlowLSTMtoGeneric'>)": 

[ ERROR ]  --------------- END OF BUG REPORT --------------
[ ERROR ]  -----------------------------------------------

 

Can anyone help me with the conversion of my model to IR format? 

Thank you,

Sweta

Shubha_R_Intel
Employee

Dear Sweta, please try with --log_level DEBUG and post the results here. Thanks!

Shubha

SDwiv1
Beginner

Hi Shubha,

I used the following command to convert my TF model:

sudo python3 mo_tf.py --input_model ~/Desktop/12-3-19/fall_v1.1/frozen_model_1.pb --output_dir irmodels --input input1,input2 --output output --log_level=DEBUG

Since the log is too large (it includes all the [WARNING] and [INFO] messages), I will attach it as a text file rather than paste it here as code.
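One way to capture such a log to a file is to redirect both stdout and stderr; a minimal sketch, where the braced echo block is only a stand-in for the real mo_tf.py command:

```shell
# Minimal sketch of capturing a command's full output to a file.
# The braced echo block stands in for:
#   python3 mo_tf.py ... --log_level=DEBUG
# `> mo_debug.log 2>&1` sends stdout to the file and folds stderr in with it.
{ echo "[ INFO ] graph parsed"; echo "[ ERROR ] shape not defined" 1>&2; } > mo_debug.log 2>&1
cat mo_debug.log
```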

Thank you,

Sweta 

 

arunav
Beginner

Hello,

I am also facing a similar problem while converting a TensorFlow graph using OpenVINO.

Could you please help me debug the issue?

My logs and model (.meta) are attached.

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "AssignVariableOp_5" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

Kindly help

Regards

Arun

Shubha_R_Intel
Employee

Dear Kumar, Arun and Dwivedi, Sweta,

Regarding 

Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'extensions.middle.TF_lstm_cell_to_generic.TensorFlowLSTMtoGeneric'>)":

 

This is a known bug against OpenVINO 2019 R1.1, which I have filed. Hopefully it will be fixed in the R2 release.

Sorry about the inconvenience.

Thanks,

Shubha

arunav
Beginner

Hi Shubha,

Thanks for the response.

It would be really helpful to know when R2 will be available, as we have deadlines associated with Intel products.

 

Kind Regards

Arun

Katrichek__Igor
Beginner

Hello.

I get the same error while converting a TensorFlow graph using OpenVINO 2019 R2.

First, I downloaded the model https://nomeroff.net.ua/models/mrcnn/mask_rcnn_numberplate_0700.pb

Then, I downloaded the config https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/mask_rcnn...

When I convert this model with the command

python mo_tf.py --input_model=mask_rcnn_numberplate_0700.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/mask_rcnn_support_api_v1.11.json --tensorflow_object_detection_api_pipeline_config mask_rcnn_inception_v2_coco.config --reverse_input_channels

I get the error:

E0727 18:06:54.900050 10084 main.py:307] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.create_tensor_nodes.CreateTensorNodes'>): Graph contains 0 node after executing <class 'extensions.front.create_tensor_nodes.CreateTensorNodes'>. It considered as error because resulting IR will be empty which is not usual

 

Thank you,

Igor

Shubha_R_Intel
Employee

Dear Katrichek, Igor,

Your particular issue is different from that of the original poster, Dwivedi, Sweta. Dwivedi, Sweta's issue should be fixed in OpenVINO 2019 R2. However, you are using a model which is technically not on the Model Optimizer TensorFlow supported list. Mask R-CNN is one of the models on the supported list; perhaps you can try one of those?

Thanks kindly,

Shubha

 

倪__嘉旻
Beginner

Dwivedi, Sweta wrote:

Hi Shubha,

I used the following command to convert my TF model:

sudo python3 mo_tf.py --input_model ~/Desktop/12-3-19/fall_v1.1/frozen_model_1.pb --output_dir irmodels --input input1,input2 --output output --log_level=DEBUG

Since the log was too big including the [WARNING] and [INFO] logs, I will attach a text file rather than paste as code here.

Thank you,

Sweta 

 

 

Hi Sweta,

How did you save all the debug information to the text file?

Look forward to your reply! Thank you!

 

Best regards,

Kathryn 


SSola8
New Contributor I

Hi 倪, 嘉旻,

Please do the following:

1. script filename.log

2. Run the command whose output you want to capture (for example, your mo_tf.py command)

3. exit

filename.log will then contain all the information that was displayed on the command prompt. Thanks!
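Spelled out, a session using script looks like the comments below; as a non-interactive alternative (my own suggestion, not part of the steps above), piping through tee saves the output while still showing it on screen:

```shell
# Interactive recording with `script` (type these by hand in a terminal):
#   script filename.log      # start recording everything printed to the terminal
#   python3 mo_tf.py ...     # run the Model Optimizer command here
#   exit                     # stop recording; the session is saved in filename.log
#
# Non-interactive alternative: `tee` writes output to a file and to the screen.
# The echo is a stand-in for the real command; 2>&1 folds stderr into the capture.
echo "[ DEBUG ] shape inference for node input2" 2>&1 | tee filename.log
```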

 
