Aditya_K_Intel
Employee
273 Views

OpenVINO Model Optimizer Issue

Hi,

 

I am having an issue while running the Model Optimizer on a TensorFlow-generated model file. Below are the console statements:

 

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model "C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb"
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       saved_model
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0714 22:37:32.726917 30624 main.py:320] Cannot load input model: TensorFlow cannot read the model file: "C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #43.

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model "C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb" --tensorflow_use_custom_operations_config  yolo_v3_changed.json --batch 1
[ ERROR ]  The value for command line parameter "tensorflow_use_custom_operations_config" must be existing file/directory,  but "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\yolo_v3_changed.json" does not exist.

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>python mo_tf.py --input_model "C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb" --tensorflow_use_custom_operations_config temp.txt --batch 1
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       saved_model
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\temp.txt
Model Optimizer version:        2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0714 22:45:57.818293 27420 main.py:320] Cannot load input model: TensorFlow cannot read the model file: "C:\Program Files\Intel\ImageAnalysisFramework\Model\HDMx_UnitEmpty_Mold_64x64x1_CNN_10000\simple_savemodel\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #43.

 

Can you please help me determine the root cause of this issue?

Thanks,

Aditya

11 Replies
Sahira_Intel
Moderator

Hi Aditya,

It looks like you're converting a SavedModel. Your SavedModel directory should have these components:

  1. A SavedModel Protocol Buffer 
    • your saved_model.pb/.pbtxt 
    • graph definitions as MetaGraphDef
  2. Assets
    • contains auxiliary files such as vocabularies
  3. Extra Assets
    • Subfolder where higher-level libraries and users can add their own assets that coexist with the model, but aren't loaded by the graph.
  4. Variables
    • Includes outputs from the TensorFlow Saver
    • variables.data & variables.index 
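
Laid out on disk, the structure above would look roughly like this (a sketch only; the assets folders are optional, and the exact variables shard names depend on how the model was saved):

```
simple_savemodel/
├── saved_model.pb        (SavedModel protocol buffer holding the MetaGraphDef; not a frozen graph)
├── assets/               (auxiliary files such as vocabularies)
├── assets.extra/         (user-added assets, not loaded by the graph)
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```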

Can you please verify that your directory follows this structure? That could be what is causing the error here.

Best Regards,

Sahira 

Aditya_K_Intel
Employee

Hi Sahira,

I have that structure, and it seems I need to use the checkpoint file together with the .pb file for the Model Optimizer. I tried the MetaGraph file with the Model Optimizer instead, but now I am stuck on a different error. Below are the log statements:

 

C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer>python mo_tf.py --input_meta_graph "C:\Users\adityaku\Documents\Projects\OpenVINO\Model\EmptyPocket\My_Model-1000.meta" --input_shape [1,64,64,3]
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      None
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\.
        - IR output name:       My_Model-1000
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,64,64,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  None
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0716 10:39:57.504062 23144 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.

 

Let me know how to get past this issue.

 

Regards,

Aditya

Aditya_K_Intel
Employee

Hi Sahira,

Can you please let me know how to resolve the above issue? I am stuck on it and can't move further in my task.

Thanks,

Aditya

Sahira_Intel
Moderator

Hi Aditya,

I apologize for the delay in my response. Can you please provide your model? I can try to convert it on my end to see if I'm also getting that error.

Sincerely,

Sahira 

Sahira_Intel
Moderator

Hi Aditya,

Thanks for your model. I'm running into some errors while converting as well. Can you please provide more information about your model: 

How did you freeze the model? Is this a pre-trained model or a custom-trained one? What model architecture did you use?

Sincerely,

Sahira 

Aditya_K_Intel
Employee

Hi Sahira,

The model is not frozen. It is a custom-trained model, and it uses a CNN architecture.
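
Since the model is not frozen, one option would be to freeze the checkpoint into a single GraphDef with TensorFlow's freeze_graph tool before running the Model Optimizer. This is a sketch only, assuming a TensorFlow 1.x checkpoint; the output node name `output` is hypothetical and must be replaced with the real output op name from the graph:

```shell
python -m tensorflow.python.tools.freeze_graph --input_meta_graph My_Model-1000.meta --input_checkpoint My_Model-1000 --input_binary true --output_node_names output --output_graph frozen_model.pb
```

The resulting frozen_model.pb could then be passed to mo_tf.py via --input_model.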

Let me know if you need more information.

 

Thanks,
Aditya 

 

Sahira_Intel
Moderator

Hi Aditya,

Thank you for the information! I am working on this and will get back to you asap.

Best Regards,

Sahira 

Sahira_Intel
Moderator

Hi Aditya,

I apologize for the delay in my response. I looked at your model, and it appears to have three inputs: x and two TFRecordReaders. OpenVINO detects these additional inputs and throws an error because only one input shape was provided. Are the two TFRecordReader inputs needed in your model?

Sincerely,

Sahira

Aditya_K_Intel
Employee

Hi Sahira,

Can you explain how I should use the Model Optimizer on a model created with TensorFlow? When I run inference with this model, I have to provide an image of a given size and channel count. Do I have to provide the same input to the Model Optimizer?

 

Thanks,
Aditya 

Aditya_K_Intel
Employee

Hi Sahira,

I have attached the model. Let me know if you need any other information.

Thanks,
Aditya

Sahira_Intel
Moderator

Hi Aditya,

It looks like OpenVINO does not support TFRecordReader as an input, and that is why you are not able to convert your model. All three inputs would need to be supplied with input shapes.
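
If the two TFRecordReader inputs belong only to the training input pipeline, one possible workaround (a sketch; `x` is the input name identified in my earlier reply, and the shape is the one you passed earlier, so verify both against your graph) is to cut the graph at the real placeholder with `--input`, which drops the reader nodes:

```shell
python mo_tf.py --input_meta_graph "C:\Users\adityaku\Documents\Projects\OpenVINO\Model\EmptyPocket\My_Model-1000.meta" --input x --input_shape [1,64,64,3]
```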

Please let me know if you have any further questions.

Best Regards,

Sahira