Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error while converting a TensorFlow model to IR using Model Optimizer

Vina_Alvionita
Beginner

Hello, I want to build a mask classifier model, so I trained a TensorFlow model using the MobileNet V2 architecture. It has two outputs: mask and no mask. When I tested the model it ran well, but I want to use it with OpenVINO. When I converted it to an IR model using the Model Optimizer, I got an error like this.

 

C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer>python mo.py --input_name "D:\saved_model.pb"
usage: mo.py [options]
mo.py: error: unrecognized arguments: --input_name D:\saved_model.pb

C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer>python mo.py --input_name saved_model.pb
usage: mo.py [options]
mo.py: error: unrecognized arguments: --input_name saved_model.pb

C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer>python mo.py --input_model saved_model.pb
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer\saved_model.pb
- Path for generated IR: C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer\.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Truncated message..
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.

Peh_Intel
Moderator

Hi Vina,


Greetings to you. Firstly, I observed that you're using an old version of the Intel® Distribution of OpenVINO™ toolkit (2020.2). I would recommend installing the latest version of the Intel® Distribution of OpenVINO™ toolkit, which is 2021.2.


Secondly, I observed that you’re using this command line:

python mo.py --input_model saved_model.pb

This command makes --input_model point to the saved_model.pb file, but it should point to the SavedModel directory instead. The command line should be:

python mo.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
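The distinction can be sketched as a small helper (a hypothetical illustration, not part of the Model Optimizer; only the --input_model and --saved_model_dir flags come from mo.py itself, and the paths below are placeholders):

```python
# Sketch: which Model Optimizer flag to use for a TensorFlow model.
# --input_model expects a frozen .pb graph file; --saved_model_dir expects
# the directory that contains saved_model.pb plus the variables/ folder.
def mo_arguments(path, is_saved_model_dir):
    """Return mo.py arguments for a frozen graph file or a SavedModel dir."""
    if is_saved_model_dir:
        # TF SavedModel: point at the directory, not the .pb inside it
        return ["python", "mo.py", "--saved_model_dir", path]
    # frozen graph: point at the .pb file itself
    return ["python", "mo.py", "--input_model", path]

print(" ".join(mo_arguments("frozen_graph.pb", False)))
print(" ".join(mo_arguments("saved_model_dir", True)))
```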

 

You may refer to the link below under point 3 of the “Loading Non-Frozen Models to the Model Optimizer” section for the steps to store non-frozen TensorFlow models and load them to the Model Optimizer:

https://docs.openvinotoolkit.org/2020.2/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#loading-nonfrozen-models


For your information, SSD MobileNet V2 COCO is a TensorFlow Object Detection API model which is supported by the Model Optimizer. TensorFlow Object Detection API models require additional arguments when converting to IR format, for example:

- --reverse_input_channels

- --tensorflow_object_detection_api_pipeline_config

- --transformations_config


You may find all the required Model Optimizer arguments for SSD MobileNet V2 COCO in this file:

<path_to_openvino>\deployment_tools\open_model_zoo\models\public\ssd_mobilenet_v2_coco\model.yml
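Put together, those arguments yield a command along these lines (a sketch only; the file names frozen_inference_graph.pb, pipeline.config, and ssd_v2_support.json are illustrative placeholders, and the correct transformations_config file depends on the model and the Model Optimizer version):

```python
# Sketch of a full Model Optimizer invocation for an Object Detection API
# model such as SSD MobileNet V2 COCO. All file paths are placeholders.
mo_cmd = [
    "python", "mo.py",
    "--input_model", "frozen_inference_graph.pb",
    "--reverse_input_channels",  # training used RGB; OpenVINO pipelines often feed BGR
    "--tensorflow_object_detection_api_pipeline_config", "pipeline.config",
    "--transformations_config", "extensions/front/tf/ssd_v2_support.json",
]
print(" ".join(mo_cmd))
```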

 

This information is also available here:

https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/ssd_mobilenet_v2_coco/model.yml#L36


For more details, you may refer to the following link:

https://docs.openvinotoolkit.org/2020.2/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#how_to_convert_a_model




Regards,

Peh


Vina_Alvionita
Beginner

I have tried retraining a simple model using TensorFlow version 2.4.1, and then converted it to an IR model using this command:

python mo.py --saved_model_dir "D:\CP"

but I got a different error:

[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Adam/dense_27/bias/v/Read/ReadVariableOp.
Original exception message: 'ascii' codec can't decode byte 0xcc in position 1: ordinal not in range(128)

Vina_Alvionita
Beginner

UPDATE

I also tried freezing the model, but I got this error. Which step did I get wrong?

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: D:\saved_model_2.pb
- Path for generated IR: C:\Program Files (x86)\Intel\openvino_2020.2.117\deployment_tools\model_optimizer\.
- IR output name: saved_model_2
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3,224,224]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
Model Optimizer version: 2020.2.0-60-g0bc66e26ff
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.BiasAddBroadcasting.BiasAddInputBroadcasting'>): After partial shape inference were found shape collision for node model_1/dense_2/BiasAdd/Add (old shape: [ 0 128], new shape: [ -1 128])

Vina_Alvionita
Beginner

Thank you Peh, I carefully re-read all the information you gave above and found the solution: I needed to add --reverse_input_channels. The IR model was successfully converted.
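For context, --reverse_input_channels swaps the channel order of the input from RGB to BGR; TensorFlow models are typically trained on RGB images, while OpenVINO pipelines often feed BGR frames (e.g. from OpenCV). A minimal illustration of the channel reversal (the pixel values are made up):

```python
# Sketch: the effect of --reverse_input_channels on one pixel.
# A model trained on RGB input would misread BGR frames (e.g. from OpenCV)
# unless the channel order is reversed at conversion or inference time.
rgb_pixel = [10, 128, 200]   # R, G, B values
bgr_pixel = rgb_pixel[::-1]  # reversed channel order: B, G, R
print(bgr_pixel)
```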

 

Peh_Intel
Moderator

Hi Vina,


Greetings to you. I'm glad to hear that you have successfully converted your model into IR format.


Shall we close this thread?



Regards,

Peh


Peh_Intel
Moderator

Hi Vina,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Regards,

Peh

